Sample records for zeroth order regular

  1. Zeroth order regular approximation approach to electric dipole moment interactions of the electron.

    PubMed

    Gaul, Konstantin; Berger, Robert

    2017-07-07

    A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
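
    For orientation, the "zeroth order regular approximation" named in this and several following records replaces the energy-dependent elimination of the small component of the Dirac equation by its zeroth-order term. In standard notation (a textbook sketch for the one-electron case, not taken from the abstract itself), the ZORA Hamiltonian reads

      H_{\mathrm{ZORA}} = V + \boldsymbol{\sigma}\cdot\mathbf{p}\,\frac{c^{2}}{2c^{2}-V}\,\boldsymbol{\sigma}\cdot\mathbf{p},

    whereas the exact elimination carries the energy-dependent factor c^{2}/(2c^{2}-V+E); expanding in E/(2c^{2}-V) and keeping only the zeroth order gives the expression above.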

  2. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    NASA Astrophysics Data System (ADS)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  3. A gauge-independent zeroth-order regular approximation to the exact relativistic Hamiltonian—Formulation and applications

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-01-01

A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) level for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative inclusion of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.

  4. Relativistic nuclear magnetic resonance J-coupling with ultrasoft pseudopotentials and the zeroth-order regular approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Timothy F. G., E-mail: tim.green@materials.ox.ac.uk; Yates, Jonathan R., E-mail: jonathan.yates@materials.ox.ac.uk

    2014-06-21

We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is allowed by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, ¹J(P-Ag) and ²J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.

  5. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  6. Scalar relativistic computations of nuclear magnetic shielding and g-shifts with the zeroth-order regular approximation and range-separated hybrid density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aquino, Fredy W.; Govind, Niranjan; Autschbach, Jochen

    2011-10-01

Density functional theory (DFT) calculations of NMR chemical shifts and molecular g-tensors with Gaussian-type orbitals are implemented via second-order energy derivatives within the scalar relativistic zeroth order regular approximation (ZORA) framework. Nonhybrid functionals, standard (global) hybrids, and range-separated (Coulomb-attenuated, long-range corrected) hybrid functionals are tested. Origin invariance of the results is ensured by use of gauge-including atomic orbital (GIAO) basis functions. The new implementation in the NWChem quantum chemistry package is verified by calculations of nuclear shielding constants for the heavy atoms in HX (X=F, Cl, Br, I, At) and H2X (X = O, S, Se, Te, Po), and Te chemical shifts in a number of tellurium compounds. The basis set and functional dependence of g-shifts is investigated for 14 radicals with light and heavy atoms. The problem of accurately predicting F NMR shielding in UF6-nCln, n = 1 to 6, is revisited. The results are sensitive to approximations in the density functionals, indicating a delicate balance of DFT self-interaction vs. correlation. For the uranium halides, the results with the range-separated functionals are mixed.

  7. Relativistic Zeroth-Order Regular Approximation Combined with Nonhybrid and Hybrid Density Functional Theory: Performance for NMR Indirect Nuclear Spin-Spin Coupling in Heavy Metal Compounds.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-01-12

A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) level, with the conductor-like screening model (COSMO) used to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.

  8. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  9. Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)

    NASA Astrophysics Data System (ADS)

    Jerolmack, D. J.

    2010-12-01

The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws - time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) The integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing; and (2) In systems exhibiting fractal behavior, an integral scale does not exist - e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics, and also help to formalize the applicability of geomorphic transport laws.

  10. Exploration of zeroth-order wavefunctions and energies as a first step toward intramolecular symmetry-adapted perturbation theory

    NASA Astrophysics Data System (ADS)

    Gonthier, Jérôme F.; Corminboeuf, Clémence

    2014-04-01

Non-covalent interactions occur between and within all molecules and have a profound impact on structural and electronic phenomena in chemistry, biology, and materials science. Understanding the nature of inter- and intramolecular interactions is essential not only for establishing the relation between structure and properties, but also for facilitating the rational design of molecules with targeted properties. These objectives have motivated the development of theoretical schemes decomposing intermolecular interactions into physically meaningful terms. Among the various existing energy decomposition schemes, Symmetry-Adapted Perturbation Theory (SAPT) is one of the most successful as it naturally decomposes the interaction energy into physical and intuitive terms. Unfortunately, analogous approaches for intramolecular energies are theoretically highly challenging and virtually nonexistent. Here, we introduce a zeroth-order wavefunction and energy, which represent the first step toward the development of an intramolecular variant of the SAPT formalism. The proposed energy expression is based on the Chemical Hamiltonian Approach (CHA), which relies upon an asymmetric interpretation of the electronic integrals. The orbitals are optimized with a non-Hermitian Fock matrix based on two variants: one using orbitals strictly localized on individual fragments and the other using canonical (delocalized) orbitals. The zeroth-order wavefunction and energy expression are validated on a series of prototypical systems. The computed intramolecular interaction energies demonstrate that our approach combining the CHA with strictly localized orbitals achieves reasonable interaction energies and basis set dependence in addition to producing intuitive energy trends. Our zeroth-order wavefunction is the primary step fundamental to the derivation of any perturbation theory correction, which has the potential to truly transform our understanding and quantification of non-covalent interactions.

  11. Exploration of zeroth-order wavefunctions and energies as a first step toward intramolecular symmetry-adapted perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonthier, Jérôme F.; Corminboeuf, Clémence, E-mail: clemence.corminboeuf@epfl.ch

    2014-04-21

Non-covalent interactions occur between and within all molecules and have a profound impact on structural and electronic phenomena in chemistry, biology, and materials science. Understanding the nature of inter- and intramolecular interactions is essential not only for establishing the relation between structure and properties, but also for facilitating the rational design of molecules with targeted properties. These objectives have motivated the development of theoretical schemes decomposing intermolecular interactions into physically meaningful terms. Among the various existing energy decomposition schemes, Symmetry-Adapted Perturbation Theory (SAPT) is one of the most successful as it naturally decomposes the interaction energy into physical and intuitive terms. Unfortunately, analogous approaches for intramolecular energies are theoretically highly challenging and virtually nonexistent. Here, we introduce a zeroth-order wavefunction and energy, which represent the first step toward the development of an intramolecular variant of the SAPT formalism. The proposed energy expression is based on the Chemical Hamiltonian Approach (CHA), which relies upon an asymmetric interpretation of the electronic integrals. The orbitals are optimized with a non-Hermitian Fock matrix based on two variants: one using orbitals strictly localized on individual fragments and the other using canonical (delocalized) orbitals. The zeroth-order wavefunction and energy expression are validated on a series of prototypical systems. The computed intramolecular interaction energies demonstrate that our approach combining the CHA with strictly localized orbitals achieves reasonable interaction energies and basis set dependence in addition to producing intuitive energy trends. Our zeroth-order wavefunction is the primary step fundamental to the derivation of any perturbation theory correction, which has the potential to truly transform our understanding and quantification of non-covalent interactions.

  12. Design of experiments for zeroth and first-order reaction rates.

    PubMed

    Amo-Salas, Mariano; Martín-Martín, Raúl; Rodríguez-Aragón, Licesio J

    2014-09-01

This work presents optimum designs for reaction rate experiments. In these experiments, the times at which observations are to be made and the temperatures at which reactions are to be run need to be designed. Observations are performed over time under isothermal conditions. Each experiment needs a fixed temperature, so that the reaction can be measured at the designed times. For these observations under isothermal conditions over the same reaction, a correlation structure has been considered. Our aim is D-optimum designs for zeroth- and first-order reaction rates. These designs provide the temperatures for the isothermal experiments and the observation times that yield the most accurate estimates of the unknown parameters. D-optimum designs for a single observation in each isothermal experiment or for several correlated observations have been obtained. Robustness of the optimum designs over ranges of the correlation parameter and comparisons of the information gathered by different designs are also shown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
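
    For context, the two rate laws named in the title integrate, under isothermal conditions, to simple expressions (a standard kinetics sketch with Arrhenius temperature dependence; the notation is generic, not the paper's):

      C(t) = C_{0} - k(T)\,t \quad\text{(zeroth order)}, \qquad C(t) = C_{0}\,e^{-k(T)t} \quad\text{(first order)}, \qquad k(T) = A\,e^{-E_{a}/RT},

    and a D-optimum design chooses the observation times and temperatures that maximize the determinant of the Fisher information matrix of the unknown parameters (here A and E_{a}).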

  13. Zeroth Law, Entropy, Equilibrium, and All That

    ERIC Educational Resources Information Center

    Canagaratna, Sebastian G.

    2008-01-01

    The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the…

  14. Examples of the Zeroth Theorem of the History of Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, J.D.

    2007-08-24

The zeroth theorem of the history of science, enunciated by E. P. Fischer, states that a discovery (rule, regularity, insight) named after someone (often) did not originate with that person. I present five examples from physics: the Lorentz condition ∂_μA^μ = 0 defining the Lorentz gauge of the electromagnetic potentials; the Dirac delta function, δ(x); the Schumann resonances of the earth-ionosphere cavity; the Weizsäcker-Williams method of virtual quanta; and the BMT equation of spin dynamics. I give illustrated thumbnail sketches of both the true and reputed discoverers and quote from their "discovery" publications.

  15. Zeroth-order design report for the next linear collider. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raubenheimer, T.O.

    1996-05-01

This Zeroth Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The design presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume one covers the following: the introduction; electron source; positron source; NLC damping rings; bunch compressors and prelinac; low-frequency linacs and compressors; main linacs; design and dynamics; and RF systems for main linacs.

  16. Zeroth Law, Entropy, Equilibrium, and All That

    NASA Astrophysics Data System (ADS)

    Canagaratna, Sebastian G.

    2008-05-01

    The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the traditional approach to thermodynamics is discussed. It is shown that the traditional approach does not need to appeal to the second law to solve with rigor the type of problems discussed by Gislason and Craig: in problems not involving chemical reaction, the zeroth law and the condition for mechanical equilibrium, complemented by the first law and any necessary equations of state, are sufficient to determine the final state. We have to invoke the second law only if we wish to calculate the change of entropy. Since most students are exposed to a traditional approach to thermodynamics, the examples of Gislason and Craig are re-examined in terms of the traditional formulation. The maximization of the entropy in the final state can be verified in the traditional approach quite directly by the use of the fundamental equations of thermodynamics. This approach uses relatively simple mathematics in as general a setting as possible.
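
    A minimal worked example of the traditional route described here (a generic textbook case, not one of Gislason and Craig's problems): for two rigid bodies with constant heat capacities C_{1} and C_{2} brought into thermal contact, the zeroth and first laws alone fix the final temperature, and the second law enters only to evaluate the entropy change:

      T_{f} = \frac{C_{1}T_{1} + C_{2}T_{2}}{C_{1} + C_{2}}, \qquad \Delta S = C_{1}\ln\frac{T_{f}}{T_{1}} + C_{2}\ln\frac{T_{f}}{T_{2}} \ge 0.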

  17. Representation of the exact relativistic electronic Hamiltonian within the regular approximation

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2003-12-01

The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian has been obtained, which yields in lowest order directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to c⁻²⁰. Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.

  18. Is the choice of a standard zeroth-order hamiltonian in CASPT2 ansatz optimal in calculations of excitation energies in protonated and unprotonated schiff bases of retinal?

    PubMed

    Wolański, Łukasz; Grabarek, Dawid; Andruniów, Tadeusz

    2018-04-10

To account for the systematic error of the CASPT2 method, an empirical modification of the zeroth-order Hamiltonian with an Ionization Potential-Electron Affinity (IPEA) shift was introduced. The optimized IPEA value (0.25 a.u.), called standard IPEA (S-IPEA), was recommended, but due to its unsatisfactory performance for multiple metallic and organic compounds it has lately been questioned as a general parameter working properly for all molecules under CASPT2 study. As we are interested in Schiff bases of retinal, an important question emerging from this conflict of choice, to use or not to use S-IPEA, is whether the introduction of the modified zeroth-order Hamiltonian into the CASPT2 ansatz really improves their energetics. To this end, we assessed the impact of the IPEA shift value, in a range of 0-0.35 a.u., on vertical excitation energies (VEEs) to low-lying singlet states of two protonated (RPSBs) and two unprotonated (RSBs) Schiff bases of retinal for which experimental data in the gas phase are available. In addition, the effect of geometry, basis set, and active space on computed VEEs is also reported. We find that, for these systems, the choice of S-IPEA significantly overestimates both S0→S1 and S0→S2 energies, and the best theoretical estimate, in reference to the experimental data, is provided by either the unmodified zeroth-order Hamiltonian or a small IPEA shift in the range of 0.05-0.15 a.u., depending on active space and basis set size, equilibrium geometry, and the character of the excited state. © 2018 Wiley Periodicals, Inc.

  19. Zeroth-order design report for the next linear collider. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raubenheimer, T.O.

This Zeroth-Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The "design" presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume II covers the following: collimation systems; IP switch and big bend; final focus; the interaction region; multiple bunch issues; control systems; instrumentation; machine protection systems; NLC reliability considerations; NLC conventional facilities. Also included are four appendices on the following topics: an RF power source upgrade to the NLC; a second interaction region for gamma-gamma, gamma-electron; ground motion: theory and measurement; and beam-based feedback: theory and implementation.

  20. Analysis of forward scattering of an acoustical zeroth-order Bessel beam from rigid complicated (aspherical) structures

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chai, Yingbin; Gong, Zhixiong; Marston, Philip L.

    2017-10-01

The forward scattering from rigid spheroids and endcapped cylinders with finite length (even with a large aspect ratio) immersed in a non-viscous fluid under the illumination of an idealized zeroth-order acoustical Bessel beam (ABB) with arbitrary angles of incidence is calculated and analyzed in the implementation of the T-matrix method (TTM). Based on the present method, the incident coefficients of expansion for the incident ABB are derived, and simplifying methods are proposed for numerical accuracy and computational efficiency according to the geometrical symmetries. A home-made MATLAB software package is constructed accordingly, and then verified and validated for the ABB scattering from rigid aspherical obstacles. Several numerical examples are computed for the forward scattering from both rigid spheroids and finite cylinders, with particular emphasis on the aspect ratios, the half-cone angles of ABBs, the incident angles, and the dimensionless frequencies. The rectangular patterns of target strength in the (β, θs) domain (where β is the half-cone angle of the ABB and θs is the scattered polar angle) and local/total forward scattering versus dimensionless frequency are exhibited, which could provide new insights into the physical mechanisms of Bessel beam scattering by rigid spheroids and finite cylinders. The ray diagrams in geometrical models for the scattering in the forward half-space and the optical cross-section theorem help to interpret the scattering mechanisms of ABBs. This research work may provide an alternative for the partial wave series solution under certain circumstances interacting with ABBs for complicated obstacles and benefit some related works in optics and electromagnetics.
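
    For reference, the incident pressure field of an idealized zeroth-order ABB with half-cone angle β propagating along z is commonly written as (a standard convention, not quoted from the paper):

      p_{\mathrm{inc}}(r, z) = p_{0}\,J_{0}(kr\sin\beta)\,e^{ikz\cos\beta},

    where J_{0} is the zeroth-order Bessel function of the first kind; β = 0 recovers an ordinary plane wave.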

  1. An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.

    PubMed

    Ilias, Miroslav; Saue, Trond

    2007-02-14

    The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.

  2. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
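
    For readers unfamiliar with the acronym, the second-order TGV functional referred to here is conventionally defined as (standard notation from the TGV literature, not reproduced from the paper):

      \mathrm{TGV}_{\alpha}^{2}(u) = \min_{w}\; \alpha_{1}\,\|\nabla u - w\|_{1} + \alpha_{0}\,\|\mathcal{E}(w)\|_{1},

    where \mathcal{E}(w) = (\nabla w + \nabla w^{T})/2 is the symmetrized gradient; forcing w = 0 recovers plain TV, while letting w follow \nabla u in smooth regions avoids the staircase effect.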

  3. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

The ill-posed inverse problem of force reconstruction comes from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated by the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of our suggested FARF method. The experimental result shows that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other cases of the FARF method and to traditional regularization methods when it comes to dynamic force reconstruction.
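
    The parameter-choice step mentioned here, GCV, can be illustrated for plain Tikhonov regularization in a few lines (a generic sketch, not the authors' FARF implementation; the operator A, data b, and lambda grid below are made-up toy inputs):

      import numpy as np

      def tikhonov_gcv(A, b, lambdas):
          """Pick the Tikhonov parameter minimizing the GCV score
          GCV(lam) = ||A x_lam - b||^2 / trace(I - A (A^T A + lam I)^-1 A^T)^2,
          where x_lam = (A^T A + lam I)^-1 A^T b."""
          m, n = A.shape
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          Utb = U.T @ b
          best = None
          for lam in lambdas:
              f = s**2 / (s**2 + lam)                  # Tikhonov filter factors
              resid2 = np.sum(((1 - f) * Utb)**2) + np.sum(b**2) - np.sum(Utb**2)
              score = resid2 / (m - np.sum(f))**2      # GCV function
              if best is None or score < best[0]:
                  best = (score, lam, Vt.T @ ((f / s) * Utb))
          return best[1], best[2]

      # toy use: recover a smooth input from noisy responses
      rng = np.random.default_rng(0)
      A = 0.02 * np.tril(np.ones((50, 50)))            # crude convolution operator
      x_true = np.sin(np.linspace(0, np.pi, 50))
      b = A @ x_true + 1e-3 * rng.standard_normal(50)
      lam, x_rec = tikhonov_gcv(A, b, np.logspace(-10, 0, 60))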

  4. Regularities and irregularities in order flow data

    NASA Astrophysics Data System (ADS)

    Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas

    2017-11-01

    We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.

  5. Regularization in Short-Term Memory for Serial Order

    ERIC Educational Resources Information Center

    Botvinick, Matthew; Bylsma, Lauren M.

    2005-01-01

    Previous research has shown that short-term memory for serial order can be influenced by background knowledge concerning regularities of sequential structure. Specifically, it has been shown that recall is superior for sequences that fit well with familiar sequencing constraints. The authors report a corresponding effect pertaining to serial…

  6. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10⁻⁹ hartree units. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  7. 25 CFR 11.1210 - Duration and renewal of a regular protection order.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Duration and renewal of a regular protection order. 11.1210 Section 11.1210 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11.1210...

  8. 25 CFR 11.1210 - Duration and renewal of a regular protection order.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Duration and renewal of a regular protection order. 11.1210 Section 11.1210 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11.1210...

  9. High-order regularization in lattice-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.

    2017-04-01

    A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.
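
    The truncated Hermite expansion underlying this record (and record 14 below) can be written, in standard LBM notation (a generic statement of the expansion, not the authors' refined regularization):

      f(\mathbf{x}, \boldsymbol{\xi}, t) \approx \omega(\boldsymbol{\xi}) \sum_{n=0}^{N} \frac{1}{n!}\, \mathbf{a}^{(n)}(\mathbf{x}, t) : \mathbf{H}^{(n)}(\boldsymbol{\xi}), \qquad \mathbf{a}^{(n)} = \int f\, \mathbf{H}^{(n)}\, d\boldsymbol{\xi},

    where the H^{(n)} are Hermite polynomial tensors and ":" denotes full index contraction; the refined procedure filters the diffusive parts of the non-equilibrium a^{(n)} beyond a given rank.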

  10. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2012-04-01 2011-04-01 true Obtaining a regular (non-emergency) order of protection...

  11. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2014-04-01 2014-04-01 false Obtaining a regular (non-emergency) order of protection...

  12. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2013-04-01 2013-04-01 false Obtaining a regular (non-emergency) order of protection...

  13. Semilocal momentum-space regularized chiral two-nucleon potentials up to fifth order

    NASA Astrophysics Data System (ADS)

    Reinert, P.; Krebs, H.; Epelbaum, E.

    2018-05-01

We introduce new semilocal two-nucleon potentials up to fifth order in the chiral expansion. We employ a simple regularization approach for the pion exchange contributions which i) maintains the long-range part of the interaction, ii) is implemented in momentum space and iii) can be straightforwardly applied to regularize many-body forces and current operators. We discuss in detail the two-nucleon contact interactions at fourth order and demonstrate that three terms out of fifteen used in previous calculations can be eliminated via suitably chosen unitary transformations. The removal of the redundant contact terms results in a drastic simplification of the fits to scattering data and leads to interactions which are much softer (i.e., more perturbative) than our recent semilocal coordinate-space regularized potentials. Using the pion-nucleon low-energy constants from matching pion-nucleon Roy-Steiner equations to chiral perturbation theory, we perform a comprehensive analysis of nucleon-nucleon scattering and the deuteron properties up to fifth chiral order and study the impact of the leading F-wave two-nucleon contact interactions which appear at sixth order. The resulting chiral potentials at fifth order lead to an outstanding description of the proton-proton and neutron-proton scattering data from the self-consistent Granada-2013 database below the pion production threshold, which is significantly better than for any other chiral potential. For the first time, the chiral potentials match in precision and even outperform the available high-precision phenomenological potentials, while the number of adjustable parameters is, at the same time, reduced by about 40%. Last but not least, we perform a detailed error analysis and, in particular, quantify for the first time the statistical uncertainties of the fourth- and the considered sixth-order contact interactions.

  14. Recursive regularization step for high-order lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  15. Zeroth-order phase-contrast technique.

    PubMed

    Pizolato, José Carlos; Cirino, Giuseppe Antonio; Gonçalves, Cristhiane; Neto, Luiz Gonçalves

    2007-11-01

What we believe to be a new phase-contrast technique is proposed to recover intensity distributions from phase distributions modulated by spatial light modulators (SLMs) and binary diffractive optical elements (DOEs). The phase distribution is directly transformed into intensity distributions using a 4f optical correlator and an iris centered in the frequency plane as a spatial filter. No phase-changing plates or phase dielectric dots are used as a filter. This method allows the use of twisted nematic liquid-crystal televisions (LCTVs) operating in the real-time phase-mostly mode between 0 and π to generate high-intensity multiple beams for optical trap applications. It is also possible to use these LCTVs as input SLMs for optical correlators to obtain high-intensity Fourier transform distributions of input amplitude objects.

  16. Second-order small disturbance theory for hypersonic flow over power-law bodies. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1974-01-01

A mathematical method for determining the flow field about power-law bodies in hypersonic flow conditions is developed. The second-order solutions, which reflect the effects of the second-order terms in the equations, are obtained by applying the method of small perturbations in terms of the body slenderness parameter to the zeroth-order solutions. The method is applied by writing each flow variable as the sum of a zeroth-order and a perturbation function, each multiplied by the axial variable raised to a power. The similarity solutions are developed for infinite Mach number. All results obtained are for no flow through the body surface (as a boundary condition), but the derivation indicates that small amounts of blowing or suction through the wall can be accommodated.

  17. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

Noisy low resolution (LR) images are always obtained in real applications, but many existing image magnification algorithms cannot get good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes the advantages of both the regularization-based method and the learning-based method. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint on the TV regularization model to magnify the LR image and at the same time suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not serious. The proposed algorithm can also provide better visual quality on natural LR images.

  18. Tunable rejection filters with ultra-wideband using zeroth shear mode plate wave resonators

    NASA Astrophysics Data System (ADS)

    Kadota, Michio; Sannomiya, Toshio; Tanaka, Shuji

    2017-07-01

    This paper reports wide band rejection filters and tunable rejection filters using ultra-wideband zeroth shear mode (SH0) plate wave resonators. The frequency range covers the digital TV band in Japan that runs from 470 to 710 MHz. This range has been chosen to meet the TV white space cognitive radio requirements of rejection filters. Wide rejection bands were obtained using several resonators with different frequencies. Tunable rejection filters were demonstrated using Si diodes connected to the band rejection filters. Wide tunable ranges as high as 31% were measured by applying a DC voltage to the Si diodes.

19. Recovering fine details from under-resolved electron tomography data using higher order total variation ℓ1 regularization

    DOE PAGES

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...

    2017-01-03

Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images - even those that TV was designed for - particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. We develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
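
    In its simplest discrete form, the HOTV penalty generalizes TV by applying the ℓ1 norm to a higher-order difference operator (a schematic definition consistent with the description above, not necessarily the paper's exact formulation):

      R_{k}(u) = \|D^{k}u\|_{1},

    so that k = 1 recovers TV (sparse gradients, hence piecewise constant reconstructions) while k = 2 favors sparse second differences, i.e., piecewise linear behavior.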

  20. Regularized learning of linear ordered-statistic constant false alarm rate filters (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.

    2017-05-01

    The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
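
    A minimal version of the learning problem described here, fitting LOS/OWA weights to training pairs by regularized least squares over rank-ordered samples, might look as follows (an illustrative sketch under simple assumptions, not the authors' algorithm; the projection enforcing nonnegative weights that sum to one is one convenient choice among several):

      import numpy as np

      def learn_los(X, y, reg=0.1, iters=500, lr=0.01):
          """Learn LOS/OWA weights w so that sort(x, descending) @ w ~ y.
          An l2 (ridge) penalty 'reg' shrinks the weights toward uniform,
          giving a lower-complexity aggregation."""
          Xs = -np.sort(-X, axis=1)            # rank-order each sample, descending
          n = X.shape[1]
          w = np.full(n, 1.0 / n)              # start from the mean operator
          for _ in range(iters):
              grad = Xs.T @ (Xs @ w - y) / len(y) + reg * (w - 1.0 / n)
              w -= lr * grad
              w = np.clip(w, 0.0, None)        # projected gradient: w >= 0 ...
              w /= w.sum()                     # ... and sum(w) == 1 (an OWA)
          return w

      # toy use: recover a median-like operator from noisy evaluations
      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 5))
      y = np.median(X, axis=1) + 0.01 * rng.standard_normal(200)
      w = learn_los(X, y)   # weights should concentrate on the middle rank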

  1. Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol

    2015-09-01

    The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
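
    As a concrete instance, the Hoover-Holian thermostat mentioned here augments the Nosé-Hoover oscillator with a second control variable regulating the fourth moment of the momentum. In reduced units, with all masses and thermostat response times set to unity (a standard presentation, not quoted from the paper), the equations of motion for a thermostated harmonic oscillator are

      \dot{q} = p, \qquad \dot{p} = -q - \zeta p - \xi p^{3}, \qquad \dot{\zeta} = p^{2} - T, \qquad \dot{\xi} = p^{4} - 3p^{2}T,

    whose four phase-space variables (q, p, ζ, ξ) correspond to the four-dimensional extended phase space discussed in the abstract.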

  2. Optical diffraction by ordered 2D arrays of silica microspheres

    NASA Astrophysics Data System (ADS)

    Shcherbakov, A. A.; Shavdina, O.; Tishchenko, A. V.; Veillas, C.; Verrier, I.; Dellea, O.; Jourlin, Y.

    2017-03-01

The article presents experimental and theoretical studies of the angular-dependent diffraction properties of 2D monolayer arrays of silica microspheres. High-quality, large-area, defect-free monolayers of 1 μm diameter silica microspheres were deposited by the Langmuir-Blodgett technique under accurate optical control. Measured angular dependencies of the zeroth-order and one of the first-order diffraction efficiencies produced by the deposited samples were simulated by the rigorous Generalized Source Method, taking into account particle size dispersion and lattice nonideality.

  3. Higher-order force moments of active particles

    NASA Astrophysics Data System (ADS)

    Nasouri, Babak; Elfring, Gwynn J.

    2018-04-01

    Active particles moving through fluids generate disturbance flows due to their activity. For simplicity, the induced flow field is often modeled by the leading terms in a far-field approximation of the Stokes equations, whose coefficients are the force, torque, and stresslet (zeroth- and first-order force moments) of the active particle. This level of approximation is quite useful, but may also fail to predict more complex behaviors that are observed experimentally. In this study, to provide a better approximation, we evaluate the contribution of the second-order force moments to the flow field and, by reciprocal theorem, present explicit formulas for the stresslet dipole, rotlet dipole, and potential dipole for an arbitrarily shaped active particle. As examples of this method, we derive modified Faxén laws for active spherical particles and resolve higher-order moments for active rod-like particles.

  4. On the zeroth-order hamiltonian for CASPT2 calculations of spin crossover compounds.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Ariño, Jordi; Robert, Vincent

    2016-04-15

Complete active space self-consistent field theory (CASSCF) calculations and subsequent second-order perturbation theory treatment (CASPT2) are discussed in the evaluation of the spin-state energy difference (ΔH_elec) of a series of seven spin crossover (SCO) compounds. The reference values have been extracted from a combination of experimental measurements and DFT + U calculations, as discussed in a recent article (Vela et al., Phys Chem Chem Phys 2015, 17, 16306). It is definitely proven that the critical IPEA parameter used in CASPT2 calculations of ΔH_elec, a key parameter in the design of SCO compounds, should be modified with respect to its default value of 0.25 a.u. and increased up to 0.50 a.u. The satisfactory agreement observed previously in the literature might result from an error cancellation originating in the default IPEA, which overestimates the stability of the HS state, and the erroneous atomic orbital basis set contraction of carbon atoms, which stabilizes the LS states. © 2015 Wiley Periodicals, Inc.

  5. Relativistic calculation of nuclear magnetic shielding using normalized elimination of the small component

    NASA Astrophysics Data System (ADS)

    Kudo, K.; Maeda, H.; Kawakubo, T.; Ootani, Y.; Funaki, M.; Fukui, H.

    2006-06-01

The normalized elimination of the small component (NESC) theory, recently proposed by Filatov and Cremer [J. Chem. Phys. 122, 064104 (2005)], is extended to include magnetic interactions and applied to the calculation of the nuclear magnetic shielding in HX (X = F, Cl, Br, I) systems. The NESC calculations are performed at the levels of the zeroth-order regular approximation (ZORA) and the second-order regular approximation (SORA). The calculations show that the NESC-ZORA results are very close to the NESC-SORA results, except for the shielding of the I nucleus. Both the NESC-ZORA and NESC-SORA calculations yield very similar results to the previously reported values obtained using the relativistic infinite-order two-component coupled Hartree-Fock method. The difference between the NESC-ZORA and NESC-SORA results is significant for the shieldings of iodine.

  6. Similarity-transformed perturbation theory on top of truncated local coupled cluster solutions: Theory and applications to intermolecular interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azar, Richard Julian, E-mail: julianazar2323@berkeley.edu; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu

    2015-05-28

    Your correspondents develop and apply fully nonorthogonal, local-reference perturbation theories describing non-covalent interactions. Our formulations are based on a Löwdin partitioning of the similarity-transformed Hamiltonian into a zeroth-order intramonomer piece (taking local CCSD solutions as its zeroth-order eigenfunction) plus a first-order piece coupling the fragments. If considerations are limited to a single molecule, the proposed intermolecular similarity-transformed perturbation theory represents a frozen-orbital variant of the “(2)”-type theories shown to be competitive with CCSD(T) and of similar cost if all terms are retained. Different restrictions on the zeroth- and first-order amplitudes are explored in the context of large-computation tractability and elucidation of non-local effects in the space of singles and doubles. To accurately approximate CCSD intermolecular interaction energies, a quadratically growing number of variables must be included at zeroth-order.

  7. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto one about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
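
    To make the parameter-choice idea concrete, the following minimal numpy sketch performs Tikhonov regularization with an L-curve corner search on a toy ill-posed problem. The test matrix, noise level, λ grid, and finite-difference curvature estimate are illustrative assumptions only; the GRACE implementation instead approximates the L-curve through Lanczos bidiagonalization at a vastly larger scale.

        import numpy as np

        # Toy discrete ill-posed problem (Hilbert-like smoothing kernel).
        n = 50
        A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
        x_true = np.sin(np.linspace(0, np.pi, n))
        b = A @ x_true + 1e-5 * np.random.default_rng(0).standard_normal(n)

        lambdas = np.logspace(-12, 0, 49)
        res_norms, sol_norms = [], []
        for lam in lambdas:
            # Tikhonov solution: x = argmin ||Ax - b||^2 + lam * ||x||^2
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
            res_norms.append(np.linalg.norm(A @ x - b))
            sol_norms.append(np.linalg.norm(x))

        # L-curve corner: point of maximum curvature of (log residual, log solution).
        u, v = np.log(res_norms), np.log(sol_norms)
        du, dv = np.gradient(u), np.gradient(v)
        ddu, ddv = np.gradient(du), np.gradient(dv)
        kappa = (du * ddv - dv * ddu) / (du**2 + dv**2) ** 1.5
        print("L-curve corner at lambda ~", lambdas[np.argmax(np.abs(kappa))])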

  8. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.
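
    For reference, the classical object being generalized here is the Bregman distance of the regularizer R; the display below gives the standard convex-case definition together with the quoted sparse-regularization rate (our notation, not the paper's abstract-convexity generalization):

        % Bregman distance of R at y, for a subgradient \xi \in \partial R(y):
        D_\xi(x, y) = R(x) - R(y) - \langle \xi, x - y \rangle
        % Sparse setting quoted in the abstract: if R grows slightly faster
        % than |t|^p near zero, the error obeys a rate of order
        O\!\left(\delta^{1/p}\right)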

  9. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

    We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction on real Clifford algebras of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs-type extension result.

  10. Enskog theory for polydisperse granular mixtures. I. Navier-Stokes order transport.

    PubMed

    Garzó, Vicente; Dufty, James W; Hrenya, Christine M

    2007-09-01

    A hydrodynamic description for an s-component mixture of inelastic, smooth hard disks (two dimensions) or spheres (three dimensions) is derived based on the revised Enskog theory for the single-particle velocity distribution functions. In this first part of the two-part series, the macroscopic balance equations for mass, momentum, and energy are derived. Constitutive equations are calculated from exact expressions for the fluxes by a Chapman-Enskog expansion carried out to first order in spatial gradients, thereby resulting in a Navier-Stokes order theory. Within this context of small gradients, the theory is applicable to a wide range of restitution coefficients and densities. The resulting integral-differential equations for the zeroth- and first-order approximations of the distribution functions are given in exact form. An approximate solution to these equations is required for practical purposes in order to cast the constitutive quantities as algebraic functions of the macroscopic variables; this task is described in the companion paper.
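
    The expansion bookkeeping referred to here can be summarized as follows (standard Chapman-Enskog notation in a formal gradient-ordering parameter ε; the display is ours, not quoted from the paper):

        % Chapman-Enskog expansion of each species' velocity distribution:
        f_i = f_i^{(0)} + \epsilon\, f_i^{(1)} + O(\epsilon^2), \qquad i = 1, \dots, s
        % f^{(0)}: zeroth-order (local) reference state;
        % f^{(1)}: first order in spatial gradients, yielding the
        % Navier-Stokes order constitutive fluxes.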

  11. How calibration and reference spectra affect the accuracy of absolute soft X-ray solar irradiance measured by the SDO/EVE/ESP during high solar activity

    NASA Astrophysics Data System (ADS)

    Didkovsky, Leonid; Wieman, Seth; Woods, Thomas

    2016-10-01

    The Extreme ultraviolet Spectrophotometer (ESP), one of the channels of SDO's Extreme ultraviolet Variability Experiment (EVE), measures solar irradiance in several EUV and soft x-ray (SXR) bands isolated using thin-film filters and a transmission diffraction grating, and includes a quad-diode detector positioned at the grating zeroth order to observe in a wavelength band from about 0.1 to 7.0 nm. The quad-diode signal also includes some contribution from shorter wavelengths in the grating's first order, and the ratio of zeroth-order to first-order signal depends on both source geometry and spectral distribution. For example, radiometric calibration of the ESP zeroth order at the NIST SURF BL-2 with a near-parallel beam provides a different zeroth-to-first-order ratio than modeled for solar observations. The relative influence of "uncalibrated" first-order irradiance during solar observations is a function of the solar spectral irradiance and the locations of large active regions or solar flares. We discuss how the "uncalibrated" first-order "solar" component and the use of variable solar reference spectra affect determination of the absolute SXR irradiance, which currently may be significantly overestimated during high solar activity.

  12. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
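
    The alternation described above can be sketched on a toy linear problem, with a random matrix standing in for the (nonlinear) DOT forward model and scikit-learn's GaussianMixture providing the classification step; sizes, noise, and the variance floor are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        n, m = 60, 40
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in forward map
        x_true = np.where(np.arange(n) < 30, 1.0, 3.0)  # two "tissue classes"
        y = A @ x_true + 0.02 * rng.standard_normal(m)

        # Plain zeroth-order Tikhonov initialization.
        x = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y)
        for it in range(10):
            # Classification step: mixture-of-Gaussians fit to the current image.
            gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
            labels = gmm.predict(x.reshape(-1, 1))
            mu = gmm.means_.ravel()[labels]                           # per-pixel prior mean
            var = np.maximum(gmm.covariances_.ravel()[labels], 1e-3)  # per-pixel prior variance
            # Reconstruction step: Tikhonov with variable mean and variance.
            W = np.diag(1.0 / var)
            x = np.linalg.solve(A.T @ A + W, A.T @ y + W @ mu)
        print("max reconstruction error:", np.abs(x - x_true).max())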

  13. A highly ordered mesostructured material containing regularly distributed phenols: preparation and characterization at a molecular level through ultra-fast magic angle spinning proton NMR spectroscopy.

    PubMed

    Roussey, Arthur; Gajan, David; Maishal, Tarun K; Mukerjee, Anhurada; Veyre, Laurent; Lesage, Anne; Emsley, Lyndon; Copéret, Christophe; Thieuleux, Chloé

    2011-03-14

    Highly ordered organic-inorganic mesostructured material containing regularly distributed phenols is synthesized by combining a direct synthesis of the functional material and a protection-deprotection strategy and characterized at a molecular level through ultra-fast magic angle spinning proton NMR spectroscopy.

  14. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and compared with the performance of Tikhonov zeroth-order regularization. Results in the wavelet domain obtained higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing one to choose wavelet bases that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a
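
    The elastic-net-on-wavelet-coefficients idea can be sketched on a generic linear inverse problem. Everything below is an illustrative stand-in (an orthonormal Haar matrix, a random transfer matrix, scikit-learn's ElasticNet); the paper's multitask formulation additionally couples the regularization across space.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        def haar(n):
            # Orthonormal Haar wavelet matrix (n a power of two); H @ H.T = I.
            if n == 1:
                return np.array([[1.0]])
            h = haar(n // 2)
            top = np.kron(h, [1.0, 1.0])
            bot = np.kron(np.eye(n // 2), [1.0, -1.0])
            return np.vstack([top, bot]) / np.sqrt(2.0)

        rng = np.random.default_rng(2)
        n, m = 64, 32
        H = haar(n)
        x_true = np.zeros(n); x_true[20:28] = 1.0     # blocky signal, sparse under Haar
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in transfer matrix
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        # Elastic net on the wavelet coefficients c, with x = H.T @ c.
        enet = ElasticNet(alpha=1e-3, l1_ratio=0.9, fit_intercept=False, max_iter=100000)
        enet.fit(A @ H.T, y)
        x_hat = H.T @ enet.coef_
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))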

  15. Relativistic (SR-ZORA) quantum theory of atoms in molecules properties.

    PubMed

    Anderson, James S M; Rodríguez, Juan I; Ayers, Paul W; Götz, Andreas W

    2017-01-15

    The Quantum Theory of Atoms in Molecules (QTAIM) is used to elucidate the effects of relativity on chemical systems. To do this, molecules are studied using density-functional theory at both the nonrelativistic level and using the scalar relativistic zeroth-order regular approximation. Relativistic effects on the QTAIM properties and topology of the electron density can be significant for chemical systems with heavy atoms. It is important, therefore, to use the appropriate relativistic treatment of QTAIM (Anderson and Ayers, J. Phys. Chem. 2009, 115, 13001) when treating systems with heavy atoms. © 2016 Wiley Periodicals, Inc.

  16. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is

  17. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
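
    A minimal sketch of DCT-domain reweighted ℓ1 via a weighted iterative shrinkage (ISTA) loop is given below; the forward matrix, step size, thresholds, and the reweighting rule w = 1/(|c| + ε) are illustrative assumptions, not the paper's adaptive parameter assignment.

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(3)
        n, m = 128, 48
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        c_true = np.where(rng.random(n) < 0.05, 3.0, 0.0)  # sparse DCT spectrum
        x_true = idct(c_true, norm='ortho')
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        lam = 2e-3
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x, w = np.zeros(n), np.ones(n)
        for outer in range(5):                  # reweighting sweeps
            for it in range(300):               # weighted ISTA in the DCT domain
                c = dct(x - (A.T @ (A @ x - y)) / L, norm='ortho')
                c = np.sign(c) * np.maximum(np.abs(c) - lam * w / L, 0.0)
                x = idct(c, norm='ortho')
            w = 1.0 / (np.abs(dct(x, norm='ortho')) + 1e-3)  # small |c| -> heavy penalty
        print("recovered support size:", int(np.sum(np.abs(dct(x, norm='ortho')) > 0.1)))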

  18. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal

  19. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.

  20. Analysis of eccentric annular incompressible seals. II - Effects of eccentricity on rotordynamic coefficients

    NASA Technical Reports Server (NTRS)

    Nelson, C. C.; Nguyen, D. T.

    1987-01-01

    A new analysis procedure has been presented which solves for the flow variables of an annular pressure seal in which the rotor has a large static displacement (eccentricity) from the centered position. The present paper incorporates the solutions to investigate the effect of eccentricity on the rotordynamic coefficients. The analysis begins with a set of governing equations based on a turbulent bulk-flow model and Moody's friction factor equation. Perturbation of the flow variables yields a set of zeroth- and first-order equations. After integration of the zeroth-order equations, the resulting zeroth-order flow variables are used as input in the solution of the first-order equations. Further integration of the first-order pressures yields the eccentric rotordynamic coefficients. The results from this procedure compare well with available experimental and theoretical data, with accuracy just as good as or slightly better than the predictions based on a finite-element model.

  1. The Physics of Ultracold Sr2 Molecules: Optical Production and Precision Measurement

    DTIC Science & Technology

    2013-01-01

    Excerpt recovered from the thesis front matter (figure captions for the extended-cavity diode laser, with representative mirror, diffraction grating, and diode housing): the wavelength of the feedback light is determined by the angle of the feedback mirror, and the zeroth order of the diffraction grating is the output from the ECDL.

  2. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
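
    The discrepancy-principle variant can be sketched in a few lines: sweep λ from large to small and stop as soon as the residual variance drops to the assumed measurement-noise variance. The toy problem and scikit-learn's Lasso solver below are illustrative stand-ins for the structural damage-identification setting.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(4)
        n, m, sigma = 40, 30, 0.02
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n); x_true[[5, 17]] = [0.8, -0.6]  # sparse "damage" pattern
        y = A @ x_true + sigma * rng.standard_normal(m)

        # Discrepancy principle: decrease lambda until the residual variance
        # first matches the (assumed known) noise variance sigma^2.
        for lam in np.logspace(-1, -5, 40):
            x = Lasso(alpha=lam, fit_intercept=False, max_iter=100000).fit(A, y).coef_
            if np.var(y - A @ x) <= sigma ** 2:
                break
        print("selected lambda:", lam)
        print("identified damaged dofs:", np.flatnonzero(np.abs(x) > 1e-3))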

  3. Structural characterization of the packings of granular regular polygons.

    PubMed

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  4. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error \overline{u} − \overline{w^α} is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.

  5. The discovery of [Ni(NHC)RCN]2 species and their role as cycloaddition catalysts for the formation of pyridines.

    PubMed

    Stolley, Ryan M; Duong, Hung A; Thomas, David R; Louie, Janis

    2012-09-12

    The reaction of Ni(COD)2, IPr, and nitrile affords dimeric [Ni(IPr)RCN]2 in high yields. X-ray analysis revealed these species display simultaneous η1- and η2-nitrile binding modes. These dimers are catalytically competent in the formation of pyridines from the cycloaddition of diynes and nitriles. Kinetic analysis showed the reaction to be first order in [Ni(IPr)RCN]2, zeroth order in added IPr, zeroth order in nitrile, and zeroth order in diyne. Extensive stoichiometric competition studies were performed, and selective incorporation of the exogenous, not dimer bound, nitrile was observed. Post cycloaddition, the dimeric state was found to be largely preserved. Nitrile and ligand exchange experiments were performed and found to be inoperative in the catalytic cycle. These observations suggest a mechanism whereby the catalyst is activated by partial dimer-opening followed by binding of exogenous nitrile and subsequent oxidative heterocoupling.

  6. The Discovery of [Ni(NHC)RCN]2 Species and their Role as Cycloaddition Catalysts for the Formation of Pyridines

    PubMed Central

    Stolley, Ryan M.; Duong, Hung A.; Thomas, David R.; Louie, Janis

    2012-01-01

    The reaction of Ni(COD)2, IPr, and nitrile affords dimeric [Ni(IPr)RCN]2 in high yields. X-ray analysis revealed these species display simultaneous η1- and η2-nitrile binding modes. These dimers are catalytically competent in the formation of pyridines from the cycloaddition of diynes and nitriles. Kinetic analysis showed the reaction to be first order in [Ni(IPr)RCN]2, zeroth order in added IPr, zeroth order in nitrile, and zeroth order in diyne. Extensive stoichiometric competition studies were performed, and selective incorporation of the exogenous, not dimer bound, nitrile was observed. Post cycloaddition, the dimeric state was found to be largely preserved. Nitrile and ligand exchange experiments were performed and found to be inoperative in the catalytic cycle. These observations suggest a mechanism whereby the catalyst is activated by partial dimer-opening followed by binding of exogenous nitrile and subsequent oxidative heterocoupling. PMID:22917161

  7. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral-equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, local-density-functional, and hyper-Hartree-Fock results for non-relativistic atoms and ions.

  8. Processing SPARQL queries with regular expressions in RDF databases

    PubMed Central

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  9. Processing SPARQL queries with regular expressions in RDF databases.

    PubMed

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
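
    For readers unfamiliar with the query form whose evaluation these papers optimize, here is a small self-contained example of SPARQL's regex FILTER using rdflib (rdflib merely illustrates the semantics; it does not implement the efficient evaluation framework proposed above):

        from rdflib import Graph

        # A tiny in-memory RDF graph standing in for a bioinformatics knowledge base.
        data = """
        @prefix ex: <http://example.org/> .
        ex:P1 ex:name "hexokinase" .
        ex:P2 ex:name "glucokinase" .
        ex:P3 ex:name "aldolase" .
        """
        g = Graph()
        g.parse(data=data, format="turtle")

        # SPARQL with a regular-expression FILTER: all names ending in "kinase".
        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?s ?name WHERE {
            ?s ex:name ?name .
            FILTER regex(?name, "kinase$", "i")
        }
        """
        for row in g.query(q):
            print(row.s, row.name)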

  10. Zeroth order Fabry-Perot resonance enabled ultra-thin perfect light absorber using percolation aluminum and silicon nanofilms

    DOE PAGES

    Mirshafieyan, Seyed Sadreddin; Luk, Ting S.; Guo, Junpeng

    2016-03-04

    Here, we demonstrated perfect light absorption in optical nanocavities made of ultra-thin percolation aluminum and silicon films deposited on an aluminum surface. The total thickness of the aluminum and silicon films is one order of magnitude less than the perfect-absorption wavelength in the visible spectral range. The ratio of the silicon cavity layer thickness to the perfect-absorption wavelength decreases as the wavelength decreases, due to the increased phase delays at the silicon-aluminum boundaries at shorter wavelengths. Perfect light absorption is explained by critical coupling of the incident wave to the fundamental Fabry-Perot resonance mode of the structure, where the round-trip phase delay is zero. Simulations were performed and the results agree well with the measurements.

  11. The Effect of Concomitant Fields in Fast Spin Echo Acquisition on Asymmetric MRI Gradient Systems

    PubMed Central

    Tao, Shengzhen; Weavers, Paul T.; Trzasko, Joshua D.; Huston, John; Shu, Yunhong; Gray, Erin M.; Foo, Thomas K.F.; Bernstein, Matt A.

    2017-01-01

    Purpose To investigate the effect of the asymmetric gradient concomitant fields (CF) with zeroth and first-order spatial dependence on fast/turbo spin-echo acquisitions, and to demonstrate the effectiveness of their real-time compensation. Methods After briefly reviewing the CF produced by asymmetric gradients, the effects of the additional zeroth and first-order CFs on these systems are investigated using extended-phase graph simulations. Phantom and in vivo experiments are performed to corroborate the simulation. Experiments are performed before and after the real-time compensations using frequency tracking and gradient pre-emphasis to demonstrate their effectiveness in correcting the additional CFs. The interaction between the CFs and prescan-based correction to compensate for eddy currents is also investigated. Results It is demonstrated that, unlike the second-order CFs on conventional gradients, the additional zeroth/first-order CFs on asymmetric gradients cause substantial signal loss and dark banding in fast spin-echo acquisitions within a typical brain-scan field of view. They can confound the prescan correction for eddy currents and degrade image quality. Performing real-time compensation successfully eliminates the artifacts. Conclusions We demonstrate that the zeroth/first-order CFs specific to asymmetric gradients can cause substantial artifacts, including signal loss and dark bands for brain imaging. These effects can be corrected using real-time compensation. PMID:28643408

  12. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  13. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

  14. Green operators for low regularity spacetimes

    NASA Astrophysics Data System (ADS)

    Sanchez Sanchez, Yafet; Vickers, James

    2018-02-01

    In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H 1 and a suitable Green matrix that solves a system of second order ODEs.

  15. On optimizing the treatment of exchange perturbations.

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Chipman, D. M.

    1972-01-01

    Most theories of exchange perturbations would give the exact energy and wave function if carried out to infinite order. However, the different methods give different values for the second-order energy, and different values for E(1), the expectation value of the Hamiltonian corresponding to the zeroth- plus first-order wave function. In the present paper, it is shown that the zeroth- plus first-order wave function obtained by optimizing the basic equation used in most exchange perturbation treatments is the exact wave function for the perturbed system, and E(1) is the exact energy.

  16. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
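
    A minimal sketch of Tikhonov inversion with the second-order difference operator as regularization matrix follows; the smoothing kernel, distribution, and parameter values are illustrative stand-ins (the paper additionally optimizes the kernel matrix itself via multi-wavelength incidence).

        import numpy as np

        def second_diff(n):
            # Second-order difference matrix, the regularization matrix
            # found appropriate for most PSD types.
            L = np.zeros((n - 2, n))
            for i in range(n - 2):
                L[i, i:i + 3] = [1.0, -2.0, 1.0]
            return L

        rng = np.random.default_rng(5)
        n = 50
        r = np.linspace(0, 1, n)
        K = np.exp(-8.0 * np.abs(np.subtract.outer(r, r)))  # toy scattering kernel
        f_true = np.exp(-0.5 * ((r - 0.4) / 0.08) ** 2)     # single-mode PSD
        g = K @ f_true + 0.01 * rng.standard_normal(n)

        L, lam = second_diff(n), 1e-2
        f = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ g)
        print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))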

  17. Anomalous double-stripe charge ordering in β-NaFe2O3 with double triangular layers consisting of almost perfect regular Fe4 tetrahedra

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shintaro; Ueda, Hiroaki; Michioka, Chishiro; Yoshimura, Kazuyoshi; Nakamura, Shin; Katsufuji, Takuro; Sawa, Hiroshi

    2018-05-01

    The physical properties of the mixed-valent iron oxide β-NaFe2O3 were investigated by means of synchrotron radiation x-ray diffraction, magnetization, electrical resistivity, differential scanning calorimetry, 23Na NMR, and 57Fe Mössbauer measurements. This compound has double triangular layers consisting of almost perfect regular Fe4 tetrahedra, which suggests geometrical frustration. We found that this compound exhibits an electrostatically unstable double-stripe-type charge ordering, which is stabilized by the cooperative compression of Fe3+O6 octahedra, owing to a valence change, and of Fe2+O6 octahedra, due to Jahn-Teller distortion. Our results indicate the importance of electron-phonon coupling for charge ordering in the region of strong charge frustration.

  18. Quantum theory of atoms in molecules: results for the SR-ZORA Hamiltonian.

    PubMed

    Anderson, James S M; Ayers, Paul W

    2011-11-17

    The quantum theory of atoms in molecules (QTAIM) is generalized to include relativistic effects using the popular scalar-relativistic zeroth-order regular approximation (SR-ZORA). It is usually assumed that the definition of the atom as a volume bounded by a zero-flux surface of the electron density is closely linked to the form of the kinetic energy, so it is somewhat surprising that the atoms corresponding to the relativistic kinetic-energy operator in the SR-ZORA Hamiltonian are also bounded by zero-flux surfaces. The SR-ZORA Hamiltonian should be sufficient for qualitative descriptions of molecular electronic structure across the periodic table, which suggests that QTAIM-based analysis can be useful for molecules and solids containing heavy atoms.

  19. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation of a matched asymptotic expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to the lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach, using a piecewise analytic zeroth-order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under current investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of solving a large class of optimal control problems.

  20. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.

  1. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U_T(m) = ||F(m) − d||^2 + λP(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Due to the inclusion of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, total variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is the low-induction-number magnetic dipole method. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
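
    As a concrete instance of the Bregman machinery mentioned in this abstract, here is a minimal 1D split Bregman total-variation denoiser (a textbook-style sketch on a synthetic blocky profile, not the authors' geosounding code; μ, λ, and the iteration count are illustrative):

        import numpy as np

        def tv_denoise_split_bregman(f, mu=20.0, lam=10.0, iters=100):
            # 1D TV denoising:  min_u (mu/2)||u - f||^2 + |Du|_1,  split as d = Du.
            n = len(f)
            D = np.diff(np.eye(n), axis=0)            # forward-difference operator
            d, b = np.zeros(n - 1), np.zeros(n - 1)
            M = mu * np.eye(n) + lam * D.T @ D
            u = f.copy()
            for _ in range(iters):
                u = np.linalg.solve(M, mu * f + lam * D.T @ (d - b))
                t = D @ u + b
                d = np.sign(t) * np.maximum(np.abs(t) - 1.0 / lam, 0.0)  # shrinkage
                b = t - d                             # Bregman variable update
            return u

        rng = np.random.default_rng(6)
        x = np.where(np.arange(200) < 100, 0.0, 1.0)  # blocky "conductivity" profile
        u = tv_denoise_split_bregman(x + 0.1 * rng.standard_normal(200))
        print("mean absolute error after denoising:", np.abs(u - x).mean())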

  2. A regularity result for fixed points, with applications to linear response

    NASA Astrophysics Data System (ADS)

    Sedro, Julien

    2018-04-01

    In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor expansion taking into account a loss-of-regularity phenomenon, typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher-order differentiability through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.

  3. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes which possess a general mass term described by a function which generalizes the Bardeen and Hayward mass functions. By using linear constraints in the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of the spherical symmetry. Some comments on accretion of deformed black holes in cosmological scenarios are made.

  4. Incompressible flow simulations on regularized moving meshfree grids

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2017-11-01

    A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.

  5. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations is studied whose solution is given by degenerate hypergeometric functions that reduce to Bessel functions of two variables. To construct solutions of this system near regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. We prove the basic theorem establishing the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions we establish necessary conditions for the existence of such solutions. The existence and convergence of a normally regular solution are shown using the notions of rank and antirank.

  6. Rotordynamic coefficients for labyrinth seals calculated by means of a finite difference technique

    NASA Technical Reports Server (NTRS)

    Nordmann, R.; Weiser, P.

    1989-01-01

    The compressible, turbulent, time dependent and three dimensional flow in a labyrinth seal can be described by the Navier-Stokes equations in conjunction with a turbulence model. Additionally, equations for mass and energy conservation and an equation of state are required. To solve these equations, a perturbation analysis is performed yielding zeroth order equations for centric shaft position and first order equations describing the flow field for small motions around the seal center. For numerical solution a finite difference method is applied to the zeroth and first order equations resulting in leakage and dynamic seal coefficients respectively.

  7. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise, and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.

  8. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
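
    The spatial feature indicator is simple to compute: the difference curvature is large at edges but small in both flat and noisy regions, which is what allows SATV to interpolate between TV-like and Tikhonov-like behavior. A numpy sketch follows (the weight normalization at the end is an illustrative choice, not the paper's exact rule):

        import numpy as np

        def difference_curvature(u, eps=1e-8):
            # Difference curvature edge indicator: | |u_nn| - |u_tt| |, with
            # u_nn, u_tt the second derivatives along and across the gradient.
            uy, ux = np.gradient(u)
            uyy, _ = np.gradient(uy)
            uxy, uxx = np.gradient(ux)
            g = ux**2 + uy**2 + eps
            u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g
            u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g
            return np.abs(np.abs(u_nn) - np.abs(u_tt))

        u = np.zeros((32, 32)); u[:, 16:] = 1.0   # a single vertical edge
        u += 0.01 * np.random.default_rng(7).standard_normal(u.shape)
        D = difference_curvature(u)
        w = D / (D.max() + 1e-12)  # ~1 near the edge (TV term), ~0 in flat regions (FOT term)
        print(np.round(w[16, 14:18], 2))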

  9. Scalar field coupling to Einstein tensor in regular black hole spacetime

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Wu, Chen

    2018-02-01

    In this paper, we study the perturbation properties of a scalar field coupled to the Einstein tensor in the background of regular black hole spacetimes. Our calculations show that the coupling constant η imprints itself on the wave equation of a scalar perturbation. We calculate the quasinormal modes of the scalar field coupled to the Einstein tensor in the regular black hole spacetimes by the third-order WKB method.

  10. Order-parameter model for unstable multilane traffic flow

    NASA Astrophysics Data System (ADS)

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow ↔ synchronized mode ↔ jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. It is principally due to the "many-body" effects in the car interaction, in contrast to such variables as the mean car density and velocity, which are actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers and, by taking into account the general properties of driver behavior, we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifests itself in the above-mentioned phase transitions and gives rise to the hysteresis in both of them. Besides, the jam is characterized by vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of car cluster self-formation under the "free flow ↔ synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  11. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.

  12. Thermodynamics and glassy phase transition of regular black holes

    NASA Astrophysics Data System (ADS)

    Javed, Wajiha; Yousaf, Z.; Akhtar, Zunaira

    2018-05-01

    This paper studies the thermodynamic properties of the phase transition for regular charged black holes (BHs). In this context, we consider two different forms of BH metrics, supplemented with exponential and logistic distribution functions, and investigate the phase transition through the grand canonical ensemble. After exploring the corresponding Ehrenfest equations, we find a second-order phase transition at the critical points. In order to check the critical behavior of regular BHs, we evaluate explicit relations for the critical temperature, pressure and volume, and draw graphs for constant values of the Smarr mass. We find that for the BH metric with an exponential configuration function, the phase-transition curves diverge near the critical points, while a glassy phase transition is observed for the Ayón-Beato-García-Bronnikov (ABGB) BH in n = 5 dimensions.

  13. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
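
    A sketch of the core idea: second-order in time dissipative dynamics drive the iterate toward a minimizer while the regularization parameter shrinks during the evolution. The damped integrator and the geometric parameter schedule below are simplified stand-ins for the paper's damped symplectic scheme and dynamical selection rule:

        import numpy as np

        # x'' + eta x' = -grad J_alpha(x),  J_alpha(x) = 0.5||Ax - y||^2 + 0.5 alpha ||x||^2
        rng = np.random.default_rng(1)
        A = rng.normal(size=(30, 50))                  # underdetermined toy problem
        x_true = np.zeros(50); x_true[[5, 20, 33]] = (1.0, -2.0, 1.5)
        y = A @ x_true + 0.01 * rng.normal(size=30)

        x, v = np.zeros(50), np.zeros(50)
        eta, dt, alpha = 1.0, 0.05, 1.0
        for _ in range(4000):
            grad = A.T @ (A @ x - y) + alpha * x
            v = ((1 - 0.5 * eta * dt) * v - dt * grad) / (1 + 0.5 * eta * dt)
            x = x + dt * v                             # damped leapfrog-style step
            alpha = max(1e-6, 0.999 * alpha)           # dynamically shrinking parameter
        print("data residual:", np.linalg.norm(A @ x - y))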

  14. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
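
    The ℓp/Schatten link admits a very short implementation: the proximal map of a Schatten norm is obtained by applying the corresponding vector ℓp proximal map to the singular values. A sketch for p = 1, where the ℓ1 prox is soft-thresholding (matrix and threshold are arbitrary):

        import numpy as np

        # prox of tau * ||.||_{S1} (nuclear norm): soft-threshold the singular values.
        def prox_schatten_p1(M, tau):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # l1 prox on spectrum

        H = np.random.default_rng(2).normal(size=(2, 2))  # e.g. a per-pixel Hessian
        print(prox_schatten_p1(H, tau=0.5))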

  15. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  16. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto ℓq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.

  17. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. The model not only captures the physics-based interrelationship between time-varying explanatory and response variables distributed in space, but also imposes spatial and temporal regularization to improve prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface from electrocardiogram (ECG) data collected by a distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms regularization models widely used in current practice, such as Tikhonov zero-order, Tikhonov first-order, and L1 first-order regularization methods.
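
    For reference, the two Tikhonov baselines named above differ only in the regularization operator L; a toy comparison on a generic smoothing forward model (the kernel, sizes, and weight are illustrative, not the ECG geometry):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 40
        A = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
        x_true = np.sin(np.linspace(0, 3 * np.pi, n))
        y = A @ x_true + 0.01 * rng.normal(size=n)

        def tikhonov(A, y, L, lam):
            # solve (A^T A + lam L^T L) x = A^T y
            return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

        D = (np.eye(n, k=1) - np.eye(n))[:-1]      # first-difference operator
        for name, L in (("zero-order", np.eye(n)), ("first-order", D)):
            x = tikhonov(A, y, L, lam=1e-2)
            err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            print(f"Tikhonov {name}: relative error = {err:.3f}")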

  18. Power-law regularities in human language

    NASA Astrophysics Data System (ADS)

    Mehri, Ali; Lashkari, Sahar Mohammadpour

    2016-11-01

    The complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations on selected works of seven famous writers. Zipf's law and Heaps' law, two well-known power-law behaviors, are established in human language and show a qualitative inverse relation with each other. Furthermore, the informational content associated with the ordering of words is measured using an entropic metric. We also calculate the fractal dimension of words in the text using the box-counting method. The fractal dimension of each word, a positive value less than or equal to one, characterizes its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. These power-law relations imply the existence of long-range correlations between word types in conveying a specific idea.
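
    Both power laws can be estimated directly from a token stream; a self-contained sketch on a synthetic Zipfian corpus (substitute real tokenized text for `words`):

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(4)
        # synthetic corpus drawn from a Zipf distribution (placeholder for real text)
        words = [f"w{k}" for k in rng.zipf(a=1.5, size=20000) if k < 5000]

        # Zipf's law: frequency ~ rank^(-s); fit the log-log rank-frequency slope
        freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
        ranks = np.arange(1, len(freqs) + 1)
        s = -np.polyfit(np.log(ranks), np.log(freqs), 1)[0]

        # Heaps' law: vocabulary size V(N) ~ N^beta as N tokens are read
        seen, growth = set(), []
        for w in words:
            seen.add(w)
            growth.append(len(seen))
        N = np.arange(1, len(words) + 1)
        beta = np.polyfit(np.log(N), np.log(growth), 1)[0]
        print(f"Zipf exponent ~ {s:.2f}, Heaps exponent ~ {beta:.2f}")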

  19. Regularizing the r-mode Problem for Nonbarotropic Relativistic Stars

    NASA Technical Reports Server (NTRS)

    Lockitch, Keith H.; Andersson, Nils; Watts, Anna L.

    2004-01-01

    We present results for r-modes of relativistic nonbarotropic stars. We show that the main differential equation, which is formally singular at lowest order in the slow-rotation expansion, can be regularized if one considers the initial value problem rather than the normal mode problem. However, a more physically motivated way to regularize the problem is to include higher order terms. This allows us to develop a practical approach for solving the problem, and we provide results that support earlier conclusions obtained for uniform density stars. In particular, we show that there will exist a single r-mode for each permissible combination of l and m. We discuss these results and provide some caveats regarding their usefulness for estimates of gravitational-radiation reaction timescales. The close connection between the seemingly singular relativistic r-mode problem and issues arising because of the presence of co-rotation points in differentially rotating stars is also clarified.

  20. Hierarchical collapse of regular islands via dissipation

    NASA Astrophysics Data System (ADS)

    Jousseph, C. A. C.; Abdulack, S. A.; Manchein, C.; Beims, M. W.

    2018-03-01

    In this work we investigate how regular islands localized in the mixed phase space of generic area-preserving Hamiltonian systems are affected by a small amount of dissipation. Mainly we search for a universality (hierarchy) in the convergence of higher-order resonances and their periods as dissipation increases. One very simple scenario is already known: when subjected to small dissipation, stable periodic points become sinks that attract almost all surrounding orbits, destroying the invariant curves which divide the phase space into chaotic and regular domains. However, performing numerical experiments with the paradigmatic Chirikov-Taylor standard map, we show that this presumably simple scenario can be rather complicated. The first nontrivial question is what happens to chaotic trajectories, since they can be attracted either by the sinks or by chaotic attractors, when the latter exist. We show that this depends very much on how the basins of attraction form as dissipation increases. In addition, we demonstrate that higher-order resonances are usually affected by small dissipation before the lower-order resonances of the conservative case. Nevertheless, this is not a generic behaviour. We show that a local hierarchical collapse of resonances, as dissipation increases, is related to the area of the islands from the conservative case surrounding the periodic orbits. All observed resonance destructions occur via bifurcation phenomena and are quantified here by determining the largest finite-time Lyapunov exponent.
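
    A sketch of the kind of experiment described, using one common dissipative extension of the Chirikov-Taylor standard map (parameter values are arbitrary; gamma = 1 recovers the area-preserving case):

        import numpy as np

        # p' = gamma p + K sin(x),  x' = x + p' (mod 2 pi); gamma < 1 adds the
        # dissipation that turns stable islands of the conservative map into sinks.
        def orbit(K=6.0, gamma=0.99, n=10000, x=0.5, p=0.5):
            for _ in range(n):
                p = gamma * p + K * np.sin(x)
                x = (x + p) % (2.0 * np.pi)
            return x, p

        # after the transient the orbit sits on a sink or a chaotic attractor
        print("final (x, p):", orbit())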

  1. Nonpolynomial Lagrangian approach to regular black holes

    NASA Astrophysics Data System (ADS)

    Colléaux, Aimeric; Chinaglia, Stefano; Zerbini, Sergio

    We present a review of Lagrangian models admitting spherically symmetric regular black holes (RBHs) and cosmological bounce solutions. Nonlinear electrodynamics, nonpolynomial gravity, and fluid approaches are explained in detail. They consist, respectively, of a gauge-invariant generalization of the Maxwell Lagrangian, of modifications of the Einstein-Hilbert action via nonpolynomial curvature invariants, and of the reconstruction of density profiles able to cure the central singularity of black holes. The nonpolynomial curvature invariants have the special property of being second order and polynomial in the metric field in spherically symmetric spacetimes. Along the way, other models and results are discussed, and some general properties that RBHs should satisfy are mentioned. A covariant Sakharov criterion for the absence of singularities in dynamical spherically symmetric spacetimes is also proposed and checked for some examples of such regular metric fields.

  2. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel, higher-order, FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of the singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000, and the obtained results are compared to results from the literature.
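
    The FFT-based Poisson solve at the heart of such a method is compact on a fully periodic mesh; the sketch below shows only that building block (the regularized Green's functions for mixed open/periodic domains mentioned above generalize it):

        import numpy as np

        # solve nabla^2 phi = -omega on a periodic 2-D grid via FFT
        n, Lbox = 128, 2.0 * np.pi
        x = np.linspace(0.0, Lbox, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        omega = np.exp(-4.0 * ((X - np.pi) ** 2 + (Y - np.pi) ** 2))  # vorticity blob

        k = 2.0 * np.pi * np.fft.fftfreq(n, d=Lbox / n)
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K2 = KX**2 + KY**2
        K2[0, 0] = 1.0                        # avoid dividing the mean mode by zero
        phi_hat = np.fft.fft2(omega) / K2     # -k^2 phi_hat = -omega_hat
        phi_hat[0, 0] = 0.0                   # fix the arbitrary mean of phi
        phi = np.real(np.fft.ifft2(phi_hat))
        print("max |phi|:", float(np.abs(phi).max()))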

  3. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... act of domestic violence occurred, the court may issue an order of protection. The order must meet the... committed the act of domestic violence to refrain from acts or threats of violence against the petitioner or...

  4. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... act of domestic violence occurred, the court may issue an order of protection. The order must meet the... committed the act of domestic violence to refrain from acts or threats of violence against the petitioner or...

  5. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
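
    A minimal sketch of the spectral-regularization idea: nuclear-norm penalized regression with matrix covariates, solved by proximal gradient with singular-value shrinkage (synthetic data and tuning values; the paper's scalable algorithm and degrees-of-freedom formula go well beyond this):

        import numpy as np

        # min_B 0.5 sum_i (y_i - <X_i, B>)^2 + lam ||B||_*  via proximal gradient
        rng = np.random.default_rng(5)
        q, n = 16, 300
        B_true = np.outer(rng.normal(size=q), rng.normal(size=q))   # rank-1 signal
        Xs = rng.normal(size=(n, q, q))
        y = np.einsum("ijk,jk->i", Xs, B_true) + 0.1 * rng.normal(size=n)

        step = 1.0 / np.linalg.norm(Xs.reshape(n, -1), 2) ** 2      # 1/Lipschitz
        B, lam = np.zeros((q, q)), 5.0
        for _ in range(200):
            r = np.einsum("ijk,jk->i", Xs, B) - y
            G = np.einsum("i,ijk->jk", r, Xs)                       # loss gradient
            U, s, Vt = np.linalg.svd(B - step * G, full_matrices=False)
            B = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt   # spectral shrinkage
        rank = int((np.linalg.svd(B, compute_uv=False) > 1e-3).sum())
        print("recovered rank:", rank)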

  6. Fermion-number violation in regularizations that preserve fermion-number symmetry

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Shamir, Yigal

    2003-01-01

    There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.

  7. Stochastic multi-reference perturbation theory with application to the linearized coupled cluster method

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali

    2017-01-01

    In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead beyond the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.

  8. Proper time regularization and the QCD chiral phase transition

    PubMed Central

    Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi

    2017-01-01

    We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two-flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way of describing how the effective coupling varies with temperature and quark chemical potential), a critical end point (CEP) may appear. These findings differ substantially from other NJL results that use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model. PMID:28401889

  9. New vision based navigation clue for a regular colonoscope's tip

    NASA Astrophysics Data System (ADS)

    Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy

    2009-02-01

    Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner needs to contend with both the tortuousness of the colon and the mastering of the colonoscope. He has to take into account the visual data acquired by the scope's tip and rely mostly on common sense and skill to steer the device in a fashion that promotes safe insertion of its shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. Firstly, we consider a patch of the inner colon depicted in a regular colonoscopy frame. Then we perform a sketchy 3D reconstruction of the corresponding 2D data. A suggested navigation trajectory is then derived on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Owing to its low computational cost, such a strategy allows for intraoperative configuration changes and thus reduces the effect of the colon's non-rigidity. Besides, it tends to provide a safe navigation trajectory through the whole colon, since the approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. To make the process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.

  10. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
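
    A caricature of the idea: stochastic gradients feed a BFGS-style inverse-Hessian estimate that is kept positive definite by a regularizing shift delta (the exact RES update differs in its details; the toy problem and all constants below are assumptions):

        import numpy as np

        rng = np.random.default_rng(6)
        d, N = 10, 1000
        A = rng.normal(size=(N, d))
        b = A @ rng.normal(size=d)                      # noiseless least-squares toy

        def stoch_grad(w, idx):                         # minibatch gradient
            return A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)

        w, H, delta = np.zeros(d), np.eye(d), 0.1
        for t in range(500):
            idx = rng.integers(0, N, size=8)
            g = stoch_grad(w, idx)
            w_new = w - 0.5 / (t + 10) * (H @ g)        # curvature-corrected step
            s = w_new - w
            yv = stoch_grad(w_new, idx) - g + delta * s # regularized secant pair
            if s @ yv > 1e-10:                          # keep H positive definite
                rho = 1.0 / (s @ yv)
                V = np.eye(d) - rho * np.outer(s, yv)
                H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse update
            w = w_new
        print("final loss:", 0.5 * float(np.mean((A @ w - b) ** 2)))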

  11. Block correlated second order perturbation theory with a generalized valence bond reference function.

    PubMed

    Xu, Enhua; Li, Shuhua

    2013-11-07

    The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a "multi-orbital" block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be obtained directly without iteration, just like the second-order Møller-Plesset perturbation method (MP2); both methods are size consistent. We have applied the GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and provides reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods.

  12. On singlet s-wave electron-hydrogen scattering.

    NASA Technical Reports Server (NTRS)

    Madan, R. N.

    1973-01-01

    Discussion of various zeroth-order approximations to s-wave scattering of electrons by hydrogen atoms below the first excitation threshold. The formalism previously developed by the author (1967, 1968) is applied to Feshbach operators to derive integro-differential equations, with the optical-potential set equal to zero, for the singlet and triplet cases. Phase shifts of s-wave scattering are computed in the zeroth-order approximation of the Feshbach operator method and in the static-exchange approximation. It is found that the convergence of numerical computations is faster in the former approximation than in the latter.

  13. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were filled with the fluorochrome Cy5.5, and for each inclusion optical data were acquired at 60 projections over 360 degrees. Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation, and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those of classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
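
    The flavor of the scheme in one dimension: a classical Richardson-Lucy update followed by a gentle entropic pull toward a Gaussian-smoothed "floating default" of the current iterate. The convolution forward model, mixing weights, and smoothing width below are illustrative assumptions, not the paper's optical tomography operator:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(7)
        n = 100
        x_true = np.zeros(n); x_true[30:35] = 5.0; x_true[60:70] = 2.0
        psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
        fwd = lambda v: np.convolve(v, psf, mode="same")
        data = rng.poisson(fwd(x_true) * 50.0) / 50.0     # Poisson-noisy measurement

        x = np.ones(n)                                    # constant initial condition
        for _ in range(200):
            ratio = data / np.maximum(fwd(x), 1e-12)
            x *= np.convolve(ratio, psf[::-1], mode="same")   # RL/MLEM step
            prior = gaussian_filter(x, sigma=2.0)             # floating default
            x = x**0.9 * prior**0.1                           # entropic pull (assumed weights)
        print("peak of reconstruction:", float(x.max()))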

  14. 75 FR 7480 - Farm Credit Administration Board; Sunshine Act; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-19

    ... weather in the Washington DC metropolitan area. Date and Time: The regular meeting of the Board will now... INFORMATION: This meeting of the Board will be open to the public (limited space available). In order to...

  15. Genus Ranges of 4-Regular Rigid Vertex Graphs

    PubMed Central

    Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin

    2016-01-01

    A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is a set of genera values over all orientable surfaces into which a graph is embedded cellularly, and the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] for all a < b ≤ n, and singletons [h, h] for some h ≤ n, are realized as genus ranges. For graphs with 2n − 1 vertices (n ≥ 1), we prove that all intervals [a, b] for all a < b ≤ n except [0, n], and [h, h] for some h ≤ n, are realized as genus ranges. We also provide constructions of graphs that realize these ranges. PMID:27807395

  16. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  17. Global Regularity for the Fractional Euler Alignment System

    NASA Astrophysics Data System (ADS)

    Do, Tam; Kiselev, Alexander; Ryzhik, Lenya; Tan, Changhui

    2018-04-01

    We study a pressureless Euler system with a non-linear density-dependent alignment term, originating in the Cucker-Smale swarming models. The alignment term is dissipative in the sense that it tends to equilibrate the velocities. Its density dependence is natural: the alignment rate increases in areas of high density due to species discomfort. The diffusive term has the order of a fractional Laplacian (-∂_xx)^(α/2), α ∈ (0, 1). The corresponding Burgers equation with a linear dissipation of this type develops shocks in finite time. We show that the alignment nonlinearity enhances the dissipation, and the solutions are globally regular for all α ∈ (0, 1). To the best of our knowledge, this is the first example of such regularization due to the non-local nonlinear modulation of dissipation.
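
    In standard notation, the one-dimensional system studied is commonly written as follows (our transcription; φ is the singular alignment influence kernel whose tail produces the fractional dissipation):

        \partial_t \rho + \partial_x (\rho u) = 0,
        \partial_t u + u \, \partial_x u = \int_{\mathbb{R}} \phi(x - y) \, \big( u(y) - u(x) \big) \, \rho(y) \, dy,
        \qquad \phi(z) \sim |z|^{-(1 + \alpha)}.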

  18. A comparative study of Laplacians and Schroedinger- Lichnerowicz-Weitzenboeck identities in Riemannian and antisymplectic geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batalin, Igor A.; Bering, Klaus

    2009-07-15

    We introduce an antisymplectic Dirac operator and antisymplectic gamma matrices. We explore similarities between, on one hand, the Schroedinger-Lichnerowicz formula for spinor bundles in Riemannian spin geometry, which contains a zeroth-order term proportional to the Levi-Civita scalar curvature, and, on the other hand, the nilpotent, Grassmann-odd, second-order Δ operator in antisymplectic geometry, which, in general, has a zeroth-order term proportional to the odd scalar curvature of an arbitrary antisymplectic and torsion-free connection that is compatible with the measure density. Finally, we discuss the close relationship with the two-loop scalar curvature term in the quantum Hamiltonian for a particle in a curved Riemannian space.

  19. Anomalous resonances of an optical microcavity with a hyperbolic metamaterial core

    NASA Astrophysics Data System (ADS)

    Travkin, Evgenij; Kiel, Thomas; Sadofev, Sergey; Busch, Kurt; Benson, Oliver; Kalusniak, Sascha

    2018-05-01

    We embed a hyperbolic metamaterial based on stacked layer pairs of epitaxially grown ZnO/ZnO:Ga in a monolithic optical microcavity, and we investigate the arising unique resonant effects experimentally and theoretically. Unlike traditional metals, the semiconductor-based approach allows us to utilize all three permittivity regions of the hyperbolic metamaterial in the near-infrared spectral range. This configuration gives rise to modes of identical orders appearing at different frequencies, a zeroth-order resonance in an all-positive permittivity region, and a continuum of high-order modes. In addition, an unusual lower cutoff frequency is introduced to the resonator mode spectrum. The observed effects expand the possibilities for customization of optical resonators; in particular, the zeroth-order and high-order modes hold strong potential for the realization of deeply subwavelength cavity sizes.

  20. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., is better characterized by an extremely heavy-tailed (non-convex) prior than by a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  1. Morphing Continuum Theory: A First Order Approximation to the Balance Laws

    NASA Astrophysics Data System (ADS)

    Wonnell, Louis; Cheikh, Mohamad Ibrahim; Chen, James

    2017-11-01

    Morphing Continuum Theory (MCT) is constructed under the framework of Rational Continuum Mechanics (RCM) for fluid flows with inner structure. This multiscale theory has been successfully employed to model turbulent flows. The framework of RCM ensures the mathematical rigor of MCT but introduces new material constants related to the inner structure. The physical meanings of these material constants have yet to be determined. Here, a linear deviation from the zeroth-order Boltzmann-Curtiss distribution function is derived. When applied to the Boltzmann-Curtiss equation, a first-order approximation of the MCT governing equations is obtained. The integral equations are then related to the appropriate material constants found in the heat flux, Cauchy stress, and moment stress terms of the governing equations. These new material properties associated with the inner structure of the fluid are compared with the corresponding integrals, and a clearer physical interpretation of these coefficients emerges. The physical meanings of these material properties are determined by analyzing previous results obtained from numerical simulations of MCT for compressible and incompressible flows. The implications for the physics underlying the MCT governing equations are also discussed. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-17-1-0154.

  2. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.

  3. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.

  4. Statistical Regularities Attract Attention when Task-Relevant.

    PubMed

    Alamia, Andrea; Zénon, Alexandre

    2016-01-01

    Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
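
    The gaze-based attention index described above, a Mahalanobis distance between the eye state and each stimulus state in position-velocity-acceleration space, is straightforward to compute; the pooled covariance estimate below is our simplifying assumption:

        import numpy as np

        def mahalanobis_track(eye, stim):
            # eye, stim: (T, 6) arrays of x/y position, velocity, acceleration
            diff = eye - stim
            cov = np.cov(diff, rowvar=False) + 1e-9 * np.eye(diff.shape[1])
            icov = np.linalg.inv(cov)
            return np.sqrt(np.einsum("ti,ij,tj->t", diff, icov, diff))

        rng = np.random.default_rng(8)
        stim = rng.normal(size=(500, 6))
        eye = stim + 0.3 * rng.normal(size=(500, 6))   # gaze tracking this stimulus
        print("mean distance to tracked dot:", float(mahalanobis_track(eye, stim).mean()))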

  5. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.

  6. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges would be gradually identified and given small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without edge preserving penalty are also presented for comparison purpose. The experimental results illustrate that both TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076

  7. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.

  8. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of comments by Prof. James Bremer of the UC Davis Mathematics Department, who discovered errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
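
    For moderate degree and order, these functions can be cross-checked against SciPy's built-in routines (which, unlike the announced Fortran 90 code, are not designed for very large l and m):

        from scipy.special import lpmn, lqmn

        x = 0.3                      # a point on the cut, -1 < x < 1
        Pmn, _ = lpmn(2, 3, x)       # regular P_l^m(x), 0 <= m <= 2, 0 <= l <= 3
        Qmn, _ = lqmn(2, 3, x)       # irregular Q_l^m(x), same index ranges
        print("P_3^2(0.3) =", Pmn[2, 3])   # arrays are indexed [m, l]
        print("Q_3^2(0.3) =", Qmn[2, 3])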

  9. Generalized Second-Order Partial Derivatives of 1/r

    ERIC Educational Resources Information Center

    Hnizdo, V.

    2011-01-01

    The generalized second-order partial derivatives of 1/r, where r is the radial distance in three dimensions (3D), are obtained using a result of the potential theory of classical analysis. Some non-spherical-regularization alternatives to the standard spherical-regularization expression for the derivatives are derived. The utility of a…

  10. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
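
    The iteration itself is compact; a sketch on a toy nonlinear system, with a geometrically decaying regularization sequence and a fixed iteration count standing in for the heuristic stopping rule that is the paper's actual contribution:

        import numpy as np

        # x_{k+1} = x_k + (J^T J + a_k I)^{-1} [J^T (y - F(x_k)) + a_k (x0 - x_k)]
        def F(x):
            return np.array([x[0]**2 + x[1], x[0] * x[1], x[1]**2])

        def J(x):
            return np.array([[2 * x[0], 1.0], [x[1], x[0]], [0.0, 2 * x[1]]])

        x_true = np.array([1.0, 2.0])
        y = F(x_true) + 0.001 * np.random.default_rng(9).normal(size=3)

        x0 = np.array([0.5, 0.5])
        x, a = x0.copy(), 1.0
        for k in range(15):
            Jk = J(x)
            lhs = Jk.T @ Jk + a * np.eye(2)
            rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
            x = x + np.linalg.solve(lhs, rhs)
            a *= 0.5                      # a_k = a_0 q^k with 0 < q < 1
        print("estimate:", x)             # should approach (1, 2)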

  11. Charged plate in asymmetric electrolytes: One-loop renormalization of surface charge density and Debye length due to ionic correlations.

    PubMed

    Ding, Mingnan; Lu, Bing-Sui; Xing, Xiangjun

    2016-10-01

    Self-consistent field theory (SCFT) is used to study the mean potential near a charged plate inside an m:-n electrolyte. A perturbation series is developed in terms of g = 4πκb, where b and 1/κ are the Bjerrum length and the bare Debye length, respectively. To the zeroth order, we obtain the nonlinear Poisson-Boltzmann theory. For asymmetric electrolytes (m ≠ n), the first-order (one-loop) correction to the mean potential contains a secular term, which indicates the breakdown of the regular perturbation method. Using a renormalization group transformation, we remove the secular term and obtain a globally well-behaved one-loop approximation with a renormalized Debye length and a renormalized surface charge density. Furthermore, we find that if the counterions are multivalent, the surface charge density is renormalized substantially downwards and may undergo a change of sign if the bare surface charge density is sufficiently large. Our results agree with large-scale MC simulations even when the density of electrolytes is relatively high.

  12. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that Tikhonov regularization produces spherical harmonic solutions from GRACE that show very little residual striping while capturing all of the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  13. 1 / n Expansion for the Number of Matchings on Regular Graphs and Monomer-Dimer Entropy

    NASA Astrophysics Data System (ADS)

    Pernici, Mario

    2017-08-01

    Using a 1/n expansion, that is, an expansion in descending powers of n, for the number of matchings in regular graphs with 2n vertices, we study the monomer-dimer entropy for two classes of graphs. We study the difference between the extensive monomer-dimer entropy of a random r-regular graph G (bipartite or not) with 2n vertices and the average extensive entropy of r-regular graphs with 2n vertices, in the limit n → ∞. We find a series expansion for it in the numbers of cycles; with probability 1 it converges for dimer density p < 1 and, for G bipartite, it diverges as |ln(1-p)| for p → 1. In the case of regular lattices, we similarly expand the difference between the specific monomer-dimer entropy on a lattice and the one on the Bethe lattice; we write down its Taylor expansion in powers of p through order 10, expressed in terms of the number of totally reducible walks which are not tree-like. We prove through order 6 that its expansion coefficients in powers of p are non-negative.

  14. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.

  15. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.

  16. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  17. Reaction kinetics and critical phenomena: iodination of acetone in isobutyric acid + water near the consolute point.

    PubMed

    Hu, Baichuan; Baird, James K

    2010-01-14

    The rate of iodination of acetone has been measured as a function of temperature in the binary solvent isobutyric acid (IBA) + water near the upper consolute point. The reaction mixture was prepared by the addition of acetone, iodine, and potassium iodide to IBA + water at its critical composition of 38.8 mass % IBA. The value of the critical temperature determined immediately after mixing was 25.43 degrees C. Aliquots were extracted from the mixture at regular intervals in order to follow the time course of the reaction. After dilution of the aliquot with water to quench the reaction, the concentration of triiodide ion was determined by the measurement of the optical density at a wavelength of 565 nm. These measurements showed that the kinetics were zeroth order. When at the end of 24 h the reaction had come to equilibrium, the critical temperature was determined again and found to be 24.83 degrees C. An Arrhenius plot of the temperature dependence of the observed rate constant, k(obs), was linear over the temperature range 27.00-38.00 degrees C, but between 25.43 and 27.00 degrees C, the values of k(obs) fell below the extrapolation of the Arrhenius line. This behavior is evidence in support of critical slowing down. Our experimental method and results are significant in three ways: (1) In contrast to in situ measurements of optical density, the determination of the optical density of diluted aliquots avoided any interference from critical opalescence. (2) The measured reaction rate exhibited critical slowing down. (3) The rate law was pseudo zeroth order both inside and outside the critical region, indicating that the reaction mechanism was unaffected by the presence of the critical point.
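
    The analysis described reduces to two linear fits: the zeroth-order rate constant is the (negative) slope of concentration versus time, and the activation energy follows from an Arrhenius fit of ln k against 1/T. The numbers below are synthetic placeholders, not the measured values:

        import numpy as np

        # zeroth-order kinetics: [I3-](t) = c0 - k t, so k = -slope
        t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])               # min
        c = 1e-3 * np.array([1.00, 0.91, 0.82, 0.72, 0.63])       # mol/L (hypothetical)
        k_obs = -np.polyfit(t, c, 1)[0]
        print(f"zeroth-order rate constant: {k_obs:.2e} mol L^-1 min^-1")

        # Arrhenius: ln k = ln A - Ea/(R T), so the slope of ln k vs 1/T is -Ea/R
        T = np.array([300.15, 304.15, 308.15, 311.15])            # K (hypothetical)
        k = np.array([1.2e-5, 1.9e-5, 2.9e-5, 4.1e-5])            # hypothetical k_obs
        slope = np.polyfit(1.0 / T, np.log(k), 1)[0]
        print(f"activation energy: {-slope * 8.314 / 1000.0:.1f} kJ/mol")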

  18. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
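
    The "ideal kernel" built from labels (1 where two points share a label, 0 otherwise) is the usual way label information enters kernel learning, and a correction linear in the kernel matrix can pull a base kernel toward it. A small numpy sketch of that general idea; the mixing rule and gamma are hypothetical stand-ins, not the paper's exact formulation:

      import numpy as np

      def label_regularized_kernel(K, y, gamma=0.5):
          # Build the ideal kernel Y Y^T (1 when labels agree, 0 otherwise)
          # and mix it into a base kernel; the correction is linear in the
          # kernel matrix, echoing the "ideal regularization" idea.
          classes = np.unique(y)
          Y = (y[:, None] == classes[None, :]).astype(float)
          return K + gamma * (Y @ Y.T)

      X = np.random.default_rng(1).standard_normal((6, 3))
      K = X @ X.T                        # toy linear kernel
      y = np.array([0, 0, 1, 1, 2, 2])   # toy labels
      K_reg = label_regularized_kernel(K, y)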

  19. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-06-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  1. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    NASA Astrophysics Data System (ADS)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noises, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion are investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstruct smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.
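
    The contrast the abstract draws, Tikhonov smoothing versus total variation for steep profiles, can be reproduced on a one-dimensional toy inversion. A sketch under invented settings (Gaussian blur forward model, crude subgradient descent for the TV term); none of it is the paper's actual setup:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100
      x_true = np.zeros(n); x_true[40:60] = 1.0        # steep, sigmoidal-like profile
      i = np.arange(n)
      A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
      A /= A.sum(axis=1, keepdims=True)                # toy smoothing forward model
      y = A @ x_true + 0.01 * rng.standard_normal(n)

      # First-order Tikhonov: min ||Ax - y||^2 + lam ||Dx||^2 (closed form);
      # tends to round off steep edges.
      D = np.diff(np.eye(n), axis=0)
      x_tik = np.linalg.solve(A.T @ A + 1e-2 * (D.T @ D), A.T @ y)

      # Total variation: min ||Ax - y||^2 + mu ||Dx||_1, here by crude
      # subgradient descent; preserves steep slopes much better.
      x_tv, mu, step = y.copy(), 2e-3, 0.2
      for _ in range(3000):
          x_tv -= step * (2 * A.T @ (A @ x_tv - y) + mu * (D.T @ np.sign(D @ x_tv)))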

  2. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
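
    Mechanically, a condition-number-constrained estimator reduces to truncating the sample eigenvalues into an interval [tau, kappa * tau]. In the paper tau comes out of the maximum likelihood problem; the plug-in choice below is a hypothetical stand-in used only to make the sketch self-contained:

      import numpy as np

      def condreg(S, kappa_max):
          # Clip the eigenvalues of the sample covariance S into
          # [tau, kappa_max * tau], guaranteeing cond(Sigma) <= kappa_max.
          vals, vecs = np.linalg.eigh(S)
          tau = vals.mean() / np.sqrt(kappa_max)   # crude plug-in, not the ML choice
          clipped = np.clip(vals, tau, kappa_max * tau)
          return (vecs * clipped) @ vecs.T

      rng = np.random.default_rng(0)
      X = rng.standard_normal((20, 50))            # "large p small n": n=20, p=50
      S = np.cov(X, rowvar=False)                  # singular, condition number infinite
      Sigma_hat = condreg(S, kappa_max=30.0)
      print(np.linalg.cond(Sigma_hat))             # <= 30, and invertible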

  3. Regular scattering patterns from near-cloaking devices and their implications for invisibility cloaking

    NASA Astrophysics Data System (ADS)

    Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng

    2013-04-01

    In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. The ‘blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure, at the price of yielding only near-cloaks. Work in the literature develops various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak arbitrary contents. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattering wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, which is an intrinsic defect due to the ‘blow-up’ construction, and this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and thereby achieve a near-cloak of an arbitrary order of accuracy.

  4. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    NASA Astrophysics Data System (ADS)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5nm reduction in PV band.

  5. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities.

    PubMed

    Bosma, Evelyn; Heeringa, Wilbert; Hoekstra, Eric; Versloot, Arjen; Blom, Elma

    2017-01-01

    Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ɔ:t] and Dutch -oud [ʌut], as in the cognate pairs kâld [kɔ:t] - koud [kʌut] 'cold' and wâld [wɔ:t] - woud [wʌut] 'forest'. Within Bybee's (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children (n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  6. Between disorder and order: A case study of power law

    NASA Astrophysics Data System (ADS)

    Cao, Yong; Zhao, Youjie; Yue, Xiaoguang; Xiong, Fei; Sun, Yongke; He, Xin; Wang, Lichao

    2016-08-01

    Power law is an important feature of phenomena with long-memory behavior. Zipf famously found a power law in the distribution of word frequencies. In physics, the terms order and disorder originate in thermodynamics and statistical physics, and much research has focused on the self-organization of the disordered ingredients of simple physical systems. What drives the disorder-order transition is an interesting question. We devise an experiment-based method using random symbolic sequences to search for regular patterns between disorder and order. The experimental results reveal that a power law is indeed an important regularity in the transition from disorder to order. A preliminary study and analysis of these results has been carried out to explain the reasons behind it.
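
    A rank-frequency check for a power law takes only a few lines: count symbol frequencies, sort them, and fit log(frequency) against log(rank). The sketch below uses a synthetic heavy-tailed source in place of the paper's experimental sequences; all values are illustrative.

      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(0)
      symbols = rng.zipf(2.0, size=100_000)   # toy random symbolic sequence
      freqs = np.array(sorted(Counter(symbols).values(), reverse=True), float)
      ranks = np.arange(1, len(freqs) + 1)
      slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
      print(f"fitted rank-frequency exponent: {slope:.2f}")  # near -1, Zipf-like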

  7. An Iwatsubo-based solution for labyrinth seals - comparison with experimental results

    NASA Technical Reports Server (NTRS)

    Childs, D. W.; Scharrer, J. K.

    1984-01-01

    The basic equations are derived for compressible flow in a labyrinth seal. The flow is assumed to be completely turbulent in the circumferential direction where the friction factor is determined by the Blasius relation. Linearized zeroth and first-order perturbation equations are developed for small motion about a centered position by an expansion in the eccentricity ratio. The zeroth-order pressure distribution is found by satisfying the leakage equation while the circumferential velocity distribution is determined by satisfying the momentum equation. The first-order equations are solved by a separation of variables solution. Integration of the resultant pressure distribution along and around the seal defines the reaction force developed by the seal and the corresponding dynamic coefficients. The results of this analysis are compared to published test results.

  8. Regular Pentagons and the Fibonacci Sequence.

    ERIC Educational Resources Information Center

    French, Doug

    1989-01-01

    Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
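
    The connection mentioned in the abstract is easy to verify numerically: the ratio of consecutive terms of any Fibonacci-type sequence, including the Lucas sequence, converges to the golden ratio phi, the same constant that gives the diagonal-to-side ratio of a regular pentagon. A short self-contained check:

      # Ratios of consecutive terms converge to phi = (1 + sqrt(5)) / 2 for any
      # Fibonacci-type recurrence, regardless of the two starting values.
      def ratio_limit(a, b, n=30):
          for _ in range(n):
              a, b = b, a + b
          return b / a

      print(ratio_limit(1, 1))    # Fibonacci: 1, 1, 2, 3, 5, ...
      print(ratio_limit(2, 1))    # Lucas:     2, 1, 3, 4, 7, ...
      print((1 + 5 ** 0.5) / 2)   # phi = 1.618033988...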

  9. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  10. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  11. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  12. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  13. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme over alternative numerical iterative optimization schemes because of the unreliable convergence properties of those iterative schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and in parallel. Even if the approximate solution is not nearly optimal, the zeroth-order solution always provides a path which satisfies the terminal constraints. Results of two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.

  14. B0 concomitant field compensation for MRI systems employing asymmetric transverse gradient coils.

    PubMed

    Weavers, Paul T; Tao, Shengzhen; Trzasko, Joshua D; Frigo, Louis M; Shu, Yunhong; Frick, Matthew A; Lee, Seung-Kyun; Foo, Thomas K-F; Bernstein, Matt A

    2018-03-01

    Imaging gradients result in the generation of concomitant fields, or Maxwell fields, which are of increasing importance at higher gradient amplitudes. These time-varying fields cause additional phase accumulation, which must be compensated for to avoid image artifacts. In the case of gradient systems employing a symmetric design, the concomitant fields are well described with second-order spatial variation. Gradient systems employing an asymmetric design additionally generate concomitant fields with global (zeroth-order or B0) and linear (first-order) spatial dependence. This work demonstrates a general solution to eliminate the zeroth-order concomitant field by applying the correct B0 frequency shift in real time to counteract the concomitant fields. Results are demonstrated for phase contrast, spiral, echo-planar imaging (EPI), and fast spin-echo imaging. A global phase offset is reduced in the phase-contrast exam, and blurring is virtually eliminated in spiral images. The bulk image shift in the phase-encode direction is compensated for in EPI, whereas signal loss, ghosting, and blurring are corrected in the fast spin-echo images. A user-transparent method to compensate the zeroth-order concomitant field term by center frequency shifting is proposed and implemented. This solution allows all the existing pulse sequences-both product and research-to be retained without any modifications. Magn Reson Med 79:1538-1544, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming technique to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.

  16. Regularization of instabilities in gravity theories

    NASA Astrophysics Data System (ADS)

    Ramazanoǧlu, Fethi M.

    2018-01-01

    We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomena of scalar-tensor theories, which feature a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.

  17. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  18. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  19. 29 CFR 779.18 - Regular rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Regular rate. 779.18 Section 779.18 Labor Regulations... OR SERVICES General Some Basic Definitions § 779.18 Regular rate. As explained in the interpretative... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  20. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to first order, using the zeroth-order wavefunctions. This method can avoid the costly iterative procedure of the self-consistent reaction field calculations. The first-order PCM SAC-CI calculations well reproduce the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers the fixed ground-state reaction field for the excited-state calculations, deviate from the results of the iterative method by about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to studying the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of metal-to-ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI well reproduces the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
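
    In schematic notation (generic symbols, not the paper's own), the perturbative shortcut is to evaluate the solvent term for state i with the zeroth-order wavefunction and its frozen reaction field:

      E_i \approx E_i^{(0)} + \langle \Psi_i^{(0)} | \hat{V}_{\mathrm{solv}}[\rho_i^{(0)}] | \Psi_i^{(0)} \rangle

    Because only zeroth-order quantities enter the solvent operator, the costly self-consistent reaction field iteration for each excited state is avoided, at the price of neglecting second- and higher-order relaxation.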

  1. Thermohydrodynamic Analysis of Cryogenic Liquid Turbulent Flow Fluid Film Bearings

    NASA Technical Reports Server (NTRS)

    SanAndres, Luis

    1996-01-01

    Computational programs developed for the thermal analysis of tilting and flexure-pad hybrid bearings, and the unsteady flow and transient response of a point mass rotor supported on fluid film bearings are described. The motion of a cryogenic liquid on the thin film annular region of a fluid film bearing is described by a set of mass and momentum conservation, and energy transport equations for the turbulent bulk-flow velocities and pressure, and accompanied by thermophysical state equations for evaluation of the fluid material properties. Zeroth-order equations describe the fluid flow field for a journal static equilibrium position, while first-order (linear) equations govern the fluid flow for small amplitude-journal center translational motions. Solution to the zeroth-order flow field equations provides the bearing flow rate, load capacity, drag torque and temperature rise. Solution to the first-order equations determines the rotordynamic force coefficients due to journal radial motions.

  2. Twistor interpretation of slice regular functions

    NASA Astrophysics Data System (ADS)

    Altavilla, Amedeo

    2018-01-01

    Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ { ∞ } (see Gentili et al., 2014). In this paper we show that the same result is true if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover, we find that if a surface S ⊂ CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition, we extend and further explore the so-called twistor transform, a curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines on which its twistor lift lies. With the explicit expression of the twistor lift and of the twistor transform of a slice regular function we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4), showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.

  3. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
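
    The core of such a framework is the "mapper" that sits between the regular inference algorithm and the real component, abstracting concrete parameterized messages into a small alphabet. A minimal illustration of that idea in Python; the message names and abstraction rule are invented for the sketch, not taken from the paper or from LearnLib:

      # A stateful mapper: it remembers which parameter values have been seen,
      # so a concrete message abstracts to FRESH or KNOWN, keeping the
      # learner's alphabet finite even though parameters range over integers.
      class Mapper:
          def __init__(self):
              self.seen = set()

          def abstract(self, msg_type, param):
              tag = "KNOWN" if param in self.seen else "FRESH"
              self.seen.add(param)
              return f"{msg_type}({tag})"

      m = Mapper()
      print(m.abstract("INVITE", 42))   # INVITE(FRESH)
      print(m.abstract("ACK", 42))      # ACK(KNOWN)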

  4. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    PubMed

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with the combination of synaptic input correlation and noise intensity is finally attained after the processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.

  5. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
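
    The alternating structure described above (solve for ranking scores given graph weights, then re-fit the weights) can be sketched compactly. The update rules below are plausible stand-ins chosen to keep the example runnable, not the exact objective of MultiG-Rank:

      import numpy as np

      def multi_graph_rank(Ls, y, alpha=0.9, beta=1.0, iters=20):
          # Ls: candidate graph Laplacians; y: query relevance vector.
          n = Ls[0].shape[0]
          w = np.ones(len(Ls)) / len(Ls)
          f = y.astype(float)
          for _ in range(iters):
              L = sum(wi * Li for wi, Li in zip(w, Ls))
              # score step: min_f f^T L f + alpha ||f - y||^2 (closed form)
              f = np.linalg.solve(L + alpha * np.eye(n), alpha * y)
              # weight step: favor graphs on which f is smooth, stay on simplex
              smooth = np.array([f @ Li @ f for Li in Ls])
              w = np.exp(-beta * smooth)
              w /= w.sum()
          return f, w

      rng = np.random.default_rng(0)
      def rand_laplacian(n):
          A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
          A += A.T
          return np.diag(A.sum(1)) - A
      f, w = multi_graph_rank([rand_laplacian(5), rand_laplacian(5)],
                              np.array([1.0, 0, 0, 0, 0]))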

  6. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles(L-MSDA), localized minimizing standard deviation of segment magnitudes(L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child face (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities: L-MSDA and L-MSDSM are combined together, they can produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with identified weightings has a potential to be applied in other optimization based 3D recognition methods to improve their efficacy and robustness.

  7. Effects of amphipathic profile regularization on structural order and interaction with membrane models of two highly cationic branched peptides with β-sheet propensity.

    PubMed

    Serra, Ilaria; Casu, Mariano; Ceccarelli, Matteo; Gameiro, Paula; Rinaldi, Andrea C; Scorciapino, Mariano Andrea

    2018-07-01

    Antimicrobial peptides have attracted increasing interest in recent decades due to the rising concern over multi-drug resistant pathogens. Dendrimeric peptides are branched molecules with multiple copies of one peptide functional unit bound to the central core. Compared to linear analogues, they usually show improved activity and lower susceptibility to proteases. Knowledge of the structure-function relationship is fundamental to tailor their properties. This work is focused on SB056, the smallest example of a dendrimeric peptide, whose amino acid sequence is WKKIRVRLSA. Two copies are bound to the α- and ε-nitrogen of one lysine core. An 8-aminooctanamide was added at the C-terminus to improve membrane affinity. Its propensity for β-type structures is also interesting, since helical peptides have already been thoroughly studied. Moreover, SB056 maintains activity at physiological osmolarity, a typical limitation of natural peptides. An optimized analogue with improved performance was designed, β-SB056, which differs only in the relative position of the first two residues (KWKIRVRLSA). This produced remarkable differences. Structural order and aggregation behavior were characterized using complementary techniques and membrane models with different negative charge. Infrared spectroscopy showed different propensities for ordered β-sheets. Lipid monolayers' surface pressure was measured to estimate the area per peptide and the ability to perturb lipid packing. Fluorescence spectroscopy was applied to compare peptide insertion into the lipid bilayer. Such a small change in the primary structure produced fundamental differences in their aggregation behavior. A regular amphipathic primary structure was responsible for ordered β-sheets in a charge-independent fashion, in contrast to the unordered aggregates formed by the former analogue.

  8. Fokker-Planck electron diffusion caused by an obliquely propagating electromagnetic wave packet of narrow bandwidth

    NASA Technical Reports Server (NTRS)

    Hizanidis, Kyriakos

    1989-01-01

    The relativistic motion of electrons in an intense electromagnetic wave packet propagating obliquely to a uniform magnetic field is analytically studied on the basis of the Fokker-Planck-Kolmogorov (FPK) approach. The wavepacket consists of circularly polarized electron-cyclotron waves. The dynamical system in question is shown to be reducible to one with three degrees of freedom. Within the framework of the Hamiltonian analysis the nonlinear diffusion tensor is derived, and it is shown that this tensor can be separated into zeroth-, first-, and second-order parts with respect to the relative bandwidth. The zeroth-order part describes diffusive acceleration along lines of constant unperturbed Hamiltonian. The second-order part, which corresponds to the longest time scale, describes diffusion across those lines. A possible transport theory is outlined on the basis of this separation of the time scales.

  9. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both of these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  10. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
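
    A concrete instance of the method: for a convolution equation (h * x)(t) = y(t), the Tikhonov solution with a two-parameter power-type stabilizer can be written in closed form in the Fourier domain. The stabilizer family alpha|w|^(2p) below is a schematic stand-in for the paper's two-parameter functions, and all signals are toy data:

      import numpy as np

      def tikhonov_deconvolve(y, h, alpha, p):
          # Closed-form Fourier solution:
          # X(w) = conj(H(w)) Y(w) / (|H(w)|^2 + alpha |w|^(2p))
          n = len(y)
          H, Y = np.fft.fft(h, n), np.fft.fft(y)
          w = 2 * np.pi * np.fft.fftfreq(n)
          X = np.conj(H) * Y / (np.abs(H) ** 2 + alpha * np.abs(w) ** (2 * p))
          return np.fft.ifft(X).real

      rng = np.random.default_rng(0)
      x = np.zeros(256); x[100:120] = 1.0                  # toy transmitted signal
      h = np.exp(-np.arange(256) / 8.0); h /= h.sum()      # toy channel response
      y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real  # circular convolution
      y += 0.01 * rng.standard_normal(256)                 # measurement noise
      x_hat = tikhonov_deconvolve(y, h, alpha=1e-3, p=1)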

  11. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
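
    The loop structure (ordered-subsets data updates interleaved with total variation regularization steps) can be shown on a generic linear model. This is only a schematic: a simple gradient-type subset update stands in for the OSC transmission update, and the cone-beam projector and GPU kernels are omitted entirely:

      import numpy as np

      def os_tv(A, y, n_iter=10, n_subsets=4, tv_steps=5, tv_lam=0.02):
          m, n = A.shape
          x = np.zeros(n)
          subsets = np.array_split(np.random.default_rng(0).permutation(m),
                                   n_subsets)
          for _ in range(n_iter):
              for idx in subsets:                 # ordered-subsets data updates
                  Ai, yi = A[idx], y[idx]
                  x += Ai.T @ (yi - Ai @ x) / np.linalg.norm(Ai, 2) ** 2
              for _ in range(tv_steps):           # total variation descent steps
                  g = np.sign(np.diff(x))
                  x[:-1] += tv_lam * g            # together these two lines take
                  x[1:] -= tv_lam * g             # a subgradient step on TV(x)
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 40))           # toy few-view system matrix
      x_true = np.zeros(40); x_true[10:20] = 1.0  # piecewise-constant object
      x_rec = os_tv(A, A @ x_true)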

  12. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  13. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  14. 75 FR 53966 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  15. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    PubMed

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  16. Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations

    NASA Astrophysics Data System (ADS)

    Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian

    2018-04-01

    Sparsity-regularized synthetic aperture radar (SAR) imaging frameworks have shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer exploits the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe the different semantic contents of the image, this type of regularizer cannot distinguish between the target of interest and unconcerned background clutter. As a consequence, features belonging to the target and to the clutter are affected simultaneously in the generated image, without regard to their underlying semantic labels. To address this problem, we propose a novel semantic-information-guided framework for target-oriented SAR image formation, which aims at enhancing the scatterers of the target of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target-scene SAR image. In order to infer the semantic label for each pixel in an unsupervised way, we moreover induce a novel high-level prior-driven regularizer and some semantic causal rules from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
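
    The computational core named at the end, iteratively reweighted l1 minimization, is standard and compact: repeatedly solve a weighted l1 problem and set each weight to the inverse magnitude of the current coefficient. The sketch below uses ISTA as the inner solver and the standard reweighting rule in place of the paper's semantic, label-driven weights:

      import numpy as np

      def irl1(A, y, lam=0.1, outer=10, inner=200, eps=1e-3):
          # Outer loop: reweighting. Inner loop: ISTA for
          # min_x ||Ax - y||^2 + lam * sum_i w_i |x_i|.
          m, n = A.shape
          x, w = np.zeros(n), np.ones(n)
          step = 0.5 / np.linalg.norm(A, 2) ** 2
          for _ in range(outer):
              for _ in range(inner):
                  z = x - step * 2 * A.T @ (A @ x - y)
                  x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
              w = 1.0 / (np.abs(x) + eps)          # small |x_i| -> large penalty
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))           # toy underdetermined system
      x0 = np.zeros(100); x0[[5, 37, 80]] = [2.0, -1.5, 3.0]
      x_hat = irl1(A, A @ x0)                      # recovers the sparse support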

  17. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, on which the analytical solutions are available, and on a series of proteins with various sizes.
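
    In generic notation (unit conventions and prefactors vary between formulations), the two-component decomposition underlying such regularization methods is:

      \phi = \phi_{\mathrm{reg}} + G, \qquad
      G(\mathbf{r}) = \sum_{i} \frac{q_i}{\epsilon_m\, |\mathbf{r} - \mathbf{r}_i|}

    The Green's-function part G absorbs the point-charge singularities analytically, so the regular component phi_reg satisfies a singularity-free PB-type equation, with the charges reappearing only through modified jump conditions at the molecular interface.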

  18. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  19. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
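
    A runnable toy version of the regularized objective: the Rockafellar-Uryasev form of expected shortfall plus an L2 penalty on the weights (the diversification "pressure"), under the budget constraint. The formulation, solver choice and all numbers are illustrative; a production implementation would use a dedicated LP/QP formulation rather than a generic solver:

      import numpy as np
      from scipy.optimize import minimize

      def es_l2_portfolio(R, beta=0.95, lam=1.0):
          # Decision variables: N weights w plus the VaR-level auxiliary a.
          T, N = R.shape
          def objective(z):
              w, a = z[:N], z[N]
              losses = -R @ w
              es = a + np.mean(np.maximum(losses - a, 0.0)) / (1.0 - beta)
              return es + lam * (w @ w)            # expected shortfall + L2 term
          budget = {'type': 'eq', 'fun': lambda z: np.sum(z[:N]) - 1.0}
          z0 = np.append(np.ones(N) / N, 0.0)
          return minimize(objective, z0, constraints=budget).x[:N]

      rng = np.random.default_rng(0)
      R = 0.001 + 0.02 * rng.standard_normal((500, 5))   # toy return history
      w = es_l2_portfolio(R)                             # regularized weights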

  20. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  1. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
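
    The counting behind that conclusion is the classical vertex-angle argument: q regular p-gons can meet at a vertex on the sphere only if their flat-plane angles sum to less than 2*pi, i.e. (p - 2)(q - 2) < 4. Enumerating the possibilities recovers the five Platonic solids, whose faces are exactly the triangles, squares and pentagons named above:

      # Regular spherical tessellations {p, q}: q regular p-gons per vertex.
      for p in range(3, 7):
          for q in range(3, 7):
              if (p - 2) * (q - 2) < 4:
                  print(f"{{{p},{q}}}: {q} regular {p}-gons at each vertex")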

  2. Accretion onto some well-known regular black holes

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  3. Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.

    PubMed

    Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng

    2013-11-20

    Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.

  4. Quantum properties of supersymmetric theories regularized by higher covariant derivatives

    NASA Astrophysics Data System (ADS)

    Stepanyantz, Konstantin

    2018-02-01

    We investigate quantum corrections in N = 1 non-Abelian supersymmetric gauge theories, regularized by higher covariant derivatives. In particular, by the help of the Slavnov-Taylor identities we prove that the vertices with two ghost legs and one leg of the quantum gauge superfield are finite in all orders. This non-renormalization theorem is confirmed by an explicit one-loop calculation. By the help of this theorem we rewrite the exact NSVZ β-function in the form of the relation between the β-function and the anomalous dimensions of the matter superfields, of the quantum gauge superfield, and of the Faddeev-Popov ghosts. Such a relation has simple qualitative interpretation and allows suggesting a prescription producing the NSVZ scheme in all loops for the theories regularized by higher derivatives. This prescription is verified by the explicit three-loop calculation for the terms quartic in the Yukawa couplings.

  5. On the regularity of the covariance matrix of a discretized scalar field on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.

    2017-02-01

    We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In particular, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.

  6. Regular treatment with formoterol versus regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Lasserson, Toby J

    2014-01-01

    Background An increase in serious adverse events with both regular formoterol and regular salmeterol in chronic asthma has been demonstrated in previous Cochrane reviews. Objectives We set out to compare the risks of mortality and non-fatal serious adverse events in trials which have randomised patients with chronic asthma to regular formoterol versus regular salmeterol. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked manufacturers’ websites of clinical trial registers for unpublished trial data and also checked Food and Drug Administration (FDA) submissions in relation to formoterol and salmeterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel-design clinical trials on patients of any age and with any severity of asthma if they randomised patients to treatment with regular formoterol versus regular salmeterol (without randomised inhaled corticosteroids), and were of at least 12 weeks’ duration. Data collection and analysis Two authors independently selected trials for inclusion in the review and extracted outcome data. We sought unpublished data on mortality and serious adverse events from the sponsors and authors. Main results The review included four studies (involving 1116 adults and 156 children). All studies were open label and recruited patients who were already taking inhaled corticosteroids for their asthma, and all studies contributed data on serious adverse events. All studies compared formoterol 12 μg versus salmeterol 50 μg twice daily. The adult studies were all comparing Foradil Aerolizer with Serevent Diskus, and the children’s study compared Oxis Turbohaler to Serevent Accuhaler. There was only one death in an adult (which was unrelated to asthma) and none in children, and there were no significant differences in non-fatal serious adverse events comparing formoterol to salmeterol in adults (Peto odds ratio (OR) 0.77; 95

  7. 42 CFR 61.3 - Purpose of regular fellowships.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Purpose of regular fellowships. 61.3 Section 61.3..., TRAINING FELLOWSHIPS Regular Fellowships § 61.3 Purpose of regular fellowships. Regular fellowships are... sciences and communication of information. (b) Special scientific projects for the compilation of existing...

  8. Analytic model of a multi-electron atom

    NASA Astrophysics Data System (ADS)

    Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.

    2017-12-01

    A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single effective-charge parameter for all electrons of a given atom. The completeness of the basis allows us to employ the secondary-quantized representation for the construction of a regular perturbation theory, which includes correlation effects in a natural way, converges fast, and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set makes it possible to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function into a convolution of single-electron Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system, provides accuracy that is independent of the number of electrons, and is important for applications where the Thomas-Fermi model is still utilized. In addition, already in second-order perturbation theory our results become comparable with those obtained via a multi-configuration Hartree-Fock approach.

  9. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  10. Quantum transport in graphene in presence of strain-induced pseudo-Landau levels

    NASA Astrophysics Data System (ADS)

    Settnes, Mikkel; Leconte, Nicolas; Barrios-Vargas, Jose E.; Jauho, Antti-Pekka; Roche, Stephan

    2016-09-01

    We report on mesoscopic transport fingerprints in disordered graphene caused by strain-field induced pseudomagnetic Landau levels (pLLs). Efficient numerical real space calculations of the Kubo formula are performed for an ordered network of nanobubbles in graphene, creating pseudomagnetic fields up to several hundreds of Tesla, values inaccessible by real magnetic fields. Strain-induced pLLs yield enhanced scattering effects across the energy spectrum resulting in lower mean free path and enhanced localization effects. In the vicinity of the zeroth order pLL, we demonstrate an anomalous transport regime, where the mean free path increases with disorder. We attribute this puzzling behavior to the low-energy sub-lattice polarization induced by the zeroth order pLL, which is unique to pseudomagnetic fields preserving time-reversal symmetry. These results, combined with the experimental feasibility of reversible deformation fields, open the way to tailor a metal-insulator transition driven by pseudomagnetic fields.
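
    For context, the pseudomagnetic Landau levels follow the standard graphene Landau-level spectrum, with the strain-induced field B_s in place of a real field:

        E_n = \operatorname{sgn}(n)\,\sqrt{2\hbar e v_F^{2} B_s\,\lvert n\rvert},
        \qquad n = 0, \pm 1, \pm 2, \ldots

    The zeroth level, E_0 = 0, is pinned to the Dirac point. For a real magnetic field its sublattice polarization is opposite in the two valleys and cancels, whereas a time-reversal-preserving strain field polarizes the same sublattice in both valleys, which is the net low-energy sublattice polarization invoked above.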

  11. A Comparison of Experimental and Theoretical Results for Labyrinth Gas Seals. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Scharrer, Joseph Kirk

    1987-01-01

    The basic equations are derived for a two control volume model for compressible flow in a labyrinth seal. The flow is assumed to be completely turbulent and isoenergetic. The wall friction factors are determined using the Blasius formula. Jet flow theory is used for the calculation of the recirculation velocity in the cavity. Linearized zeroth and first order perturbation equations are developed for small motion about a centered position by an expansion in the eccentricity ratio. The zeroth order pressure distribution is found by satisfying the leakage equation. The circumferential velocity distribution is determined by satisfying the momentum equations. The first order equations are solved by a separation of variable solution. Integration of the resultant pressure distribution along and around the seal defines the reaction force developed by the seal and the corresponding dynamic coefficients. The results of this analysis are compared to experimental test results.

  12. Transmission-line model to design matching stage for light coupling into two-dimensional photonic crystals.

    PubMed

    Miri, Mehdi; Khavasi, Amin; Mehrany, Khashayar; Rashidian, Bizhan

    2010-01-15

    The transmission-line analogy of the planar electromagnetic reflection problem is exploited to obtain a transmission-line model that can be used to design effective, robust, and wideband interference-based matching stages. The proposed model, based on a new definition of a scalar impedance, is obtained from the reflection coefficient of the zeroth-order diffracted plane wave outside the photonic crystal. It is shown to be accurate for in-band applications, where the normalized frequency is low enough to ensure that the zeroth-order diffracted plane wave is the most important factor in determining the overall reflection. The frequency limitation of the proposed approach is explored, highly dispersive photonic crystals are considered, and wideband matching stages based on binomial impedance transformers are designed to work at the first two photonic bands.
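
    A minimal sketch of the two design ingredients named here, under our own simplifying assumptions (lossless lines and small-reflection theory; this is the textbook binomial-transformer rule, not the authors' full procedure):

        from math import comb, exp, log

        def scalar_impedance(r, z0=1.0):
            # Equivalent scalar impedance inferred from the complex reflection
            # coefficient r of the zeroth-order diffracted plane wave, via the
            # transmission-line relation z = z0 * (1 + r) / (1 - r).
            return z0 * (1 + r) / (1 - r)

        def binomial_sections(z0, zl, n):
            # Intermediate impedances of an n-section binomial (maximally flat)
            # matching transformer between z0 and zl, from the design rule
            # ln(Z_{i+1}/Z_i) = 2**(-n) * C(n, i) * ln(zl/z0).
            z, a = [z0], log(zl / z0)
            for i in range(n + 1):
                z.append(z[-1] * exp(2.0 ** (-n) * comb(n, i) * a))
            return z[1:-1]

    For n = 1 the rule collapses to the familiar quarter-wave result: binomial_sections(z0, zl, 1) returns the single impedance sqrt(z0 * zl).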

  13. The super-NFW model: an analytic dynamical model for cold dark matter haloes and elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Evans, N. Wyn; Sanders, Jason L.

    2018-05-01

    An analytic galaxy model with ρ ~ r^-1 at small radii and ρ ~ r^-3.5 at large radii is presented. The asymptotic density fall-off is slower than the Hernquist model, but faster than the Navarro-Frenk-White (NFW) profile for dark matter haloes, and so in accord with recent evidence from cosmological simulations. The model provides the zeroth-order term in a biorthonormal basis function expansion, meaning that axisymmetric, triaxial, and lopsided distortions can easily be added (much like the Hernquist model itself which is the zeroth-order term of the Hernquist-Ostriker expansion). The properties of the spherical model, including analytic distribution functions which are either isotropic, radially anisotropic, or tangentially anisotropic, are discussed in some detail. The analogue of the mass-concentration relation for cosmological haloes is provided.
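
    A density law with the quoted asymptotics can be written down directly. The sketch below uses the generic double-power form ρ(r) ∝ (r/a)^-1 (1 + r/a)^-2.5, which reproduces ρ ~ r^-1 inside and ρ ~ r^-3.5 outside; the paper's exact normalization may differ.

        import numpy as np

        def rho_double_power(r, a=1.0, rho0=1.0):
            # rho ~ r^-1 for r << a and rho ~ r^-3.5 for r >> a.
            x = np.asarray(r, dtype=float) / a
            return rho0 / (x * (1.0 + x) ** 2.5)

        r = np.logspace(-3, 3, 7)
        slopes = np.gradient(np.log(rho_double_power(r)), np.log(r))
        print(slopes)  # logarithmic slope drifts from about -1 to about -3.5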

  14. 5 CFR 551.421 - Regular working hours.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  15. On optimizing the treatment of exchange perturbations

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Chipman, D. M.

    1972-01-01

    A method using the zeroth plus first order wave functions, obtained by optimizing the basic equation used in exchange perturbation treatments, is utilized in an attempt to determine the exact energy and wave function in the exchange process. Attempts to determine the first order perturbation solution by optimizing the sum of the first and second order energies were unsuccessful.

  16. Two variants of minimum discarded fill ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai

    1991-01-01

    It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the "preferred" direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited to large problems arising from Navier-Stokes applications. Some interesting pictures of the orderings are presented using a visualization tool. 22 refs., 4 figs., 7 tabs.
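
    The greedy rule at the heart of this abstract ("choose the next pivot node so as to minimize the amount of discarded fill") is compact enough to sketch. The toy Python below is illustrative only: it works on a dense symmetric matrix, performs exact elimination while scoring the fill an incomplete factorization would drop, assumes nonzero pivots, and ignores all the efficiency concerns the paper addresses.

        import numpy as np

        def mdf_order(a_in, tol=0.0):
            # Greedy Minimum Discarded Fill ordering (illustrative sketch).
            a = np.array(a_in, dtype=float)
            active = list(range(a.shape[0]))
            order = []
            while active:
                best, best_cost = None, None
                for k in active:
                    nbrs = [j for j in active if j != k and a[k, j] != 0.0]
                    cost = 0.0
                    for i in nbrs:
                        for j in nbrs:
                            fill = a[i, k] * a[k, j] / a[k, k]
                            if a[i, j] == 0.0 and abs(fill) > tol:
                                cost += fill * fill  # fill an ILU would discard
                    if best_cost is None or cost < best_cost:
                        best, best_cost = k, cost
                nbrs = [j for j in active if j != best and a[best, j] != 0.0]
                for i in nbrs:  # eliminate the chosen pivot
                    for j in nbrs:
                        a[i, j] -= a[i, best] * a[best, j] / a[best, best]
                active.remove(best)
                order.append(best)
            return order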

  17. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)
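
    One classical relation of the kind the article discusses: the numbers cos(2π/7), cos(4π/7), cos(6π/7) satisfy

        \cos\tfrac{2\pi}{7}+\cos\tfrac{4\pi}{7}+\cos\tfrac{6\pi}{7} = -\tfrac{1}{2},

    and are precisely the roots of the cubic 8x³ + 4x² − 4x − 1 = 0. The irreducibility of this cubic over the rationals is what places cos(2π/7), and hence the regular heptagon, beyond straightedge-and-compass construction, since constructible numbers must have degree a power of 2.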

  18. The Adler D-function for N = 1 SQCD regularized by higher covariant derivatives in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.

    2018-01-01

    We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O (αs2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.

  19. Simultaneous regularization method for the determination of radius distributions from experimental multiangle correlation functions

    NASA Astrophysics Data System (ADS)

    Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.

    2001-10-01

    Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. Many applications in both soft matter research and industry involve so-called "real world" systems, whose dilution often cannot be changed without introducing measurement artifacts, so that one frequently has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross correlation techniques. However, measurements on turbid as well as highly diluted media lead to data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article a multiangle regularization method is discussed, which copes with the difficulties arising from such samples and greatly enhances the quality of the estimated solution. To demonstrate the efficiency of this multiangle regularization method, we applied it to cross correlation functions measured on highly turbid samples.
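
    The core of a simultaneous (multiangle) regularization is that one radius distribution is fitted jointly against the data from all angles, stabilized by a single penalty. A bare-bones Python sketch under our own assumptions (unweighted least squares, a fixed λ, no non-negativity constraint, all of which a production method would add):

        import numpy as np

        def simultaneous_regularization(As, gs, lam, L):
            # Minimize sum_theta ||A_theta x - g_theta||^2 + lam * ||L x||^2,
            # where As/gs hold one kernel matrix and data vector per angle.
            n = As[0].shape[1]
            lhs = lam * (L.T @ L)
            rhs = np.zeros(n)
            for A, g in zip(As, gs):
                lhs += A.T @ A
                rhs += A.T @ g
            return np.linalg.solve(lhs, rhs)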

  20. Regular Decompositions for H(div) Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Vassilevski, Panayot

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  1. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  2. Evaluation of breathing patterns for respiratory-gated radiation therapy using the respiration regularity index

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Lee, MeYeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, SoAh; Hwang, Taejin; Kim, Haeyoung; Kim, KyoungJu; Han, Tae Jin; Bae, Hoonsik

    2015-01-01

    Despite the considerable importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, not to mention the necessity of maintaining that regularity through the following sessions, an effective and simply applicable method by which those goals can be accomplished has rarely been reported. The authors herein propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a cos⁴(ω(t)·t) waveform with a baseline drift. According to this respiration formula, breathing-pattern fluctuation can be explained using four factors: the sample standard deviation of the respiration period (s_f), the sample standard deviation of the amplitude (s_a), and the results of a simple linear regression of the baseline drift (slope β and standard deviation of residuals σ_r) of a respiration signal. The overall irregularity (δ) was defined as the Euclidean norm of a variable newly derived by applying principal component analysis (PCA) to the four fluctuation parameters; this variable has two principal components (ω_1, ω_2). The proposed respiration regularity index was defined as ρ = ln(1 + (1/δ))/2, a higher ρ indicating a more regular breathing pattern. We investigated its clinical relevance by comparing it with other known parameters. Subsequently, we applied it to 110 respiration signals acquired from five liver and five lung cancer patients by using real-time position management (RPM; Varian Medical Systems, Palo Alto, CA). Correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Additionally, the respiration regularity was compared between the liver and lung cancer patient groups. The respiration regularity was determined based on ρ; patients with ρ < 0.3 showed
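
    Since all the ingredients of ρ are spelled out above, the index is straightforward to compute. In the Python sketch below, the per-cycle periods and amplitudes are assumed to be extracted beforehand, and the PCA weighting of the four fluctuation parameters is replaced by a plain Euclidean norm, so this is a simplified reading of the definition rather than the authors' exact procedure.

        import numpy as np

        def regularity_index(t, x, periods, amplitudes):
            s_f = np.std(periods, ddof=1)           # period fluctuation
            s_a = np.std(amplitudes, ddof=1)        # amplitude fluctuation
            beta, intercept = np.polyfit(t, x, 1)   # baseline drift slope
            sigma_r = np.std(x - (beta * t + intercept), ddof=1)
            delta = np.linalg.norm([s_f, s_a, beta, sigma_r])
            return np.log(1.0 + 1.0 / delta) / 2.0  # higher = more regular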

  3. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.

  4. Predictability in Pathological Gambling? Applying the Duplication of Purchase Law to the Understanding of Cross-Purchases Between Regular and Pathological Gamblers.

    PubMed

    Lam, Desmond; Mizerski, Richard

    2017-06-01

    The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor in purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchases, exhibits "law-like" regularity based on the pathological gamblers' participation rate in each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant games, as well as make greater cross-purchases, compared to heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.

  5. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  6. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., k-SVD sparse learning or the traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
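
    The essence of the formulation (a data-mismatch term plus a penalty that pulls each cell toward an admissible facies value) can be sketched in a few lines. The Python below is a deliberately stripped-down stand-in: a linear operator A replaces the flow simulator, the learned connectivity (k-SVD/PCA) term is omitted, and the step size and iteration count are arbitrary.

        import numpy as np

        def calibrate_discrete(A, b, facies_values, lam=1.0, step=1e-3, iters=500):
            # Alternate between snapping to the discrete facies values (z-step)
            # and a gradient step on ||A x - b||^2 + lam * ||x - z||^2 (x-step).
            vals = np.asarray(facies_values, dtype=float)
            x = np.full(A.shape[1], vals.mean())
            for _ in range(iters):
                z = vals[np.argmin(np.abs(x[:, None] - vals[None, :]), axis=1)]
                grad = 2.0 * A.T @ (A @ x - b) + 2.0 * lam * (x - z)
                x -= step * grad
            return vals[np.argmin(np.abs(x[:, None] - vals[None, :]), axis=1)]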

  7. 12 CFR 725.3 - Regular membership.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Regular membership. 725.3 Section 725.3 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit...

  8. Newton's Zeroth Law: Learning from Listening to Our Students

    NASA Astrophysics Data System (ADS)

    Scherr, Rachel E.; Redish, Edward F.

    2005-01-01

    Modern instructional advice encourages us to not just tell our students what we want them to know, but to listen to them carefully. This helps us to find out "where they are" in order to better understand what tasks to offer them that might help them learn the physics most effectively. Sometimes, listening to students and trying to understand their intuitions not only helps them, it helps us—giving us new insights into the physics we are teaching. We had such an experience in the fall of 2003 in our algebra-based physics class at the University of Maryland.

  9. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
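
    As a concrete reference point, here is a minimal Python rendering of Tikhonov inversion with a second-derivative operator and generalized cross validation; it is a generic implementation of those two textbook ingredients, not the benchmark code used in the study.

        import numpy as np

        def second_derivative_operator(n):
            L = np.zeros((n - 2, n))
            for i in range(n - 2):
                L[i, i:i + 3] = [1.0, -2.0, 1.0]
            return L

        def tikhonov_gcv(A, b, alphas):
            # min_x ||A x - b||^2 + alpha^2 ||L x||^2, with alpha picked by GCV.
            m, n = A.shape
            L = second_derivative_operator(n)
            best = None
            for alpha in alphas:
                H = np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T)
                x = H @ b
                r = A @ x - b
                gcv = (r @ r) / (m - np.trace(A @ H)) ** 2
                if best is None or gcv < best[0]:
                    best = (gcv, alpha, x)
            return best[1], best[2]  # selected alpha and its solution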

  10. Potential Energy Surface of the Chromium Dimer Re-re-revisited with Multiconfigurational Perturbation Theory.

    PubMed

    Vancoillie, Steven; Malmqvist, Per Åke; Veryazov, Valera

    2016-04-12

    The chromium dimer has long been a benchmark molecule to evaluate the performance of different computational methods ranging from density functional theory to wave function methods. Among the latter, multiconfigurational perturbation theory was shown to be able to reproduce the potential energy surface of the chromium dimer accurately. However, for modest active space sizes, it was later shown that different definitions of the zeroth-order Hamiltonian have a large impact on the results. In this work, we revisit the system for the third time with multiconfigurational perturbation theory, now in order to increase the active space of the reference wave function. This reduces the impact of the choice of zeroth-order Hamiltonian and improves the shape of the potential energy surface significantly. We conclude by comparing our results for the dissociation energy and vibrational spectrum to those obtained from several highly accurate multiconfigurational methods and experiment. For a meaningful comparison, we used the extrapolation to the complete basis set for all methods involved.

  11. Chemical association in simple models of molecular and ionic fluids. III. The cavity function

    NASA Astrophysics Data System (ADS)

    Zhou, Yaoqi; Stell, George

    1992-01-01

    Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a "linear" approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, an equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.

  12. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
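
    For concreteness, the scheme usually applied to the density matrix in the MCTDH literature replaces ρ by

        \rho_{\mathrm{reg}} = \rho + \epsilon\, e^{-\rho/\epsilon},

    which leaves large eigenvalues essentially untouched while bounding small ones away from zero before inversion. The proposal of this paper is to apply an analogous regularization to the coefficient tensor instead, which regularizes ρ implicitly and, as summarized above, makes the results less sensitive to the choice of the regularization parameter ε.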

  13. Unified formalism for the generalized kth-order Hamilton-Jacobi problem

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; de Léon, Manuel; Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2014-08-01

    The geometric formulation of the Hamilton-Jacobi theory enables us to generalize it to systems of higher-order ordinary differential equations. In this work we introduce the unified Lagrangian-Hamiltonian formalism for the geometric Hamilton-Jacobi theory on higher-order autonomous dynamical systems described by regular Lagrangian functions.

  14. The impact of the inclusion of students with handicaps and disabilities in the regular education science classroom

    NASA Astrophysics Data System (ADS)

    Donald, Cathey Nolan

    This study was conducted to determine the impact of the inclusion of students with handicaps and disabilities in the regular education science classroom. Surveys were mailed to the members of the Alabama Science Teachers Association to obtain information from teachers in inclusive classrooms. Survey responses from teachers provide insight into these classrooms. This study reports the results of the teachers surveyed. Results indicate that multiple changes occur in the educational opportunities presented to regular education students when students with handicaps and disabilities are included in the regular science classroom. Responding teachers (60%) report omitting activities that formerly provided experiences for students, such as laboratory activities using dangerous materials, field activities, and some group activities. Also omitted, in many instances (64.1%), are skill-building opportunities such as word problems and higher-order thinking skills. Regular education students participate in classes where discipline problems related to included students are reported as the teachers' most time-consuming task. In these classrooms, directions are repeated frequently, reteaching of material already taught occurs, and the pace of instruction has been slowed. These changes to the regular classroom occur across school levels. Many teachers (44.9%) report they do not see benefits associated with the inclusion of students with special needs in the regular classroom.

  15. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstruction of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state of the art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved similar performances as state of the art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, K; Lee, M; Kang, S

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power-of-cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) for the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a

  17. High-speed manufacturing of highly regular femtosecond laser-induced periodic surface structures: physical origin of regularity.

    PubMed

    Gnilitskyi, Iaroslav; Derrien, Thibault J-Y; Levy, Yoann; Bulgakova, Nadezhda M; Mocek, Tomáš; Orazi, Leonardo

    2017-08-16

    Highly regular laser-induced periodic surface structures (HR-LIPSS) have been fabricated on surfaces of Mo, steel alloy and Ti at a record processing speed on large areas and with a record regularity in the obtained sub-wavelength structures. The physical mechanisms governing LIPSS regularity are identified and linked with the decay length (i.e. the mean free path) of the excited surface electromagnetic waves (SEWs). The dispersion of the LIPSS orientation angle correlates well with the SEW decay length: the shorter this length, the more regular are the LIPSS. A material-dependent criterion for obtaining HR-LIPSS is proposed for a large variety of metallic materials. It has been found that decreasing the spot size close to the SEW decay length is key for covering several cm² of material surface with HR-LIPSS in a few seconds. Theoretical predictions suggest that reducing the laser wavelength can make HR-LIPSS production possible on essentially any metal. This unprecedented level of control over laser-induced periodic structure formation makes this laser-writing technology flexible, robust and, hence, highly competitive for advanced industrial applications based on surface nanostructuring.

  18. Investigation of wall-bounded turbulence over regularly distributed roughness

    NASA Astrophysics Data System (ADS)

    Placidi, Marco; Ganapathisubramani, Bharathram

    2012-11-01

    The effects of regularly distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of planar (high resolution l+ ~ 30) and stereoscopic Particle Image Velocimetry (PIV) experiments in a wind tunnel. An adequate description of how to best characterise a rough wall, especially one where the density of roughness elements is sparse, is yet to be developed. In this study, rough surfaces consisting of regularly and uniformly distributed LEGO® blocks are used. Twelve different patterns are adopted in order to systematically examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area) and plan solidity (λp, plan area of roughness elements per unit wall-parallel area) on the turbulence structure. The Karman number, Reτ, is approximately 4000 across the different cases. Spanwise 3D vector fields at two different wall-normal locations (top of the canopy and within the log-region) are also compared to examine the spanwise homogeneity of the flow across different surfaces. In the talk, a detailed analysis of mean and rms velocity profiles, Reynolds stresses, and quadrant decomposition for the different patterns will be presented.

  19. Training Regular Education Personnel To Be Special Education Consultants to Other Regular Education Personnel in Rural Settings.

    ERIC Educational Resources Information Center

    McIntosh, Dean K.; Raymond, Gail I.

    The Program for Exceptional Children of the University of South Carolina developed a project to address the need for an improved service delivery model for handicapped students in rural South Carolina. The project trained regular elementary teachers at the master's degree level to function as consultants to other regular classroom teachers with…

  20. 20 CFR 226.14 - Employee regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  1. Ultrastructure Processing of Ordered Polymers

    DTIC Science & Technology

    1990-01-18

    from regenerated cellulose, then from synthetic polymer consisting of chemical raw materials derived from oils and coal. Since then, some scientists have...ordered crystalline material, crystallite, throughout the fiber, which is composed of microfibrils and fibrils. The small crystallites are regularly...these flat ribbons appears to consist of smaller "microfibrils" of lateral dimension varying from 50-80 A, as described before (Figs. 15 and 16). These

  2. 39 CFR 6.1 - Regular meetings, annual meeting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  3. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  4. Construction of higher order accurate vortex and particle methods

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.

    1986-01-01

    The standard point vortex method has recently been shown to be of high order of accuracy for problems on the whole plane, when using a uniform initial subdivision for assigning the vorticity to the points. If obstacles are present in the flow, this high order deteriorates to first or second order. New vortex methods are introduced which are of arbitrary accuracy (under regularity assumptions) regardless of the presence of bodies and the uniformity of the initial subdivision.
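
    For readers unfamiliar with the basic scheme: a point vortex method advances N vortices of circulations Γ_j with velocities given by the 2-D Biot-Savart sum. The Python sketch below shows that core step only; the vorticity-assignment rules on which the quoted accuracy results depend are not shown.

        import numpy as np

        def vortex_velocities(pos, gamma):
            # u_i = sum_{j != i} Gamma_j/(2 pi r_ij^2) * (-(y_i-y_j), x_i-x_j)
            n = len(gamma)
            u = np.zeros((n, 2))
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    dx, dy = pos[i] - pos[j]
                    r2 = dx * dx + dy * dy
                    u[i] += gamma[j] / (2.0 * np.pi * r2) * np.array([-dy, dx])
            return u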

  5. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs' spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS is able to recover the intrinsic dimensionality of the time series dynamics well, and that it improves predictive performance compared to baselines on both synthetic and real-world MTS datasets.

  6. Optimal behaviour can violate the principle of regularity.

    PubMed

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  7. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
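
    Of the schemes listed, the L1 scheme is the easiest to state concretely. The sketch below applies it to the scalar test problem D^α u = λu with 0 < α < 1; the test problem is our choice, purely for illustration.

        import numpy as np
        from math import gamma

        def l1_scheme(alpha, lam, u0, T, n):
            # Caputo derivative at t_k approximated by
            #   c * sum_{j=0}^{k-1} b_j * (u_{k-j} - u_{k-j-1}),
            # with c = tau^(-alpha)/Gamma(2-alpha) and
            # b_j = (j+1)^(1-alpha) - j^(1-alpha); each step solves for u_k.
            tau = T / n
            c = tau ** (-alpha) / gamma(2.0 - alpha)
            b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]
            u = [u0]
            for k in range(1, n + 1):
                hist = sum(b[j] * (u[k - j] - u[k - j - 1]) for j in range(1, k))
                u.append((c * u[k - 1] - c * hist) / (c - lam))  # b_0 = 1
            return np.array(u)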

  8. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false âRegular rate.â 784.16 Section 784.16 Labor Regulations... FISHING AND OPERATIONS ON AQUATIC PRODUCTS General Some Basic Definitions § 784.16 “Regular rate.” As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  9. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false âRegular rate.â 784.16 Section 784.16 Labor Regulations... FISHING AND OPERATIONS ON AQUATIC PRODUCTS General Some Basic Definitions § 784.16 “Regular rate.” As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  10. 29 CFR 784.16 - “Regular rate.”

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false âRegular rate.â 784.16 Section 784.16 Labor Regulations... FISHING AND OPERATIONS ON AQUATIC PRODUCTS General Some Basic Definitions § 784.16 “Regular rate.” As... in the Act not less than one and one-half times their regular rates of pay. Section 7(e) of the Act...

  11. Deviance detection based on regularity encoding along the auditory hierarchy: electrophysiological evidence in humans.

    PubMed

    Escera, Carles; Leung, Sumie; Grimm, Sabine

    2014-07-01

    Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. By its long latency and cerebral generators, the cortical nature of both the processes of regularity encoding and deviance detection has been assumed. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding-rather than on refractoriness-occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location) fail to elicit MLR correlates but elicit sizable MMNs. Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection is organized in ascending levels of complexity along the auditory

  12. NATURAL GRADIENT EXPERIMENT ON SOLUTE TRANSPORT IN A SAND AQUIFER. 2. SPATIAL MOMENTS AND THE ADVECTION AND DISPERSION OF NONREACTIVE TRACERS

    EPA Science Inventory

    The three-dimensional movement of a tracer plume containing bromide and chloride is investigated using the data base from a large-scale natural gradient field experiment on groundwater solute transport. The analysis focuses on the zeroth-, first-, and second-order spatial moments...
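
    The moments referred to are the standard ones for a concentration field c(x, t):

        M_0(t) = \int c\,d\mathbf{x}, \qquad
        \bar{x}_i(t) = \frac{1}{M_0}\int x_i\,c\,d\mathbf{x}, \qquad
        \sigma_{ij}(t) = \frac{1}{M_0}\int (x_i-\bar{x}_i)(x_j-\bar{x}_j)\,c\,d\mathbf{x},

    so that the zeroth moment tests mass conservation, the drift of the centroid gives the advection velocity, and the growth of the covariance gives an apparent dispersion tensor, D_ij ≈ (1/2) dσ_ij/dt.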

  13. Optimal behaviour can violate the principle of regularity

    PubMed Central

    Trimmer, Pete C.

    2013-01-01

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory—based on axioms, including transitivity, regularity and the independence of irrelevant alternatives—is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision. PMID:23740781

  14. Second-order perturbation theory with a density matrix renormalization group self-consistent field reference function: theory and application to the study of chromium dimer.

    PubMed

    Kurashige, Yuki; Yanai, Takeshi

    2011-09-07

    We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with large active spaces that are insurmountable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated in an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems and requires the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using the large-size DMRG reference addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small active space consisting of 12 orbitals (3d4s) and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics.

  15. Surface spin-electron acoustic waves in magnetically ordered metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreev, Pavel A., E-mail: andreevpa@physics.msu.ru; Kuz'menkov, L. S., E-mail: lsk@phys.msu.ru

    2016-05-09

    Degenerate plasmas with motionless ions show the existence of three surface waves: the Langmuir wave, the electromagnetic wave, and the zeroth sound. Applying the separated spin evolution quantum hydrodynamics to half-space plasma, we demonstrate the existence of the surface spin-electron acoustic wave (SSEAW). We study the dispersion of the SSEAW. We show that there is hybridization between the surface Langmuir wave and the SSEAW at rather small spin polarization. In the hybridization area, the dispersion branches are located close to each other. In this area, there is a strong interaction between these waves, leading to energy exchange. Consequently, by generating Langmuir waves with frequencies close to the hybridization area we can generate SSEAWs. Thus, we report a method of creation of spin-electron acoustic waves.

  16. A two-parameter family of double-power-law biorthonormal potential-density expansions

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn

    2018-07-01

    We present a two-parameter family of biorthonormal double-power-law potential-density expansions. Both the potential and density are given in a closed analytic form and may be rapidly computed via recurrence relations. We show that this family encompasses all the known analytic biorthonormal expansions: the Zhao expansions (themselves generalizations of ones found earlier by Hernquist & Ostriker and by Clutton-Brock) and the recently discovered Lilley et al. expansion. Our new two-parameter family includes expansions based around many familiar spherical density profiles as zeroth-order models, including the γ models and the Jaffe model. It also contains a basis expansion that reproduces the famous Navarro-Frenk-White (NFW) profile at zeroth order. The new basis expansions have been found via a systematic methodology which has wide applications in finding other new expansions. In the process, we also uncovered a novel integral transform solution to Poisson's equation.

  17. A two-parameter family of double-power-law biorthonormal potential-density expansions

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn

    2018-05-01

    We present a two-parameter family of biorthonormal double-power-law potential-density expansions. Both the potential and density are given in closed analytic form and may be rapidly computed via recurrence relations. We show that this family encompasses all the known analytic biorthonormal expansions: the Zhao expansions (themselves generalizations of ones found earlier by Hernquist & Ostriker and by Clutton-Brock) and the recently discovered Lilley et al. (2017a) expansion. Our new two-parameter family includes expansions based around many familiar spherical density profiles as zeroth-order models, including the γ models and the Jaffe model. It also contains a basis expansion that reproduces the famous Navarro-Frenk-White (NFW) profile at zeroth order. The new basis expansions have been found via a systematic methodology which has wide applications in finding other new expansions. In the process, we also uncovered a novel integral transform solution to Poisson's equation.
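
    As a quick numerical sanity check of one familiar zeroth-order model mentioned above, the sketch below verifies that the Hernquist sphere (a γ model, with G = M = a = 1) satisfies Poisson's equation; it checks a single known profile, not the biorthonormal expansion itself:

        import numpy as np

        # Hernquist pair: Phi(r) = -1/(r+1), rho(r) = 1/(2 pi r (r+1)^3).
        # Check the radial Poisson equation (1/r^2) d/dr(r^2 dPhi/dr) = 4 pi rho.
        r = np.linspace(0.5, 5.0, 2001)
        phi = -1.0 / (r + 1.0)
        rho = 1.0 / (2.0 * np.pi * r * (r + 1.0) ** 3)
        lap = np.gradient(r ** 2 * np.gradient(phi, r), r) / r ** 2
        print(np.max(np.abs(lap - 4.0 * np.pi * rho)))  # ~0 up to finite-difference error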

  18. All-temperature magnon theory of ferromagnetism

    NASA Astrophysics Data System (ADS)

    Datta, Sambhu N.; Panda, Anirban

    2009-08-01

    We present an all-temperature magnon formalism for ferromagnetic solids. To our knowledge, this is the first time that all-temperature spin statistics have been calculated. The general impression up to now has been that the magnon formalism breaks down at the Curie point, as it introduces a series expansion and unphysical states. Our treatment is based on an accurate quantum mechanical representation of the Holstein-Primakoff transformation; to achieve this end, we introduce the 'Kubo operator'. The treatment is valid for all 14 Bravais lattice types and is not limited to simple cubic unit cells. In the present work, we carry out a zeroth-order treatment involving all possible spin states while leaving out all unphysical states. In a subsequent paper we will show that the perturbed energy values are very different, but that the magnetic properties undergo only small modifications from the zeroth-order results.

  19. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity-based regularization has been a popular approach to remedy measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance when compared to algorithms that use either patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
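
    A minimal sketch of what such a joint patchwise-plus-global cost can look like; the stand-in operators, weights, and hard-thresholded sparse codes below are our illustrative assumptions, not the authors' cost function (a real MRI problem would use an undersampled Fourier operator and a learned transform):

        import numpy as np

        def extract_patches(x, p=4):
            # Non-overlapping p x p patches, flattened to rows.
            n = x.shape[0]
            return np.stack([x[i:i + p, j:j + p].ravel()
                             for i in range(0, n - p + 1, p)
                             for j in range(0, n - p + 1, p)])

        def objective(x, y, W, lam_patch=0.1, lam_glob=0.01, tau=0.05):
            # Data fidelity + patch transform-sparsity + global (Tikhonov-like) term.
            patches = extract_patches(x)
            codes = W @ patches.T
            codes[np.abs(codes) < tau] = 0.0            # sparse codes by hard thresholding
            sparsity = np.sum((W @ patches.T - codes) ** 2)
            global_reg = np.sum(np.diff(x, axis=0) ** 2) + np.sum(np.diff(x, axis=1) ** 2)
            return np.sum((x - y) ** 2) + lam_patch * sparsity + lam_glob * global_reg

        rng = np.random.default_rng(1)
        y = rng.normal(size=(16, 16))                   # stand-in for a measured image
        W = np.linalg.qr(rng.normal(size=(16, 16)))[0]  # orthonormal transform stand-in
        print(objective(y.copy(), y, W))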

  1. Regularity and dimensional salience in temporal grouping.

    PubMed

    Prince, Jon B; Rice, Tim

    2018-04-30

    How do pitch and duration accents combine to influence the perceived grouping of musical sequences? Sequence context influences the relative importance of these accents; for example, the presence of learned structure in pitch exaggerates the effect of pitch accents at the expense of duration accents, despite being irrelevant to the task and not attributable to attention (Prince, 2014b). In the current study, two experiments examined whether the presence of temporal structure has the opposite effect. Experiment 1 tested baseline conditions, in which participants (N = 30) heard sequences with various sizes of either pitch or duration accents, which implied either duple or triple groupings (an accent every two or three notes, respectively). Sequences either had regular temporal structure (isochronous) or not (irregular, via random interonset intervals). Regularity enhanced the effect of duration accents but had negligible influence on pitch accents. The accent sizes that gave the most equivalent ratings across dimension and regularity levels were used in Experiment 2 (N = 33), in which sequences contained both pitch and duration accents that suggested duple, triple, or neutral groupings. Despite controlling for the baseline effect of regularity by selecting equally effective accent sizes, regularity had additional effects on duration accents, but only for duple groupings. Regularity did not influence the effectiveness of pitch accents when combined with duration accents. These findings offer some support for a dimensional salience hypothesis, which proposes that the presence of temporal structure should foster duration-accent effectiveness at the expense of pitch accents. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for total variation regularization of the 3-D gravity inverse problem. By imposing total variation regularization, subsurface structures with sharp discontinuities are preserved better than with a conventional minimum-structure inversion. The associated problem formulation is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems, the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency, an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy at reduced computational and memory cost compared to classical approaches.
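
    The core of any such randomized decomposition is the range-finder step: project the operator onto a small random subspace, then factorize the small matrix. Below is a minimal randomized SVD sketch with illustrative rank and oversampling; the paper's randomized generalized SVD inside the reweighting loop is more elaborate:

        import numpy as np

        def randomized_svd(A, k, p=10, rng=None):
            rng = rng if rng is not None else np.random.default_rng(0)
            Y = A @ rng.normal(size=(A.shape[1], k + p))  # sample the range of A
            Q, _ = np.linalg.qr(Y)                        # orthonormal basis for the sample
            Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ Ub)[:, :k], s[:k], Vt[:k]

        A = np.random.default_rng(1).normal(size=(500, 200)) @ np.diag(1.0 / np.arange(1, 201))
        U, s, Vt = randomized_svd(A, k=20)
        print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small relative error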

  3. Compact perturbative expressions for neutrino oscillations in matter

    DOE PAGES

    Denton, Peter B.; Minakata, Hisakazu; Parke, Stephen J.

    2016-06-08

    We further develop and extend a recent perturbative framework for neutrino oscillations in uniform matter density so that the resulting oscillation probabilities are accurate over the complete plane of matter potential versus baseline divided by neutrino energy. This extension also gives the exact oscillation probabilities in vacuum for all values of baseline divided by neutrino energy. The expansion parameter used is related to the ratio of the solar to the atmospheric Δm² scales, but with a unique choice of the atmospheric Δm² such that certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using a mixing-matrix formulation, this framework has the exceptional feature that the neutrino oscillation probability in matter has the same structure as in vacuum, to all orders in the expansion parameter. It also contains all orders in the matter potential and sin θ₁₃. It facilitates immediate physical interpretation of the analytic results and makes the expressions for the neutrino oscillation probabilities extremely compact and very accurate even at zeroth order in our perturbative expansion. Furthermore, the first- and second-order results are also given, which improve the precision by approximately two or more orders of magnitude per perturbative order.
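
    For orientation only, the simplest reference point is the exact two-flavor vacuum probability, P = sin²(2θ) sin²(1.267 Δm²[eV²] L[km] / E[GeV]); the illustrative parameter values below are ours, and the paper's three-flavor matter framework is far richer:

        import numpy as np

        def p_osc(L_km, E_GeV, dm2_eV2=2.5e-3, theta=0.85):
            # Two-flavor vacuum oscillation probability.
            return np.sin(2.0 * theta) ** 2 * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

        print(p_osc(L_km=1300.0, E_GeV=2.0))  # illustrative long-baseline values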

  4. Phase ordering in disordered and inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Corberi, Federico; Zannetti, Marco; Lippiello, Eugenio; Burioni, Raffaella; Vezzani, Alessandro

    2015-06-01

    We study numerically the coarsening dynamics of the Ising model on a regular lattice with random bonds and on deterministic fractal substrates. We propose a unifying interpretation of the phase-ordering processes based on two classes of dynamical behavior characterized by different growth laws for the ordered domain size, namely logarithmic or power-law growth. It is conjectured that the interplay between these dynamical classes is regulated by the same topological feature that governs the presence or absence of a finite-temperature phase transition.

  5. Spiking and bursting patterns of fractional-order Izhikevich model

    NASA Astrophysics Data System (ADS)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that can be reproduced with neuronal models by varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activity without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed the different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed-mode oscillations, regular bursting, and chattering, by adjusting only the fractional order. Both the active and silent phases of a burst lengthen as the fractional-order model deviates further from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all past activity of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
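
    A minimal sketch of a fractional-order Izhikevich neuron, using an explicit Gruenwald-Letnikov discretization in which the "memory trace" is the weighted sum over the entire past voltage; the choice of fractional variable, step size, and coefficients below are standard textbook values and may differ from the authors' exact scheme:

        import numpy as np

        a, b, c, d, I = 0.02, 0.2, -65.0, 8.0, 10.0    # standard Izhikevich parameters
        alpha, h, steps = 0.9, 0.1, 3000               # fractional order and time step

        # Gruenwald-Letnikov binomial weights: w0 = 1, wj = w_{j-1} (1 - (alpha+1)/j).
        w = np.ones(steps + 1)
        for j in range(1, steps + 1):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

        v = np.full(steps + 1, -65.0)
        u, spikes = b * v[0], 0
        for n in range(steps):
            dv = 0.04 * v[n] ** 2 + 5.0 * v[n] + 140.0 - u + I
            memory = np.dot(w[1:n + 2][::-1], v[:n + 1])   # sum_j w_j v_{n+1-j}
            v[n + 1] = h ** alpha * dv - memory            # fractional voltage update
            u += h * a * (b * v[n] - u)                    # recovery kept integer-order
            if v[n + 1] >= 30.0:                           # spike-and-reset rule
                spikes += 1
                v[n + 1] = c
                u += d
        print("spikes:", spikes)

    Setting alpha = 1 collapses the weights to a plain forward-Euler step, which is a quick sanity check on the memory term.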

  6. Consistent Partial Least Squares Path Modeling via Regularization

    PubMed Central

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study evaluates the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491

  7. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study evaluates the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
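
    The remedy itself is ordinary ridge shrinkage of a near-singular correlation matrix before solving for the path coefficients, b = (R_xx + k I)^(-1) r_xy; the toy correlations and ridge constant k below are invented for illustration:

        import numpy as np

        Rxx = np.array([[1.0, 0.95], [0.95, 1.0]])  # nearly collinear latent predictors
        rxy = np.array([0.50, 0.48])
        for k in (0.0, 0.1):                        # k = 0 is the unregularized solve
            print(k, np.linalg.solve(Rxx + k * np.eye(2), rxy))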

  8. Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains

    PubMed Central

    Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik

    2007-01-01

    Background Cerebellar Purkinje cells (PCs) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike intervals (ISIs). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns appeared exclusively during the up state of the PC membrane potential, while single ISIs occurred during both up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance Our findings indicate that the apparent irregularity of cerebellar PC simple spike trains in vivo is most likely caused by the mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435
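
    The CV2 measure behind this analysis has a one-line definition, CV2_i = 2 |ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i); the toy spike train and the 0.2 regularity threshold below are illustrative choices, not the paper's criterion:

        import numpy as np

        def cv2(isi):
            # Local interspike-interval variability (Holt et al., 1996).
            isi = np.asarray(isi, dtype=float)
            return 2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1])

        isi = np.array([10.0, 10.2, 9.9, 10.1, 25.0, 10.0, 10.3])  # ms, toy train
        vals = cv2(isi)
        print(np.round(vals, 3))
        print("regular transitions:", int(np.sum(vals < 0.2)), "of", len(vals))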

  9. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals (a frequent consequence of signal suppression from regularization). Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events, such as the Great Sumatra-Andaman Earthquake of 2004, are visible in the global solutions without the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and Nile, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  10. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil as a function of depth with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on a low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter that ensures the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments on both synthetic data sets and field data. The results show that the algorithm produces reasonable solutions for synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
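
    A generic sketch of one regularized damped Gauss-Newton iteration for min_x ||F(x) - y||² + λ||x||², with a positivity projection standing in for the paper's relaxation-parameter choice; the toy exponential forward model, Jacobian, and parameter values are our stand-ins for the instrument's analytic formulae:

        import numpy as np

        def gauss_newton(F, J, x0, y, lam=1e-3, t=0.5, iters=30):
            x = x0.copy()
            for _ in range(iters):
                r = y - F(x)
                Jx = J(x)
                A = Jx.T @ Jx + lam * np.eye(x.size)   # regularized normal matrix
                g = Jx.T @ r - lam * x                 # gradient step of the penalized problem
                x = np.maximum(x + t * np.linalg.solve(A, g), 0.0)  # keep conductivities positive
            return x

        depths = np.linspace(0.1, 2.0, 8)
        F = lambda x: np.exp(-np.outer(depths, x)).sum(axis=1)        # toy forward model
        J = lambda x: -np.exp(-np.outer(depths, x)) * depths[:, None]
        x_true = np.array([0.5, 1.5, 3.0])
        print(gauss_newton(F, J, np.array([0.4, 1.0, 2.5]), F(x_true)))  # approximately x_true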

  11. The Behavior of Regular Satellites During the Planetary Migration

    NASA Astrophysics Data System (ADS)

    Nogueira, Erica Cristina; Gomes, R. S.; Brasser, R.

    2013-05-01

    The behavior of the regular satellites of the giant planets during the instability phase of the Nice model needs to be better understood. To explain this behavior, we used numerical simulations to investigate the evolution of the regular satellite systems of the ice giants when these two planets experienced encounters with the gas giants. For the initial conditions we placed an ice planet between Jupiter and Saturn, according to the evolution of Nice model simulations in a 'jumping Jupiter' scenario (Brasser et al. 2009). We used the MERCURY integrator (Chambers 1999) and cloned simulations by slightly modifying the Hybrid integrator changeover parameter. We obtained 101 successful runs that kept all planets, of which 24 were jumping-Jupiter cases. Subsequently we performed additional numerical integrations in which the ice giant that encountered a gas giant was started on the same orbit but with its regular satellites included. This is done as follows: for each of the 101 basic runs, we saved the orbital elements of all objects at all close-encounter events. We then performed a backward integration to start the system 100 years before the encounter and re-enacted the forward integration with the regular satellites around the ice giant. These integrations ran for 1000 years. The final orbital elements of the satellites with respect to the ice planet were used to restart the integration for the next planetary encounter (if any). If we assume that Uranus is the ice planet that had encounters with a gas giant, we consider the satellites Miranda, Ariel, Umbriel, Titania and Oberon with their present orbits around the planet. For Neptune we introduced Triton on an orbit with a semi-major axis 15% larger than the actual one, to account for tidal decay from the LHB to the present time. We also assume that Triton was captured through binary disruption (Agnor and Hamilton 2006, Nogueira et al. 2011) and

  12. Optimal order policy in response to announced price increase for deteriorating items with limited special order quantity

    NASA Astrophysics Data System (ADS)

    Ouyang, Liang-Yuh; Wu, Kun-Shan; Yang, Chih-Te; Yen, Hsiu-Feng

    2016-02-01

    When a supplier announces an impending price increase due to take effect at a certain time in the future, it is important for each retailer to decide whether to purchase additional stock to take advantage of the present lower price. This study explores the possible effects of price increases on a retailer's replenishment policy when the special order quantity is limited and the rate of deterioration of the goods is assumed to be constant. The two situations discussed in this study are as follows: (1) when the special order time coincides with the retailer's replenishment time and (2) when the special order time occurs during the retailer's sales period. By analysing the total cost savings between special and regular orders during the depletion time of the special order quantity, the optimal order policy for each situation can be determined. We provide several numerical examples to illustrate the theories in practice. Additionally, we conduct a sensitivity analysis on the optimal solution with respect to the main parameters.

  13. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Cable television operators transmitting carriers in the frequency bands 108-137 MHz and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  14. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Cable television operators transmitting carriers in the frequency bands 108-137 MHz and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  15. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Cable television operators transmitting carriers in the frequency bands 108-137 MHz and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by...

  16. Guidance law development for aeroassisted transfer vehicles using matched asymptotic expansions

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Melamed, Nahum

    1993-01-01

    This report addresses and clarifies a number of issues related to the Matched Asymptotic Expansion (MAE) analysis of skip trajectories, or any class of problems that gives rise to inner layers not associated directly with satisfying boundary conditions. The procedure for matching inner and outer solutions, and for using the composite solution to satisfy boundary conditions, is developed and rigorously followed to obtain a set of algebraic equations for the problem of inclination change with minimum energy loss. A detailed evaluation of the zeroth-order guidance algorithm for aeroassisted orbit transfer is performed. It is shown that, by exploiting the structure of the MAE solution procedure, the original problem, which requires the solution of a set of 20 implicit algebraic equations, can be reduced to a problem of 6 implicit equations in 6 unknowns. A solution that is near-optimal and requires a minimum of computation, and thus can be implemented in real time on board the vehicle, has been obtained. Guidance law implementation entails treating the current state as a new initial state and repetitively solving the zeroth-order MAE problem to obtain the feedback controls. Finally, a general procedure is developed for constructing a MAE solution, up to first order, of the Hamilton-Jacobi-Bellman equation based on the method of characteristics. The development is valid for a class of perturbation problems whose solutions exhibit two-time-scale behavior. A regular expansion for problems of this type is shown to be inappropriate, since it is not valid over a narrow range of the independent variable; that is, it is not uniformly valid. Of particular interest here is the manner in which matching and boundary conditions are enforced when the expansion is carried out to first order. Two cases are distinguished: one where the left boundary condition coincides with, or lies to the right of, the singular region, and another where the left boundary condition lies to the left

  17. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization-parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with automatic selection of the regularization parameter to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
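
    For the plain Tikhonov case, the GCV score has a cheap closed form via the SVD of the blur operator; a minimal sketch follows (the paper embeds GCV inside a TV restoration loop, which is more involved):

        import numpy as np

        def gcv_lambda(A, y, lambdas):
            # GCV(lam) = ||(I - A A_lam^+) y||^2 / trace(I - A A_lam^+)^2,
            # with Tikhonov filter factors f_i = s_i^2 / (s_i^2 + lam).
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ y
            m = A.shape[0]
            scores = [np.sum(((1.0 - s**2 / (s**2 + lam)) * beta) ** 2)
                      / (m - np.sum(s**2 / (s**2 + lam))) ** 2 for lam in lambdas]
            return lambdas[int(np.argmin(scores))]

        rng = np.random.default_rng(0)
        n = 50
        A = np.exp(-0.1 * (np.arange(n)[:, None] - np.arange(n)) ** 2)  # toy blur matrix
        y = A @ np.sin(np.linspace(0.0, 3.0, n)) + 0.01 * rng.normal(size=n)
        print(gcv_lambda(A, y, np.logspace(-8, 0, 60)))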

  18. Regularized Moment Equations and Shock Waves for Rarefied Granular Gas

    NASA Astrophysics Data System (ADS)

    Reddy, Lakshminarayana; Alam, Meheboob

    2016-11-01

    It is well known that the shock structures predicted by extended hydrodynamic models are more accurate than those of the standard Navier-Stokes model in the rarefied regime, but they fail to predict continuous shock structures when the Mach number exceeds a critical value. Regularization, or parabolization, is one method to obtain smooth shock profiles at all Mach numbers. Following a Chapman-Enskog-like method, we have derived the "regularized" version of the 10-moment equations ("R10" moment equations) for inelastic hard spheres. To show the advantage of the R10 moment equations over the standard 10-moment equations, we employed them to solve the Riemann problem of plane shock waves for both molecular and granular gases. Comparing the numerical results of the 10-moment and R10-moment models, we find that the 10-moment model fails to produce continuous shock structures beyond an upstream Mach number of 1.34, while the R10-moment model predicts smooth shock profiles above this Mach number as well. The density and granular temperature profiles are found to be asymmetric, with their maxima occurring within the shock layer.

  19. Regularized Biot-Savart Laws for Modeling Magnetic Flux Ropes

    NASA Astrophysics Data System (ADS)

    Titov, Viacheslav; Downs, Cooper; Mikic, Zoran; Torok, Tibor; Linker, Jon A.

    2017-08-01

    Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the initial morphology of CMEs. As our new step in this direction, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and a circular cross-section. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is the curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. The vector potentials are expressed in terms of Biot-Savart laws whose kernels are regularized at the rope axis, in such a way that for a straight-line axis the form provides a cylindrical force-free flux rope with a parabolic profile of the axial current density. So far, we have set the shape of the rope axis by tracking the polarity inversion lines of observed magnetograms and estimating its height and other parameters of the rope from a potential field calculated above these lines. In spite of this heuristic approach, we were able to successfully construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate that our regularized Biot-Savart laws are indeed a very flexible and efficient method for energizing initial configurations in MHD simulations of CMEs. We discuss possible ways of optimizing the axis paths and other extensions of the method to make it more useful and robust. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.
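
    A generic regularized Biot-Savart evaluation for a thin current path illustrates the idea; the softening length a below is a plain smoothed kernel, standing in for the paper's specific regularization (which is tuned to reproduce a parabolic axial current profile):

        import numpy as np

        def b_field(x, path, I=1.0, a=0.05, mu0=4e-7 * np.pi):
            # B(x) = (mu0 I / 4 pi) sum_i dl_i x (x - x_i) / (|x - x_i|^2 + a^2)^(3/2)
            mids = 0.5 * (path[1:] + path[:-1])
            dl = path[1:] - path[:-1]
            r = x - mids
            denom = (np.sum(r ** 2, axis=1) + a ** 2) ** 1.5
            return mu0 * I / (4.0 * np.pi) * np.sum(np.cross(dl, r) / denom[:, None], axis=0)

        theta = np.linspace(0.0, 2.0 * np.pi, 400)
        ring = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
        print(b_field(np.zeros(3), ring))  # ~[0, 0, mu0 I / 2] at the center of a unit loop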

  20. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  1. Patients Reading Their Medical Records: Differences in Experiences and Attitudes between Regular and Inexperienced Readers

    ERIC Educational Resources Information Center

    Huvila, Isto; Daniels, Mats; Cajander, Åsa; Åhlfeldt, Rose-Mharie

    2016-01-01

    Introduction: We report results of a study of how ordering and reading of printouts of medical records by regular and inexperienced readers relate to how the records are used, to the health information practices of patients, and to their expectations of the usefulness of new e-Health services and online access to medical records. Method: The study…

  2. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field-of-view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve calibration accuracy, it significantly increases the demand for calibration data. To achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping from the star vector to the corresponding star-point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method achieves high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%.
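
    A toy version of the idea: a one-hidden-layer network with an L2 weight-decay regularizer fitted to a star-vector-to-coordinate map; the architecture, the synthetic "distorted projection", and all hyperparameters are invented for illustration and are much simpler than the paper's network:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(256, 3))
        X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit star vectors
        Y = X[:, :2] / (1.0 + 0.1 * X[:, 2:])           # synthetic projection with "distortion"

        W1 = rng.normal(scale=0.5, size=(3, 32)); b1 = np.zeros(32)
        W2 = rng.normal(scale=0.5, size=(32, 2)); b2 = np.zeros(2)
        lam, lr = 1e-4, 0.05                            # weight decay and learning rate
        for _ in range(2000):
            H = np.tanh(X @ W1 + b1)
            P = H @ W2 + b2
            G = 2.0 * (P - Y) / len(X)                  # gradient of the mean squared error
            gW2 = H.T @ G + lam * W2                    # L2 regularization enters here
            gH = (G @ W2.T) * (1.0 - H ** 2)
            gW1 = X.T @ gH + lam * W1
            W2 -= lr * gW2; b2 -= lr * G.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gH.sum(axis=0)
        print("rms error:", np.sqrt(np.mean((P - Y) ** 2)))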

  3. Critical behavior of the XY-rotor model on regular and small-world networks

    NASA Astrophysics Data System (ADS)

    De Nigris, Sarah; Leoncini, Xavier

    2013-07-01

    We study the XY-rotor model on small networks whose number of links scales with the system size as N_links ∼ N^γ, where 1 ≤ γ ≤ 2. We first focus on regular one-dimensional rings in the microcanonical ensemble. For γ < 1.5 the model behaves like a short-range one and no phase transition occurs. For γ > 1.5, the equilibrium properties of the system are found to be identical to the mean field, which displays a second-order phase transition at a critical energy density ε = E/N, ε_c = 0.75. Moreover, for γ_c ≃ 1.5 we find that a nontrivial state emerges, characterized by an infinite susceptibility. We then consider small-world networks, using the Watts-Strogatz mechanism on the regular networks parametrized by γ. We first analyze the topology and find that the small-world regime appears for rewiring probabilities which scale as p_SW ∝ 1/N^γ. Considering the XY-rotor model on these networks, we find that a second-order phase transition occurs at a critical energy ε_c which depends logarithmically on the topological parameters p and γ. We also define a critical probability p_MF, corresponding to the probability beyond which the mean field is quantitatively recovered, and we analyze its dependence on γ.

  4. Regular sun exposure benefits health.

    PubMed

    van der Rhee, H J; de Vries, E; Coebergh, J W

    2016-12-01

    Since UV radiation was discovered to be the main environmental cause of skin cancer, primary prevention programs have been started. These programs advise avoiding exposure to sunlight. However, the question arises whether sun-shunning behaviour might affect general health. During the last decades, new favourable associations between sunlight and disease have been discovered. There is growing observational and experimental evidence that regular exposure to sunlight contributes to the prevention of colon, breast, and prostate cancer, non-Hodgkin lymphoma, multiple sclerosis, hypertension, and diabetes. Initially, these beneficial effects were ascribed to vitamin D. Recently it became evident that immunomodulation, the formation of nitric oxide, melatonin, serotonin, and the effect of (sun)light on circadian clocks are involved as well. In Europe (above 50 degrees north latitude), the risk of skin cancer (particularly melanoma) is mainly caused by an intermittent pattern of exposure, while regular exposure confers a relatively low risk. The available data on the negative and positive effects of sun exposure are discussed. Considering these data, we hypothesize that regular sun exposure benefits health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Enumeration of Extended m-Regular Linear Stacks.

    PubMed

    Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2016-12-01

    The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points of a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. Denote the generating functions of the m-regular linear stacks and the extended m-regular linear stacks by S_m(x) and T_m(x), respectively. We show that T_m(x) can be written as a rational function of S_m(x). For a given m, by eliminating S_m(x), we obtain an equation satisfied by T_m(x) and derive the asymptotic formula for the number of extended m-regular linear stacks of length n.

  6. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  7. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    PubMed

    Seligmann, Hervé

    2016-01-01

    Previous mass-spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs in which polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules: nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 times rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with more than 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
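
    A swinger rule is simply a bijection on the nucleotide alphabet, so it is two lines of code; the toy sequence and switch point below are invented for illustration:

        # One symmetric exchange (A <-> C) and one asymmetric cycle (A -> C -> G -> A);
        # a chimeric transcript switches from the regular to the swinger rule mid-sequence.
        SYMMETRIC_AC = str.maketrans("ACGT", "CAGT")
        ASYM_ACG = str.maketrans("ACGT", "CGAT")

        seq = "ATGGCACTAAAGC"
        swinger = seq.translate(ASYM_ACG)
        chimera = seq[:6] + seq[6:].translate(ASYM_ACG)  # regular part + swinger part
        print(seq, swinger, chimera, sep="\n")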

  8. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and they lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). In the numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data-term regularization schemes, with L1 and L2 penalty terms on the normal-derivative constraint (labelled L1TV and L1L2), were compared with L2-norm data-term schemes (Tikhonov with zero-order and normal-derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore the L1-norm data-term solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for practical ECG inverse solutions.
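
    The iteratively reweighted norm idea for an L1 data term reduces to reweighted least squares; a minimal sketch for min_x ||Ax - b||_1 + λ||x||² (a zero-order penalty for brevity, where the paper also uses normal-derivative constraints), with illustrative λ and weight floor:

        import numpy as np

        def irls_l1(A, b, lam=1e-2, eps=1e-6, iters=30):
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(iters):
                w = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # reweighting of residuals
                AW = A * w[:, None]
                x = np.linalg.solve(A.T @ AW + lam * np.eye(A.shape[1]), AW.T @ b)
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(60, 10))
        x_true = rng.normal(size=10)
        b = A @ x_true
        b[::7] += 5.0                                          # gross outliers ("lost electrodes")
        print(np.linalg.norm(irls_l1(A, b) - x_true))          # small despite the outliers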

  9. A perturbative correction for electron-inertia in magnetized sheath structures

    NASA Astrophysics Data System (ADS)

    Gohain, Munmi; Karmakar, Pralay K.

    2016-10-01

    We propose a hydrodynamic model to study the equilibrium properties of planar plasma sheaths in two-component quasi-neutral magnetized plasmas. It includes weak but finite electron inertia, incorporated via a regular perturbation of the electron fluid dynamics in a new smallness parameter, δ, measuring the ratio of inertial to electromagnetic strengths. The zeroth-order perturbation in δ leads to the usual Boltzmann distribution law, which describes inertialess thermalized electrons. The next higher order yields a modified Boltzmann law describing the lowest-order electron-inertial correction, which is applied to derive the local Bohm criterion for sheath formation. The criterion is found to be influenced jointly by electron-inertial corrections, the magnetic field, and the field orientation relative to the bulk plasma flow. We establish that the combined action of electron inertia and gyro-kinetic effects slightly raises the ion-flow Mach threshold (typically, M_i0 ⩾ 1.140) above the normal value of unity at the sheath entrance. A numerical illustrative scheme is methodically constructed to show the parametric dependence of the new sheath properties on the various problem parameters. The merits and demerits are highlighted in light of existing results, together with clear indications for future improvements.

  10. Force and moment rotordynamic coefficients for pump-impeller shroud surfaces

    NASA Technical Reports Server (NTRS)

    Childs, Dara W.

    1987-01-01

    Governing equations of motion are derived for a bulk-flow model of the leakage path between an impeller shroud and a pump housing. The governing equations consist of a path-momentum, a circumferential-momentum, and a continuity equation. The fluid annulus between the impeller shroud and pump housing is assumed to be circumferentially symmetric when the impeller is centered; i.e., the clearance can vary along the pump axis but does not vary in the circumferential direction. A perturbation expansion of the governing equations in the eccentricity ratio yields a set of zeroth- and first-order governing equations. The zeroth-order equations define the leakage rate and the circumferential and path velocity and pressure distributions for a centered impeller position. The first-order equations define the perturbations in the velocity and pressure distributions due to either a radial-displacement perturbation or a tilt perturbation of the impeller. Integration of the perturbed pressure and shear-stress distributions acting on the rotor yields the reaction forces and moments acting on the impeller face.

  11. Cylinder and metal grating polarization beam splitter

    NASA Astrophysics Data System (ADS)

    Yang, Junbo; Xu, Suzhi

    2017-08-01

    We propose a novel and compact metal-grating polarization beam splitter (PBS) based on the grating's different reflected and transmitted orders. The metal grating exhibits broadband high reflectivity and polarization dependence. Rigorous coupled-wave analysis is used to calculate the reflection and transmission spectra and to optimize the structure parameters for a broadband PBS. The finite-element method is used to calculate the field distribution. The characteristics of broadband high reflectivity, transmission, and polarization dependence are investigated as functions of wavelength, period, refractive index, and the radius of the circular grating. For a grating period d = 400 nm, incident wavelength λ = 441 nm, incident angle θ = 60°, and circle radius d/5, the zeroth reflected order is R0 = 0.35 and the zeroth transmitted order is T0 = 0.08 for TE polarization, whereas T0 = 0.34 and R0 = 0.01 for the TM mode. The simple fabrication method involves only a single etch step and good compatibility with complementary metal-oxide-semiconductor technology. The PBS designed here is particularly suited for optical communication and optical information processing.

  12. Class of regular bouncing cosmologies

    NASA Astrophysics Data System (ADS)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere-regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so with a single scalar field failed because of the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that the graviton mass can be made arbitrarily small.

  13. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  14. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING

    PubMed Central

    Liu, Meizhu; Vemuri, Baba C.

    2011-01-01

    Boosting is a versatile machine learning technique with numerous applications, including but not limited to image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost, and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian-distance-regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form), which represent the distribution over the training data and the classification error respectively, to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computation than other regularized boosting algorithms. We present several experimental results comparing our algorithm to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database, and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
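
    The closed-form distance at the heart of the regularizer is the Fisher-Rao geodesic distance between square-root densities on the unit sphere, d(p, q) = arccos(Σ_i √(p_i q_i)), up to a convention-dependent constant factor; the toy distributions below are illustrative, and the boosting update itself is not reproduced:

        import numpy as np

        def fisher_rao(p, q):
            bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
            return np.arccos(np.clip(bc, -1.0, 1.0))

        p = np.array([0.5, 0.3, 0.2])            # distribution over training data
        q = np.array([0.4, 0.4, 0.2])            # error-derived target distribution
        print(fisher_rao(p, q))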

  15. Nonlinear second order evolution inclusions with noncoercive viscosity term

    NASA Astrophysics Data System (ADS)

    Papageorgiou, Nikolaos S.; Rădulescu, Vicenţiu D.; Repovš, Dušan D.

    2018-04-01

    In this paper we deal with a second order nonlinear evolution inclusion, with a nonmonotone, noncoercive viscosity term. Using a parabolic regularization (approximation) of the problem and a priori bounds that permit passing to the limit, we prove that the problem has a solution.

  16. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can account for only very limited classes of images. A more expressive sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block-matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scanning, verify that the proposed reconstruction method offers promising capabilities over conventional regularization.
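
    The block-matching step that gives BMSR its sparse groups is easy to sketch: for a reference patch, collect the most similar patches into a stack on which sparsity is then enforced; patch size, search set, and group size below are illustrative choices:

        import numpy as np

        def match_blocks(img, ref_xy, p=4, n_match=8):
            H, W = img.shape
            ry, rx = ref_xy
            ref = img[ry:ry + p, rx:rx + p].ravel()
            cands = [(np.sum((img[y:y + p, x:x + p].ravel() - ref) ** 2), y, x)
                     for y in range(H - p + 1) for x in range(W - p + 1)]
            cands.sort(key=lambda t: t[0])       # nearest patches first
            return np.stack([img[y:y + p, x:x + p].ravel() for _, y, x in cands[:n_match]])

        rng = np.random.default_rng(0)
        group = match_blocks(rng.normal(size=(32, 32)), (10, 10))
        print(group.shape)  # (8, 16): a group of similar blocks, ready for a sparsifying transform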

  17. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path ε approaches 0, and the convergence rate is estimated.

  18. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  19. Stress stiffening and approximate equations in flexible multibody dynamics

    NASA Technical Reports Server (NTRS)

    Padilla, Carlos E.; Vonflotow, Andreas H.

    1993-01-01

    A useful model for open chains of flexible bodies undergoing large rigid-body motions but small elastic deformations is one in which the equations of motion are linearized in the small elastic deformations and deformation rates. For slow rigid-body motions, the correctly linearized, or consistent, set of equations can be compared to prematurely linearized, or inconsistent, equations and to 'oversimplified,' or ruthless, equations through open-loop dynamic simulations. It has been shown that the inconsistent model should never be used, while the ruthless model should be used whenever possible. The consistent and inconsistent models differ by stress-stiffening terms, which arise from zeroth-order stresses contributing virtual work through nonlinear strain-displacement terms. In this paper we examine in detail the nature of these stress-stiffening terms and conclude that they are significant only when the associated zeroth-order stresses approach 'buckling' stresses. Finally, it is emphasized that when the stress-stiffening terms are negligible, the ruthlessly linearized equations should be used.

  20. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance with respect to parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.

  1. Regular Motions of Resonant Asteroids

    NASA Astrophysics Data System (ADS)

    Ferraz-Mello, S.

    1990-11-01

    This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter, and the corotation orbits. Key words: ASTEROIDS

  2. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, and (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, but not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) that alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  3. Generalized Israel junction conditions for a fourth-order brane world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcerzak, Adam; Dabrowski, Mariusz P.

    2008-01-15

    We discuss a general fourth-order theory of gravity on the brane. In general, the formulation of the junction conditions (except for Euler characteristics such as the Gauss-Bonnet term) leads to higher powers of the delta function and requires regularization. We suggest a way to avoid this problem by imposing the metric and its first derivative to be regular at the brane, the second derivative to have a kink, the third derivative of the metric to have a step function discontinuity, and only the fourth derivative of the metric to give the delta function contribution to the field equations. Alternatively, we discuss the reduction of the fourth-order gravity to a second-order theory by introducing an extra tensor field. We formulate the appropriate junction conditions on the brane. We prove the equivalence of both theories. In particular, we prove the equivalence of the junction conditions with different assumptions related to the continuity of the metric along the brane.

  4. Marangoni bubble motion in zero gravity. [Lewis zero gravity drop tower

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.; Dewitt, K. J.

    1979-01-01

    It was shown experimentally that the Marangoni phenomenon is a primary mechanism for the movement of a gas bubble in a nonisothermal liquid in a low gravity environment. A mathematical model consisting of the Navier-Stokes and thermal energy equations, together with the appropriate boundary conditions for both media, is presented. Parameter perturbation theory is used to solve this boundary value problem; the expansion parameter is the Marangoni number. The zeroth, first, and second order approximations for the velocity, temperature and pressure distributions in the liquid and in the bubble, and the deformation and terminal velocity of the bubble are determined. Experimental zero gravity data for a nitrogen bubble in ethylene glycol, ethanol, and silicone oil subjected to a linear temperature gradient were obtained using the NASA Lewis zero gravity drop tower. Comparison of the zeroth order analytical results for the bubble terminal velocity showed good agreement with the experimental measurements. The first and second order solutions for the bubble deformation and bubble terminal velocity are valid for liquids having Prandtl numbers on the order of one, but there is a lack of appropriate data to test the theory fully.

  5. Real-time approximate optimal guidance laws for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Speyer, Jason L.; Feeley, Timothy; Hull, David G.

    1989-01-01

    An approach to optimal ascent guidance for a launch vehicle is developed using an expansion technique. The problem is to maximize the payload put into orbit subject to the equations of motion of a rocket over a rotating spherical earth. It is assumed that the thrust and gravitational forces dominate over the aerodynamic forces. It is shown that these forces can be separated by a small parameter epsilon, where epsilon is the ratio of the atmospheric scale height to the radius of the earth. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in a series where the zeroth-order term (epsilon = 0) can be obtained in closed form. The zeroth-order problem is that of putting maximum payload into orbit subject to the equations of motion of a rocket in a vacuum over a flat earth. The neglected inertial and aerodynamic terms are included in higher order terms of the expansion, which are determined from the solution of first-order linear partial differential equations requiring only quadrature integrations. These quadrature integrations can be performed rapidly, so that real-time approximate optimization can be used to construct the launch guidance law.

  6. Joint MR-PET reconstruction using a multi-channel image regularizer

    PubMed Central

    Koesters, Thomas; Otazo, Ricardo; Bredies, Kristian; Sodickson, Daniel K

    2016-01-01

    While current state-of-the-art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in-vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy. PMID:28055827

  7. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
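
    As a toy illustration of the quadratic-programming formulation, the sketch below regularizes a handful of 1D element coordinates: a fidelity term keeps elements near their input positions while soft equality constraints snap together coordinates detected as nearly aligned. The detection rule (difference below a tolerance) and all numbers are illustrative stand-ins for the paper's automatic constraint detection.

```python
# Hedged sketch: layout regularization as a soft-constrained
# least-squares (QP) problem in 1D. tol and weight are assumptions.
import numpy as np

def regularize_positions(x, tol=5.0, weight=10.0):
    x = np.asarray(x, float)
    n = len(x)
    rows, rhs = [np.eye(n)], [x]                     # fidelity block: x_new ~ x
    for i in range(n):
        for j in range(i + 1, n):
            if abs(x[i] - x[j]) < tol:               # automatically detected constraint
                r = np.zeros((1, n))
                r[0, i], r[0, j] = weight, -weight   # soft equality x_i = x_j
                rows.append(r)
                rhs.append(np.zeros(1))
    M, v = np.vstack(rows), np.concatenate(rhs)
    x_new, *_ = np.linalg.lstsq(M, v, rcond=None)
    return x_new

# First two and last two elements snap to shared coordinates:
print(regularize_positions([10.0, 12.0, 48.0, 50.5]))
```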

  8. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, the analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
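
    The split Bregman machinery the abstract mentions can be illustrated on the simpler first-order TV denoising problem; the sketch below is that reduced analogue (the paper's model is second-order TGV with a Mumford-Shah edge term, which is not reproduced here). It alternates an FFT-based quadratic solve, shrinkage (generalized soft thresholding), and Bregman updates; all parameters are illustrative assumptions.

```python
# Hedged sketch: split Bregman for first-order TV denoising,
#   min_u 0.5*||u - f||^2 + lam*||grad u||_1,
# with periodic boundaries so the quadratic subproblem is exact via FFT.
import numpy as np

def shrink(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tv_split_bregman(f, lam=0.1, mu=1.0, iters=50):
    gx = lambda u: np.roll(u, -1, 1) - u             # forward differences (periodic)
    gy = lambda u: np.roll(u, -1, 0) - u
    gxT = lambda p: np.roll(p, 1, 1) - p             # their adjoints
    gyT = lambda p: np.roll(p, 1, 0) - p
    ny, nx = f.shape
    wy = 2 - 2 * np.cos(2 * np.pi * np.arange(ny) / ny)
    wx = 2 - 2 * np.cos(2 * np.pi * np.arange(nx) / nx)
    denom = 1.0 + mu * (wy[:, None] + wx[None, :])   # spectrum of I + mu*grad^T grad
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    u = f.copy()
    for _ in range(iters):
        rhs = f + mu * (gxT(dx - bx) + gyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))  # quadratic subproblem
        dx = shrink(gx(u) + bx, lam / mu)            # generalized soft thresholding
        dy = shrink(gy(u) + by, lam / mu)
        bx = bx + gx(u) - dx                         # Bregman updates
        by = by + gy(u) - dy
    return u

rng = np.random.default_rng(1)
noisy = np.clip(np.eye(32) + 0.2 * rng.standard_normal((32, 32)), 0, 1)
denoised = tv_split_bregman(noisy)
```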

  9. Generalization Analysis of Fredholm Kernel Regularized Classifiers.

    PubMed

    Gong, Tieliang; Xu, Zongben; Chen, Hong

    2017-07-01

    Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.

  10. Using lean methodology to improve efficiency of electronic order set maintenance in the hospital.

    PubMed

    Idemoto, Lori; Williams, Barbara; Blackmore, Craig

    2016-01-01

    Order sets, a series of orders focused around a diagnosis, condition, or treatment, can reinforce best practice, help eliminate outdated practice, and provide clinical guidance. However, order sets require regular updates as evidence and care processes change. We undertook a quality improvement intervention applying lean methodology to create a systematic process for order set review and maintenance. Root cause analysis revealed challenges with unclear prioritization of requests, lack of coordination between teams, and lack of communication between producers and requestors of order sets. In March of 2014, we implemented a systematic, cyclical order set review process, with a set schedule, defined responsibilities for various stakeholders, formal meetings and communication between stakeholders, and transparency of the process. We first identified and deactivated 89 order sets which were infrequently used. Between March and August 2014, 142 order sets went through the new review process. The mean build time for order sets decreased from 79.6 to 43.2 days (p<.001, CI=22.1, 50.7). Applying lean production principles to the order set review process resulted in significant improvement in processing time and increased quality of orders. As use of order sets and other forms of clinical decision support increases, regular evidence and process updates become more critical.

  11. Using volcano plots and regularized-chi statistics in genetic association studies.

    PubMed

    Li, Wentian; Freudenberg, Jan; Suh, Young Ju; Yang, Yaning

    2014-02-01

    Labor intensive experiments are typically required to identify the causal disease variants from a list of disease associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds-ratio (OR) and the p-value, may rank variants in different orders. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, a volcano plot is a scatter plot of fold-change against the t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signal from variants with lower minor-allele frequencies than the chi-square statistic does. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes. Copyright © 2013 Elsevier Ltd. All rights reserved.
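
    A worked micro-example of the two volcano-plot coordinates is sketched below: the odds ratio and the signed square root of Pearson's chi-square statistic, computed from a 2x2 allele-by-status table. The counts are made up for illustration; the smooth regularized-chi filtering curve itself is not reproduced.

```python
# Hedged worked example of the volcano-plot coordinates from a
# made-up 2x2 allele-by-status contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[120,  80],    # cases:    minor / major allele
                  [ 90, 110]])   # controls: minor / major allele
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
chi2, p, _, _ = chi2_contingency(table, correction=False)
chi = np.sign(np.log(odds_ratio)) * np.sqrt(chi2)   # signed root of Pearson chi-square
print(f"OR={odds_ratio:.2f}, chi={chi:.2f}, p={p:.3g}")
# A regularized-chi style filter would keep markers above a smooth
# curve in the (log OR, chi) plane rather than applying two
# independent right-angled cutoffs.
```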

  12. Phase-shifting point diffraction interferometer grating designs

    DOEpatents

    Naulleau, Patrick; Goldberg, Kenneth Alan; Tejnil, Edita

    2001-01-01

    In a phase-shifting point diffraction interferometer, by sending the zeroth-order diffraction to the reference pinhole of the mask and the first-order diffraction to the test beam window of the mask, the test and reference beam intensities can be balanced and the fringe contrast improved. Additionally, using a duty cycle of the diffraction grating other than 50%, the fringe contrast can also be improved.

  13. Cooperation in the noisy case: Prisoner's dilemma game on two types of regular random graphs

    NASA Astrophysics Data System (ADS)

    Vukov, Jeromos; Szabó, György; Szolnoki, Attila

    2006-06-01

    We have studied an evolutionary prisoner’s dilemma game with players located on two types of random regular graphs with a degree of 4. The analysis is focused on the effects of payoffs and noise (temperature) on the maintenance of cooperation. When varying the noise level and/or the highest payoff, the system exhibits a second-order phase transition from a mixed state of cooperators and defectors to an absorbing state where only defectors remain alive. For the random regular graph (and Bethe lattice) the behavior of the system is similar to those found previously on the square lattice with nearest neighbor interactions, although the measure of cooperation is enhanced by the absence of loops in the connectivity structure. For low noise the optimal connectivity structure is built up from randomly connected triangles.

  14. A space-frequency multiplicative regularization for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as that of their spectral contents. Furthermore, it has the merit of requiring no preliminary definition of a regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. In particular, it is pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.

  15. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
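
    For context, the baseline the study compares against can be sketched compactly: generalized cross-validation for a linearized Tikhonov problem, evaluated cheaply through the SVD. The MRM criterion itself is not reproduced; J, y and the candidate parameter grid are toy assumptions.

```python
# Hedged sketch: generalized cross-validation (GCV) for choosing the
# Tikhonov parameter of a linearized problem J x = y, via the SVD.
import numpy as np

def gcv_tikhonov(J, y, lambdas):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    beta = U.T @ y
    best, best_score = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2) + (y @ y - beta @ beta)
        dof = len(y) - np.sum(f)                    # trace(I - influence matrix)
        score = resid / dof**2                      # GCV function G(lambda)
        if score < best_score:
            best, best_score = lam, score
    return best

rng = np.random.default_rng(0)
J = rng.standard_normal((60, 40)) / 10
y = J @ rng.standard_normal(40) + 0.01 * rng.standard_normal(60)
print(gcv_tikhonov(J, y, np.logspace(-4, 0, 50)))
```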

  16. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  17. Exploring the spectrum of regularized bosonic string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambjørn, J., E-mail: ambjorn@nbi.dk; Makeenko, Y., E-mail: makeenko@nbi.dk

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  18. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  19. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results for semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  20. 77 FR 52771 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Order Approving a Proposed Rule Change To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ..., for stocks listed on the New York Stock Exchange LLC (the ``NYSE''), regular session orders can be... relative to other orders on the EDGA Book. The proposed rule change was published for comment in the... Exchange proposed to add a new order type, the Route Peg Order.\\5\\ A Route Peg Order would be a non...

  1. 77 FR 52773 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Order Approving a Proposed Rule Change To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... listed on the New York Stock Exchange LLC (the ``NYSE''), regular session orders can be posted to the... relative to other orders on the EDGX Book. The proposed rule change was published for comment in the... to add a new order type, the Route Peg Order.\\5\\ A Route Peg Order would be a non-displayed limit...

  2. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2016-07-01

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  3. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α, then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  4. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    ERIC Educational Resources Information Center

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  5. 32 CFR 724.211 - Regularity of government affairs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Regularity of government affairs. 724.211 Section 724.211 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY PERSONNEL NAVAL DISCHARGE REVIEW BOARD Authority/Policy for Departmental Discharge Review § 724.211 Regularity of government...

  6. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  7. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regards to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data

  8. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is disabled...

  9. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Disability for any regular employment, defined... RETIREMENT ACT DETERMINING DISABILITY Disability Under the Railroad Retirement Act for Any Regular Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is disabled...

  10. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
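
    A minimal sketch of the optimization step is given below: FISTA applied to a weighted l1-regularized least-squares problem over a concatenated dictionary of smooth (trigonometric) and impact-like (rectangular) atoms. The dictionary, data and weights are toy stand-ins, and the BIC-based choice of the regularization parameter is not reproduced.

```python
# Hedged FISTA sketch for the weighted l1 step of the MFI formulation:
#   min_c 0.5*||D c - y||^2 + ||w * c||_1  (elementwise weights w).
import numpy as np

def fista_weighted_l1(D, y, w, iters=300):
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    c = z = np.zeros(D.shape[1])
    t = 1.0
    for _ in range(iters):
        g = D.T @ (D @ z - y)            # gradient of the data term at z
        v = z - g / L
        c_new = np.sign(v) * np.maximum(np.abs(v) - w / L, 0.0)  # weighted shrinkage
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = c_new + ((t - 1) / t_new) * (c_new - c)              # Nesterov momentum
        c, t = c_new, t_new
    return c

rng = np.random.default_rng(2)
n = 128
tt = np.arange(n) / n
D = np.column_stack([np.sin(2 * np.pi * k * tt) for k in (1, 2, 3)]
                    + [(tt > a) & (tt < a + 0.05) for a in (0.3, 0.6)]).astype(float)
y = 2 * D[:, 1] + 1.5 * D[:, 4] + 0.05 * rng.standard_normal(n)
coef = fista_weighted_l1(D, y, w=0.05 * np.ones(D.shape[1]))
print(np.round(coef, 2))                 # harmonic atom 2 and impact atom 2 dominate
```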

  11. Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brambilla, N.; Prosperi, G.M.

    1992-08-01

    We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than resorting to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.

  12. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the trade-off between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
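
    The SVD machinery referred to above can be made concrete with a small sketch: the Tikhonov solution written in filter-factor form, plus the model resolution matrix R = V F V^T, whose diagonal indicates how well each parameter is resolved. The kernel, data and damping value are toy assumptions, not the study's surface-wave matrices.

```python
# Hedged sketch: damped least-squares (Tikhonov) solution and model
# resolution matrix via the SVD.
import numpy as np

rng = np.random.default_rng(3)
G = np.vander(np.linspace(0, 1, 30), 8, increasing=True)  # mildly ill-posed toy kernel
m_true = np.array([1, -2, 0, 0, 3, 0, 0, 0], float)
d = G @ m_true + 1e-3 * rng.standard_normal(30)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
lam = 1e-2
F = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors
m_hat = Vt.T @ (F / s * (U.T @ d))           # regularized estimate
R = Vt.T @ (F[:, None] * Vt)                 # resolution matrix R = V diag(F) V^T
print(np.round(np.diag(R), 2))               # 1.0 = perfectly resolved parameter
```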

  13. Selected Characteristics, Classified & Unclassified (Regular) Students; Community Colleges, Fall 1978.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. Community Coll. System.

    Fall 1978 enrollment data for Hawaii's community colleges and data on selected characteristics of students enrolled in regular credit programs are presented. Of the 27,880 registrants, 74% were regular students, 1% were early admittees, 6% were registered in non-credit apprenticeship programs, and 18% were in special programs. Regular student…

  14. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
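
    A numeric illustration of the likelihood-ratio rule is sketched below under the simplest equal-variance Gaussian assumptions (an assumption of this sketch; the paper analyzes four distributional models). Responding "old" when the likelihood ratio exceeds 1 places the criterion midway between the two distributions, and the mirror effect follows: stronger conditions give more hits and fewer false alarms.

```python
# Hedged illustration: equal-variance Gaussian signal detection with
# decisions made at likelihood ratio 1. Parameter values are made up.
import numpy as np
from scipy.stats import norm

def rates(d_prime):
    c = d_prime / 2.0                      # log LR crosses 0 at the midpoint
    hit = 1 - norm.cdf(c, loc=d_prime)     # P("old" | old item)
    fa = 1 - norm.cdf(c, loc=0.0)          # P("old" | new item)
    return hit, fa

for d in (1.0, 2.0):                       # weak vs strong study condition
    h, f = rates(d)
    print(f"d'={d}: hit rate={h:.3f}, false-alarm rate={f:.3f}")
# Hits rise and false alarms fall together as strength grows: the mirror effect.
```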

  15. The Essential Special Education Guide for the Regular Education Teacher

    ERIC Educational Resources Information Center

    Burns, Edward

    2007-01-01

    The Individuals with Disabilities Education Act (IDEA) of 2004 has placed a renewed emphasis on the importance of the regular classroom, the regular classroom teacher and the general curriculum as the primary focus of special education. This book contains over 100 topics that deal with real issues and concerns regarding the regular classroom and…

  16. Revisiting HgCl2: A solution- and solid-state 199Hg NMR and ZORA-DFT computational study

    NASA Astrophysics Data System (ADS)

    Taylor, R. E.; Carver, Colin T.; Larsen, Ross E.; Dmitrenko, Olga; Bai, Shi; Dybowski, C.

    2009-07-01

    The 199Hg chemical-shift tensor of solid HgCl2 was determined from spectra of polycrystalline materials, using static and magic-angle spinning (MAS) techniques at multiple spinning frequencies and field strengths. The chemical-shift tensor of solid HgCl2 is axially symmetric (η = 0) within experimental error. The 199Hg chemical-shift anisotropy (CSA) of HgCl2 in a frozen solution in dimethylsulfoxide (DMSO) is significantly smaller than that of the solid, implying that the local electronic structure in the solid is different from that of the material in solution. The experimental chemical-shift results (solution and solid state) are compared with those predicted by density functional theory (DFT) calculations using the zeroth-order regular approximation (ZORA) to account for relativistic effects. 199Hg spin-lattice relaxation of HgCl2 dissolved in DMSO is dominated by a CSA mechanism, but a second contribution to relaxation arises from ligand exchange. Relaxation in the solid state is independent of temperature, suggesting relaxation by paramagnetic impurities or defects.

  17. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...

  18. 20 CFR 216.13 - Regular current connection test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Regular current connection test. 216.13... ELIGIBILITY FOR AN ANNUITY Current Connection With the Railroad Industry § 216.13 Regular current connection test. An employee has a current connection with the railroad industry if he or she meets one of the...

  19. Borderline personality disorder and regularly drinking alcohol before sex.

    PubMed

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. A BPD diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of the nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regular drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use.
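
    As a back-of-envelope companion to the per-criterion estimate above, the snippet below simply compounds an adjusted odds ratio of 1.20 across endorsed criteria; it is arithmetic on the reported coefficient, not a reanalysis of the data.

```python
# Multiplicative compounding of the reported per-criterion adjusted
# odds ratio (AOR = 1.20): each additional endorsed BPD criterion
# multiplies the odds of regularly drinking before sex by 1.20.
aor_per_criterion = 1.20
for k in (1, 3, 5, 9):
    print(f"{k} criteria: odds multiplied by {aor_per_criterion ** k:.2f}")
# e.g. 5 endorsed criteria -> odds multiplied by ~2.49 relative to none.
```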

  20. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem and suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.

  1. Manufacture of Regularly Shaped Sol-Gel Pellets

    NASA Technical Reports Server (NTRS)

    Leventis, Nicholas; Johnston, James C.; Kinder, James D.

    2006-01-01

    An extrusion batch process for manufacturing regularly shaped sol-gel pellets has been devised as an improved alternative to a spray process that yields irregularly shaped pellets. The aspect ratio of regularly shaped pellets can be controlled more easily, while regularly shaped pellets pack more efficiently. In the extrusion process, a wet gel is pushed out of a mold and chopped repetitively into short, cylindrical pieces as it emerges from the mold. The pieces are collected and can be either (1) dried at ambient pressure to xerogel, (2) solvent exchanged and dried under ambient pressure to ambigels, or (3) supercritically dried to aerogel. Advantageously, the extruded pellets can be dropped directly in a cross-linking bath, where they develop a conformal polymer coating around the skeletal framework of the wet gel via reaction with the cross linker. These pellets can be dried to mechanically robust X-Aerogel.

  2. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  3. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle the color image channels independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restoration of color images under different blurring conditions.

  4. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  5. On split regular Hom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Albuquerque, Helena; Barreiro, Elisabete; Calderón, A. J.; Sánchez, José M.

    2018-06-01

    We introduce the class of split regular Hom-Lie superalgebras as the natural extension of the classes of split Hom-Lie algebras and Lie superalgebras, and study its structure by showing that an arbitrary split regular Hom-Lie superalgebra L is of the form L = U + ∑_j I_j, with U a linear subspace of a maximal abelian graded subalgebra H and each I_j a well-described (split) ideal of L satisfying [I_j, I_k] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized, and it is shown that L is the direct sum of the family of its simple ideals.

  6. 20 CFR 226.33 - Spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  7. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  8. Female non-regular workers in Japan: their current status and health.

    PubMed

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient.

  9. Female non-regular workers in Japan: their current status and health

    PubMed Central

    INOUE, Mariko; NISHIKITANI, Mariko; TSURUGANO, Shinobu

    2016-01-01

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient. PMID:27818453

  10. The relationship between lifestyle regularity and subjective sleep quality

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument-the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity-and a questionnaire instrument--the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant (rho = -0.4, p < 0.001) correlation between SRM (both metrics) and PSQI, indicating that subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
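
    A minimal sketch of the reported correlation analysis on synthetic data (the scores and effect size below are invented for illustration; only the sign and rough magnitude of the association mirror the study):

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n = 100
        srm = rng.uniform(1.5, 6.5, n)                   # hypothetical SRM-17 lifestyle-regularity scores
        # Higher regularity -> lower PSQI (fewer sleep problems), plus noise; PSQI range 0-21.
        psqi = np.clip(10 - 1.2 * srm + rng.normal(0, 2, n), 0, 21).round()

        rho, p = spearmanr(srm, psqi)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # expect a negative rho, as in the study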

  11. Boundary Regularity for the Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana

    2018-05-01

    We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem, which establishes a barrier characterization of regular boundary points for general—not necessarily cylindrical—domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.

  12. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  13. Broadband mode conversion via gradient index metamaterials

    PubMed Central

    Wang, HaiXiao; Xu, YaDong; Genevet, Patrice; Jiang, Jian-Hua; Chen, HuanYang

    2016-01-01

    We propose a design for broadband waveguide mode conversion based on gradient index metamaterials (GIMs). Numerical simulations demonstrate that the zeroth order of transverse magnetic mode or the first order of transverse electric mode (TM0/TE1) can be converted into the first order of transverse magnetic mode or the second order of transverse electric mode (TM1/TE2) for a broadband of frequencies. As an application, an asymmetric propagation is achieved by integrating zero index metamaterials inside the GIM waveguide. PMID:27098456

  14. The United States Regular Education Initiative: Flames of Controversy.

    ERIC Educational Resources Information Center

    Lowenthal, Barbara

    1990-01-01

    Arguments in favor of and against the Regular Education Initiative (REI) are presented. Lack of appropriate qualifications of regular classroom teachers and a lack of empirical evidence on REI effectiveness are cited as some of the problems with the approach. (JDD)

  15. Molecular heterotopy in the expression of Brachyury orthologs in order Clypeasteroida (irregular sea urchins) and order Echinoida (regular sea urchins).

    PubMed

    Hibino, Taku; Harada, Yoshito; Minokawa, Takuya; Nonaka, Masaru; Amemiya, Shonan

    2004-11-01

    The expression patterns of Brachyury (Bra) orthologs in the development of four species of sand dollars (order: Clypeasteroida), including a direct-developing species, and of a sea urchin species (order: Echinoida) were investigated during the period from blastula to the pluteus stage, with special attention paid to the relationship between the expression pattern and the mode of gastrulation. The sand dollar species shared two expression domains of the Bra orthologs with the Echinoida species, in the vegetal ring (the first domain) and the oral ectoderm (the second domain). The following heterotopic changes in the expression of the Bra genes were found among the sand dollar species and between the sand dollars and the Echinoida species. (1) The vegetal ring expressing Bra in the sand dollars was much wider and was located at a higher position along the AV axis, compared with that in the Echinoida species. The characteristic Bra expression in the vegetal ring of the sand dollar embryos was thought to be involved in the mode of gastrulation, in which involution continues from the beginning of invagination until the end of gastrulation. (2) Two of the three indirect-developing sand dollar species that were examined exhibited a third domain, in which Bra was expressed on the oral side of the archenteron. (3) In the direct-developing sand dollar embryos, Bra was expressed with an oral-aboral asymmetry in the vegetal ring and with a left-right asymmetry in the oral ectoderm. In the Echinoida species, Bra was expressed in the vestibule at the six-armed pluteus stage.

  16. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than
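
    For orientation, the sketch below fits a Laplace-prior (L1) sparse logistic regression with scikit-learn on synthetic data; this is the SLogReg-style formulation in which the regularization strength C = 1/lambda still has to be tuned, the very step that BLogReg eliminates by integrating lambda out analytically. The dataset and parameter values are hypothetical.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        # Synthetic stand-in for a gene-expression matrix: 60 samples x 500 "genes".
        X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                                   n_redundant=0, random_state=0)

        # Laplace prior = L1 penalty; C = 1/lambda would normally be chosen by
        # cross-validation, which is the model-selection step BLogReg removes.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

        selected = np.flatnonzero(clf.coef_)
        print(f"{selected.size} genes selected out of {X.shape[1]}")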

  17. Remarks on regular black holes

    NASA Astrophysics Data System (ADS)

    Nicolini, Piero; Smailagic, Anais; Spallucci, Euro

    Recently, it has been claimed by Chinaglia and Zerbini that the curvature singularity is present even in the so-called regular black hole solutions of the Einstein equations. In this brief note, we show that this criticism is devoid of any physical content.

  18. Self-Organized Bistability Associated with First-Order Phase Transitions

    NASA Astrophysics Data System (ADS)

    di Santo, Serena; Burioni, Raffaella; Vezzani, Alessandro; Muñoz, Miguel A.

    2016-06-01

    Self-organized criticality elucidates the conditions under which physical and biological systems tune themselves to the edge of a second-order phase transition, with scale invariance. Motivated by the empirical observation of bimodal distributions of activity in neuroscience and other fields, we propose and analyze a theory for the self-organization to the point of phase coexistence in systems exhibiting a first-order phase transition. It explains the emergence of regular avalanches with attributes of scale invariance that coexist with huge anomalous ones, with realizations in many fields.

  19. Quasinormal modes of gravitational perturbation around regular Bardeen black hole surrounded by quintessence

    NASA Astrophysics Data System (ADS)

    Saleh, Mahamat; Thomas, Bouetou Bouetou; Kofane, Timoleon Crepin

    2018-04-01

    In this paper, quasinormal modes of gravitational perturbation are investigated for the regular Bardeen black hole surrounded by quintessence. Considering the metric of the Bardeen spacetime surrounded by quintessence, we derived the perturbation equation for gravitational perturbation using the Regge-Wheeler gauge. The third-order Wentzel-Kramers-Brillouin (WKB) approximation method is used to evaluate the quasinormal frequencies. Explicitly, the behaviors of the black hole potential and the quasinormal modes were plotted. The results show that, due to the presence of quintessence, the gravitational perturbation around the black hole damps more slowly and oscillates more slowly.

  20. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Railroad Retirement Act based on disability for any regular employment. Regular employment means... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Evaluation of disability for any regular employment. 220.100 Section 220.100 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  1. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Railroad Retirement Act based on disability for any regular employment. Regular employment means... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Evaluation of disability for any regular employment. 220.100 Section 220.100 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  2. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Railroad Retirement Act based on disability for any regular employment. Regular employment means... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Evaluation of disability for any regular employment. 220.100 Section 220.100 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  3. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Railroad Retirement Act based on disability for any regular employment. Regular employment means... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Evaluation of disability for any regular employment. 220.100 Section 220.100 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  4. 20 CFR 220.100 - Evaluation of disability for any regular employment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Railroad Retirement Act based on disability for any regular employment. Regular employment means... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Evaluation of disability for any regular employment. 220.100 Section 220.100 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE...

  5. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
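
    A minimal sketch of the variable-splitting idea described above, assuming a generic linear forward model (the toy transfer matrix and lambda are invented, and this plain projected-gradient loop omits the paper's acceleration details): writing x = u - v with u, v >= 0 makes the l1 penalty differentiable and turns the problem into a bound-constrained quadratic program.

        import numpy as np

        def l1_gradient_projection(A, b, lam, n_iter=500):
            """min_x ||Ax - b||^2 + lam*||x||_1 via the splitting x = u - v, u, v >= 0,
            solved as a bound-constrained quadratic program by projected gradient descent."""
            m, n = A.shape
            u, v = np.zeros(n), np.zeros(n)
            L = 4.0 * np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the joint (u, v) gradient
            for _ in range(n_iter):
                g = 2.0 * A.T @ (A @ (u - v) - b)          # gradient w.r.t. x
                u = np.maximum(u - (g + lam) / L, 0.0)     # gradient step + projection onto u >= 0
                v = np.maximum(v - (-g + lam) / L, 0.0)    # gradient step + projection onto v >= 0
            return u - v

        rng = np.random.default_rng(1)
        A = rng.normal(size=(40, 120))                     # toy torso-to-epicardium transfer matrix
        x_true = np.zeros(120)
        x_true[[5, 50, 90]] = [3.0, -2.0, 1.5]             # both positive and negative potentials
        b = A @ x_true + 0.01 * rng.normal(size=40)
        x_hat = l1_gradient_projection(A, b, lam=0.5)
        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))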

  6. Lipschitz regularity for integro-differential equations with coercive Hamiltonians and application to large time behavior

    NASA Astrophysics Data System (ADS)

    Barles, Guy; Ley, Olivier; Topp, Erwin

    2017-02-01

    In this paper, we provide suitable adaptations of the ‘weak version of Bernstein method’ introduced by the first author in 1991, in order to obtain Lipschitz regularity results and Lipschitz estimates for nonlinear integro-differential elliptic and parabolic equations set in the whole space. Our interest is to obtain such Lipschitz results to possibly degenerate equations, or to equations which are indeed ‘uniformly elliptic’ (maybe in the nonlocal sense) but which do not satisfy the usual ‘growth condition’ on the gradient term allowing to use (for example) the Ishii-Lions’ method. We treat the case of a model equation with a superlinear coercivity on the gradient term which has a leading role in the equation. This regularity result together with comparison principle provided for the problem allow to obtain the ergodic large time behavior of the evolution problem in the periodic setting.

  7. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AGNC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG.We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinity-regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules

  8. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, high noise of experimental measurements, and an insufficient number of measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
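
    A small sketch of the linear case under stated assumptions: the rank constraint is relaxed to a nuclear-norm penalty and solved by proximal gradient descent, whose proximal operator soft-thresholds the singular values of the regulator-to-target connectivity matrix. Data shapes and lambda are illustrative only, not the authors' settings.

        import numpy as np

        def nuclear_prox(W, tau):
            """Proximal operator of tau*||.||_*: soft-threshold the singular values."""
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def low_rank_regression(X, Y, lam, n_iter=300):
            """min_W ||XW - Y||_F^2 + lam*||W||_* by proximal gradient descent; the nuclear
            norm caps the number of independent connectivity patterns (the rank of W)."""
            L = 2.0 * np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the smooth part
            W = np.zeros((X.shape[1], Y.shape[1]))
            for _ in range(n_iter):
                grad = 2.0 * X.T @ (X @ W - Y)
                W = nuclear_prox(W - grad / L, lam / L)
            return W

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 20))                                  # 100 samples, 20 regulators
        W_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 50))    # rank-2 regulatory program
        Y = X @ W_true + 0.1 * rng.normal(size=(100, 50))               # 50 target genes
        W_hat = low_rank_regression(X, Y, lam=5.0)
        print("estimated rank:", np.linalg.matrix_rank(W_hat, tol=1e-3))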

  9. Soliton solutions to the fifth-order Korteweg-de Vries equation and their applications to surface and internal water waves

    NASA Astrophysics Data System (ADS)

    Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.

    2018-02-01

    We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).

  10. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    PubMed

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings

  11. Subcortical processing of speech regularities underlies reading and music aptitude in children

    PubMed Central

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input

  12. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.

  13. Combustion Instability in Solid Propellant Rockets

    DTIC Science & Technology

    1989-03-21

    adverse pressure gradients may arise. As suggested in Figure 5.1, the volume behind a submerged nozzle is especially likely to exhibit recirculation... ranges of interest. Therefore, the axial vortical velocity is governed to zeroth order in the mean flow Mach number M_b, with corrections of O(M_b).

  14. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  15. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
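
    Of the three methods compared, Tikhonov regularization is the easiest to sketch; the snippet below applies it in the Fourier domain to a toy blurred image (the phantom, PSF width, and alpha are hypothetical, and this is not the authors' ForWaRD code).

        import numpy as np

        def tikhonov_deconvolve(blurred, psf, alpha=1e-2):
            """Tikhonov-regularized deconvolution in the Fourier domain:
            X = conj(H) * B / (|H|^2 + alpha)."""
            H = np.fft.fft2(psf, s=blurred.shape)
            B = np.fft.fft2(blurred)
            return np.real(np.fft.ifft2(np.conj(H) * B / (np.abs(H) ** 2 + alpha)))

        # Toy phase-contrast-like scene: a disc blurred by a Gaussian "source size" PSF.
        n = 128
        yy, xx = np.mgrid[:n, :n]
        img = (((xx - n / 2) ** 2 + (yy - n / 2) ** 2) < (n / 4) ** 2).astype(float)
        psf = np.exp(-(((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 1.5 ** 2)))
        psf = np.fft.ifftshift(psf / psf.sum())        # center the PSF at the origin for FFT use
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
        blurred += 0.01 * np.random.default_rng(3).normal(size=blurred.shape)

        restored = tikhonov_deconvolve(blurred, psf, alpha=1e-2)
        print("residual RMS:", np.sqrt(np.mean((restored - img) ** 2)))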

  16. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of a 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction with the PAPA method.
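
    As a simplified illustration, the sketch below soft-thresholds orthonormal DCT coefficients, which is the exact proximal step for an ℓ1 penalty on an orthogonal DCT; the paper instead uses a non-decimated (redundant) DCT frame inside PAPA, so treat this as a stand-in for a single inner step only. The phantom and threshold are invented.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_l1_denoise(img, lam):
            """Proximal step for lam*||DCT(img)||_1 with an orthonormal DCT:
            transform, soft-threshold the coefficients, transform back."""
            c = dctn(img, norm="ortho")
            c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
            return idctn(c, norm="ortho")

        rng = np.random.default_rng(4)
        phantom = np.zeros((64, 64))
        phantom[20:30, 20:30] = 1.0                           # hot "lesion"
        noisy = phantom + 0.2 * rng.normal(size=phantom.shape)  # stand-in for a noisy background
        print("RMS error before/after:",
              np.sqrt(np.mean((noisy - phantom) ** 2)),
              np.sqrt(np.mean((dct_l1_denoise(noisy, 0.15) - phantom) ** 2)))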

  17. A Jeziorski-Monkhorst fully uncontracted multi-reference perturbative treatment. I. Principles, second-order versions, and tests on ground state potential energy curves

    NASA Astrophysics Data System (ADS)

    Giner, Emmanuel; Angeli, Celestino; Garniron, Yann; Scemama, Anthony; Malrieu, Jean-Paul

    2017-06-01

    The present paper introduces a new multi-reference perturbation approach developed at second order, based on a Jeziorski-Monkhorst expansion using individual Slater determinants as perturbers. Thanks to this choice of perturbers, an effective Hamiltonian may be built, allowing for the dressing of the Hamiltonian matrix within the reference space, assumed here to be a CAS-CI. Such a formulation then accounts for the coupling between the static and dynamic correlation effects. With our new definition of zeroth-order energies, these two approaches are strictly size-extensive provided that local orbitals are used, as numerically illustrated here and formally demonstrated in the Appendix. Also, the present formalism allows for the factorization of all double excitation operators, just as in internally contracted approaches, strongly reducing the computational cost of these two approaches with respect to other determinant-based perturbation theories. The accuracy of these methods has been investigated on ground-state potential curves up to full dissociation limits for a set of six molecules involving single, double, and triple bond breaking, together with an excited-state calculation. The spectroscopic constants obtained with the present methods are found to be in very good agreement with the full configuration interaction results. As the present formalism does not use any parameter or numerically unstable operation, the curves obtained with the two methods are smooth all along the dissociation path.

  18. 5 CFR 532.203 - Structure of regular wage schedules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Structure of regular wage schedules. 532.203 Section 532.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.203 Structure of regular wage schedules. (a...

  19. Endemic infections are always possible on regular networks

    NASA Astrophysics Data System (ADS)

    Del Genio, Charo I.; House, Thomas

    2013-10-01

    We study the dependence of the largest component in regular networks on the clustering coefficient, showing that its size changes smoothly without undergoing a phase transition. We explain this behavior via an analytical approach based on the network structure, and provide an exact equation describing the numerical results. Our work indicates that intrinsic structural properties always allow the spread of epidemics on regular networks.

  20. Extreme depth-of-field intraocular lenses

    NASA Astrophysics Data System (ADS)

    Baker, Kenneth M.

    1996-05-01

    A new technology brings the full-aperture single-vision pseudophakic eye's effective hyperfocal distance within the half-meter range. A modulated-index IOL containing a subsurface zeroth-order coherent microlenticular mosaic defined by an index gradient adds a normalizing function to the vergences or parallactic angles of incoming light rays subtended from field object points and redirects them, in the case of near-field images, to that of far-field images. Along with a scalar reduction of the IOL's linear focal range, this results in an extreme depth of field with a narrow depth of focus and avoids the focal split-up, halo, and inherent reduction in contrast of multifocal IOLs. A high microlenticular spatial frequency, while still retaining an anisotropic medium, results in nearly total zeroth-order propagation throughout the visible spectrum. The curved lens surfaces still provide most of the refractive power of the IOL, and the unique holographic fabrication technology is especially suitable not only for IOLs but also for contact lenses, artificial corneas, and miniature lens elements for cameras and other optical devices.

  1. Kinetic theory of binary particles with unequal mean velocities and non-equipartition energies

    NASA Astrophysics Data System (ADS)

    Chen, Yanpei; Mei, Yifeng; Wang, Wei

    2017-03-01

    The hydrodynamic conservation equations and constitutive relations for a binary granular mixture composed of smooth, nearly elastic spheres with non-equipartition energies and different mean velocities are derived. This research aims to build a three-dimensional kinetic theory to characterize the behaviors of two species of particles subject to different forces. The standard Enskog method is employed, assuming a Maxwell velocity distribution for each species of particles. The collision components of the stress tensor and the other parameters are calculated from the zeroth- and first-order approximation. Our results demonstrate that three factors, namely the differences between the two granular masses, temperatures, and mean velocities, all play important roles in the stress-strain relation of the binary mixture, indicating that the assumption of energy equipartition and the same mean velocity may not be acceptable. The collision frequency and the solid viscosity increase monotonically with each granular temperature. The zeroth-order approximation to the energy dissipation varies greatly with the mean velocities of both species of spheres, reaching its peak value at the maximum of their relative velocity.

  2. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  3. Influence of spatial configurations on electromagnetic interference shielding of ordered mesoporous carbon/ordered mesoporous silica/silica composites

    PubMed Central

    Wang, Jiacheng; Zhou, Hu; Zhuang, Jiandong; Liu, Qian

    2013-01-01

    Ordered mesoporous carbons (OMCs), obtained by nanocasting using ordered mesoporous silicas (OMSs) as hard templates, exhibit unique arrangements of ordered regular nanopore/nanowire mesostructures. Here, we used nanocasting combined with hot-pressing to prepare 10 wt% OMC/OMS/SiO2 ternary composites possessing various carbon mesostructure configurations of different dimensionalities (1D isolated CS41 carbon nanowires, 2D hexagonal CMK-3 carbon, and 3D cubic CMK-1 carbon). The electric/dielectric properties and electromagnetic interference (EMI) shielding efficiency (SE) of the composites were influenced by spatial configurations of carbon networks. The complex permittivity and the EMI SE of the composites in the X-band frequency range decreased for the carbon mesostructures in the following order: CMK-3-filled > CMK-1-filled > CS41-filled. Our study provides technical directions for designing and preparing high-performance EMI shielding materials. Our OMC-based silica composites can be used for EMI shielding, especially in high-temperature or corrosive environments, owing to the high stability of the OMC/OMS fillers and the SiO2 matrix. Related shielding mechanisms are also discussed. PMID:24248277

  4. Characterization of Regular Wave, Irregular Wave, and Large-Amplitude Wave Group Kinematics in an Experimental Basin

    DTIC Science & Technology

    2011-02-01

    seakeeping was the transient wave technique, developed analytically by Davis and Zarnick (1964). At the David Taylor Model Basin, Davis and Zarnick, and...Gersten and Johnson (1969) applied the transient wave technique to regular wave model experiments for heave and pitch, at zero forward speed. These...tests demonstrated a potential reduction by an order of magnitude of the total necessary testing time. The transient wave technique was also applied to

  5. 76 FR 3629 - Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the public.

  6. Production of Supra-regular Spatial Sequences by Macaque Monkeys.

    PubMed

    Jiang, Xinjian; Long, Tenghai; Cao, Weicong; Li, Junru; Dehaene, Stanislas; Wang, Liping

    2018-06-18

    Understanding and producing embedded sequences in language, music, or mathematics, is a central characteristic of our species. These domains are hypothesized to involve a human-specific competence for supra-regular grammars, which can generate embedded sequences that go beyond the regular sequences engendered by finite-state automata. However, is this capacity truly unique to humans? Using a production task, we show that macaque monkeys can be trained to produce time-symmetrical embedded spatial sequences whose formal description requires supra-regular grammars or, equivalently, a push-down stack automaton. Monkeys spontaneously generalized the learned grammar to novel sequences, including longer ones, and could generate hierarchical sequences formed by an embedding of two levels of abstract rules. Compared to monkeys, however, preschool children learned the grammars much faster using a chunking strategy. While supra-regular grammars are accessible to nonhuman primates through extensive training, human uniqueness may lie in the speed and learning strategy with which they are acquired. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce (a, η) type to describe the convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution. In particular, the regularity is best near angular points.

  8. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated (objective) regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  9. 75 FR 1057 - Farm Credit Administration Board; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... FARM CREDIT ADMINISTRATION Farm Credit Administration Board; Regular Meeting AGENCY: Farm Credit Administration. SUMMARY: Notice is hereby given, pursuant to the Government in the Sunshine Act (5 U.S.C. 552b(e)(3)), of the regular meeting of the Farm Credit Administration Board (Board). Date and Time: The...

  10. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

    Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using some ideas coming from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.

  11. Stark broadening parameter regularities and interpolation and critical evaluation of data for CP star atmospheres research: Stark line shifts

    NASA Astrophysics Data System (ADS)

    Dimitrijevic, M. S.; Tankosic, D.

    1998-04-01

    In order to find out if regularities and systematic trends found to be apparent among experimental Stark line shifts allow the accurate interpolation of new data and critical evaluation of experimental results, the exceptions to the established regularities are analysed on the basis of critical reviews of experimental data, and reasons for such exceptions are discussed. We found that such exceptions are mostly due to the situations when: (i) the energy gap between atomic energy levels within a supermultiplet is equal or comparable to the energy gap to the nearest perturbing levels; (ii) the most important perturbing level is embedded between the energy levels of the supermultiplet; (iii) the forbidden transitions have influence on Stark line shifts.

  12. Adding statistical regularity results in a global slowdown in visual search.

    PubMed

    Vaskevich, Anna; Luria, Roy

    2018-05-01

    Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
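
    A minimal sketch of the Monte-Carlo ingredient under stated assumptions (i.i.d. Gaussian noise, a real-valued signal, and a black-box reconstruction operator; the soft-threshold "reconstruction" and all numbers are illustrative): the divergence of the operator, which SURE needs, is estimated from one probe vector without looking inside the algorithm.

        import numpy as np

        def mc_divergence(recon, y, eps=1e-3, rng=None):
            """Monte-Carlo divergence estimate: div f(y) ~ b^T (f(y + eps*b) - f(y)) / eps,
            with b a Rademacher probe; only black-box access to recon is needed."""
            rng = rng or np.random.default_rng(0)
            b = rng.choice([-1.0, 1.0], size=y.shape)
            return np.vdot(b, recon(y + eps * b) - recon(y)).real / eps

        def sure(recon, y, sigma):
            """SURE for i.i.d. Gaussian noise of std sigma (unbiased estimate of MSE*N):
            ||f(y) - y||^2 - N*sigma^2 + 2*sigma^2*div f(y)."""
            f = recon(y)
            return (np.linalg.norm(f - y) ** 2 - y.size * sigma ** 2
                    + 2.0 * sigma ** 2 * mc_divergence(recon, y))

        # Pick a soft-threshold level by minimizing SURE, without using the true signal.
        rng = np.random.default_rng(5)
        x = np.zeros(256)
        x[::32] = 5.0                                    # sparse ground truth
        sigma = 1.0
        y = x + sigma * rng.normal(size=x.size)
        for t in (0.5, 1.0, 2.0, 3.0):
            recon = lambda z, t=t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
            print(f"threshold {t}: SURE = {sure(recon, y, sigma):8.1f}, "
                  f"true SSE = {np.linalg.norm(recon(y) - x) ** 2:8.1f}")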

  14. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
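
    The contrast between the two window functions can be reproduced in one dimension; a sketch (grid size, patch count, and patch width are arbitrary, and randomly placed patches may overlap):

        import numpy as np

        n, n_patches, patch = 4096, 32, 16               # grid size, observed patches, patch width

        regular = np.zeros(n)
        for s in np.arange(n_patches) * (n // n_patches):
            regular[s:s + patch] = 1.0                   # regularly spaced patches

        rng = np.random.default_rng(6)
        random_mask = np.zeros(n)
        for s in rng.choice(n - patch, n_patches, replace=False):
            random_mask[s:s + patch] = 1.0               # randomly placed patches

        for name, mask in (("regular", regular), ("random", random_mask)):
            p = np.abs(np.fft.rfft(mask)) ** 2           # window-function power
            peak = p[1:].max() / p[0]                    # sharpest off-zero feature, normalized
            print(f"{name:8s} mask: largest secondary window peak = {peak:.3f}")
        # The regular mask shows sharp periodic spikes (correlations at specific scales);
        # the random mask spreads the same power smoothly across all scales.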

  15. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.

  16. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
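
    A compact sketch of the Soft-Impute iteration on a toy low-rank matrix (sizes, rank, and lambda are illustrative; the paper's implementation additionally exploits sparse-plus-low-rank structure to compute the SVD cheaply, which this sketch does not):

        import numpy as np

        def soft_impute(M, mask, lam, n_iter=100):
            """Soft-Impute: iteratively fill the missing entries with values from a
            soft-thresholded SVD of the current completed matrix."""
            Z = np.zeros_like(M)
            for _ in range(n_iter):
                filled = np.where(mask, M, Z)            # observed entries + current guesses
                U, s, Vt = np.linalg.svd(filled, full_matrices=False)
                Z = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
            return Z

        rng = np.random.default_rng(7)
        truth = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 50))   # rank-4 ground truth
        mask = rng.random(truth.shape) < 0.4                          # 40% of entries observed
        M = np.where(mask, truth, 0.0)
        Z = soft_impute(M, mask, lam=1.0)
        err = np.linalg.norm((Z - truth)[~mask]) / np.linalg.norm(truth[~mask])
        print(f"relative error on unobserved entries: {err:.3f}")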

  17. 12 CFR 311.5 - Regular procedure for closing meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Regular procedure for closing meetings. 311.5 Section 311.5 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE RULES GOVERNING PUBLIC OBSERVATION OF MEETINGS OF THE CORPORATION'S BOARD OF DIRECTORS § 311.5 Regular...

  18. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
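
    The abstract describes an l1-regularized least-squares problem over dictionary coefficients. The sketch below solves the same kind of objective with plain ISTA rather than the authors' SpaRSA solver; the transfer matrix H, the Dirac dictionary D, and the impact force are illustrative stand-ins:

```python
import numpy as np

def ista_l1(A, y, lam, n_iters=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by ISTA; x holds the
    dictionary coefficients of the unknown force."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        v = x - grad / L                   # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x

# Stand-in problem: y is the measured response, H the transfer matrix,
# D the Dirac dictionary; the identified force is f = D @ coeffs.
rng = np.random.default_rng(1)
H = rng.normal(size=(200, 200))
D = np.eye(200)                            # Dirac dictionary
f_true = np.zeros(200); f_true[[50, 120]] = [3.0, -2.0]   # two impacts
y = H @ f_true + 0.01 * rng.normal(size=200)
coeffs = ista_l1(H @ D, y, lam=0.1)
```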

  19. Singular perturbation and time scale approaches in discrete control systems

    NASA Technical Reports Server (NTRS)

    Naidu, D. S.; Price, D. B.

    1988-01-01

    After considering a singularly perturbed discrete control system, a singular perturbation approach is used to obtain outer and correction subsystems. A time scale approach is then applied via block diagonalization transformations to decouple the system into slow and fast subsystems. To a zeroth-order approximation, the singular perturbation and time-scale approaches are found to yield equivalent results.

  20. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science, in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low

  1. The neural substrates of impaired finger tapping regularity after stroke.

    PubMed

    Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude

    2010-03-01

    Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  2. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  3. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  4. Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.

    PubMed

    Kim, Hwi; Hahn, Joonku; Lee, Byoungho

    2009-04-13

    Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions of structural factors such as the viewing angle of the facet panel and the observation distance for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.

  5. Four Data Based Objections to the Regular Education Initiative.

    ERIC Educational Resources Information Center

    Anderegg, M. L.; Vergason, Glenn A.

    One of the changes advocated by the Regular Education Initiative (REI) is the placement of all students with disabilities in regular education classes. This paper analyzes this REI proposal and discusses four objections, with citations to relevant literature: (1) restriction of the continuum of services, which may result in students being put…

  6. Inclusion Professional Development Model and Regular Middle School Educators

    ERIC Educational Resources Information Center

    Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo

    2014-01-01

    The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…

  7. Metric for strong intrinsic fourth-order phonon anharmonicity

    NASA Astrophysics Data System (ADS)

    Yue, Sheng-Ying; Zhang, Xiaoliang; Qin, Guangzhao; Phillpot, Simon R.; Hu, Ming

    2017-05-01

    Under the framework of a Taylor series expansion of the potential energy, we propose a simple and robust metric, dubbed "regular residual analysis," to measure the fourth-order phonon anharmonicity in crystals. The method is verified by studying the intrinsic strong higher-order anharmonic effects in UO2 and CeO2. Comparison of the thermal conductivity results, which are calculated by the anharmonic lattice dynamics method coupled with the Boltzmann transport equation and by the spectral energy density method coupled with ab initio molecular dynamics simulation, further validates our analysis. Analysis of the bulk Si and Ge systems confirms that the fourth-order phonon anharmonicity is enhanced and cannot be neglected at high enough temperatures, which agrees with a previous study where the four-phonon scattering was explicitly determined. This metric will facilitate evaluating and interpreting the lattice thermal conductivity of crystals with strong fourth-order phonon anharmonicity.

  8. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
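
    A minimal sketch of the discrete-domain regularizer discussed above: build a similarity graph over the samples, form its combinatorial Laplacian L, and denoise by solving min_x ||x - y||^2 + lam * x^T L x, whose minimizer is the solution of a linear system. The Gaussian edge weights and all parameters here are illustrative assumptions, not the paper's locally adaptive metric:

```python
import numpy as np

def graph_laplacian_denoise(y, coords, lam=5.0, sigma=0.1):
    """Denoise y by solving (I + lam * L) x = y, where L is the
    combinatorial Laplacian of a Gaussian-weighted similarity graph
    built on the sample locations."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian edge weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # combinatorial graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

# Toy usage: a noisy 1-D signal sampled at irregular locations
rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=80)
x_hat = graph_laplacian_denoise(y, t[:, None], lam=2.0, sigma=0.05)
```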

  9. Who uses height-adjustable desks? - Sociodemographic, health-related, and psycho-social variables of regular users.

    PubMed

    Wallmann-Sperlich, Birgit; Bipp, Tanja; Bucksch, Jens; Froboese, Ingo

    2017-03-06

    Sit-to-stand height-adjustable desks (HADs) may promote workplace standing, as long as workers use them on a regular basis. The aim of this study was (i) to investigate how common HADs are among German desk-based workers and how frequently they are used, (ii) to identify sociodemographic, health-related, and psycho-social variables of workday sitting, including having an HAD, and (iii) to analyse sociodemographic, health-related, and psycho-social variables of users and non-users of HADs. A cross-sectional sample of 680 participants (51.9% men; 41.0 ± 13.1 years) in a desk-based occupation was interviewed by telephone about their occupational sitting and standing proportions and their having and usage of an HAD, and answered questions concerning psycho-social variables of occupational sitting. The proportion of workday sitting was calculated for participants having an HAD (n = 108) and not having an HAD (n = 573), as well as for regular users of HADs (n = 54) and irregular/non-users of HADs (n = 54). Linear regressions were conducted to calculate associations between sociodemographic, health-related, and psycho-social variables and having/not having an HAD, and the proportion of workday sitting. Logistic regressions were executed to examine the association of the mentioned variables with participants' usage of HADs. Sixteen percent reported that they have an HAD, and 50% of these reported regular use of the HAD. Having an HAD is not a correlate of the proportion of workday sitting. Further analysis restricted to participants having an HAD available highlights that only the 'perceived advantages of sitting less' was significantly associated with HAD use in the fully adjusted model (OR 1.75 [1.09; 2.81], p < 0.05). The present findings indicate that accompanying behavioral action while providing an HAD is promising for increasing the regular usage of HADs. Hence, future research needs to address the specificity of behavioral actions in order to enhance regular HAD use, and needs

  10. A simple way to measure daily lifestyle regularity

    NASA Technical Reports Server (NTRS)

    Monk, Timothy H.; Frank, Ellen; Potts, Jaime M.; Kupfer, David J.

    2002-01-01

    A brief diary instrument to quantify daily lifestyle regularity (SRM-5) is developed and compared with a much longer version of the instrument (SRM-17) described and used previously. Three studies are described. In Study 1, SRM-17 scores (2 weeks) were collected from a total of 293 healthy control subjects (both genders) aged between 19 and 92 years. Five items (1) Get out of bed, (2) First contact with another person, (3) Start work, housework or volunteer activities, (4) Have dinner, and (5) Go to bed were then selected from the 17 items and SRM-5 scores calculated as if these five items were the only ones collected. Comparisons were made with SRM-17 scores from the same subject-weeks, looking at correlations between the two SRM measures, and the effects of age and gender on lifestyle regularity as measured by the two instruments. In Study 2 this process was repeated in a group of 27 subjects who were in remission from unipolar depression after treatment with psychotherapy and who completed SRM-17 for at least 20 successive weeks. SRM-5 and SRM-17 scores were then correlated within an individual using time as the random variable, allowing an indication of how successful SRM-5 was in tracking changes in lifestyle regularity (within an individual) over time. In Study 3 an SRM-5 diary instrument was administered to 101 healthy control subjects (both genders, aged 20-59 years) for two successive weeks to obtain normative measures and to test for correlations with age and morningness. Measures of lifestyle regularity from SRM-5 correlated quite well (about 0.8) with those from SRM-17 both between subjects, and within-subjects over time. As a detector of irregularity as defined by SRM-17, the SRM-5 instrument showed acceptable values of kappa (0.69), sensitivity (74%) and specificity (95%). There were, however, differences in mean level, with SRM-5 scores being about 0.9 units [about one standard deviation (SD)] above SRM-17 scores from the same subject-weeks. SRM-5
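
    The agreement statistics quoted above (kappa, sensitivity, specificity) come from comparing the SRM-5 irregularity classification against SRM-17 in a 2x2 table. A sketch of how such figures are computed, with illustrative counts rather than the study's data:

```python
# Agreement statistics for a short screening instrument versus a reference,
# computed from a 2x2 table (counts below are illustrative, not the study's).
tp, fp, fn, tn = 20, 4, 7, 70          # hypothetical counts
n = tp + fp + fn + tn
sensitivity = tp / (tp + fn)           # irregular cases correctly flagged
specificity = tn / (tn + fp)           # regular cases correctly passed
p_observed = (tp + tn) / n             # raw agreement
# Chance agreement from the marginal proportions of both instruments
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa={kappa:.2f}, sens={sensitivity:.0%}, spec={specificity:.0%}")
```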

  11. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  12. Extension of Strongly Regular Graphs

    DTIC Science & Technology

    2008-02-11

    [10] E.R. van Dam, W.H. Haemers. Graphs with constant μ and μ̄. Discrete Math. 182 (1998), no. 1-3, 293-307. [11] E.R. van Dam, E. Spence. Small regular graphs with four eigenvalues. Discrete Math. 189 (1998), 233-257.

  13. Academic Improvement through Regular Assessment

    ERIC Educational Resources Information Center

    Wolf, Patrick J.

    2007-01-01

    Media reports are rife with claims that students in the United States are overtested and that they and their education are suffering as result. Here I argue the opposite--that students would benefit in numerous ways from more frequent assessment, especially of diagnostic testing. The regular assessment of students serves critical educational and…

  14. Disconjugacy, regularity of multi-indexed rationally extended potentials, and Laguerre exceptional polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandati, Y.; Quesne, C.

    2013-07-15

    The power of the disconjugacy properties of second-order differential equations of Schrödinger type to check the regularity of rationally extended quantum potentials connected with exceptional orthogonal polynomials is illustrated by re-examining the extensions of the isotonic oscillator (or radial oscillator) potential derived in kth-order supersymmetric quantum mechanics or the multistep Darboux-Bäcklund transformation method. The function arising in the potential denominator is proved to be a polynomial with a nonvanishing constant term, whose value is calculated by induction over k. The sign of this term being the same as that of the already known highest degree term, the potential denominator has the same sign at both extremities of the definition interval, a property that is shared by the seed eigenfunction used in the potential construction. By virtue of disconjugacy, such a property implies the nodeless character of both the eigenfunction and the resulting potential.

  15. On solvability of boundary value problems for hyperbolic fourth-order equations with nonlocal boundary conditions of integral type

    NASA Astrophysics Data System (ADS)

    Popov, Nikolay S.

    2017-11-01

    Solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution or the conormal derivative of a solution to the values of some integral operator applied to a solution. Nonlocal boundary-value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary were not studied earlier. The existence and uniqueness theorems of regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.

  16. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  17. I Am The One And Only: Regular Magnetic Field in the IGM of Stephan's Quintet

    NASA Astrophysics Data System (ADS)

    Nikiel-Wroczyński, Błażej

    2017-10-01

    Ordered magnetic fields are generally believed not to exist in the intergalactic space of galaxy groups; on the one hand, it is known that groups undergo violent interactions that could easily disrupt the delicate fabric of a non-turbulent field; on the other hand, it was never said that the survival of such a field is an impossible occurrence. The most well-known galaxy group, Stephan's Quintet, once again turns out to be an amazing object, this time with regard to the existence of a regular magnetic field. Our new study, based on high-fidelity WSRT data, shows strong hints that a non-negligible field is present in the volume inhabited by the Quintet, and that it is a large-scale, strong, and regular one. At the moment, no other group has been found to host similar magnetic fields.

  18. C^{1,1} regularity for degenerate elliptic obstacle problems

    NASA Astrophysics Data System (ADS)

    Daskalopoulos, Panagiota; Feehan, Paul M. N.

    2016-03-01

    The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C^{1,1} regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.

  19. Particle motion and Penrose processes around rotating regular black hole

    NASA Astrophysics Data System (ADS)

    Abdujabbarov, Ahmadjon

    2016-07-01

    The motion of neutral particles around a rotating regular black hole derived from the Ayón-Beato-García (ABG) black hole solution by the Newman-Janis algorithm in the preceding paper (Toshmatov et al., Phys. Rev. D, 89:104017, 2014) has been studied. The dependence of the ISCO (innermost stable circular orbit along geodesics) and of unstable orbits on the value of the electric charge of the rotating regular black hole has been shown. Energy extraction from the rotating regular black hole through various processes has been examined. We have found an expression for the center-of-mass energy of colliding neutral particles coming from infinity, based on the BSW (Bañados-Silk-West) mechanism. The electric charge Q of the rotating regular black hole decreases the potential of the gravitational field as compared to the Kerr black hole, and the particles demonstrate less bound energy at the circular geodesics. This causes an increase in the efficiency of energy extraction through the BSW process in the presence of the electric charge Q of the rotating regular black hole. Furthermore, we have studied the particle emission due to the BSW effect, assuming that two neutral particles collide near the horizon of the rotating regular extremal black hole and produce another two particles. We have shown that the efficiency of the energy extraction is less than the value of 146.6% valid for the Kerr black hole. It has also been demonstrated that the efficiency of the energy extraction from the rotating regular black hole via the Penrose process decreases with increasing electric charge Q and is smaller than 20.7%, the value for the extreme Kerr black hole with specific angular momentum a = M.

  20. Correction of engineering servicing regularity of transport-technological machines in the operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In the article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of earlier research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered, and it is proven that the exponential model is the most appropriate for that purpose. The obtained results can be used as a separate method of engineering servicing regularity correction that takes specific operational conditions into account, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in the operational process was developed. The use of this method will allow decreasing the number of failures.
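
    As a rough illustration of the exponential dependence argued for above, a failure-rate model lambda(t) = a * exp(b * t) can be fitted to observed rates by log-linear least squares; the data here are hypothetical, not the study's:

```python
import numpy as np

# Fit lambda(t) = a * exp(b * t), failure rate versus servicing interval,
# by ordinary least squares on log(lambda) (illustrative data).
intervals = np.array([5.0, 10.0, 15.0, 20.0, 25.0])       # servicing interval
failure_rate = np.array([0.11, 0.15, 0.22, 0.31, 0.46])   # hypothetical rates
b, log_a = np.polyfit(intervals, np.log(failure_rate), 1) # slope, intercept
a = np.exp(log_a)
predicted = a * np.exp(b * intervals)                      # fitted model
```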

  1. 47 CFR 76.614 - Cable television system regular monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-137 and 225-400 MHz shall provide for a program of regular monitoring for signal leakage by... in these bands of 20 uV/m or greater at a distance of 3 meters. During regular monitoring, any leakage source which produces a field strength of 20 uV/m or greater at a distance of 3 meters in the...

  2. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    NASA Astrophysics Data System (ADS)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows the incorporation of information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
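
    A minimal sketch of the core construction described above: assemble a covariance matrix from a correlation model evaluated at mesh-cell centroids, then obtain a regularization operator by eigendecomposition. The isotropic exponential correlation model and all parameters are assumptions for illustration; the paper's operators additionally encode anisotropy and geological priors:

```python
import numpy as np

def geostat_operator(centroids, corr_len=10.0, eps=1e-6):
    """Build a covariance matrix C from an exponential correlation model
    over cell centroids, then form C^{-1/2} by eigendecomposition.
    Applying this operator in the model-regularization term penalizes
    deviations from the assumed spatial correlation structure."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)                  # exponential correlation model
    w, V = np.linalg.eigh(C + eps * np.eye(len(C)))  # jitter for stability
    return V @ np.diag(w ** -0.5) @ V.T        # C^{-1/2}

# Irregular 2-D mesh stand-in: random cell centroids in a 100 x 100 domain
rng = np.random.default_rng(2)
Wm = geostat_operator(rng.uniform(0, 100, size=(60, 2)), corr_len=20.0)
```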

  3. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
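
    The two linear schemes compared above admit compact SVD-based sketches: applied to a generic ill-conditioned matrix, they differ only in how the small singular values are filtered (hard truncation versus smooth damping). This is a generic illustration, not the paper's LS_NUFFT code:

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

def tikhonov_pinv(A, alpha):
    """Tikhonov-regularized pseudoinverse: filter factors s / (s^2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s ** 2 + alpha)) @ U.T

# Toy ill-conditioned system: a nearly dependent column
rng = np.random.default_rng(6)
A = rng.normal(size=(40, 40)); A[:, -1] = A[:, 0] + 1e-8
b = A @ np.ones(40)
x_tsvd = tsvd_pinv(A, k=30) @ b
x_tik = tikhonov_pinv(A, alpha=1e-6) @ b
```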

  4. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  5. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  6. Global regularizing flows with topology preservation for active contours and polygons.

    PubMed

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner so as to prevent a topology change long before the curve or polygon becomes close to topology change. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.

  7. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a problem dual to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm together with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for the restoration of degraded signals.
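
    For reference, the underlying problem min_x 0.5*||x - y||^2 + lam*||Dx||_1 (D the finite-difference operator) can be solved compactly through its dual, the formulation Condat's algorithm exploits. The sketch below uses a simple projected-gradient iteration on that dual rather than Condat's direct, non-iterative method:

```python
import numpy as np

def tv1d_dual(y, lam, n_iters=2000):
    """1D TV denoising via the dual: x = y - D^T z with z box-constrained
    to [-lam, lam]; projected-gradient iteration (not Condat's algorithm)."""
    z = np.zeros(len(y) - 1)              # one dual variable per difference
    tau = 0.25                            # step size <= 1 / ||D||^2 = 1/4
    for _ in range(n_iters):
        # D^T z has entries z[j-1] - z[j] (zeros beyond the ends)
        x = y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))
        # gradient ascent on the dual, then projection onto the box
        z = np.clip(z + tau * np.diff(x), -lam, lam)
    return y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))

# Toy usage: denoise a noisy step signal
rng = np.random.default_rng(7)
step = np.concatenate((np.zeros(50), np.ones(50)))
x_hat = tv1d_dual(step + 0.1 * rng.normal(size=100), lam=0.5)
```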

  8. Regular aspirin use and lung cancer risk.

    PubMed

    Moysich, Kirsten B; Menezes, Ravi J; Ronsani, Adrienne; Swede, Helen; Reid, Mary E; Cummings, K Michael; Falkner, Karen L; Loewen, Gregory M; Bepler, Gerold

    2002-11-26

    Although a large number of epidemiological studies have examined the role of aspirin in the chemoprevention of colon cancer and other solid tumors, there is a limited body of research focusing on the association between aspirin and lung cancer risk. We conducted a hospital-based case-control study to evaluate the role of regular aspirin use in lung cancer etiology. Study participants included 868 cases with primary, incident lung cancer and 935 hospital controls with non-neoplastic conditions who completed a comprehensive epidemiological questionnaire. Participants were classified as regular aspirin users if they had taken the drug at least once a week for at least one year. Results indicated that lung cancer risk was significantly lower for aspirin users compared to non-users (adjusted OR = 0.57; 95% CI 0.41-0.78). Although there was no clear evidence of a dose-response relationship, we observed risk reductions associated with greater frequency of use. Similarly, prolonged duration of use and increasing tablet years (tablets per day × years of use) was associated with reduced lung cancer risk. Risk reductions were observed in both sexes, but significant dose-response relationships were only seen among male participants. When the analyses were restricted to former and current smokers, participants with the lowest cigarette exposure tended to benefit most from the potential chemopreventive effect of aspirin. After stratification by histology, regular aspirin use was significantly associated with reduced risk of small cell lung cancer and non-small cell lung cancer. Overall, results from this hospital-based case-control study suggest that regular aspirin use may be associated with reduced risk of lung cancer.
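
    For readers unfamiliar with the reported statistic: an unadjusted odds ratio and its 95% confidence interval are computed from a 2x2 exposure table as sketched below (the counts are illustrative, not the study's; the OR of 0.57 above is covariate-adjusted):

```python
import math

# Unadjusted odds ratio and 95% CI from a 2x2 exposure table
# (hypothetical counts; rows sum to the study's 868 cases / 935 controls).
cases_exposed, cases_unexposed = 200, 668
controls_exposed, controls_unexposed = 330, 605

or_ = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)
# Standard error of log(OR): sqrt of the sum of reciprocal cell counts
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                      + 1 / controls_exposed + 1 / controls_unexposed)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```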

  9. Regular aspirin use and lung cancer risk

    PubMed Central

    Moysich, Kirsten B; Menezes, Ravi J; Ronsani, Adrienne; Swede, Helen; Reid, Mary E; Cummings, K Michael; Falkner, Karen L; Loewen, Gregory M; Bepler, Gerold

    2002-01-01

    Background Although a large number of epidemiological studies have examined the role of aspirin in the chemoprevention of colon cancer and other solid tumors, there is a limited body of research focusing on the association between aspirin and lung cancer risk. Methods We conducted a hospital-based case-control study to evaluate the role of regular aspirin use in lung cancer etiology. Study participants included 868 cases with primary, incident lung cancer and 935 hospital controls with non-neoplastic conditions who completed a comprehensive epidemiological questionnaire. Participants were classified as regular aspirin users if they had taken the drug at least once a week for at least one year. Results Results indicated that lung cancer risk was significantly lower for aspirin users compared to non-users (adjusted OR = 0.57; 95% CI 0.41–0.78). Although there was no clear evidence of a dose-response relationship, we observed risk reductions associated with greater frequency of use. Similarly, prolonged duration of use and increasing tablet years (tablets per day × years of use) was associated with reduced lung cancer risk. Risk reductions were observed in both sexes, but significant dose-response relationships were only seen among male participants. When the analyses were restricted to former and current smokers, participants with the lowest cigarette exposure tended to benefit most from the potential chemopreventive effect of aspirin. After stratification by histology, regular aspirin use was significantly associated with reduced risk of small cell lung cancer and non-small cell lung cancer. Conclusions Overall, results from this hospital-based case-control study suggest that regular aspirin use may be associated with reduced risk of lung cancer. PMID:12453317

  10. Catalytic micromotor generating self-propelled regular motion through random fluctuation.

    PubMed

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-21

    Most of the current studies on nano/microscale motors to generate regular motion have adapted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object solely made of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model by taking into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  11. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most of the current studies on nano/microscale motors to generate regular motion have adapted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object solely made of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model by taking into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  12. Policy Perspective: Meeting the Challenge of the DOE Order 436.1 Departmental Sustainability - 12527

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, Jennifer C.

    2012-07-01

    DOE's Sustainability Performance Office is working to meet sustainability goals at DOE by implementing Executive Orders, Departmental policy, the DOE Strategic Sustainability Performance Plan (SSPP) and legislation related to sustainability. Through implementation of Executive Orders, Departmental policy, the SSPP, statutory requirements and regular reporting, analysis and communication, DOE's SPO is working to maintain and expand DOE's leadership in sustainability. (authors)

  13. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
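
    A minimal sketch of the central construction above, learning with a kernelized dictionary under elastic-net regularization in coefficient space, using scikit-learn; the kernel, data, and parameters are illustrative, and this is not the authors' KENReg implementation:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics.pairwise import rbf_kernel

# Expand inputs into kernel features K[i, j] = k(x_i, x_j) and apply
# elastic-net regularization to the coefficient vector; the kernel need
# not be Mercer for this dictionary construction.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

K = rbf_kernel(X, X, gamma=0.5)                 # kernelized dictionary
model = ElasticNet(alpha=1e-3, l1_ratio=0.5).fit(K, y)
y_hat = model.predict(K)                        # f(x) = sum_j c_j k(x, x_j)
```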

  14. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  15. Regularized Laplacian determinants of self-similar fractals

    NASA Astrophysics Data System (ADS)

    Chen, Joe P.; Teplyaev, Alexander; Tsougkas, Konstantinos

    2018-06-01

    We study the spectral zeta functions of the Laplacian on fractal sets which are locally self-similar fractafolds, in the sense of Strichartz. These functions are known to meromorphically extend to the entire complex plane, and the locations of their poles, sometimes referred to as complex dimensions, are of special interest. We give examples of locally self-similar sets such that their complex dimensions are not on the imaginary axis, which allows us to interpret their Laplacian determinant as the regularized product of their eigenvalues. We then investigate a connection between the logarithm of the determinant of the discrete graph Laplacian and the regularized one.

  16. Regularity and Tresse's theorem for geometric structures

    NASA Astrophysics Data System (ADS)

    Sarkisyan, R. A.; Shandra, I. G.

    2008-04-01

    For any non-special bundle P → X of geometric structures we prove that the k-jet space J^k of this bundle with an appropriate k contains an open dense domain U_k on which Tresse's theorem holds. For every s ≥ k we prove that the pre-image π_{k,s}^{-1}(U_k) of U_k under the natural projection π_{k,s}: J^s → J^k consists of regular points. (A point of J^s is said to be regular if the orbits of the group of diffeomorphisms induced from X have locally constant dimension in a neighbourhood of this point.)

  17. More academics in regular schools? The effect of regular versus special school placement on academic skills in Dutch primary school students with Down syndrome.

    PubMed

    de Graaf, G; van Hove, G; Haveman, M

    2013-01-01

    Studies from the UK have shown that children with Down syndrome acquire more academic skills in regular education. Does this likewise hold true for the Dutch situation, even after the effect of selective placement has been taken into account? In 2006, an extensive questionnaire was sent to 160 parents of (specially and regularly placed) children with Down syndrome (born 1993-2000) in primary education in the Netherlands, with a response rate of 76%. Questions were related to the child's school history, academic and non-academic skills, intelligence quotient, parental educational level, the extent to which parents worked on academics with their child at home, and the amount of academic instructional time at school. Academic skills were predicted with the other variables as independent variables. For the children in regular schools much more time proved to be spent on academics. Academic performance appeared to be predicted reasonably well on the basis of age, non-academic skills, parental educational level and the extent to which parents worked at home on academics. However, more variance could be predicted when the total number of years that the child spent in regular education was added, especially regarding reading and, to a lesser extent, writing and math. In addition, we could prove that this finding could not be accounted for by endogeneity. Regularly placed children with Down syndrome learn more academics. However, this is not a straight consequence of inclusive placement and age alone, but is also determined by factors such as cognitive functioning, non-academic skills, parental educational level and the extent to which parents worked at home on academics. Nevertheless, it could be proven that the more advanced academic skills of the regularly placed children are not only due to selective placement. The positive effect of regular school on academics appeared to be most pronounced for reading skills. © 2011 The Authors. Journal of Intellectual Disability

  18. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median-root-prior incorporated in the ordered subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and root mean square error and similar CNR and standard deviation to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation as a proper level of regularization reduces both bias and mean square error.

  19. On the Distinction between Regular and Irregular Inflectional Morphology: Evidence from Dinka

    ERIC Educational Resources Information Center

    Ladd, D. Robert; Remijsen, Bert; Manyang, Caguor Adong

    2009-01-01

    Discussions of the psycholinguistic significance of regularity in inflectional morphology generally deal with languages in which regular forms can be clearly identified and revolve around whether there are distinct processing mechanisms for regular and irregular forms. We present a detailed description of Dinka's notoriously irregular noun number…

  20. Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2005-01-01

    For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy, characteristics that are found in both man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance happens when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.

  1. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet-based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
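
    The baseline operation underlying such schemes is wavelet-domain shrinkage. A sketch using PyWavelets is given below; the paper's contribution is to replace the single global threshold with spatially adaptive, edge-preserving regularization, which this sketch does not implement:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3, thresh=0.1):
    """Baseline wavelet-domain restoration: transform, soft-threshold the
    detail coefficients, invert."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                       # keep approximation band
    for details in coeffs[1:]:                     # (cH, cV, cD) per level
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)

# Toy usage: a noisy square image
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.1 * np.random.default_rng(8).normal(size=img.shape)
restored = wavelet_denoise(noisy)
```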

  2. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
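
    One standard computation, presumably among the five, splits the octahedron into two square pyramids of base edge a and height a/sqrt(2):

```latex
% Volume of a regular octahedron with edge a, via two square pyramids:
% each pyramid has base area a^2 and apex height h = a/\sqrt{2}.
\[
  V = 2 \cdot \frac{1}{3} a^{2} h
    = \frac{2}{3}\, a^{2} \cdot \frac{a}{\sqrt{2}}
    = \frac{\sqrt{2}}{3}\, a^{3}
    \approx 0.4714\, a^{3}.
\]
```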

  3. The Behavior of Regular Satellites during the Nice Model's Planetary Close Encounters

    NASA Astrophysics Data System (ADS)

    Nogueira, E. C.; Gomes, R. S.; Brasser, R.

    2014-10-01

    In order to explain the behavior of the regular satellites of the ice planets during the instability phase of the Nice model, we used numerical simulations to investigate the evolution of the satellite systems when these two planets experienced encounters with the gas giants. For the initial conditions we placed an ice planet in between Jupiter and Saturn, according to the evolution of Nice model simulations in a jumping-Jupiter scenario (Brasser et al. 2009). We used the MERCURY integrator (Chambers 1999) and obtained 101 successful runs that kept all planets, of which 24 were jumping-Jupiter cases. Subsequently we performed additional numerical integrations in which the ice giant that encountered a gas giant was started on the same orbit but with its regular satellites included. This is done as follows: for each of the 101 basic runs, we save the orbital elements of all objects in the integration at all close encounter events. Then we performed a backward integration to start the system 100 years before the encounter and re-enacted the forward integration with the regular satellites around the ice giant. The final orbital elements of the satellites with respect to the ice planet were used to restart the integration for the next planetary encounter. If we assume that Uranus is the ice planet that had encounters with a gas giant, we considered the satellites Miranda, Ariel, Umbriel, Titania and Oberon with their present orbits. For Neptune we introduced Triton on an orbit with a semi-major axis 15% larger than the actual one, to account for tidal decay from the LHB to the present time. We also assume that Triton was captured through binary disruption (Agnor and Hamilton 2006, Nogueira et al. 2011) and that its orbit was circularized by tides during the 500 million years before the LHB.

  4. Pointwise regularity of parameterized affine zipper fractal curves

    NASA Astrophysics Data System (ADS)

    Bárány, Balázs; Kiss, Gergely; Kolossváry, István

    2018-05-01

    We study the pointwise regularity of zipper fractal curves generated by affine mappings. Under the assumption of dominated splitting of index-1, we calculate the Hausdorff dimension of the level sets of the pointwise Hölder exponent for a subinterval of the spectrum. We give an equivalent characterization for the existence of regular pointwise Hölder exponent for Lebesgue almost every point. In this case, we extend the multifractal analysis to the full spectrum. In particular, we apply our results for de Rham’s curve.

  5. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
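
    A minimal 1D sketch of the variable-splitting idea described above: each iteration alternates a closed-form soft-thresholding step with a closed-form FFT-domain solve. The dipole kernel, operators, and all parameters are illustrative assumptions, not the authors' 3D pipeline:

    ```python
    import numpy as np

    def soft(x, t):
        # elementwise soft-thresholding: the closed-form l1 proximal step
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def admm_l1_dipole(phi, D, lam=0.05, mu=1.0, iters=50):
        """phi: measured field (1D for brevity); D: FFT of the dipole kernel."""
        n = len(phi)
        # forward-difference operator, diagonalized by the FFT (circular BCs)
        G = np.fft.fft(np.concatenate(([-1.0, 1.0], np.zeros(n - 2))))
        chi = np.zeros(n)
        z = np.zeros(n)
        b = np.zeros(n)
        denom = np.abs(D) ** 2 + mu * np.abs(G) ** 2 + 1e-12
        Phi = np.fft.fft(phi)
        for _ in range(iters):
            g = np.real(np.fft.ifft(G * np.fft.fft(chi)))   # gradient of chi
            z = soft(g + b, lam / mu)                        # shrinkage step
            rhs = np.conj(D) * Phi + mu * np.conj(G) * np.fft.fft(z - b)
            chi = np.real(np.fft.ifft(rhs / denom))          # FFT-domain solve
            g = np.real(np.fft.ifft(G * np.fft.fft(chi)))
            b = b + g - z                                    # dual update
        return chi

    # toy usage: D = ones reduces the model to plain TV denoising
    chi = admm_l1_dipole(np.sin(np.linspace(0, 6.28, 128)), np.ones(128))
    ```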

  6. Predictors of regular cigarette smoking among adolescent females: Does body image matter?

    PubMed Central

    Kaufman, Annette R.; Augustson, Erik M.

    2013-01-01

    This study examined how factors associated with body image predict regular smoking in adolescent females. Data were from the National Longitudinal Study of Adolescent Health (Add Health), a study of health-related behaviors in a nationally representative sample of adolescents in grades 7 through 12. Females in Waves I and II (n=6,956) were used for this study. Using SUDAAN to adjust for the sampling frame, univariate and multivariate analyses were performed to investigate if baseline body image factors, including perceived weight, perceived physical development, trying to lose weight, and self-esteem, were predictive of regular smoking status 1 year later. In univariate analyses, perceived weight (p<.01), perceived physical development (p<.0001), trying to lose weight (p<.05), and self-esteem (p<.0001) significantly predicted regular smoking 1 year later. In the logistic regression model, perceived physical development (p<.05), and self-esteem (p<.001) significantly predicted regular smoking. The more developed a female reported being in comparison to other females her age, the more likely she was to be a regular smoker. Lower self-esteem was predictive of regular smoking. Perceived weight and trying to lose weight failed to reach statistical significance in the multivariate model. This current study highlights the importance of perceived physical development and self-esteem when predicting regular smoking in adolescent females. Efforts to promote positive self-esteem in young females may be an important strategy when creating interventions to reduce regular cigarette smoking. PMID:18686177

  7. Asymptotic, multigroup flux reconstruction and consistent discontinuity factors

    DOE PAGES

    Trahan, Travis J.; Larsen, Edward W.

    2015-05-12

    Recent theoretical work has led to an asymptotically derived expression for reconstructing the neutron flux from lattice functions and multigroup diffusion solutions. The leading-order asymptotic term is the standard expression for flux reconstruction, i.e., it is the product of a shape function, obtained through a lattice calculation, and the multigroup diffusion solution. The first-order asymptotic correction term is significant only where the gradient of the diffusion solution is not small. Inclusion of this first-order correction term can significantly improve the accuracy of the reconstructed flux. One may define discontinuity factors (DFs) to make certain angular moments of the reconstructed flux continuous across interfaces between assemblies in 1-D. Indeed, the standard assembly discontinuity factors make the zeroth moment (scalar flux) of the reconstructed flux continuous. The inclusion of the correction term in the flux reconstruction provides an additional degree of freedom that can be used to make two angular moments of the reconstructed flux continuous across interfaces by using current DFs in addition to flux DFs. Thus, numerical results demonstrate that using flux and current DFs together can be more accurate than using only flux DFs, and that making the second angular moment continuous can be more accurate than making the zeroth moment continuous.
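
    Written schematically in our own notation (not the paper's), the reconstruction described above has the form

    ```latex
    % Asymptotic flux reconstruction, schematically:
    % f: lattice shape function, Phi: multigroup diffusion solution,
    % g dPhi/dx: first-order correction, significant where the gradient is large.
    \phi(x) \;\approx\; f(x)\,\Phi(x) \;+\; g(x)\,\frac{d\Phi}{dx}(x)
    ```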

  8. Multiple Quantum Coherences (MQ) NMR and Entanglement Dynamics in the Mixed-Three-Spin XXX Heisenberg Model with Single-Ion Anisotropy

    NASA Astrophysics Data System (ADS)

    Hamid, Arian Zad

    2016-12-01

    We analytically investigate multiple quantum (MQ) NMR dynamics in a mixed-three-spin (1/2,1,1/2) system with the XXX Heisenberg model in an external homogeneous magnetic field B. A single-ion anisotropy property ζ is considered for the spin-1. The dependence of the intensities of MQ NMR coherences on their orders (zeroth and second orders) is obtained for the two spin pairs (1,1/2) and (1/2,1/2) of the tripartite system. We also investigate the dynamics of the pairwise quantum entanglement of the bipartite (sub)systems (1,1/2) and (1/2,1/2), permanently coupled by the coupling constants J_1 and J_2 respectively, by means of concurrence and fidelity. Some straightforward comparisons are then made between these quantities and the intensities of the MQ NMR coherences, and some interesting results are reported. We also show that the time evolution of MQ coherences based on the reduced density matrix of the spin pair (1,1/2) is closely connected with the dynamics of the pairwise entanglement. Finally, we show that the zeroth-order MQ coherence corresponding to the spin pair (1,1/2) can serve as an entanglement witness at some special time intervals.

  9. An ERP study of regular and irregular English past tense inflection.

    PubMed

    Newman, Aaron J; Ullman, Michael T; Pancheva, Roumyana; Waligura, Diane L; Neville, Helen J

    2007-01-01

    Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs ("regulars") is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, "irregular" English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over midline parietal sites (P600s), and seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory.

  10. Navigation System Design and State Estimation for a Small Rigid Hull Inflatable Boat (RHIB)

    DTIC Science & Technology

    2014-09-01

    addition of the Coriolis term as previously defined has no effect on pitch, only one measurement is compared against Condor's true pitch angle values ... the effect of higher order terms. Lastly, the zeroth weight of the scaled weight set can be modified to incorporate prior knowledge of the

  11. Metastable Behavior for Bootstrap Percolation on Regular Trees

    NASA Astrophysics Data System (ADS)

    Biskup, Marek; Schonmann, Roberto H.

    2009-08-01

    We examine bootstrap percolation on a regular (b+1)-ary tree with initial law given by Bernoulli(p). The sites are updated according to the usual rule: a vacant site becomes occupied if it has at least θ occupied neighbors; occupied sites remain occupied forever. It is known that, when b > θ ≥ 2, the limiting density q = q(p) of occupied sites exhibits a jump at some p_T = p_T(b, θ) ∈ (0,1) from q_T := q(p_T) < 1 to q(p) = 1 when p > p_T. We investigate the metastable behavior associated with this transition. Explicitly, we pick p = p_T + h with h > 0 and show that, as h ↓ 0, the system lingers around the "critical" state for time of order h^{-1/2} and then passes to the fully occupied state in time O(1). The law of the entire configuration observed when the occupation density is q ∈ (q_T, 1) converges, as h ↓ 0, to a well-defined measure.
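
    A quick numerical sketch of the kind of tree recursion that underlies such results: on a b-ary subtree, the probability x that a site eventually becomes occupied satisfies a fixed-point equation, and the full-tree density is a closely related function of x (we print x as a proxy). The recursion and all constants below are illustrative conventions, not the paper's exact setup:

    ```python
    # Fixed point of x = p + (1 - p) * P[Bin(b, x) >= theta] on a b-ary subtree.
    from math import comb

    def binom_tail(n, k, x):
        # P[Bin(n, x) >= k]
        return sum(comb(n, j) * x**j * (1 - x) ** (n - j) for j in range(k, n + 1))

    def occupied_density(p, b=3, theta=2, iters=200):
        x = p
        for _ in range(iters):
            x = p + (1 - p) * binom_tail(b, theta, x)
        return x

    # scanning p reveals the discontinuous jump in the occupied density
    for p in (0.02, 0.05, 0.08, 0.12):
        print(p, round(occupied_density(p), 4))
    ```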

  12. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in the fields of civil and military, especially for the low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes image by total variation (TV) and/or wavelet or some other transform which introduce some artifacts. To compress these shortages, a new framework for single image SR is proposed by utilizing an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first and then only the high frequency (HF) part, which is sparser in TV and transform domain, is considered as the regularization term. Concretely, through transforming the original model, the SR question can be solved by two alternate iteration sub-problems. Before each iteration, the adaptive filter should be updated to estimate the initial HF. A high quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In experimental part, a set of remote sensing images captured by Landsat satellites are tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.

  13. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
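
    The notion of firing regularity here is usually quantified by the coefficient of variation (CV) of interspike intervals; a quick illustration with gamma-renewal spike trains, where the gamma model and all constants are assumptions of this sketch rather than the paper's setup:

    ```python
    # Gamma-distributed interspike intervals with shape kappa > 1 have
    # CV = 1/sqrt(kappa) < 1, i.e. firing more regular than Poisson (CV = 1).
    import numpy as np

    rng = np.random.default_rng(0)
    rate = 10.0  # spikes per second

    for kappa in (1.0, 4.0, 16.0):  # kappa = 1 recovers the Poisson process
        isi = rng.gamma(shape=kappa, scale=1.0 / (kappa * rate), size=100_000)
        cv = isi.std() / isi.mean()
        print(f"kappa={kappa:5.1f}  CV={cv:.3f}  (theory {1/np.sqrt(kappa):.3f})")
    ```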

  14. Sudden emergence of q-regular subgraphs in random graphs

    NASA Astrophysics Data System (ADS)

    Pretti, M.; Weigt, M.

    2006-07-01

    We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≈ 3.3546 and immediately contain about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≈ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.

  15. Application of thermodynamics to silicate crystalline solutions

    NASA Technical Reports Server (NTRS)

    Saxena, S. K.

    1972-01-01

    A review of thermodynamic relations is presented, describing Guggenheim's regular solution models, the simple mixture, the zeroth approximation, and the quasi-chemical model. The possibilities of retrieving useful thermodynamic quantities from phase equilibrium studies are discussed. Such quantities include the activity-composition relations and the free energy of mixing in crystalline solutions. Theory and results of the study of partitioning of elements in coexisting minerals are briefly reviewed. A thermodynamic study of the intercrystalline and intracrystalline ion exchange relations gives useful information on the thermodynamic behavior of the crystalline solutions involved. Such information is necessary for the solution of most petrogenic problems and for geothermometry. Thermodynamic quantities for tungstates (CaWO4-SrWO4) are calculated.
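
    For reference, the simple-mixture ("zeroth approximation") form of the regular solution model reviewed above reads, in standard notation with W the interchange energy and x_1 + x_2 = 1:

    ```latex
    % Zeroth approximation (simple mixture) for a binary crystalline solution:
    \Delta G^{\mathrm{mix}} = RT\,(x_1 \ln x_1 + x_2 \ln x_2) + W\, x_1 x_2,
    \qquad RT \ln \gamma_1 = W\, x_2^{2}
    ```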

  16. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  17. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  18. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  19. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  20. Local Regularity Analysis with Wavelet Transform in Gear Tooth Failure Detection

    NASA Astrophysics Data System (ADS)

    Nissilä, Juhani

    2017-09-01

    Diagnosing gear tooth and bearing failures in industrial power transmission situations has been studied a lot, but challenges still remain. This study aims to look at the problem from a more theoretical perspective. Our goal is to find out if the local regularity, i.e. the smoothness of the measured signal, can be estimated from the vibrations of epicyclic gearboxes and if the regularity can be linked to the meshing events of the gear teeth. Previously it has been shown that the decreasing local regularity of the measured acceleration signals can reveal the inner race faults in slowly rotating bearings. The local regularity is estimated from the modulus maxima ridges of the signal's wavelet transform. In this study, the measurements come from the epicyclic gearboxes of the Kelukoski water power station (WPS). The very stable rotational speed of the WPS makes it possible to deduce that the gear mesh frequencies of the WPS and a frequency related to the rotation of the turbine blades are the most significant components in the spectra of the estimated local regularity signals.

  1. Penalized weighted least-squares approach for multienergy computed tomography image reconstruction via structure tensor total variation regularization.

    PubMed

    Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua

    2016-10-01

    Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with that in the whole energy window, the MECT images reconstructed by the analytical approach often suffer from poor signal-to-noise and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, which is henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it could provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
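
    Generically, PWLS objectives of the kind described above take the form below; the notation is ours, and the STV penalty R is left abstract:

    ```latex
    % Generic PWLS objective: y are the log-transformed measurements, A the
    % system matrix, Sigma the diagonal weight matrix, R the STV penalty.
    \hat{\mu} = \arg\min_{\mu}\ (y - A\mu)^{\top} \Sigma^{-1} (y - A\mu)
                + \beta\, R_{\mathrm{STV}}(\mu)
    ```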

  2. Regularity in an environment produces an internal torque pattern for biped balance control.

    PubMed

    Ito, Satoshi; Kawasaki, Haruhisa

    2005-04-01

    In this paper, we present a control method for achieving biped static balance under unknown periodic external forces whose periods alone are known. In order to maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that provides an external force periodically, the uncertain factors in the regularity of that environment are gradually clarified through a learning process, and finally a torque pattern for the balancing motion is acquired. Consequently, static balance is maintained without feedback from ground reaction forces, i.e., it is achieved in a feedforward manner.
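
    One schematic way to picture such a learning process is a trial-by-trial feedforward update driven by the ground-reaction error over one period of the external force; this is a hedged sketch in our notation, not the paper's exact law:

    ```latex
    % Trial-by-trial feedforward update over one period T of the external force;
    % e_k: ground-reaction error on trial k, eta: learning gain.
    \tau_{k+1}(t) = \tau_k(t) - \eta\, e_k(t), \qquad t \in [0, T)
    ```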

  3. The Temporal Dynamics of Regularity Extraction in Non-Human Primates

    ERIC Educational Resources Information Center

    Minier, Laure; Fagot, Joël; Rey, Arnaud

    2016-01-01

    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, 1996) and the serial response time task (Nissen & Bullemer,…

  4. 12 CFR 407.3 - Procedures applicable to regularly scheduled meetings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Procedures applicable to regularly scheduled meetings. 407.3 Section 407.3 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES REGULATIONS GOVERNING PUBLIC OBSERVATION OF EX-IM BANK MEETINGS § 407.3 Procedures applicable to regularly scheduled...

  5. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  6. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  7. 20 CFR 220.26 - Disability for any regular employment, defined.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Disability for any regular employment... Employment § 220.26 Disability for any regular employment, defined. An employee, widow(er), or child is... employment since before age 22. To meet this definition of disability, a claimant must have a severe...

  8. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
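
    Generically, such penalized Cox approaches maximize an objective of the following form, where ℓ(β) is the partial log-likelihood and the harmonic penalty P is designed to approximate the nonconvex Lq case (schematic notation, not the paper's):

    ```latex
    % Penalized Cox partial log-likelihood, generic form:
    \hat{\beta} = \arg\max_{\beta}\ \ell(\beta)
                  - \lambda \sum_{j=1}^{p} P\bigl(|\beta_j|\bigr)
    ```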

  9. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  10. Regularity of p(ṡ)-superharmonic functions, the Kellogg property and semiregular boundary points

    NASA Astrophysics Data System (ADS)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for $p(\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\cdot)$-harmonic functions and give some new characterizations of $W^{1,p(\cdot)}_0$ spaces. We also show that $p(\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  11. Regular black holes in Einstein-Gauss-Bonnet gravity

    NASA Astrophysics Data System (ADS)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

    Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter k > 0, that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor e^{-k/r^2}, responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.

  12. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    NASA Astrophysics Data System (ADS)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. In Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  13. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT, in which energy-integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are often not satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
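
    Of the regularizers compared above, anisotropic diffusion is perhaps the easiest to sketch; below is a minimal explicit Perona-Malik step, where the edge-stopping function, kappa, and dt are illustrative choices rather than the paper's settings:

    ```python
    # Explicit Perona-Malik anisotropic diffusion on a 2D image (sketch).
    import numpy as np

    def anisotropic_diffusion(u, iters=20, kappa=0.1, dt=0.2):
        u = u.astype(float).copy()
        g = lambda d: np.exp(-((d / kappa) ** 2))  # edge-stopping function
        for _ in range(iters):
            # one-sided differences toward the four grid neighbours
            dn = np.roll(u, -1, 0) - u
            ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            # diffuse strongly in flat regions, weakly across strong edges
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    smoothed = anisotropic_diffusion(np.random.rand(64, 64))
    ```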

  14. Adiabatic regularization for gauge fields and the conformal anomaly

    NASA Astrophysics Data System (ADS)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang Mills, inflation, and cosmology.

  15. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  16. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
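
    Schematically, Bregman/TV restoration algorithms of this kind alternate the following updates (operators and parameters are illustrative; A is the measurement or blur operator):

    ```latex
    % One sweep of a split-Bregman style TV restoration scheme (schematic):
    u^{k+1} = \arg\min_{u}\ \tfrac{\mu}{2}\|Au - f\|_2^2
              + \tfrac{\lambda}{2}\|d^{k} - \nabla u - b^{k}\|_2^2
    d^{k+1} = \operatorname{shrink}\bigl(\nabla u^{k+1} + b^{k},\ 1/\lambda\bigr)
    b^{k+1} = b^{k} + \nabla u^{k+1} - d^{k+1}
    ```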

  17. On the regularization for nonlinear tomographic absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Jinghang; Yu, Tao; Xu, Lijun; Cai, Weiwei

    2018-02-01

    Tomographic absorption spectroscopy (TAS) has attracted increased research effort recently due to developments in both hardware and new imaging concepts such as nonlinear tomography and compressed sensing. Nonlinear TAS is one of the emerging modalities that builds on the concept of nonlinear tomography and has been successfully demonstrated both numerically and experimentally. However, all previous demonstrations were realized using only two orthogonal projections, simply for ease of implementation. In this work, we examine the performance of nonlinear TAS using other beam arrangements and test the effectiveness of the beam optimization technique that has been developed for linear TAS. In addition, so far only the smoothness prior has been adopted and applied in nonlinear TAS; nevertheless, there are also other useful priors, such as sparseness and model-based priors, which have not yet been investigated. This work aims to show how these priors can be implemented and included in the reconstruction process. Regularization through a Bayesian formulation will be introduced specifically for this purpose, and a method for the determination of a proper regularization factor will be proposed. The comparative studies performed with different beam arrangements and regularization schemes on a few representative phantoms suggest that the beam optimization method developed for linear TAS also works for the nonlinear counterpart, and that the regularization scheme should be selected properly according to the available a priori information under specific application scenarios so as to achieve the best reconstruction fidelity. Though this work is conducted in the context of nonlinear TAS, it can also provide useful insights for other tomographic modalities.
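
    The Bayesian route to regularization mentioned above can be summarized by the MAP estimate below, where the prior term Ω encodes smoothness, sparseness, or model-based knowledge and λ is the regularization factor to be determined (schematic notation, not the paper's):

    ```latex
    % MAP / regularized reconstruction with a generic prior term Omega:
    \hat{x} = \arg\min_{x}\ \|b - F(x)\|_2^2 + \lambda\, \Omega(x)
    ```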

  18. The Influence of a Presence of a Heavy Atom on (13)C Shielding Constants in Organomercury Compounds and Halogen Derivatives.

    PubMed

    Wodyński, Artur; Gryff-Keller, Adam; Pecul, Magdalena

    2013-04-09

    (13)C nuclear magnetic resonance shielding constants have been calculated by means of density functional theory (DFT) for several organomercury compounds and halogen derivatives of aliphatic and aromatic compounds. Relativistic effects have been included through the four-component Dirac-Kohn-Sham (DKS) method, two-component zeroth order regular approximation (ZORA) DFT, and DFT with scalar effective core potentials (ECPs). The relative shieldings have been analyzed in terms of the position of the carbon atoms with respect to the heavy atom and their hybridization. The results have been compared with experimental values, some newly measured and some found in the literature. The main aim of the calculations has been to evaluate the magnitude of heavy atom effects on the (13)C shielding constants and to check what the relative contributions of scalar relativistic effects and spin-orbit coupling are. Another objective has been to compare the DKS and ZORA results and to check how the approximate method of accounting for the heavy-atom-on-light-atom (HALA) relativistic effect by means of scalar effective core potentials on heavy atoms performs in comparison with the more rigorous two- and four-component treatments.

  19. Halogen Bonding versus Hydrogen Bonding: A Molecular Orbital Perspective

    PubMed Central

    Wolters, Lando P; Bickelhaupt, F Matthias

    2012-01-01

    We have carried out extensive computational analyses of the structure and bonding mechanism in trihalides DX⋅⋅⋅A− and the analogous hydrogen-bonded complexes DH⋅⋅⋅A− (D, X, A=F, Cl, Br, I) using relativistic density functional theory (DFT) at zeroth-order regular approximation ZORA-BP86/TZ2P. One purpose was to obtain a set of consistent data from which reliable trends in structure and stability can be inferred over a large range of systems. The main objective was to achieve a detailed understanding of the nature of halogen bonds, how they resemble, and also how they differ from, the better understood hydrogen bonds. Thus, we present an accurate physical model of the halogen bond based on quantitative Kohn–Sham molecular orbital (MO) theory, energy decomposition analyses (EDA) and Voronoi deformation density (VDD) analyses of the charge distribution. It appears that the halogen bond in DX⋅⋅⋅A− arises not only from classical electrostatic attraction but also receives substantial stabilization from HOMO–LUMO interactions between the lone pair of A− and the σ* orbital of D–X. PMID:24551497

  20. Implications of the Regular Education Initiative Debate for School Psychologists.

    ERIC Educational Resources Information Center

    Davis, William E.

    The paper examines critical issues involved in the debate over the Regular Education Initiative (REI) to merge special and regular education, with emphasis on implications for school psychologists. The arguments of proponents and opponents of the REI are summarized and the lack of involvement by school psychologists is noted. The REI is seen to…

  1. The Effects of Regular Exercise on the Physical Fitness Levels

    ERIC Educational Resources Information Center

    Kirandi, Ozlem

    2016-01-01

    The purpose of the present research is to investigate the effects of regular exercise on the physical fitness levels of sedentary individuals. A total of 65 sedentary male individuals between the ages of 19-45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…

  2. Myth 13: The Regular Classroom Teacher Can "Go It Alone"

    ERIC Educational Resources Information Center

    Sisk, Dorothy

    2009-01-01

    With most gifted students being educated in a mainstream model of education, the prevailing myth that the regular classroom teacher can "go it alone" and the companion myth that the teacher can provide for the education of gifted students through differentiation are alive and well. In reality, the regular classroom teacher is too often concerned…

  3. A critical analysis of some popular methods for the discretisation of the gradient operator in finite volume methods

    NASA Astrophysics Data System (ADS)

    Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John

    2017-12-01

    Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
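
    As a concrete illustration of the LS scheme discussed above, here is a minimal sketch of the least-squares gradient on an arbitrary cell stencil (positions and values are illustrative); it recovers linear fields exactly, consistent with second-order accuracy on smooth fields:

    ```python
    # Least-squares gradient on an unstructured stencil: fit grad(phi) from
    # neighbour differences by solving an overdetermined linear system.
    import numpy as np

    def ls_gradient(xc, phi_c, neighbours_x, neighbours_phi):
        D = np.asarray(neighbours_x) - np.asarray(xc)    # displacement vectors
        dphi = np.asarray(neighbours_phi) - phi_c        # value jumps
        g, *_ = np.linalg.lstsq(D, dphi, rcond=None)     # solve D g ~ dphi
        return g

    # phi(x, y) = 2x + 3y has exact gradient (2, 3); LS recovers it exactly.
    xc, phi = (0.0, 0.0), 0.0
    nx = [(1.0, 0.2), (-0.3, 1.1), (0.4, -0.9), (-1.2, -0.1)]
    nphi = [2 * x + 3 * y for x, y in nx]
    print(ls_gradient(xc, phi, nx, nphi))   # ~ [2. 3.]
    ```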

  4. 75 FR 75722 - Order Extending Temporary Exemptions From Certain Government Securities Act Provisions and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... government securities broker generally is ``any person regularly engaged in the business of effecting...). \\3\\ A government securities dealer generally is ``any person engaged in the business of buying and... DEPARTMENT OF THE TREASURY Order Extending Temporary Exemptions From Certain Government Securities...

  5. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
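
    A minimal sketch of the continuation-with-backtracking strategy described above; the solver, tolerances, step rule, and the toy problem are illustrative stand-ins for Href/NITSOL:

    ```python
    # Continuation in the perturbation parameter eps: solve F(u; eps) = 0 by
    # Newton, warm-starting from the previous solution and halving the eps
    # step whenever Newton fails to converge.
    import numpy as np

    def newton(F, J, u0, tol=1e-10, maxit=20):
        u = u0.copy()
        for _ in range(maxit):
            r = F(u)
            if np.linalg.norm(r) < tol:
                return u, True
            u = u - np.linalg.solve(J(u), r)
        return u, False

    def continuation(F, J, u, eps_start=1.0, eps_target=1e-4):
        eps, deps = eps_start, 0.5 * eps_start
        while eps > eps_target:
            trial = max(eps - deps, eps_target)
            u_new, ok = newton(lambda v: F(v, trial), lambda v: J(v, trial), u)
            if ok:
                u, eps = u_new, trial   # accept step, keep step size
            else:
                deps *= 0.5             # backtrack in eps
        return u

    # toy demo: scalar F(u; eps) = eps*u + u^3 - 1
    F = lambda u, e: np.array([e * u[0] + u[0] ** 3 - 1.0])
    J = lambda u, e: np.array([[e + 3.0 * u[0] ** 2]])
    print(continuation(F, J, np.array([1.0])))
    ```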

  6. Thermodynamics of a class of regular black holes with a generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Maluf, R. V.; Neves, Juliano C. S.

    2018-05-01

    In this article, we present a study on thermodynamics of a class of regular black holes. Such a class includes Bardeen and Hayward regular black holes. We obtained thermodynamic quantities like the Hawking temperature, entropy, and heat capacity for the entire class. As part of an effort to indicate some physical observable to distinguish regular black holes from singular black holes, we suggest that regular black holes are colder than singular black holes. Besides, contrary to the Schwarzschild black hole, that class of regular black holes may be thermodynamically stable. From a generalized uncertainty principle, we also obtained the quantum-corrected thermodynamics for the studied class. Such quantum corrections provide a logarithmic term for the quantum-corrected entropy.

  7. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
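
    For context, single-task manifold regularization, which MRMTL extends to the multitask setting, minimizes an objective of the following generic form (notation illustrative; L is the graph Laplacian built from the data):

    ```latex
    % Manifold-regularized learning, generic single-task form:
    \min_{f \in \mathcal{H}_K}\ \frac{1}{l}\sum_{i=1}^{l} V\bigl(f(x_i), y_i\bigr)
    + \gamma_A \|f\|_K^2 + \gamma_I\, \mathbf{f}^{\top} L\, \mathbf{f}
    ```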

  8. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    NASA Astrophysics Data System (ADS)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights are one of the main irregularities in bridge design, particularly while negotiating steep valleys, making the bridges vulnerable to seismic action. The desirable behaviour of bridge columns towards seismic loading is that they should perform in a regular fashion, i.e. the capacity of each column should be utilized evenly. But this type of behaviour is often missing when the column heights are unequal along the length of the bridge, allowing short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed for bridges designed as per the Indian Standards in the present study.

  9. Global Regularity for Several Incompressible Fluid Models with Partial Dissipation

    NASA Astrophysics Data System (ADS)

    Wu, Jiahong; Xu, Xiaojing; Ye, Zhuan

    2017-09-01

    This paper examines the global regularity problem on several 2D incompressible fluid models with partial dissipation. They are the surface quasi-geostrophic (SQG) equation, the 2D Euler equation and the 2D Boussinesq equations. These are well-known models in fluid mechanics and geophysics. The fundamental issue of whether or not they are globally well-posed has attracted enormous attention. The corresponding models with partial dissipation may arise in physical circumstances when the dissipation varies in different directions. We show that the SQG equation with either horizontal or vertical dissipation always has global solutions. This is in sharp contrast with the inviscid SQG equation for which the global regularity problem remains outstandingly open. Although the 2D Euler is globally well-posed for sufficiently smooth data, the associated equations with partial dissipation no longer conserve the vorticity and the global regularity is not trivial. We are able to prove the global regularity for two partially dissipated Euler equations. Several global bounds are also obtained for a partially dissipated Boussinesq system.
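
    For concreteness, the SQG equation with horizontal dissipation discussed above can be written as follows (standard notation; ν > 0 is the dissipation coefficient):

    ```latex
    % SQG with horizontal (partial) dissipation:
    \partial_t \theta + u \cdot \nabla \theta = \nu\, \partial_{xx}\theta,
    \qquad u = \nabla^{\perp}(-\Delta)^{-1/2}\theta
    ```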

  10. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
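
    The decoupling described above can be written schematically as the following pair of alternating subproblems (notation ours, not the paper's):

    ```latex
    % Alternating subproblems of the decoupled MTV scheme (schematic):
    m^{k+1} = \arg\min_{m}\ \|G m - d\|_2^2 + \alpha \|L m\|_2^2
              + \beta \|m - u^{k}\|_2^2
    u^{k+1} = \arg\min_{u}\ \beta \|m^{k+1} - u\|_2^2 + \lambda\,\mathrm{TV}(u)
    ```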

  11. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakibul, M.; Sarker, H.; McIntyre, D.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil water relative permeability data are generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed, and a good match has been observed for the wetting phase relative permeability, but for the non-wetting phase, simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques are to eliminate the necessity of regular core analysis, which can be time consuming and expensive. So a numerical technique is expected to be fast and to produce reliable

  12. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, M.R.; McIntyre, D.; Ferer, M.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, ground water flow, etc. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil water relative permeability data are generated on the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed, and a good match has been observed for the wetting phase relative permeability, but for the non-wetting phase, simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques are to eliminate the necessity of regular core analysis, which can be time consuming and expensive. So a numerical technique is expected to be fast and to produce reliable

  13. Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Zhang, Kai

    2018-06-01

    In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C^α, C^{1,α} and C^{2,α} regularity. As byproducts, we also prove the A-B-P maximum principle, Harnack inequality, uniqueness and solvability of the equations.

  14. The persistence of the attentional bias to regularities in a changing environment.

    PubMed

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  15. Self-assembly of a binodal metal-organic framework exhibiting a demi-regular lattice.

    PubMed

    Yan, Linghao; Kuang, Guowen; Zhang, Qiushi; Shang, Xuesong; Liu, Pei Nian; Lin, Nian

    2017-10-26

    Designing metal-organic frameworks with new topologies is a long-standing quest because new topologies often accompany new properties and functions. Here we report that 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene molecules coordinate with Cu atoms to form a two-dimensional framework in which Cu adatoms form a nanometer-scale demi-regular lattice. The lattice is articulated by perfectly arranged twofold and threefold pyridyl-Cu coordination motifs in a ratio of 1 : 6 and features local dodecagonal symmetry. This structure is thermodynamically robust and emerges solely when the molecular density is at a critical value. In comparison, we present three framework structures that consist of semi-regular and regular lattices of Cu atoms self-assembled out of 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene and trispyridylbenzene molecules. Thus a family of regular, semi-regular and demi-regular lattices can be achieved by Cu-pyridyl coordination.

  16. 5 CFR 532.221 - Industries included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Code of Federal Regulations excerpt (5 CFR, Administrative Personnel; Civil Service Regulations, Prevailing Rate Systems, Prevailing Rate Determinations): § 532.221 lists the North American Industry Classification System (NAICS) codes included in all regular nonappropriated fund wage surveys.

  17. Hierarchical 3D ordered meso-/macroporous metal-organic framework produced through a facile template-free self-assembly

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoli; Wu, Suilan; Wang, Panhao; Yang, Lin

    2018-02-01

    The synthesis of well-ordered hierarchical metal-organic frameworks (MOFs) in an efficient manner is a great challenge. Here, a 3D regular ordered meso-/macroporous MOF of Cu-TATAB (referred to as MM-MOF) was synthesized through a facile template-free self-assembly process with pore sizes of 31 nm and 119 nm.

  18. On split regular BiHom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Liangyun; Zhang, Chiping

    2018-06-01

    We introduce the class of split regular BiHom-Lie superalgebras as the natural extension of the one of split Hom-Lie superalgebras and the one of split Lie superalgebras. By developing techniques of connections of roots for this kind of algebras, we show that such a split regular BiHom-Lie superalgebra L is of the form L = U + ∑_{[α]∈Λ/∼} I_{[α]}, with U a subspace of the Abelian (graded) subalgebra H and any I_{[α]} a well-described (graded) ideal of L, satisfying [I_{[α]}, I_{[β]}] = 0 if [α] ≠ [β]. Under certain conditions, in the case of L being of maximal length, the simplicity of the algebra is characterized and it is shown that L is the direct sum of the family of its simple (graded) ideals.

  19. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data, and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of the salt dome in this region.
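
    A minimal numerical sketch of the Tikhonov step described above (the forward operator, noise level, and regularization parameter are all stand-ins, not the authors' setup):

    ```python
    import numpy as np

    # Tikhonov-regularized linear inversion: min ||Ax - b||^2 + lam^2 ||x||^2.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 120))               # stand-in sensitivity matrix
    x_true = np.zeros(120)
    x_true[40:60] = 1.0                              # block density anomaly
    b = A @ x_true + 0.01 * rng.standard_normal(80)  # noisy gravity observations

    lam = 0.5  # regularization parameter (tuned in practice, e.g. by L-curve)
    x_reg = np.linalg.solve(A.T @ A + lam**2 * np.eye(120), A.T @ b)  # normal equations
    print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
    ```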

  20. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitization of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
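
    The greedy/reluctant distinction modeled in the paper can be illustrated with ordinary regular-expression replacement (Python's re module here, rather than the paper's FST encoding):

    ```python
    import re

    s = "<a><b>"
    print(re.sub(r"<.*>", "X", s))   # greedy:    ".*" spans "a><b", one match  -> "X"
    print(re.sub(r"<.*?>", "X", s))  # reluctant: ".*?" stops early, two matches -> "XX"
    ```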

  1. Constrained H^1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H^1- and H^2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
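
    A LaTeX sketch of the kind of constrained formulation described above (the symbols m, m_T, v, w and the weights β_v, β_w are assumed notation): the velocity is regularized in an H^1-seminorm while its divergence is tied to a mass source w.

    ```latex
    % Velocity-parametrized registration with divergence control (illustrative):
    \begin{aligned}
    \min_{v,\,w}\ & \tfrac{1}{2}\,\lVert m(1) - m_T \rVert_{L^2}^2
      + \tfrac{\beta_v}{2}\,\lvert v \rvert_{H^1}^2
      + \tfrac{\beta_w}{2}\,\lVert w \rVert_{L^2}^2 \\
    \text{s.t.}\quad & \partial_t m + \nabla m \cdot v = 0
      \quad\text{(transport of the image)},\\
      & \nabla \cdot v = w
      \quad\text{(compressibility controlled by the mass source)} .
    \end{aligned}
    ```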

  2. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
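
    For reference, a minimal generic FISTA for the sparsity-regularized problem min_x ½‖Ax − b‖² + λ‖x‖₁ is sketched below; this is the textbook algorithm the paper accelerates, not the proposed OS-SART variants, and the test problem is invented for illustration.

    ```python
    import numpy as np

    def fista(A, b, lam, n_iter=200):
        """Generic FISTA for 0.5*||Ax - b||^2 + lam*||x||_1 (illustration only)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
        for _ in range(n_iter):
            g = A.T @ (A @ y - b)              # gradient of the smooth term at y
            z = y - g / L
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((60, 100))
    x0 = np.zeros(100); x0[[5, 30, 70]] = [2.0, -1.5, 1.0]
    x_hat = fista(A, A @ x0 + 0.01 * rng.standard_normal(60), lam=0.1)
    print("support found:", np.flatnonzero(np.abs(x_hat) > 0.2))
    ```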

  3. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two

  4. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖₂ subject to x ∈ S = {x : ‖Ax − b‖₂ = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
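
    The RSVD/TRSVD building block is easy to sketch (a basic randomized SVD in the Halko-Martinsson-Tropp style; the test matrix and parameters are illustrative, and the paper's MTRSVD adds the regularization machinery on top):

    ```python
    import numpy as np

    def rsvd(A, k, q=10):
        """Rank-k truncated randomized SVD with oversampling q (generic sketch)."""
        m, n = A.shape
        Omega = np.random.default_rng(3).standard_normal((n, k + q))
        Q, _ = np.linalg.qr(A @ Omega)         # orthonormal basis for the range of A*Omega
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        U = Q @ U_small
        return U[:, :k], s[:k], Vt[:k, :]      # truncate to rank k (the "TRSVD" step)

    # Test matrix with decaying singular values (illustrative only).
    A = np.random.default_rng(4).standard_normal((300, 200)) @ np.diag(0.9 ** np.arange(200))
    U, s, Vt = rsvd(A, k=20)
    err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
    print(f"relative Frobenius error of rank-20 TRSVD: {err:.3e}")
    ```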

  5. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal

  6. Anisotropic smoothing regularization (AnSR) in Thirion's Demons registration evaluates brain MRI tissue changes post-laser ablation.

    PubMed

    Hwuang, Eileen; Danish, Shabbar; Rusu, Mirabela; Sparks, Rachel; Toth, Robert; Madabhushi, Anant

    2013-01-01

    MRI-guided laser-induced interstitial thermal therapy (LITT) is a form of laser ablation and a potential alternative to craniotomy in treating glioblastoma multiforme (GBM) and epilepsy patients, but its effectiveness has yet to be fully evaluated. One way of assessing short-term response to LITT is to evaluate changes in post-treatment MRI. Alignment of pre- and post-LITT MRI in GBM and epilepsy patients via nonrigid registration is necessary to detect subtle localized treatment changes on imaging, which can then be correlated with patient outcome. A popular deformable registration scheme in the context of brain imaging is Thirion's Demons algorithm, but its flexibility often introduces artifacts without physical significance, which has conventionally been corrected by Gaussian smoothing of the deformation field. In order to prevent such artifacts, we instead present the Anisotropic smoothing regularizer (AnSR), which utilizes edge-detection and denoising within the Demons framework to regularize the deformation field at each iteration of the registration more aggressively in regions of homogeneously oriented displacements while simultaneously regularizing less aggressively in areas containing heterogeneous local deformation and tissue interfaces. In contrast, the conventional Gaussian smoothing regularizer (GaSR) uniformly averages over the entire deformation field, without carefully accounting for transitions across tissue boundaries and local displacements in the deformation field. In this work we employ AnSR within the Demons algorithm and perform pairwise registration on 2D synthetic brain MRI with and without noise after inducing a deformation that models shrinkage of the target region expected from LITT. We also applied Demons with AnSR for registering clinical T1-weighted MRI for one epilepsy and one GBM patient pre- and post-LITT. Our results demonstrate that by maintaining select displacements in the deformation field, AnSR
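
    The contrast between uniform and edge-aware smoothing of a deformation field can be seen in a toy 2D sketch (this is a stand-in illustration, not the AnSR algorithm itself; the edge-weighting scheme below is an assumption):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy displacement field with a sharp "tissue interface" plus noise.
    rng = np.random.default_rng(5)
    u = np.zeros((64, 64)); u[:, 32:] = 1.0          # piecewise displacement
    u += 0.1 * rng.standard_normal(u.shape)          # registration noise

    # GaSR-style: uniform Gaussian smoothing of the whole field.
    gasr = gaussian_filter(u, sigma=2.0)

    # Edge-aware stand-in: smooth less where the field's gradient is large.
    edge = np.abs(gaussian_filter(u, 1.0, order=(0, 1)))  # smoothed x-gradient magnitude
    w = np.exp(-(edge / edge.mean()) ** 2)                # small weight near interfaces
    aniso = w * gaussian_filter(u, sigma=2.0) + (1 - w) * u

    print("interface step (GaSR): ", float(np.abs(gasr[:, 33] - gasr[:, 31]).mean()))
    print("interface step (aniso):", float(np.abs(aniso[:, 33] - aniso[:, 31]).mean()))
    ```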

  7. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
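
    A compact sketch of a spectral graph wavelet operator of the kind used above (following the standard Hammond-style construction ψ_s = U g(sΛ) Uᵀ; the toy graph and the band-pass kernel are assumptions):

    ```python
    import numpy as np

    def sgw_operator(W, scale, kernel=lambda x: x * np.exp(-x)):
        """Spectral graph wavelet operator psi_s = U g(s*Lambda) U^T (sketch)."""
        d = W.sum(axis=1)
        L = np.diag(d) - W                   # combinatorial graph Laplacian
        lam, U = np.linalg.eigh(L)           # spectral decomposition of L
        return U @ np.diag(kernel(scale * lam)) @ U.T

    # Toy graph: 4-cycle standing in for a finite-element mesh.
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    Psi = sgw_operator(W, scale=1.0)
    signal = np.array([1.0, 0.0, 0.0, 0.0])  # a spike on one node
    print("wavelet coefficients:", Psi @ signal)
    ```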

  8. AN ERP STUDY OF REGULAR AND IRREGULAR ENGLISH PAST TENSE INFLECTION

    PubMed Central

    Newman, Aaron J.; Ullman, Michael T.; Pancheva, Roumyana; Waligura, Diane L.; Neville, Helen J.

    2006-01-01

    Compositionality is a critical and universal characteristic of human language. It is found at numerous levels, including the combination of morphemes into words and of words into phrases and sentences. These compositional patterns can generally be characterized by rules. For example, the past tense of most English verbs (“regulars”) is formed by adding an -ed suffix. However, many complex linguistic forms have rather idiosyncratic mappings. For example, “irregular” English verbs have past tense forms that cannot be derived from their stems in a consistent manner. Whether regular and irregular forms depend on fundamentally distinct neurocognitive processes (rule-governed combination vs. lexical memorization), or whether a single processing system is sufficient to explain the phenomena, has engendered considerable investigation and debate. We recorded event-related potentials while participants read English sentences that were either correct or had violations of regular past tense inflection, irregular past tense inflection, syntactic phrase structure, or lexical semantics. Violations of regular past tense and phrase structure, but not of irregular past tense or lexical semantics, elicited left-lateralized anterior negativities (LANs). These seem to reflect neurocognitive substrates that underlie compositional processes across linguistic domains, including morphology and syntax. Regular, irregular, and phrase structure violations all elicited later positivities that were maximal over right parietal sites (P600s), and which seem to index aspects of controlled syntactic processing of both phrase structure and morphosyntax. The results suggest distinct neurocognitive substrates for processing regular and irregular past tense forms: regulars depending on compositional processing, and irregulars stored in lexical memory. PMID:17070703

  9. Regularization strategies for hyperplane classifiers: application to cancer classification with gene expression data.

    PubMed

    Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl

    2007-02-01

    Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
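
    The filter-factor representation mentioned above has a short computational form: with the SVD X = UΣVᵀ, Tikhonov regularization damps each singular direction by f_i = σ_i²/(σ_i² + λ²). A generic sketch on simulated data (not the paper's pipeline):

    ```python
    import numpy as np

    # Ill-posed discrimination setting: far fewer samples than variables.
    rng = np.random.default_rng(6)
    X = rng.standard_normal((30, 200))           # samples x genes (n << p)
    y = rng.standard_normal(30)                  # outcome labels / scores

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lam = 1.0
    f = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors in [0, 1]
    w = Vt.T @ (f / s * (U.T @ y))               # regularized hyperplane weights
    print("largest/smallest filter factor:", f.max(), f.min())
    ```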

  10. Mainstreaming: Merging Regular and Special Education.

    ERIC Educational Resources Information Center

    Hasazi, Susan E.; And Others

    The booklet on mainstreaming looks at the merging of special and regular education as a process rather than as an end. Chapters address the following topics (sample subtopics in parentheses): what is mainstreaming; pros and cons of mainstreaming; forces influencing change in special education (educators, parents and advocacy groups, the courts,…

  11. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. In order to solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. We consider that the information in the gradient domain is better suited for estimating the blur kernel, so the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain via the Fast Fourier Transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization method is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, which not only keeps the edges and details of the image but also ensures the accuracy of the results.

  12. High-resolution molybdenum K-edge X-ray absorption spectroscopy analyzed with time-dependent density functional theory.

    PubMed

    Lima, Frederico A; Bjornsson, Ragnar; Weyhermüller, Thomas; Chandrasekaran, Perumalreddy; Glatzel, Pieter; Neese, Frank; DeBeer, Serena

    2013-12-28

    X-ray absorption spectroscopy (XAS) is a widely used experimental technique capable of selectively probing the local structure around an absorbing atomic species in molecules and materials. When applied to heavy elements, however, the quantitative interpretation can be challenging due to the intrinsic spectral broadening arising from the decrease in the core-hole lifetime. In this work we have used high-energy resolution fluorescence detected XAS (HERFD-XAS) to investigate a series of molybdenum complexes. The sharper spectral features obtained by HERFD-XAS measurements enable a clear assignment of the features present in the pre-edge region. Time-dependent density functional theory (TDDFT) has been previously shown to predict K-pre-edge XAS spectra of first row transition metal compounds with a reasonable degree of accuracy. Here we extend this approach to molybdenum K-edge HERFD-XAS and present the necessary calibration. Modern pure and hybrid functionals are utilized and relativistic effects are accounted for using either the Zeroth Order Regular Approximation (ZORA) or the second order Douglas-Kroll-Hess (DKH2) scalar relativistic approximations. We have found that both the predicted energies and intensities are in excellent agreement with experiment, independent of the functional used. The model chosen to account for relativistic effects also has little impact on the calculated spectra. This study provides an important calibration set for future applications of molybdenum HERFD-XAS to complex catalytic systems.

  13. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
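
    The Stein-type idea of borrowing strength across variables can be illustrated with a toy variance-shrinkage estimator (a generic sketch, not the MVR package's clustering-based procedure; the shrinkage weight is fixed here rather than estimated):

    ```python
    import numpy as np

    # Few samples (n = 8), many variables (p = 5000) with heterogeneous scales.
    rng = np.random.default_rng(7)
    data = rng.standard_normal((8, 5000)) * rng.uniform(0.5, 2.0, size=5000)

    s2 = data.var(axis=0, ddof=1)           # noisy per-variable variances (n small)
    s2_pool = s2.mean()                     # pooled target
    w = 0.5                                 # shrinkage weight (would be estimated)
    s2_shrunk = (1 - w) * s2 + w * s2_pool  # regularized variance estimator

    # Regularized t-like statistics are less erratic than the raw ones.
    t_raw = data.mean(axis=0) / np.sqrt(s2 / 8)
    t_shrunk = data.mean(axis=0) / np.sqrt(s2_shrunk / 8)
    print("sd of t-statistics, raw vs shrunk:", t_raw.std(), t_shrunk.std())
    ```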

  14. Matching Extension in Regular Graphs

    DTIC Science & Technology

    1989-01-01


  15. Regular and Special Educators: Handicap Integration Attitudes and Implications for Consultants.

    ERIC Educational Resources Information Center

    Gans, Karen D.

    1985-01-01

    One hundred twenty-eight regular and 133 special educators responded to a questionnaire on mainstreaming. The two groups were similar in their attitudes. Regular educators displayed more negative attitudes, but the differences rarely reached significance. Group differences became more apparent when attitudes concerning specific handicapping…

  16. Shaping highly regular glass architectures: A lesson from nature

    PubMed Central

    Schoeppler, Vanessa; Reich, Elke; Vacelet, Jean; Rosenthal, Martin; Pacureanu, Alexandra; Rack, Alexander; Zaslansky, Paul; Zolotoyabko, Emil; Zlotnikov, Igor

    2017-01-01

    Demospongiae is a class of marine sponges that mineralize skeletal elements, the glass spicules, made of amorphous silica. The spicules exhibit a diversity of highly regular three-dimensional branched morphologies that are a paradigm example of symmetry in biological systems. Current glass shaping technology requires treatment at high temperatures. In this context, the mechanism by which glass architectures are formed by living organisms remains a mystery. We uncover the principles of spicule morphogenesis. During spicule formation, the process of silica deposition is templated by an organic filament. It is composed of enzymatically active proteins arranged in a mesoscopic hexagonal crystal-like structure. In analogy to synthetic inorganic nanocrystals that show high spatial regularity, we demonstrate that the branching of the filament follows specific crystallographic directions of the protein lattice. In correlation with the symmetry of the lattice, filament branching determines the highly regular morphology of the spicules on the macroscale. PMID:29057327

  17. Determinants of regular smoking onset in South Africa using duration analysis.

    PubMed

    Vellios, Nicole; van Walbeek, Corné

    2016-07-18

    South Africa has achieved significant success with its tobacco control policy. Between 1994 and 2012, the real price of cigarettes increased by 229%, while regular smoking prevalence decreased from about 31% to 18.2%. Cigarette prices and socioeconomic variables are used to examine the determinants of regular smoking onset. We apply duration analysis techniques to the National Income Dynamics Study, a nationally representative survey of South Africa. We find that an increase in cigarette prices significantly reduces regular smoking initiation among males, but not among females. Regular smoking among parents is positively correlated with smoking initiation among children. Children with more educated parents are less likely to initiate regular smoking than those with less educated parents. Africans initiate later and at lower rates than other race groups. As the tobacco epidemic is shifting towards low-income and middle-income countries, there is an increasing urgency to perform studies in these countries to influence policy. Higher cigarette excise taxes, which lead to higher retail prices, reduce smoking prevalence by encouraging smokers to quit and by discouraging young people from starting smoking.

  18. Real-world spatial regularities affect visual working memory for objects.

    PubMed

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  19. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    PubMed

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10, and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering that the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
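
    A simplified "enhance or reverse" mask in the spirit of Shakeout is sketched below; note this is an illustration of the idea only, not the paper's exact weight-space formulation (which acts on the weights and their signs), and the unbiased-mask construction is an assumption:

    ```python
    import numpy as np

    def shakeout_like(x, tau=0.3, c=1.0, rng=np.random.default_rng(8)):
        """Reverse each unit with probability tau (factor -c), enhance otherwise.

        The enhancement factor is chosen so that E[mask] = 1, i.e. the layer's
        expected output is unchanged, as with standard (inverted) Dropout.
        """
        reverse = rng.random(x.shape) < tau
        enhance = (1.0 + tau * c) / (1.0 - tau)   # keeps E[mask] = 1
        mask = np.where(reverse, -c, enhance)
        return x * mask

    x = np.ones(10)
    print(shakeout_like(x))   # mixture of reversed (-1.0) and enhanced units
    ```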

  20. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented, which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^(−5/3), and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high-wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post-shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
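
    The spectral target can be checked directly: synthesize a 1D signal whose energy spectrum follows k^(−5/3) and estimate the slope in the highest resolved band (a toy sketch; the field, band choice, and units are assumptions, not the paper's diagnostics):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 512
    k = np.fft.rfftfreq(n, d=1.0 / n)[1:]              # resolved wavenumbers 1..n/2
    amp = k ** (-5.0 / 6.0)                            # |u_k| ~ k^(-5/6)  =>  E(k) ~ k^(-5/3)
    phase = rng.uniform(0, 2 * np.pi, k.size)
    u = np.fft.irfft(np.concatenate(([0], amp * np.exp(1j * phase))), n)

    E = np.abs(np.fft.rfft(u)[1:]) ** 2                # 1D energy spectrum of the signal
    hi = slice(k.size // 2, None)                      # highest resolved band
    slope = np.polyfit(np.log(k[hi]), np.log(E[hi]), 1)[0]
    print(f"measured high-wavenumber slope: {slope:.2f} (target -5/3 = {-5/3:.2f})")
    ```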