Sample records for regular approximation zora

  1. A gauge-independent zeroth-order regular approximation to the exact relativistic Hamiltonian—Formulation and applications

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-01-01

    A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) levels for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative inclusion of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.
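
    As background for this and the following records: the one-electron ZORA Hamiltonian referred to throughout can be written (a standard textbook form, in atomic units, with V the external potential, c the speed of light, p the momentum operator, and σ the Pauli matrices) as

```latex
% One-electron ZORA Hamiltonian (atomic units)
\hat{H}_{\mathrm{ZORA}}
  \;=\; V \;+\;
  \boldsymbol{\sigma}\cdot\mathbf{p}\,
  \frac{c^{2}}{2c^{2}-V}\,
  \boldsymbol{\sigma}\cdot\mathbf{p}
```

    Because V enters the denominator and not only the additive term, shifting V by a constant does not simply shift the eigenvalues by that constant; this is the gauge-dependence defect that ZORA-GI is designed to suppress.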

  2. Relativistic calculation of nuclear magnetic shielding using normalized elimination of the small component

    NASA Astrophysics Data System (ADS)

    Kudo, K.; Maeda, H.; Kawakubo, T.; Ootani, Y.; Funaki, M.; Fukui, H.

    2006-06-01

    The normalized elimination of the small component (NESC) theory, recently proposed by Filatov and Cremer [J. Chem. Phys. 122, 064104 (2005)], is extended to include magnetic interactions and applied to the calculation of the nuclear magnetic shielding in HX (X =F,Cl,Br,I) systems. The NESC calculations are performed at the levels of the zeroth-order regular approximation (ZORA) and the second-order regular approximation (SORA). The calculations show that the NESC-ZORA results are very close to the NESC-SORA results, except for the shielding of the I nucleus. Both the NESC-ZORA and NESC-SORA calculations yield very similar results to the previously reported values obtained using the relativistic infinite-order two-component coupled Hartree-Fock method. The difference between NESC-ZORA and NESC-SORA results is significant for the shieldings of iodine.
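
    The ZORA and SORA levels compared in this record are successive orders of the "regular" expansion used in two-component theory: the energy-dependent denominator arising from elimination of the small component is expanded in E/(2c² − V), a ratio that stays small even close to the nucleus (a standard result, sketched here in atomic units):

```latex
% Regular expansion of the energy-dependent kinetic factor (atomic units)
\frac{c^{2}}{2c^{2}-V+E}
  \;=\;
  \frac{c^{2}}{2c^{2}-V}
  \left( 1 \;-\; \frac{E}{2c^{2}-V}
           \;+\; \frac{E^{2}}{\left(2c^{2}-V\right)^{2}} \;-\;\cdots \right)
```

    Truncating after the leading term yields ZORA; keeping terms through second order yields SORA.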

  3. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

    The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10⁻⁹ hartree units. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  4. Quantum theory of atoms in molecules: results for the SR-ZORA Hamiltonian.

    PubMed

    Anderson, James S M; Ayers, Paul W

    2011-11-17

    The quantum theory of atoms in molecules (QTAIM) is generalized to include relativistic effects using the popular scalar-relativistic zeroth-order regular approximation (SR-ZORA). It is usually assumed that the definition of the atom as a volume bounded by a zero-flux surface of the electron density is closely linked to the form of the kinetic energy, so it is somewhat surprising that the atoms corresponding to the relativistic kinetic-energy operator in the SR-ZORA Hamiltonian are also bounded by zero-flux surfaces. The SR-ZORA Hamiltonian should be sufficient for qualitative descriptions of molecular electronic structure across the periodic table, which suggests that QTAIM-based analysis can be useful for molecules and solids containing heavy atoms.

  5. Nuclear magnetic resonance shielding constants and chemical shifts in linear 199Hg compounds: a comparison of three relativistic computational methods.

    PubMed

    Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A

    2011-07-28

    We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL(2) (L = Cl, Br, I, CH(3)) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of small component (LR-ESC) and zeroth-order regular approximation (ZORA). LR-ESC successfully reproduces the four-component results for the C shielding constant in Hg(CH(3))(2) within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ∼2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr(2) and HgI(2) when ZORA results are compared with four-component calculations with non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P), and the TZ2P basis set may give errors of ∼500 ppm for the Hg chemical shifts, despite deceptively good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ∼100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model, depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible. © 2011 American Institute of Physics.

  6. The Influence of a Presence of a Heavy Atom on (13)C Shielding Constants in Organomercury Compounds and Halogen Derivatives.

    PubMed

    Wodyński, Artur; Gryff-Keller, Adam; Pecul, Magdalena

    2013-04-09

    (13)C nuclear magnetic resonance shielding constants have been calculated by means of density functional theory (DFT) for several organomercury compounds and halogen derivatives of aliphatic and aromatic compounds. Relativistic effects have been included through the four-component Dirac-Kohn-Sham (DKS) method, two-component Zeroth Order Regular Approximation (ZORA) DFT, and DFT with scalar effective core potentials (ECPs). The relative shieldings have been analyzed in terms of the position of carbon atoms with respect to the heavy atom and their hybridization. The results have been compared with the experimental values, some newly measured and some found in the literature. The main aim of the calculations has been to evaluate the magnitude of heavy atom effects on the (13)C shielding constants and to determine the relative contributions of scalar relativistic effects and spin-orbit coupling. Another objective has been to compare the DKS and ZORA results and to check how the approximate method of accounting for the heavy-atom-on-light-atom (HALA) relativistic effect by means of scalar effective core potentials on heavy atoms performs in comparison with the more rigorous two- and four-component treatments.

  7. Revisiting HgCl2: A solution- and solid-state 199Hg NMR and ZORA-DFT computational study

    NASA Astrophysics Data System (ADS)

    Taylor, R. E.; Carver, Colin T.; Larsen, Ross E.; Dmitrenko, Olga; Bai, Shi; Dybowski, C.

    2009-07-01

    The 199Hg chemical-shift tensor of solid HgCl2 was determined from spectra of polycrystalline materials, using static and magic-angle spinning (MAS) techniques at multiple spinning frequencies and field strengths. The chemical-shift tensor of solid HgCl2 is axially symmetric (η = 0) within experimental error. The 199Hg chemical-shift anisotropy (CSA) of HgCl2 in a frozen solution in dimethylsulfoxide (DMSO) is significantly smaller than that of the solid, implying that the local electronic structure in the solid is different from that of the material in solution. The experimental chemical-shift results (solution and solid state) are compared with those predicted by density functional theory (DFT) calculations using the zeroth-order regular approximation (ZORA) to account for relativistic effects. 199Hg spin-lattice relaxation of HgCl2 dissolved in DMSO is dominated by a CSA mechanism, but a second contribution to relaxation arises from ligand exchange. Relaxation in the solid state is independent of temperature, suggesting relaxation by paramagnetic impurities or defects.

  8. Relativistic (SR-ZORA) quantum theory of atoms in molecules properties.

    PubMed

    Anderson, James S M; Rodríguez, Juan I; Ayers, Paul W; Götz, Andreas W

    2017-01-15

    The Quantum Theory of Atoms in Molecules (QTAIM) is used to elucidate the effects of relativity on chemical systems. To do this, molecules are studied using density-functional theory at both the nonrelativistic level and using the scalar relativistic zeroth-order regular approximation. Relativistic effects on the QTAIM properties and topology of the electron density can be significant for chemical systems with heavy atoms. It is important, therefore, to use the appropriate relativistic treatment of QTAIM (Anderson and Ayers, J. Phys. Chem. 2009, 115, 13001) when treating systems with heavy atoms. © 2016 Wiley Periodicals, Inc.

  9. Scalar relativistic computations of nuclear magnetic shielding and g-shifts with the zeroth-order regular approximation and range-separated hybrid density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aquino, Fredy W.; Govind, Niranjan; Autschbach, Jochen

    2011-10-01

    Density functional theory (DFT) calculations of NMR chemical shifts and molecular g-tensors with Gaussian-type orbitals are implemented via second-order energy derivatives within the scalar relativistic zeroth order regular approximation (ZORA) framework. Nonhybrid functionals, standard (global) hybrids, and range-separated (Coulomb-attenuated, long-range corrected) hybrid functionals are tested. Origin invariance of the results is ensured by use of gauge-including atomic orbital (GIAO) basis functions. The new implementation in the NWChem quantum chemistry package is verified by calculations of nuclear shielding constants for the heavy atoms in HX (X=F, Cl, Br, I, At) and H2X (X = O, S, Se, Te, Po), and Te chemical shifts in a number of tellurium compounds. The basis set and functional dependence of g-shifts is investigated for 14 radicals with light and heavy atoms. The problem of accurately predicting F NMR shielding in UF6-nCln, n = 1 to 6, is revisited. The results are sensitive to approximations in the density functionals, indicating a delicate balance of DFT self-interaction vs. correlation. For the uranium halides, the results with the range-separated functionals are mixed.

  10. Relativistic Zeroth-Order Regular Approximation Combined with Nonhybrid and Hybrid Density Functional Theory: Performance for NMR Indirect Nuclear Spin-Spin Coupling in Heavy Metal Compounds.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-01-12

    A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
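
    As an aside, the "relative median deviation" statistic quoted above is simple to reproduce; the following sketch uses invented placeholder coupling constants, not the 88-constant benchmark set from this record:

```python
from statistics import median

# Hypothetical computed and experimental spin-spin coupling constants (Hz).
# These values are invented placeholders, not the paper's test set.
calc = [1480.0, 820.0, 95.0, 12.0]
expt = [1700.0, 910.0, 110.0, 15.0]

# Percent relative deviation of each computed value from experiment.
rel_dev = [abs(c - e) / abs(e) * 100.0 for c, e in zip(calc, expt)]

# The benchmark statistic: the median of the relative deviations.
print(f"relative median deviation: {median(rel_dev):.1f}%")
```

    For these placeholder values the statistic comes out near 13%, i.e., in the range reported for the better-performing computational models.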

  11. Introducing ZORA to Children with Severe Physical Disabilities.

    PubMed

    van den Heuvel, Renée; Lexis, Monique; de Witte, Luc

    2017-01-01

    The aim of the present study was to explore the potential of a ZORA robot-based intervention in rehabilitation and special education for children with (severe) physical disabilities from the professionals' perspective. The qualitative results of this study are presented. Professionals indicated meaningful application possibilities for ZORA. Overall, ZORA was able to improve motivation, concentration, taking initiative and attention span. Three domains were identified as most promising for the application of ZORA: (re)learning of movement skills, cognitive skills and communication/social interaction skills.

  12. Robot ZORA in rehabilitation and special education for children with severe physical disabilities: a pilot study

    PubMed Central

    Lexis, Monique A.S.; de Witte, Luc P.

    2017-01-01

    The aim of this study was to explore the potential of ZORA robot-based interventions in rehabilitation and special education for children with severe physical disabilities. A two-centre explorative pilot study was carried out over a 2.5-month period involving children with severe physical disabilities with a developmental age ranging from 2 to 8 years. Children participated in six sessions with the ZORA robot in individual or in group sessions. Qualitative and quantitative methods were used to collect data on aspects of feasibility, usability, barriers and facilitators for the child as well as for the therapist and to obtain an indication of the effects on playfulness and the achievement of goals. In total, 17 children and seven professionals participated in the study. The results of this study show a positive contribution of ZORA in achieving therapy and educational goals. Moreover, sessions with ZORA were indicated as playful. Three main domains were indicated to be the most promising for the application of ZORA: movement skills, communication skills and cognitive skills. Furthermore, ZORA can contribute towards eliciting motivation, concentration, taking initiative and improving attention span of the children. On the basis of the results of the study, it can be concluded that ZORA has potential in therapy and education for children with severe physical disabilities. More research is needed to gain insight into how ZORA can be applied best in rehabilitation and special education. PMID:28837499

  13. Robot ZORA in rehabilitation and special education for children with severe physical disabilities: a pilot study.

    PubMed

    van den Heuvel, Renée J F; Lexis, Monique A S; de Witte, Luc P

    2017-12-01

    The aim of this study was to explore the potential of ZORA robot-based interventions in rehabilitation and special education for children with severe physical disabilities. A two-centre explorative pilot study was carried out over a 2.5-month period involving children with severe physical disabilities with a developmental age ranging from 2 to 8 years. Children participated in six sessions with the ZORA robot in individual or in group sessions. Qualitative and quantitative methods were used to collect data on aspects of feasibility, usability, barriers and facilitators for the child as well as for the therapist and to obtain an indication of the effects on playfulness and the achievement of goals. In total, 17 children and seven professionals participated in the study. The results of this study show a positive contribution of ZORA in achieving therapy and educational goals. Moreover, sessions with ZORA were indicated as playful. Three main domains were indicated to be the most promising for the application of ZORA: movement skills, communication skills and cognitive skills. Furthermore, ZORA can contribute towards eliciting motivation, concentration, taking initiative and improving attention span of the children. On the basis of the results of the study, it can be concluded that ZORA has potential in therapy and education for children with severe physical disabilities. More research is needed to gain insight into how ZORA can be applied best in rehabilitation and special education.

  14. A Mo-95 and C-13 Solid-state NMR and Relativistic DFT Investigation of Mesitylenetricarbonylmolybdenum(0) -a Typical Transition Metal Piano-stool Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryce, David L.; Wasylishen, Roderick E.

    2002-06-21

    The chemical shift (CS) and electric field gradient (EFG) tensors in the piano-stool compound mesitylenetricarbonylmolybdenum(0), 1, have been investigated via 95Mo and 13C solid-state magic-angle spinning (MAS) NMR as well as relativistic zeroth-order regular approximation density functional theory (ZORA-DFT) calculations. Molybdenum-95 (I = 5/2) MAS NMR spectra acquired at 18.8 T are dominated by the anisotropic chemical shift interaction (Ω = 775 ± 30 ppm) rather than the second-order quadrupolar interaction (CQ = -0.96 ± 0.15 MHz), an unusual situation for a quadrupolar nucleus. ZORA-DFT calculations of the 95Mo EFG and CS tensors are in agreement with the experimental data. Mixing of appropriate occupied and virtual d-orbital-dominated MOs in the region of the HOMO-LUMO gap is shown to be responsible for the large chemical shift anisotropy. The small, but non-negligible, 95Mo quadrupolar interaction is discussed in terms of the geometry about Mo. Carbon-13 CPMAS spectra acquired at 4.7 T demonstrate the crystallographic and magnetic nonequivalence of the twelve 13C nuclei in 1, despite the chemical equivalence of some of these nuclei in isotropic solutions. The principal components of the carbon CS tensors are determined via a Herzfeld-Berger analysis, and indicate that motion of the mesitylene ring is slow compared to a rate which would influence the carbon CS tensors (i.e., tens of µs). ZORA-DFT calculations reproduce the experimental carbon CS tensors accurately. Oxygen-17 EFG and CS tensors for 1 are also calculated and discussed in terms of existing experimental data for related molybdenum carbonyl compounds. This work provides an example of the information available from combined multi-field solid-state multinuclear magnetic resonance and computational investigations of transition metal compounds, in particular the direct study of quadrupolar transition metal nuclei with relatively small magnetic moments.

  15. Care Robot ZORA in Dutch Nursing Homes; An Evaluation Study.

    PubMed

    Kort, Helianthe; Huisman, Chantal

    2017-01-01

    From May 2016 to November 2016, the use of the ZORA robot was investigated in 15 long-term care facilities for older people. The ZORA robot is built as a social robot and is used for pleasure and entertainment or to stimulate physical activity among the residents.

  16. Redox properties of biscyclopentadienyl uranium(V) imido-halide complexes: a relativistic DFT study.

    PubMed

    Elkechai, Aziz; Kias, Farida; Talbi, Fazia; Boucekkine, Abdou

    2014-06-01

    Calculations of ionization energies (IE) and electron affinities (EA) of a series of biscyclopentadienyl imido-halide uranium(V) complexes Cp*2U(=N-2,6-(i)Pr2-C6H3)(X) with X = F, Cl, Br, and I, related to the U(IV)/U(V) and U(V)/U(VI) redox systems, were carried out, for the first time, using density functional theory (DFT) in the framework of the relativistic zeroth order regular approximation (ZORA) coupled with the conductor-like screening model (COSMO) solvation approach. A very good linear correlation (R(2) = 0.993) was obtained between calculated ionization energies at the ZORA/BP86/TZP level and the experimental half-wave oxidation potentials E1/2. A similar linear correlation between the computed electron affinities and the electrochemical reduction U(IV)/U(III) potentials (R(2) = 0.996) was obtained. The importance of solvent effects and of spin-orbit coupling is definitively confirmed. The molecular orbital analysis underlines the crucial role played by the 5f orbitals of the central metal, whereas the Nalewajski-Mrozek (N-M) bond indices explain well the bond distance variations following the redox processes. The IE variation of the complexes, i.e., IE(F) < IE(Cl) < IE(Br) < IE(I), is also well rationalized considering the frontier MO diagrams of these species. Finally, this work confirms the relevance of the Hirshfeld charges analysis, which brings to light an excellent linear correlation (R(2) = 0.999) between the variations of the uranium charges and E1/2 in the reduction process of the U(V) species.
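
    Linear correlations such as the reported R(2) = 0.993 between computed ionization energies and experimental E1/2 values are ordinary least-squares fits; below is a minimal pure-Python sketch, using invented placeholder numbers rather than the paper's computed data:

```python
# Hypothetical computed ionization energies (eV) for an X = F, Cl, Br, I series
# and matching experimental half-wave oxidation potentials (V). Placeholder
# values for illustration only -- not taken from the paper.
ie_calc = [6.10, 6.25, 6.33, 6.41]
e_half = [0.52, 0.61, 0.66, 0.71]

n = len(ie_calc)
mean_x = sum(ie_calc) / n
mean_y = sum(e_half) / n

# Ordinary least-squares slope and intercept for E1/2 = slope * IE + intercept.
sxx = sum((x - mean_x) ** 2 for x in ie_calc)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ie_calc, e_half))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Coefficient of determination R^2 measuring the quality of the correlation.
pred = [slope * x + intercept for x in ie_calc]
ss_res = sum((y - p) ** 2 for y, p in zip(e_half, pred))
ss_tot = sum((y - mean_y) ** 2 for y in e_half)
r2 = 1.0 - ss_res / ss_tot
print(f"slope = {slope:.3f} V/eV, R^2 = {r2:.3f}")
```

    A fit of this kind is what allows computed IEs to serve as a proxy for electrochemical oxidation potentials across a ligand series.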

  17. NMR shielding calculations across the periodic table: diamagnetic uranium compounds. 2. Ligand and metal NMR.

    PubMed

    Schreckenbach, Georg

    2002-12-16

    In this and a previous article (J. Phys. Chem. A 2000, 104, 8244), the range of application for relativistic density functional theory (DFT) is extended to the calculation of nuclear magnetic resonance (NMR) shieldings and chemical shifts in diamagnetic actinide compounds. Two relativistic DFT methods are used, ZORA ("zeroth-order regular approximation") and the quasirelativistic (QR) method. In this second paper, NMR shieldings and chemical shifts are calculated and discussed for a wide range of compounds. The molecules studied comprise uranyl complexes, [UO(2)L(n)](+/-)(q); UF(6); inorganic UF(6) derivatives, UF(6-n)Cl(n), n = 0-6; and organometallic UF(6) derivatives, UF(6-n)(OCH(3))(n), n = 0-5. Uranyl complexes include [UO(2)F(4)](2-), [UO(2)Cl(4)](2-), [UO(2)(OH)(4)](2-), [UO(2)(CO(3))(3)](4-), and [UO(2)(H(2)O)(5)](2+). For the ligand NMR, moderate (e.g., (19)F NMR chemical shifts in UF(6-n)Cl(n)) to excellent agreement [e.g., (19)F chemical shift tensor in UF(6) or (1)H NMR in UF(6-n)(OCH(3))(n)] has been found between theory and experiment. The methods have been used to calculate the experimentally unknown (235)U NMR chemical shifts. A large chemical shift range of at least 21,000 ppm has been predicted for the (235)U nucleus. ZORA spin-orbit appears to be the most accurate method for predicting actinide metal chemical shifts. Trends in the (235)U NMR chemical shifts of UF(6-n)L(n) molecules are analyzed and explained in terms of the calculated electronic structure. It is argued that the energy separation and interaction between occupied and virtual orbitals with f-character are the determining factors.

  18. Halogen Bonding versus Hydrogen Bonding: A Molecular Orbital Perspective

    PubMed Central

    Wolters, Lando P; Bickelhaupt, F Matthias

    2012-01-01

    We have carried out extensive computational analyses of the structure and bonding mechanism in trihalides DX⋅⋅⋅A− and the analogous hydrogen-bonded complexes DH⋅⋅⋅A− (D, X, A=F, Cl, Br, I) using relativistic density functional theory (DFT) at zeroth-order regular approximation ZORA-BP86/TZ2P. One purpose was to obtain a set of consistent data from which reliable trends in structure and stability can be inferred over a large range of systems. The main objective was to achieve a detailed understanding of the nature of halogen bonds, how they resemble, and also how they differ from, the better understood hydrogen bonds. Thus, we present an accurate physical model of the halogen bond based on quantitative Kohn–Sham molecular orbital (MO) theory, energy decomposition analyses (EDA) and Voronoi deformation density (VDD) analyses of the charge distribution. It appears that the halogen bond in DX⋅⋅⋅A− arises not only from classical electrostatic attraction but also receives substantial stabilization from HOMO–LUMO interactions between the lone pair of A− and the σ* orbital of D–X. PMID:24551497

  19. Gender and Ambition: Zora Neale Hurston in the Harlem Renaissance.

    ERIC Educational Resources Information Center

    Story, Ralph D.

    1989-01-01

    Discusses various critical interpretations of Zora Neale Hurston's personality and writing beginning in the Harlem Renaissance. Examines literary skirmishes and aesthetic debates between Hurston and Langston Hughes and Richard Wright beginning in the Harlem Renaissance period. Explores Black male and female writers' perspectives during this time.…

  20. "Sis Cat" as Ethnographer: Self-Presentation and Self-Inscription in Zora Neale Hurston's "Mules and Men."

    ERIC Educational Resources Information Center

    Boxwell, D. A.

    1992-01-01

    Examines Zora Neale Hurston's work, particularly her collection of folklore and ethnography of the American South, "Mules and Men." Looks at the author's role, the ways the ethnographer inscribes herself into the text, and speculates about Hurston's understanding of the limits of the impersonal researcher. (JB)

  1. "Sisters Under the Skin": Race and Gender in Zora Neale Hurston's "Tell My Horse."

    ERIC Educational Resources Information Center

    Meisenhelder, Susan

    1995-01-01

    No work by Zora Neale Hurston has received harsher critical evaluation than her anthropological study of Haiti and Jamaica, "Tell My Horse." Although her aim in part was to write a commercially successful popular book, she also aimed, with some success, to offer significant social commentary. (SLD)

  2. "Different by Degree": Ella Cara Deloria, Zora Neale Hurston, and Franz Boas Contend with Race and Ethnicity.

    ERIC Educational Resources Information Center

    Hoefel, Roseanne

    2001-01-01

    American Indian ethnographer and linguist Ella Cara Deloria and African American folklorist and writer Zora Neale Hurston did fieldwork for Franz Boas, the father of modern anthropology. Both were shocked by how American racism empowered white people's historical actions. By correcting stereotypes through their work, they reasserted the role of…

  3. High-resolution molybdenum K-edge X-ray absorption spectroscopy analyzed with time-dependent density functional theory.

    PubMed

    Lima, Frederico A; Bjornsson, Ragnar; Weyhermüller, Thomas; Chandrasekaran, Perumalreddy; Glatzel, Pieter; Neese, Frank; DeBeer, Serena

    2013-12-28

    X-ray absorption spectroscopy (XAS) is a widely used experimental technique capable of selectively probing the local structure around an absorbing atomic species in molecules and materials. When applied to heavy elements, however, the quantitative interpretation can be challenging due to the intrinsic spectral broadening arising from the decrease in the core-hole lifetime. In this work we have used high-energy resolution fluorescence detected XAS (HERFD-XAS) to investigate a series of molybdenum complexes. The sharper spectral features obtained by HERFD-XAS measurements enable a clear assignment of the features present in the pre-edge region. Time-dependent density functional theory (TDDFT) has been previously shown to predict K-pre-edge XAS spectra of first row transition metal compounds with a reasonable degree of accuracy. Here we extend this approach to molybdenum K-edge HERFD-XAS and present the necessary calibration. Modern pure and hybrid functionals are utilized and relativistic effects are accounted for using either the Zeroth Order Regular Approximation (ZORA) or the second order Douglas-Kroll-Hess (DKH2) scalar relativistic approximations. We have found that both the predicted energies and intensities are in excellent agreement with experiment, independent of the functional used. The model chosen to account for relativistic effects also has little impact on the calculated spectra. This study provides an important calibration set for future applications of molybdenum HERFD-XAS to complex catalytic systems.

  4. Molecular orbital analysis of the inverse halogen dependence of nuclear magnetic shielding in LaX₃, X = F, Cl, Br, I.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-12-01

    The NMR nuclear shielding tensors for the series LaX(3), with X = F, Cl, Br and I, have been computed using two-component relativistic density functional theory based on the zeroth-order regular approximation (ZORA). A detailed analysis of the inverse halogen dependence (IHD) of the La shielding was performed via decomposition of the shielding tensor elements into contributions from localized and delocalized molecular orbitals. Both spin-orbit and paramagnetic shielding terms are important, with the paramagnetic terms being dominant. Major contributions to the IHD can be attributed to the La-X bonding orbitals, as well as to trends associated with the La core and halogen lone pair orbitals, the latter being related to X-La π donation. An 'orbital rotation' model for the in-plane π acceptor f orbital of La helps to rationalize the significant magnitude of deshielding associated with the in-plane π donation. The IHD goes along with a large increase in the shielding tensor anisotropy as X becomes heavier, which can be associated with trends for the covalency of the La-X bonds, with a particularly effective transfer of spin-orbit coupling induced spin density from iodine to La in LaI(3). Copyright © 2010 John Wiley & Sons, Ltd.

  5. The N2O activation by Rh5 clusters. A quantum chemistry study.

    PubMed

    Olvera-Neria, Oscar; Avilés, Roberto; Francisco-Rodríguez, Héctor; Bertin, Virineya; García-Cruz, Raúl; González-Torres, Julio César; Poulain, Enrique

    2015-04-01

    Nitrous oxide (N2O) is a by-product of the treatment of exhaust gases from motor vehicles. The reduction of N2O to N2 is therefore necessary to meet current environmental legislation. The N2O adsorption and dissociation assisted by the square-based pyramidal Rh5 cluster was investigated using density functional theory and the zeroth-order regular approximation (ZORA). The Rh5 sextet ground state is the most active in N2O dissociation, though the quartet and octet states are also active because they are degenerate. The Rh5 cluster spontaneously activates N2─O bond cleavage, and the reaction is highly exothermic, ca. -75 kcal mol(-1). N2─O bond breaking is obtained for the geometrical arrangement that maximizes the overlap and electron transfer between the N2O and Rh5 frontier orbitals. The high activity of Rh5 arises because the Rh d orbitals are located between the N2O HOMO and LUMO orbitals, which makes interactions between them possible. In particular, the O 2p states strongly interact with the Rh d orbitals, which ultimately weakens the N2─O bond. The electron transfer is from the Rh5 HOMO orbital to the N2O antibonding orbital.

  6. Spin-orbit ZORA and four-component Dirac-Coulomb estimation of relativistic corrections to isotropic nuclear shieldings and chemical shifts of noble gas dimers.

    PubMed

    Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A

    2016-02-05

Hartree-Fock and density functional theory with the hybrid B3LYP and generalized-gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P, as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections, obtained at the CCSD(T) level with the same basis sets, were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings, and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.

  7. Effect of Spin Multiplicity in O2 Adsorption and Dissociation on Small Bimetallic AuAg Clusters.

    PubMed

    García-Cruz, Raúl; Poulain, Enrique; Hernández-Pérez, Isaías; Reyes-Nava, Juan A; González-Torres, Julio C; Rubio-Ponce, A; Olvera-Neria, Oscar

    2017-08-17

To dispose of atomic oxygen, O2 activation is necessary; however, an energy barrier must be overcome to break the O-O bond. This work presents theoretical calculations of O2 adsorption and dissociation on small pure Aun and Agm and bimetallic AunAgm (n + m ≤ 6) clusters using density functional theory (DFT) and the zeroth-order regular approximation (ZORA) to explicitly include scalar relativistic effects. The most stable AunAgm clusters contain a higher concentration of Au, with Ag atoms located in the center of the cluster. The O2 adsorption energy on pure and bimetallic clusters and the ensuing geometries depend on the spin multiplicity of the system. For a doublet multiplicity, O2 is adsorbed in a bridge configuration, whereas for a triplet only one O-metal bond is formed. The charge transfer from the metal toward O2 occupies the σ* O-O antibonding natural bond orbital, which weakens the oxygen bond. The Au3 (2A) cluster presents the lowest activation energy for O2 dissociation, whereas the opposite applies to the AuAg (3A) system. In O2 activation, bimetallic clusters are not as active as pure Aun clusters because the charge donated by the Ag atoms is shared between O2 and the Au atoms.

  8. DFT benchmark study for the oxidative addition of CH4 to Pd. Performance of various density functionals

    NASA Astrophysics Data System (ADS)

    de Jong, G. Theodoor; Geerke, Daan P.; Diefenbach, Axel; Matthias Bickelhaupt, F.

    2005-06-01

    We have evaluated the performance of 24 popular density functionals for describing the potential energy surface (PES) of the archetypal oxidative addition reaction of the methane C-H bond to the palladium atom by comparing the results with our recent ab initio [CCSD(T)] benchmark study of this reaction. The density functionals examined cover the local density approximation (LDA), the generalized gradient approximation (GGA), meta-GGAs as well as hybrid density functional theory. Relativistic effects are accounted for through the zeroth-order regular approximation (ZORA). The basis-set dependence of the density-functional-theory (DFT) results is assessed for the Becke-Lee-Yang-Parr (BLYP) functional using a hierarchical series of Slater-type orbital (STO) basis sets ranging from unpolarized double-ζ (DZ) to quadruply polarized quadruple-ζ quality (QZ4P). Stationary points on the reaction surface have been optimized using various GGA functionals, all of which yield geometries that differ only marginally. Counterpoise-corrected relative energies of stationary points are converged to within a few tenths of a kcal/mol if one uses the doubly polarized triple-ζ (TZ2P) basis set and the basis-set superposition error (BSSE) drops to 0.0 kcal/mol for our largest basis set (QZ4P). Best overall agreement with the ab initio benchmark PES is achieved by functionals of the GGA, meta-GGA, and hybrid-DFT type, with mean absolute errors of 1.3-1.4 kcal/mol and errors in activation energies ranging from +0.8 to -1.4 kcal/mol. Interestingly, the well-known BLYP functional compares very reasonably with an only slightly larger mean absolute error of 2.5 kcal/mol and an underestimation by -1.9 kcal/mol of the overall barrier (i.e., the difference in energy between the TS and the separate reactants). For comparison, with B3LYP we arrive at a mean absolute error of 3.8 kcal/mol and an overestimation of the overall barrier by 4.5 kcal/mol.

  9. Spin-orbit effects on the (119)Sn magnetic-shielding tensor in solids: a ZORA/DFT investigation.

    PubMed

    Alkan, Fahri; Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2016-07-28

Periodic-boundary and cluster calculations of the magnetic-shielding tensors of (119)Sn sites in various co-ordination and stereochemical environments are reported. The results indicate a significant difference between the predicted NMR chemical shifts for tin(ii) sites that exhibit stereochemically-active lone pairs and tin(iv) sites that do not. The predicted magnetic shieldings determined either with the cluster model treated with the ZORA/Scalar Hamiltonian or with the GIPAW formalism depend on the oxidation state and the co-ordination geometry of the tin atom. The inclusion of relativistic effects at the spin-orbit level removes systematic differences in computed magnetic-shielding parameters between tin sites of differing stereochemistries and brings computed NMR shielding parameters into close agreement with experimentally-determined chemical-shift principal values. A slight improvement in agreement with experiment is noted in calculations using hybrid exchange-correlation functionals.

  10. 77 FR 66463 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... 63166-2034: 1. R. Forest Taylor and Zora Taylor, both of Morgantown, Kentucky, as the largest individual..., Tennessee; Emily Ann Romans, Russellville, Kentucky; Robert Daniel Taylor, and Sharon Kay Taylor, both of...

  11. Serbo-Croatian Workbook. Phase 1. Part 1A,

    DTIC Science & Technology

    1983-10-25

    1 PRACTICE OF NOUN GENDERS * Determine the gender of the following nouns: kiupa zadruga junak me so zora novac stolica voce jastuk Pismo izvinjenje...tamo? (povrce i voce ) 7. Da li ste napisali ?___________________________ (pismo, zadatak, knjiga) 71 8. Koga niste videli? (majka, otac, sin, moji

  12. Books for Summer Reading.

    ERIC Educational Resources Information Center

    Phi Delta Kappan, 1991

    1991-01-01

    To help replenish educators' supply of ideas, "Kappan" editors suggest several books for summer reading, including many noncurrent titles not specifically on education such as Peter Novick's "That Noble Dream," Joy Kogawa's "Obasan," Zora Neale Hurston's "Their Eyes Were Watching God," Kate Chopin's "The Awakening," Willa Cather's "My Antonia,"…

  13. An Argument for an Integrated Approach to Teaching Southern Literature

    ERIC Educational Resources Information Center

    Ellis, Grace

    1978-01-01

    In addition to such writers as William Faulkner, Flannery O'Connor, Carson McCullers, and Eudora Welty, a good course in modern Southern fiction should include black writers such as Zora Hurston, Nella Larsen, Jean Toomer, Richard Wright, Maya Angelou, and Alice Walker. (MKM)

  14. Anthropology with an Agenda: Four Forgotten Dance Anthropologists

    ERIC Educational Resources Information Center

    Richter, Katrina

    2010-01-01

    In response to postcolonial, feminist and subaltern critiques of anthropology, this article seeks to answer the question, "For whom should research be conducted, and by whom should it be used?" by examining the lives and works of four female dance anthropologists. Franziska Boas, Zora Neale Hurston, Katherine Dunham and Pearl Primus used…

  15. Reevaluation of Vegetational Characteristics at the CERC (Coastal Engineering Research Center) Field Research Facility, Duck, North Carolina.

    DTIC Science & Technology

    1983-03-01

    var. cerifera L. Wax myrtle Onagraceae *Ludiigia aZata Ell. W. - r-priurose Oenothera frticoea L. drops 0. hisniflsa Nuttall Evening primrose... Orchidaceae Spiranthee cernua ver. odorata Nodding ladies’ (Nuttall) Correll. tresses Passiflorarese *PassifZora lutea L. Passion-flower Phytolacaceat POytoZaca

  16. Alkali Metal Cation Affinities of Anionic Main Group-Element Hydrides Across the Periodic Table.

    PubMed

    Boughlala, Zakaria; Fonseca Guerra, Célia; Bickelhaupt, F Matthias

    2017-10-05

We have carried out an extensive exploration of gas-phase alkali metal cation affinities (AMCA) of archetypal anionic bases across the periodic system using relativistic density functional theory at ZORA-BP86/QZ4P//ZORA-BP86/TZ2P. AMCA values of all bases were computed for the lithium, sodium, potassium, rubidium and cesium cations and compared with the corresponding proton affinities (PA). One purpose of this work is to provide an intrinsically consistent set of 298 K AMCA values for all anionic bases (XHn-1-) constituted by main group-element hydrides of groups 14-17 along periods 2-6. In particular, we wish to establish the trend in affinity for a cation as the latter varies from the proton to, and along, the alkali cations. Our main purpose is to understand these trends in terms of the underlying bonding mechanism using Kohn-Sham molecular orbital theory together with quantitative bond energy decomposition analyses (EDA). © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. The role of E-mentorship in a virtual world for youth transplant recipients.

    PubMed

    Cantrell, Kathryn; Fischer, Amy; Bouzaher, Alisha; Bers, Marina

    2010-01-01

Because of geographic distances, many youth transplant recipients do not have the opportunity to meet and form relationships with peers who have undergone similar experiences. This article explores the role of E-mentorship in virtual environments. More specifically, by analyzing data from a study conducted with the Zora virtual world with pediatric transplant recipients, suggestions and recommendations are given for conceiving the role of virtual mentors and allocating the needed resources. Zora is a graphical virtual world designed to create a community that offers psychoeducational support and the possibility of participating in virtual activities following a curriculum explicitly designed to address issues of school transition and medical adherence. Activities are designed to foster relationships, teach technological skills, and facilitate the formation of a support network of peers and mentors. This article addresses the research question, "What makes a successful E-mentorship model in virtual worlds for children with serious illnesses?" by looking at E-mentoring patterns such as time spent online, chat analysis, initiation of conversation, initiation of activities, and out-of-world contact.

  18. DFT calculations in the assignment of solid-state NMR and crystal structure elucidation of a lanthanum(iii) complex with dithiocarbamate and phenanthroline.

    PubMed

    Gowda, Vasantha; Laitinen, Risto S; Telkki, Ville-Veikko; Larsson, Anna-Carin; Antzutkin, Oleg N; Lantto, Perttu

    2016-12-06

    The molecular, crystal, and electronic structures as well as spectroscopic properties of a mononuclear heteroleptic lanthanum(iii) complex with diethyldithiocarbamate and 1,10-phenanthroline ligands (3 : 1) were studied by solid-state 13 C and 15 N cross-polarisation (CP) magic-angle-spinning (MAS) NMR, X-ray diffraction (XRD), and first principles density functional theory (DFT) calculations. A substantially different powder XRD pattern and 13 C and 15 N CP-MAS NMR spectra indicated that the title compound is not isostructural to the previously reported analogous rare earth complexes with the space group P2 1 /n. Both 13 C and 15 N CP-MAS NMR revealed the presence of six structurally different dithiocarbamate groups in the asymmetric unit cell, implying a non-centrosymmetric packing arrangement of molecules. This was supported by single-crystal X-ray crystallography showing that the title compound crystallised in the triclinic space group P1[combining macron]. In addition, the crystal structure also revealed that one of the dithiocarbamate ligands has a conformational disorder. NMR chemical shift calculations employing the periodic gauge including projector augmented wave (GIPAW) approach supported the assignment of the experimental 13 C and 15 N NMR spectra. However, the best correspondences were obtained with the structure where the atomic positions in the X-ray unit cell were optimised at the DFT level. The roles of the scalar and spin-orbit relativistic effects on NMR shielding were investigated using the zeroth-order regular approximation (ZORA) method with the outcome that already the scalar relativistic level qualitatively reproduces the experimental chemical shifts. The electronic properties of the complex were evaluated based on the results of the natural bond orbital (NBO) and topology of the electron density analyses. 
Overall, we apply a multidisciplinary approach acquiring comprehensive information about the solid-state structure and the metal-ligand bonding of the heteroleptic lanthanum complex.

  19. We Need More Drama: A Comparison of Ford, Hurston, and Boykin's African American Characteristics and Instructional Strategies for the Culturally Different Classroom

    ERIC Educational Resources Information Center

    Trotman Scott, Michelle; Moss-Bouldin, Shondrika

    2014-01-01

    Teachers who are not considered to be culturally competent may misinterpret many characteristics exhibited by African American students. They may be unaware of the African American linguistic practices and characteristics and they may also be unfamiliar with research conducted by scholars such as Zora Neale Hurston and A. Wade Boykin. This lack of…

  20. DefEX: Hands-On Cyber Defense Exercise for Undergraduate Students

    DTIC Science & Technology

    2011-07-01

    Injection, and 4) File Upload. Next, the students patched the associated flawed Perl and PHP Hypertext Preprocessor ( PHP ) code. Finally, students...underlying script. The Zora XSS vulnerability existed in a PHP file that echoed unfiltered user input back to the screen. To eliminate the...vulnerability, students filtered the input using the PHP htmlentities function and retested the code. The htmlentities function translates certain ambiguous

  1. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
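The recursive subdivide-and-fit procedure described above can be sketched as follows (a minimal illustration assuming a quadtree-style split into four sub-tiles and a least-squares plane per tile; this is not the patented implementation, and all function names are hypothetical):

```python
import numpy as np

def fit_plane(z):
    """Least-squares plane z ~ a*x + b*y + c over a grid tile; return (coef, max error)."""
    ny, nx = z.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(z.size)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    err = np.abs(A @ coef - z.ravel()).max()
    return coef, err

def compress(z, tol, x0=0, y0=0, out=None):
    """Recursively approximate a terrain tile by flat planes within tolerance tol."""
    if out is None:
        out = []
    coef, err = fit_plane(z)
    if err <= tol or min(z.shape) <= 2:
        out.append((x0, y0, z.shape, coef))  # tile approximated well enough: save it
    else:
        ny, nx = z.shape
        hy, hx = ny // 2, nx // 2
        # tolerance not met: divide into four sub-tiles and recurse
        compress(z[:hy, :hx], tol, x0, y0, out)
        compress(z[:hy, hx:], tol, x0 + hx, y0, out)
        compress(z[hy:, :hx], tol, x0, y0 + hy, out)
        compress(z[hy:, hx:], tol, x0 + hx, y0 + hy, out)
    return out

# a perfectly planar terrain (z = y + 2x) compresses to a single tile
terrain = np.add.outer(np.arange(8), 2.0 * np.arange(8))
tiles = compress(terrain, tol=0.01)
```

Rough terrain would instead trigger the recursive split, producing more, smaller tiles only where the flat-surface approximation fails the tolerance check.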

  2. Approximate isotropic cloak for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ghosh, Tuhin; Tarikere, Ashwin

    2018-05-01

    We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.

  3. Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics

    DTIC Science & Technology

    1991-01-01

    FINAL REPORT (AD-A242 223, November 1991): "Regularization and Approximation of a Class of Evolution Problems in Applied Mathematics," The University of Texas at Austin, Austin, TX 78712. ...micro-structured parabolic system. A mathematical analysis of the regularized equations has been developed to support our approach. Supporting...

  4. Density functional studies on the exchange interaction of a dinuclear Gd(iii)-Cu(ii) complex: method assessment, magnetic coupling mechanism and magneto-structural correlations.

    PubMed

    Rajaraman, Gopalan; Totti, Federico; Bencini, Alessandro; Caneschi, Andrea; Sessoli, Roberta; Gatteschi, Dante

    2009-05-07

Density functional calculations have been performed on a [Gd(iii)Cu(ii)] complex, [L(1)CuGd(O(2)CCF(3))(3)(C(2)H(5)OH)(2)] (where L(1) is N,N'-bis(3-ethoxy-salicylidene)-1,2-diamino-2-methylpropanato), with the aim of assessing a suitable functional within the DFT formalism, understanding the mechanism of magnetic coupling, and developing magneto-structural correlations. Encouraging results have been obtained: the application of B3LYP to the crystal structure yields a ferromagnetic J value of -5.8 cm(-1), in excellent agreement with the experimental value of -4.42 cm(-1) (H = JS(Gd).S(Cu)). After testing a variety of functionals in the method assessment, we recommend the use of B3LYP in combination with an effective core potential basis set. For all-electron basis sets the relativistic effects should be incorporated either via the Douglas-Kroll-Hess (DKH) or the zeroth-order regular approximation (ZORA) method. A breakdown approach has been adopted in which calculations on several model complexes have been performed and their wave functions analysed thereafter (MO and NBO analysis) in order to gain insight into the coupling mechanism. The results suggest, unambiguously, that the empty Gd(iii) 5d orbitals play a prominent role in the magnetic coupling. These 5d orbitals gain partial occupancy via Cu(ii) charge transfer as well as from the Gd(iii) 4f orbitals. A competing 4f-3d interaction associated with the symmetry of the complex has also been observed. The general mechanism hence incorporates both contributions and sets forth a prevailing mechanism for the 3d-4f coupling. The magneto-structural correlations reveal that there is no unique parameter with which the J values are strongly correlated, but an exponential dependence of J on the O-Cu-O-Gd dihedral angle is the most credible correlation.
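The sign convention H = JS(Gd).S(Cu) quoted above makes a negative J ferromagnetic. As a minimal illustration (a generic Heisenberg-dimer helper written for this listing, not code from the paper), the exchange splitting between the total-spin states S = 3 and S = 4 of a Gd(iii)-Cu(ii) pair works out to 4J:

```python
from fractions import Fraction

def heisenberg_levels(J, s1, s2):
    """Energies of H = J S1.S2: E(S) = (J/2)[S(S+1) - s1(s1+1) - s2(s2+1)]."""
    S = abs(s1 - s2)
    S_max = s1 + s2
    levels = {}
    while S <= S_max:
        levels[S] = Fraction(J) / 2 * (S * (S + 1) - s1 * (s1 + 1) - s2 * (s2 + 1))
        S += 1  # total spin runs from |s1 - s2| to s1 + s2 in integer steps
    return levels

# Gd(iii) (s = 7/2) coupled to Cu(ii) (s = 1/2): total spin S = 3 or 4
lv = heisenberg_levels(1, Fraction(7, 2), Fraction(1, 2))
gap = lv[4] - lv[3]  # exchange splitting in units of J
```

For J = -4.42 cm(-1), as fitted experimentally, this places the S = 4 (ferromagnetic) state 4|J| ≈ 17.7 cm(-1) below S = 3.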

  5. Multinuclear Solid-State Magnetic Resonance as a Sensitive Probe of Structural Changes upon the Occurrence of Halogen Bonding in Co-crystals.

    PubMed

    Widdifield, Cory M; Cavallo, Gabriella; Facey, Glenn A; Pilati, Tullio; Lin, Jingxiang; Metrangolo, Pierangelo; Resnati, Giuseppe; Bryce, David L

    2013-09-02

    Although the understanding of intermolecular interactions, such as hydrogen bonding, is relatively well-developed, many additional weak interactions work both in tandem and competitively to stabilize a given crystal structure. Due to a wide array of potential applications, a substantial effort has been invested in understanding the halogen bond. Here, we explore the utility of multinuclear ((13)C, (14/15)N, (19)F, and (127)I) solid-state magnetic resonance experiments in characterizing the electronic and structural changes which take place upon the formation of five halogen-bonded co-crystalline product materials. Single-crystal X-ray diffraction (XRD) structures of three novel co-crystals which exhibit a 1:1 stoichiometry between decamethonium diiodide (i.e., [(CH3)3N(+)(CH2)10N(+)(CH3)3][2 I(-)]) and different para-dihalogen-substituted benzene moieties (i.e., p-C6X2Y4, X=Br, I; Y=H, F) are presented. (13)C and (15)N NMR experiments carried out on these and related systems validate sample purity, but also serve as indirect probes of the formation of a halogen bond in the co-crystal complexes in the solid state. Long-range changes in the electronic environment, which manifest through changes in the electric field gradient (EFG) tensor, are quantitatively measured using (14)N NMR spectroscopy, with a systematic decrease in the (14)N quadrupolar coupling constant (CQ) observed upon halogen bond formation. Attempts at (127)I solid-state NMR spectroscopy experiments are presented and variable-temperature (19)F NMR experiments are used to distinguish between dynamic and static disorder in selected product materials, which could not be conclusively established using solely XRD. 
Quantum chemical calculations using the gauge-including projector augmented-wave (GIPAW) or relativistic zeroth-order regular approximation (ZORA) density functional theory (DFT) approaches complement the experimental NMR measurements and provide theoretical corroboration for the changes in NMR parameters observed upon the formation of a halogen bond. Copyright © 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Substituent effects on the optical properties of naphthalenediimides: A frontier orbital analysis across the periodic table.

    PubMed

    Mulder, Joshua R; Guerra, Célia Fonseca; Slootweg, J Chris; Lammertsma, Koop; Bickelhaupt, F Matthias

    2016-01-15

A comprehensive theoretical treatment is presented for the electronic excitation spectra of ca. 50 different mono-, di-, and tetrasubstituted naphthalenediimides (NDI) using time-dependent density functional theory (TDDFT) at ZORA-CAM-B3LYP/TZ2P//ZORA-BP86/TZ2P with COSMO for simulating the effect of dichloromethane (DCM) solution. The substituents -XHn are from groups 14-17 and rows 2-5 of the periodic table. The lowest dipole-allowed singlet excitation (S0-S1) of the monosubstituted NDIs can be tuned from 3.39 eV for -F to 2.42 eV for -TeH, while the S0-S2 transition is less sensitive to substitution, with energies ranging between 3.67 eV for -CH3 and 3.44 eV for -SbH2. In the case of NDIs with group-15 and -16 substituents, the optical transitions strongly depend on the extent to which -XHn is planar or pyramidal as well as on the possible formation of intramolecular hydrogen bonds. The accumulative effect of double and quadruple substitution leads in general to increasing bathochromic shifts, but the increased steric hindrance in tetrasubstituted NDIs can lead to deformations that diminish the effectiveness of the substituents. Detailed analyses of the Kohn-Sham orbital electronic structure in monosubstituted NDIs reveal the mesomeric destabilization of the HOMO as the primary cause of the bathochromic shift of the S0-S1 transition. © 2015 Wiley Periodicals, Inc.

  7. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of the Rosenau regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity..., and at the same time, it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.
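The bounded dissipation referred to here can be written schematically in Fourier-space form (the commonly quoted presentation of Rosenau's regularization, stated as background rather than taken from this abstract):

```latex
u_t + f(u)_x \;=\; \varepsilon\,\bigl[Q_m * u_x\bigr]_x,
\qquad
\widehat{Q_m}(k) \;=\; \frac{1}{1+\varepsilon^{2}k^{2}},
```

so the dissipative symbol is -\varepsilon k^2/(1+\varepsilon^2 k^2): it matches the Navier-Stokes viscosity -\varepsilon k^2 at low wave-numbers but stays bounded by 1/\varepsilon at high wave-numbers, which is the boundedness property the abstract emphasizes.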

  8. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    PubMed

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
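The 6N-gradient-evaluation scheme described here is embarrassingly parallel because each displaced-geometry gradient is independent of the others. A minimal serial sketch of the underlying central-difference step (a generic illustration, not the NUMFREQ@Grid implementation; the toy gradient function is hypothetical):

```python
import numpy as np

def numerical_hessian(grad, x0, h=1e-4):
    """Hessian by central differences of a gradient function.

    Each coordinate needs two gradient evaluations (the 6N prefactor for
    3N Cartesian coordinates); all evaluations are independent, so they
    could be dispatched to a computing grid in parallel.
    """
    n = x0.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (grad(x0 + e) - grad(x0 - e)) / (2 * h)
    return 0.5 * (H + H.T)  # symmetrize to remove finite-difference noise

# toy energy E = x^2 + 3y^2 with its analytic gradient
grad = lambda x: np.array([2 * x[0], 6 * x[1]])
H = numerical_hessian(grad, np.zeros(2))
```

For this quadratic toy the gradient is linear, so the central difference is exact and H is the constant matrix diag(2, 6).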

  9. Representation of the exact relativistic electronic Hamiltonian within the regular approximation

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2003-12-01

The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian has been obtained, which yields in lowest order directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to c^(-20). Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method coined scaled IORA is documented in atomic and molecular calculations.
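Several records in this listing rely on the ZORA Hamiltonian discussed here. For orientation, its commonly quoted one-electron form (standard notation given as background, not taken from this abstract; V is the external potential, \mathbf{p} the momentum operator, \boldsymbol{\sigma} the Pauli matrices, and c the speed of light, in atomic units) is

```latex
\hat{H}_{\mathrm{ZORA}}
\;=\;
V \;+\;
\boldsymbol{\sigma}\cdot\mathbf{p}\,
\frac{c^{2}}{2c^{2}-V}\,
\boldsymbol{\sigma}\cdot\mathbf{p} .
```

Expanding the exact energy-dependent denominator 2c^2 - V + E in powers of E/(2c^2 - V) recovers this operator at zeroth order; retaining higher orders leads to the SORA and infinite-order (IORA) variants examined in this record.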

  10. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  11. The method of A-harmonic approximation and optimal interior partial regularity for nonlinear elliptic systems under the controllable growth condition

    NASA Astrophysics Data System (ADS)

    Chen, Shuhong; Tan, Zhong

    2007-11-01

    In this paper, we consider the nonlinear elliptic systems under controllable growth condition. We use a new method introduced by Duzaar and Grotowski, for proving partial regularity for weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establishing the optimal Hölder exponent for the derivative of a weak solution.

  12. KSC kicks off African-American History Month

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Mack McKinney, chief, program resources management at NASA and chairperson for African-American History Month, presents a plaque to Bhetty Waldron at the kick-off ceremony of African-American History Month on Feb. 3 at the NASA Training Auditorium. The award was given in thanks for Waldron's portrayal of Dr. Mary McLeod Bethune and Zora Neal Hurston during the ceremony. The theme for this year's observation is 'Heritage and Horizons: The African-American Legacy and the Challenges of the 21st Century.' February is designated each year as a time to celebrate the achievements and contributions of African Americans to Kennedy Space Center, NASA and the nation.

  13. Modifications of spontaneous oculomotor activity in microgravitational conditions

    NASA Astrophysics Data System (ADS)

    Kornilova, L. N.; Goncharenko, A. M.; Polyakov, V. V.; Grigorova, V.; Manev, A.

    Investigations of spontaneous oculomotor activity were carried out before and after space flight (five cosmonauts) and during flight (two cosmonauts), on the 3rd, 5th and 164th days of the space flight. Oculomotor activity was recorded by electrooculography with the automated data acquisition and processing system "Zora," based on personal computers. During and after the space flight, all the cosmonauts, with eyes closed or open and dark-goggled, showed a marked increase in movement amplitude when moving the eyes to the extreme positions, especially in the vertical direction, the occurrence of corrective saccadic movements (or nystagmus), and an increase in the duration of fixation reactions.

  14. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389

  15. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
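The regularization step above, damping an ill-conditioned least-squares fit so that the estimates become stable, can be sketched in a few lines. This is a generic Tikhonov example with made-up data, not the authors' reservoir code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nearly collinear columns make the forward model ill-conditioned.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([t, t + 1e-6])
x_true = np.array([1.0, 2.0])
b = A @ x_true + 1e-3 * rng.standard_normal(50)   # noisy "well pressure" data

def tikhonov(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # may blow up along the near-null direction
x_reg = tikhonov(A, b, alpha=1e-6)              # damped, stable estimate
```

The regularized estimate no longer resolves the two nearly indistinguishable parameters individually, but it stays bounded and fits the data well; the spline coefficients of the abstract play the role of x here.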

  16. Revised Thomas-Fermi approximation for singular potentials

    NASA Astrophysics Data System (ADS)

    Dufty, James W.; Trickey, S. B.

    2016-08-01

    Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.

  17. Regularization of the double period method for experimental data processing

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
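A Tikhonov stabilizer with a squared second derivative, as used above, can be illustrated with a minimal smoothing sketch (hypothetical data and penalty weight; not the authors' double-period implementation):

```python
import numpy as np

def smooth_second_derivative(y, lam):
    """Minimize ||u - y||^2 + lam * ||D2 u||^2, where D2 is the discrete
    second-difference operator; larger lam suppresses spurious oscillations."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

x = np.linspace(0.0, 1.0, 200)
truth = np.exp(-x) * np.sin(6.0 * x)              # smooth underlying curve
noisy = truth + 0.05 * np.sin(40.0 * np.pi * x)   # deterministic high-frequency "noise"
smoothed = smooth_second_derivative(noisy, lam=100.0)
```

The penalty leaves the slowly varying signal nearly untouched while strongly attenuating the high-frequency component, which is exactly the oscillation-suppression behavior described in the abstract.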

  18. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: minimize ‖Lx‖ subject to x minimizing ‖Ax − b‖, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
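The rank-k TRSVD building block (sample a slightly larger random subspace, then truncate) can be sketched as follows; this is a generic randomized SVD illustration, not the authors' MTRSVD code:

```python
import numpy as np

def truncated_randomized_svd(A, k, q=10, seed=0):
    """Rank-k TRSVD: capture a rank-(k+q) subspace with a Gaussian test
    matrix, project A onto it, then truncate the small SVD back to rank k."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k

# Exactly rank-5 test matrix: TRSVD should recover it to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = truncated_randomized_svd(A, k=5)
A_k = U @ np.diag(s) @ Vt
```

The oversampling parameter q trades a slightly larger sketch for a much more reliable capture of the dominant singular subspace.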

  19. Au11Re: A hollow or endohedral binary cluster?

    NASA Astrophysics Data System (ADS)

    MacLeod Carey, Desmond; Muñoz-Castro, Alvaro

    2018-06-01

    In this letter, we discuss the plausible formation of [Au11Re] as a superatom with an electronic structure accounted for by the 1S^2 1P^6 1D^10 shell order, denoting stability similar to [W@Au12]. The possible hollow and endohedral structures show a variable HOMO-LUMO gap depending on the structure (from 0.33 to 1.30 eV, at the PBE/ZORA level). Our results show that the energy minimum is an endohedral arrangement, where Re is encapsulated in a D3h-Au11 cage, retaining a higher gold-dopant stoichiometric ratio. This approach is useful for further rationalization and design of novel superatoms, expanding the libraries of endohedral clusters.

  20. Post-Newtonian and numerical calculations of the gravitational self-force for circular orbits in the Schwarzschild geometry

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    2010-03-01

    The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c≪1, and is most appropriate for small orbital velocities v. The perturbative self-force analysis requires an extreme mass ratio m1/m2≪1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles ∝ (d-3)^(-1) appearing in dimensional regularization at the 3PN order cancel out from the final gauge invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical self-force calculation are found to agree well. The consistency of this cross-cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.

  1. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.

  2. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
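The iteratively regularized Gauss-Newton iteration can be sketched on a toy problem. Note that the sketch uses a simple a priori geometric decay of the regularization parameter rather than the heuristic rule proposed in the paper, and a made-up exponential forward model:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 40)

def F(c):              # forward model: c[0] * exp(-c[1] * t)
    return c[0] * np.exp(-c[1] * t)

def J(c):              # Jacobian of F with respect to the parameters c
    e = np.exp(-c[1] * t)
    return np.column_stack([e, -c[0] * t * e])

c_true = np.array([2.0, 1.5])
y = F(c_true) + 1e-3 * np.cos(17.0 * t)    # deterministic "noise"

def irgn(y, c0, alpha0=1.0, ratio=0.5, iters=15):
    """Iteratively regularized Gauss-Newton: the Tikhonov weight alpha_k
    shrinks geometrically, and each step is pulled toward the initial guess."""
    c = c0.astype(float).copy()
    alpha = alpha0
    for _ in range(iters):
        Jk = J(c)
        rhs = Jk.T @ (y - F(c)) + alpha * (c0 - c)
        c = c + np.linalg.solve(Jk.T @ Jk + alpha * np.eye(len(c)), rhs)
        alpha *= ratio
    return c

c_est = irgn(y, c0=np.array([1.0, 1.0]))
```

In practice the iteration must be stopped before alpha becomes too small relative to the noise, which is exactly the termination problem the paper's heuristic rule addresses.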

  3. Predicting survival in critical patients by use of body temperature regularity measurement based on approximate entropy.

    PubMed

    Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M

    2007-07-01

    Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature registers acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
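A minimal implementation of the regularity metric used above, approximate entropy ApEn(m, r), might look like this (generic test signals, not the authors' clinical pipeline):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r): lower values indicate a more regular (predictable) signal.
    By convention r defaults to 0.2 times the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all template pairs (self-matches included)
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)

regular = np.sin(np.linspace(0.0, 8.0 * np.pi, 300))   # highly regular signal
rng = np.random.default_rng(0)
irregular = rng.standard_normal(300)                    # highly irregular signal

apen_reg = approximate_entropy(regular)
apen_irr = approximate_entropy(irregular)
```

A periodic signal yields a much smaller ApEn than white noise, mirroring the paper's use of temperature-curve regularity as a severity marker.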

  4. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  5. Analytic derivation of an approximate SU(3) symmetry inside the symmetry triangle of the interacting boson approximation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonatsos, Dennis; Karampagia, S.; Casten, R. F.

    2011-05-15

    Using a contraction of the SU(3) algebra to the algebra of the rigid rotator in the large-boson-number limit of the interacting boson approximation (IBA) model, a line is found inside the symmetry triangle of the IBA, along which the SU(3) symmetry is preserved. The line extends from the SU(3) vertex to near the critical line of the first-order shape/phase transition separating the spherical and prolate deformed phases, and it lies within the Alhassid-Whelan arc of regularity, the unique valley of regularity connecting the SU(3) and U(5) vertices in the midst of chaotic regions. In addition to providing an explanation for the existence of the arc of regularity, the present line represents an example of an analytically determined approximate symmetry in the interior of the symmetry triangle of the IBA. The method is applicable to algebraic models possessing subalgebras amenable to contraction. This condition is equivalent to algebras in which the equilibrium ground state and its rotational band become energetically isolated from intrinsic excitations, as typified by deformed solutions to the IBA for large numbers of valence nucleons.

  6. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. 
The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  7. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.

  8. A hybrid Pade-Galerkin technique for differential equations

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1993-01-01

    A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
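Steps one and two of the hybrid technique (build a truncated series, then convert it to a Pade approximant) can be sketched for a simple function; exp(x) is an illustrative choice here, not one of the paper's model problems:

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator a and denominator b coefficient arrays (b[0] = 1)."""
    # Denominator: match orders L+1 .. L+M of the series to zero.
    C = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([c[L + i] for i in range(1, M + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(C, rhs)])
    # Numerator: match orders 0 .. L.
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

def poly(p, x):
    return sum(pk * x ** k for k, pk in enumerate(p))

# Step one: truncated Taylor series of exp(x); step two: its [2/2] Pade form.
c = np.array([1.0 / factorial(k) for k in range(5)])
a, b = pade(c, 2, 2)

x = 1.0
taylor4 = poly(c, x)                # plain 4th-order series
pade22 = poly(a, x) / poly(b, x)    # rational [2/2] approximant
```

Built from the very same five coefficients, the rational form is noticeably more accurate at x = 1 than the series itself, which is the improvement the paper's third (Galerkin) step then refines further.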

  9. Applicability of regular particle shapes in light scattering calculations for atmospheric ice particles.

    PubMed

    Macke, A; Mishchenko, M I

    1996-07-20

    We ascertain the usefulness of simple ice particle geometries for modeling the intensity distribution of light scattering by atmospheric ice particles. To this end, similarities and differences in light scattering by axis-equivalent, regular and distorted hexagonal cylindric, ellipsoidal, and circular cylindric ice particles are reported. All the results pertain to particles with sizes much larger than a wavelength and are based on a geometrical optics approximation. At a nonabsorbing wavelength of 0.55 µm, ellipsoids (circular cylinders) have a much (slightly) larger asymmetry parameter g than regular hexagonal cylinders. However, our computations show that only random distortion of the crystal shape leads to a closer agreement with g values as small as 0.7 as derived from some remote-sensing data analysis. This may suggest that scattering by regular particle shapes is not necessarily representative of real atmospheric ice crystals at nonabsorbing wavelengths. On the other hand, if real ice particles happen to be hexagonal, they may be approximated by circular cylinders at absorbing wavelengths.

  10. Electronic transport coefficients in plasmas using an effective energy-dependent electron-ion collision-frequency

    NASA Astrophysics Data System (ADS)

    Faussurier, G.; Blancard, C.; Combis, P.; Decoster, A.; Videau, L.

    2017-10-01

    We present a model to calculate the electrical and thermal electronic conductivities in plasmas using the Chester-Thellung-Kubo-Greenwood approach coupled with the Kramers approximation. The divergence in photon energy at low values is eliminated using a regularization scheme with an effective energy-dependent electron-ion collision frequency. In doing so, we interpolate smoothly between the Drude-like and Spitzer-like regularizations. The model still satisfies the well-known sum rule over the electrical conductivity. This kind of approximation also extends naturally to the average-atom model. Particular attention is paid to the Lorenz number: its nondegenerate and degenerate limits are given, and the transition toward the Drude-like limit is proved in the Kramers approximation.

  11. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques.
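The Soft-Impute iteration described above (fill missing entries from the current estimate, then soft-threshold the singular values) can be sketched on a small synthetic example; this is not the authors' implementation:

```python
import numpy as np

def soft_impute(X, mask, lam, iters=200):
    """Soft-Impute sketch: observed entries of X (where mask is True) stay
    fixed; missing entries are filled from the current estimate Z, and the
    singular values of the filled matrix are soft-thresholded by lam."""
    Z = np.zeros_like(X)
    for _ in range(iters):
        filled = np.where(mask, X, Z)            # impute the missing entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # soft-threshold the spectrum
        Z = (U * s) @ Vt
    return Z

# Rank-2 ground truth with roughly 60% of the entries observed.
rng = np.random.default_rng(0)
truth = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((40, 30)) < 0.6
Z = soft_impute(truth, mask, lam=0.5)
```

With enough observed entries of a genuinely low-rank matrix, the missing entries are recovered to small relative error; the threshold lam plays the role of the regularization-path parameter in the abstract.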

  12. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  13. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
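RES combines stochastic gradients with an explicit regularization of the curvature estimate; as a minimal illustration of the underlying BFGS idea, an inverse-Hessian approximation built purely from gradient differences, here is a deterministic sketch on a quadratic (an illustrative example, not the paper's algorithm):

```python
import numpy as np

def bfgs_quadratic(A, b, x0):
    """Minimize f(x) = 0.5 x'Ax - b'x using a BFGS inverse-Hessian
    approximation H built only from gradient differences (exact line search)."""
    n = len(x0)
    H = np.eye(n)
    x = np.asarray(x0, dtype=float).copy()
    g = A @ x - b
    for _ in range(n + 2):
        if np.linalg.norm(g) < 1e-12:            # converged; avoid 0/0 below
            break
        d = -H @ g
        t = -(g @ d) / (d @ A @ d)               # exact step length for a quadratic
        s = t * d
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        g = g_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 0.1 * np.eye(8)          # symmetric positive definite Hessian
b = rng.standard_normal(8)
x_opt = np.linalg.solve(A, b)
x_bfgs = bfgs_quadratic(A, b, np.zeros(8))
```

With exact line searches on a quadratic, BFGS reaches the minimizer in at most n steps without ever forming the Hessian; RES replaces the exact gradients here with stochastic ones and regularizes the curvature update to keep it well conditioned.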

  14. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low dimensional label space. The classification of patterns is carried out in this low dimensional label subspace: a test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machines (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.

  15. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…

  16. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  17. Ca-Rich Carbonate Melts: A Regular-Solution Model, with Applications to Carbonatite Magma + Vapor Equilibria and Carbonate Lavas on Venus

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.

    1995-01-01

    A thermochemical model of the activities of species in carbonate-rich melts would be useful in quantifying chemical equilibria between carbonatite magmas and vapors and in extrapolating liquidus equilibria to unexplored P-T-X conditions. A regular-solution model of Ca-rich carbonate melts is developed here, using the fact that they are ionic liquids and can be treated (to a first approximation) as interpenetrating regular solutions of cations and of anions. Thermochemical data on systems of alkali metal cations with carbonate and other anions are drawn from the literature; data on systems with alkaline earth (and other) cations and carbonate (and other) anions are derived here from liquidus phase equilibria. The model is validated in that all available data (at 1 kbar) are consistent with single values for the melting temperature and heat of fusion of calcite, and all liquidi are consistent with the liquids acting as regular solutions. At 1 kbar, the metastable congruent melting temperature of calcite (CaCO3) is inferred to be 1596 K, with ΔH(fus, calcite) = 31.5 +/- 1 kJ/mol. Regular-solution interaction parameters (W) for Ca(2+) and alkali metal cations are in the range -3 to -12 kJ/mol; W for Ca(2+)-Ba(2+) is approximately -11 kJ/mol; W for Ca(2+)-Mg(2+) is approximately -40 kJ/mol; and W for Ca(2+)-La(3+) is approximately +85 kJ/mol. Solutions of carbonate and most anions (including OH(-), F(-), and SO4(2-)) are nearly ideal, with W between 0 (ideal) and -2.5 kJ/mol. The interaction of carbonate and phosphate ions is strongly nonideal, which is consistent with the suggestion of carbonate-phosphate liquid immiscibility. Interaction of carbonate and sulfide ions is also nonideal and suggestive of carbonate-sulfide liquid immiscibility. Solution of H2O, for all but the most H2O-rich compositions, can be modeled as a disproportionation to hydronium (H3O(+)) and hydroxyl (OH(-)) ions, with W for Ca(2+)-H3O(+) approximately 33 kJ/mol.
The regular-solution model of carbonate melts can be applied to problems of carbonatite magma + vapor equilibria and of extrapolating liquidus equilibria to unstudied systems. Calculations on one carbonatite (the Husereau dike, Oka complex, Quebec, Canada) show that the anion solution of its magma contained an OH(-) mole fraction of approximately 0.07, although the vapor in equilibrium with the magma had P(H2O) = 8.5 x P(CO2). F in carbonatite systems is calculated to be strongly partitioned into the magma (as F(-)) relative to coexisting vapor. In the Husereau carbonatite magma, the anion solution contained an F(-) mole fraction of approximately 6 x 10^-5.
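
    For a binary regular solution, the excess free energy G_ex = W·x1·x2 implies RT·ln γ1 = W·x2² for the activity coefficient of component 1. The snippet below is our own illustration using the order of magnitude of the Ca(2+)-Mg(2+) interaction parameter quoted above; the temperature and composition are arbitrary choices for demonstration only.

```python
import math

# Activity coefficient in a binary regular solution: RT ln(gamma_1) = W * x2^2.
# W in J/mol; W = 0 recovers the ideal solution (gamma = 1).

R = 8.314  # gas constant, J/(mol*K)

def activity_coefficient(W, x_other, T):
    """gamma for one component of a binary regular solution."""
    return math.exp(W * x_other ** 2 / (R * T))

# Equimolar Ca-Mg cation mixture near a carbonatite liquidus temperature,
# with W ~ -40 kJ/mol as estimated in the abstract above.
gamma_ca = activity_coefficient(-40000.0, 0.5, 1600.0)
```

A strongly negative W (as for Ca(2+)-Mg(2+)) drives gamma well below 1, i.e., the mixture is stabilized relative to ideal mixing.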

  18. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.

  19. Methyl cation affinities of neutral and anionic maingroup-element hydrides: trends across the periodic table and correlation with proton affinities.

    PubMed

    Mulder, R Joshua; Guerra, Célia Fonseca; Bickelhaupt, F Matthias

    2010-07-22

    We have computed the methyl cation affinities in the gas phase of archetypal anionic and neutral bases across the periodic table using ZORA-relativistic density functional theory (DFT) at BP86/QZ4P//BP86/TZ2P. The main purpose of this work is to provide the methyl cation affinities (and corresponding entropies) at 298 K of all anionic (XH(n-1)(-)) and neutral bases (XH(n)) constituted by maingroup-element hydrides of groups 14-17 and the noble gases (i.e., group 18) along the periods 2-6. The cation affinity of the bases decreases from H(+) to CH(3)(+). To understand this trend, we have carried out quantitative bond energy decomposition analyses (EDA). Quantitative correlations are established between the MCA and PA values.

  20. On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.

    2000-01-01

    Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)

  1. Study of X(5568) in a unitary coupled-channel approximation of BK¯ and Bs π

    NASA Astrophysics Data System (ADS)

    Sun, Bao-Xi; Dong, Fang-Yong; Pang, Jing-Long

    2017-07-01

    The potential between the B meson and the pseudoscalar meson is constructed up to the next-to-leading-order Lagrangian, and the BK̄ and Bsπ interaction is then studied in the unitary coupled-channel approximation. A resonant state with a mass of about 5568 MeV and J^P = 0^+ is generated dynamically, which can be associated with the X(5568) state recently announced by the D0 Collaboration. The mass and the decay width of this resonant state depend on the regularization scale in the dimensional regularization scheme, or on the maximum momentum in the momentum cutoff regularization scheme. The scattering amplitude of the vector B meson and the pseudoscalar meson is also calculated, and an axial-vector state with a mass near 5620 MeV and J^P = 1^+ is produced. Their partners in the charm sector are also discussed.

  2. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least squares regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
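
    A minimal sketch of the sum-space idea, under the simplifying assumption (ours, not the paper's general setting) that both components share one regularization weight — in that case minimizing over f = f1 + f2 reduces to ordinary kernel ridge regression with the summed kernel k1 + k2. All names and test data below are our own.

```python
import math

# Least squares regression in the sum of two Gaussian-kernel RKHSs,
# reduced (under equal regularization) to kernel ridge with k1 + k2.

def gauss_kernel(x, z, scale):
    return math.exp(-((x - z) ** 2) / (2 * scale ** 2))

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_sum_space(xs, ys, scales=(2.0, 0.2), lam=1e-6):
    n = len(xs)
    K = [[sum(gauss_kernel(xs[i], xs[j], s) for s in scales) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        K[i][i] += lam * n                 # ridge term from the RKHS norms
    alpha = solve(K, list(ys))

    def predict(x):
        return sum(a * sum(gauss_kernel(x, xi, s) for s in scales)
                   for a, xi in zip(alpha, xs))
    return predict

# Nonflat target: a slow trend plus a high-frequency wiggle, matching the
# large-kernel / small-kernel decomposition described above.
xs = [i * 0.3 for i in range(25)]
ys = [math.sin(x) + 0.3 * math.sin(8 * x) for x in xs]
f = fit_sum_space(xs, ys)
```

The wide kernel alone would smooth away the fast component and the narrow kernel alone would generalize poorly on the trend; their sum fits both.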

  3. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface-wave dispersion measurements from global as well as regional studies.
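
    The bias-variance reasoning behind a risk-minimizing choice of the Tikhonov parameter can be sketched in the SVD basis, where the Tikhonov estimate damps each singular component by the filter factor f_i = s_i²/(s_i² + λ²). The toy example below is our own construction (not the surface-wave setup of the study): it grid-searches λ to minimize the resulting expected risk.

```python
# Tikhonov parameter selection by minimizing an expected (Bayes-style) risk
# in the SVD basis. Each component contributes a squared bias from damping
# and a variance term from noise amplified by the small singular value.

def expected_risk(lam, svals, coeffs, noise):
    risk = 0.0
    for s, c in zip(svals, coeffs):
        f = s * s / (s * s + lam * lam)    # Tikhonov filter factor
        risk += (1 - f) ** 2 * c * c       # squared bias from damping
        risk += (f * noise / s) ** 2       # variance of amplified noise
    return risk

def best_lambda(svals, coeffs, noise, grid):
    return min(grid, key=lambda lam: expected_risk(lam, svals, coeffs, noise))

# One well-resolved and one poorly resolved model component.
svals, coeffs, sigma = [1.0, 0.05], [1.0, 1.0], 0.1
grid = [i / 1000 for i in range(1, 2000)]
lam = best_lambda(svals, coeffs, sigma, grid)
```

For a single component with singular value 1 the minimizer is known in closed form, λ* = noise/|coeff|, which makes a handy sanity check on the grid search.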

  4. DFT-based molecular modeling and vibrational study of the La(III) complex of 3,3'-(benzylidene)bis(4-hydroxycoumarin).

    PubMed

    Mihaylov, Tzvetan; Trendafilova, Natasha; Georgieva, Ivelina

    2008-05-01

    Molecular modeling of the La(III) complex of 3,3'-(benzylidene)bis(4-hydroxycoumarin) (PhDC) was performed using density functional theory (DFT) methods at B3LYP/6-31G(d) and BP86/TZP levels. Both Stuttgart-Dresden effective core potential and ZORA approximation were applied to the La(III) center. The electron density distribution and the nucleophilic centers of the deprotonated ligand PhDC(2-) in a solvent environment were estimated on the basis of Hirshfeld atomic charges, electrostatic potential values at the nuclei, and Nalewajski-Mrozek bond orders. In accordance with the empirical formula La(PhDC)(OH)(H(2)O), a chain structure of the complex was simulated by means of two types of molecular fragment: (1) two La(III) cations bound to one PhDC(2-) ligand, and (2) two PhDC(2-) ligands bound to one La(III) cation. Different orientations of PhDC(2-), OH(-) and H(2)O ligands in the La(III) complexes were investigated using 20 possible [La(PhDC(2-))(2)(OH)(H(2)O)](2-) fragments. Energy calculations predicted that the prism-like structure based on "tail-head" cis-LML2 type binding and stabilized via HO...HOH intramolecular hydrogen bonds is the most probable structure for the La(III) complex. The calculated vibrational spectrum of the lowest energy La(III) model fragment is in very good agreement with the experimental IR spectrum of the complex, supporting the suggested ligand binding mode to La(III) in a chain structure, namely, every PhDC(2-) interacts with two La(III) cations through both carbonylic and both hydroxylic oxygens, and every La(III) cation binds four oxygen atoms of two different PhDC(2-).

  5. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  6. Generalized matrix summability of a conjugate derived Fourier series.

    PubMed

    Mursaleen, M; Alotaibi, Abdullah

    2017-01-01

    The study of infinite matrices is important in the theory of summability and in approximation. In particular, Toeplitz matrices or regular matrices and almost regular matrices have been very useful in this context. In this paper, we propose to use a more general matrix method to obtain necessary and sufficient conditions to sum the conjugate derived Fourier series.
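
    A regular (Toeplitz) matrix method assigns limits to sequences through row averages; the Cesàro (C,1) means are the classic example, reproducing the limit of every convergent sequence while also summing some divergent ones. A small illustration (the paper's matrix method is more general than this):

```python
# Cesàro (C,1) summability: replace a sequence by its running averages.
# Regularity means any convergent sequence keeps its limit under the method.

def cesaro_means(seq):
    out, total = [], 0.0
    for n, x in enumerate(seq, start=1):
        total += x
        out.append(total / n)
    return out

# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... are 1, 0, 1, 0, ...;
# the sequence diverges, but its Cesàro means tend to 1/2.
partials = [(n + 1) % 2 for n in range(1000)]
c = cesaro_means(partials)
```
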

  7. Vacuum polarization in the field of a multidimensional global monopole

    NASA Astrophysics Data System (ADS)

    Grats, Yu. V.; Spirin, P. A.

    2016-11-01

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values 〈ϕ²〉_ren and 〈T_00〉_ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.

  8. The role of the [CpM(CO)2](-) chromophore in the optical properties of the [Cp2ThMCp(CO)2](+) complexes, where M = Fe, Ru and Os. A theoretical view.

    PubMed

    Cantero-López, Plinio; Le Bras, Laura; Páez-Hernández, Dayán; Arratia-Pérez, Ramiro

    2015-12-14

    The chemical bond between an actinide and a transition metal unsupported by bridging ligands is not well characterized. In this paper we study the electronic properties, bonding nature and optical spectra of a family of [Cp2ThMCp(CO)2](+) complexes, where M = Fe, Ru, Os, based on relativistic two-component density functional theory calculations. The Morokuma-Ziegler energy decomposition analysis shows an important ionic contribution in the Th-M interaction, with around 25% covalent character. Clearly, charge transfer occurs on Th-M bond formation; however, the orbital term most likely represents a strong charge rearrangement in the fragments due to the interaction. Finally, the spin-orbit ZORA calculation shows the possible NIR emission induced by the [FeCp(CO)2](-) chromophore, accomplishing the antenna effect that justifies the sensitization of the actinide complexes.

  9. X2Y2 isomers: tuning structure and relative stability through electronegativity differences (X = H, Li, Na, F, Cl, Br, I; Y = O, S, Se, Te).

    PubMed

    El-Hamdi, Majid; Poater, Jordi; Bickelhaupt, F Matthias; Solà, Miquel

    2013-03-04

    We have studied the XYYX and X2YY isomers of the X2Y2 species (X = H, Li, Na, F, Cl, Br, I; Y = O, S, Se, Te) using density functional theory at the ZORA-BP86/QZ4P level. Our computations show that, over the entire range of our model systems, the XYYX isomers are more stable than the X2YY forms except for X = F and Y = S and Te, for which the F2SS and F2TeTe isomers are slightly more stable. Our results also point out that the Y-Y bond length can be tuned quite generally through the X-Y electronegativity difference. The mechanism behind this electronic tuning is the population or depopulation of the π* in the YY fragment.

  10. Solid-state (55)Mn NMR spectroscopy of bis(μ-oxo)dimanganese(IV) [Mn(2)O(2)(salpn)(2)], a model for the oxygen evolving complex in photosystem II.

    PubMed

    Ellis, Paul D; Sears, Jesse A; Yang, Ping; Dupuis, Michel; Boron, Thaddeus T; Pecoraro, Vincent L; Stich, Troy A; Britt, R David; Lipton, Andrew S

    2010-12-01

    We have examined the antiferromagnetically coupled bis(μ-oxo)dimanganese(IV) complex [Mn(2)O(2)(salpn)(2)] (1) with (55)Mn solid-state NMR at cryogenic temperatures and first-principles theory. The extracted values of the (55)Mn quadrupole coupling constant, C(Q), and its asymmetry parameter, η(Q), for 1 are 24.7 MHz and 0.43, respectively. Further, there was a large anisotropic contribution to the shielding of each Mn(4+), i.e., a Δσ of 3375 ppm. Utilizing broken-symmetry density functional theory, the predicted values of the electric field gradient (EFG), or equivalently C(Q) and η(Q), at the ZORA/PBE/QZ4P all-electron level of theory are 23.4 MHz and 0.68, respectively, in good agreement with experimental observations.

  11. 40 CFR 63.2872 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... NESHAP General Provisions. (c) In this section as follows: Accounting month means a time interval defined... consistent and regular basis. An accounting month will consist of approximately 4 to 5 calendar weeks and each accounting month will be of approximate equal duration. An accounting month may not correspond...

  12. High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d - 3)^(-1) that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.

  13. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  14. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
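
    The GCV principle the method builds on can be sketched in a diagonal (SVD-like) toy problem, where the influence matrix A_λ has entries s_i²/(s_i² + λ) and the GCV score n·‖(I − A_λ)b‖²/trace(I − A_λ)² is cheap to evaluate. This is our own illustration of the selection principle, not the paper's TV restoration algorithm.

```python
# Generalized cross-validation for a diagonal regularized inverse problem:
# pick lambda minimizing n * ||(I - A_lam) b||^2 / trace(I - A_lam)^2.

def gcv_score(lam, svals, b):
    n = len(b)
    res = sum(((lam / (s * s + lam)) * bi) ** 2 for s, bi in zip(svals, b))
    tr = sum(lam / (s * s + lam) for s in svals)      # trace(I - A_lam)
    return n * res / tr ** 2

def pick_lambda(svals, b, grid):
    return min(grid, key=lambda lam: gcv_score(lam, svals, b))

# Diagonal problem b_i = s_i * x_i + e_i with decaying singular values: the
# last components amplify noise badly when inverted without regularization.
n = 20
svals = [1.0 / (i + 1) for i in range(n)]
x_true = [1.0 if i < 3 else 0.0 for i in range(n)]
noise = [0.05 * (-1) ** i for i in range(n)]          # deterministic stand-in
b = [s * x + e for s, x, e in zip(svals, x_true, noise)]

grid = [10 ** (k / 10 - 6) for k in range(61)]        # 1e-6 ... 1e0
lam = pick_lambda(svals, b, grid)
x_hat = [s * bi / (s * s + lam) for s, bi in zip(svals, b)]
```

Note that GCV needs no estimate of the noise level, which is what makes the per-iteration update in the paper inexpensive.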

  15. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the RS and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of ℓ1-based regularization.
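
    At each AMP iteration, the SCAD penalty enters through its scalar proximal (thresholding) operator, whose closed form is standard: soft thresholding near zero, a linear interpolation region, and no shrinkage beyond aλ. The sketch below is our own code (with the customary default a = 3.7), not taken from the paper.

```python
# Proximal (thresholding) operator of the SCAD penalty for a scalar input,
# with parameters lam > 0 and a > 2.

def scad_threshold(z, lam, a=3.7):
    az = abs(z)
    sign = 1.0 if z >= 0 else -1.0
    if az <= 2 * lam:                      # soft-thresholding region
        return sign * max(az - lam, 0.0)
    if az <= a * lam:                      # linear interpolation region
        return ((a - 1) * z - sign * a * lam) / (a - 2)
    return z                               # penalty is flat: no shrinkage
```

Because the operator is the identity for large |z|, SCAD avoids the constant bias that soft thresholding (ℓ1) imposes on strong coefficients.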

  16. The Adler D-function for N = 1 SQCD regularized by higher covariant derivatives in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.

    2018-01-01

    We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed for the case of the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to order O(αs²). The problem of the scheme dependence of the D-function and the NSVZ-like equation is briefly discussed.

  17. Polarimetric image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.

    In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework, a closed-form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, which incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.

  18. Vacuum polarization in the field of a multidimensional global monopole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grats, Yu. V., E-mail: grats@phys.msu.ru; Spirin, P. A.

    2016-11-15

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values 〈ϕ²〉_ren and 〈T_00〉_ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.

  19. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.

  20. Solid-state (185/187)Re NMR and GIPAW DFT study of perrhenates and Re2(CO)10: chemical shift anisotropy, NMR crystallography, and a metal-metal bond.

    PubMed

    Widdifield, Cory M; Perras, Frédéric A; Bryce, David L

    2015-04-21

    Advances in solid-state nuclear magnetic resonance (SSNMR) methods, such as dynamic nuclear polarization (DNP), intricate pulse sequences, and increased applied magnetic fields, allow for the study of systems which even very recently would be impractical. However, SSNMR methods using certain quadrupolar probe nuclei (i.e., I > 1/2), such as (185/187)Re remain far from fully developed due to the exceedingly strong interaction between the quadrupole moment of these nuclei and local electric field gradients (EFGs). We present a detailed high-field (B0 = 21.1 T) experimental SSNMR study on several perrhenates (KReO4, AgReO4, Ca(ReO4)2·2H2O), as well as ReO3 and Re2(CO)10. We propose solid ReO3 as a new rhenium SSNMR chemical shift standard due to its reproducible and sharp (185/187)Re NMR resonances. We show that for KReO4, previously poorly understood high-order quadrupole-induced effects (HOQIE) on the satellite transitions can be used to measure the EFG tensor asymmetry (i.e., ηQ) to nearly an order-of-magnitude greater precision than competing SSNMR and nuclear quadrupole resonance (NQR) approaches. Samples of AgReO4 and Ca(ReO4)2·2H2O enable us to comment on the effects of counter-ions and hydration upon Re(vii) chemical shifts. Calcium-43 and (185/187)Re NMR tensor parameters allow us to conclude that two proposed crystal structures for Ca(ReO4)2·2H2O, which would be considered as distinct, are in fact the same structure. Study of Re2(CO)10 provides insights into the effects of Re-Re bonding on the rhenium NMR tensor parameters and rhenium oxidation state on the Re chemical shift value. As overtone NQR experiments allowed us to precisely measure the (185/187)Re EFG tensor of Re2(CO)10, we were able to measure rhenium chemical shift anisotropy (CSA) for the first time in a powdered sample. 
Experimental observations are supported by gauge-including projector augmented-wave (GIPAW) density functional theory (DFT) calculations, with NMR tensor calculations also provided for NH4ReO4, NaReO4 and RbReO4. These calculations are able to reproduce many of the experimental trends in rhenium δiso values and EFG tensor magnitudes. Using KReO4 as a prototypical perrhenate-containing system, we establish a correlation between the tetrahedral shear strain parameter (|ψ|) and the nuclear electric quadrupolar coupling constant (CQ), which enables the refinement of the structure of ND4ReO4. Shortcomings in traditional DFT approaches, even when including relativistic effects via the zeroth-order regular approximation (ZORA), for calculating rhenium NMR tensor parameters are identified for Re2(CO)10.

  1. Migration statistics relevant for malaria transmission in Senegal derived from mobile phone data and used in an agent-based migration model.

    PubMed

    Tompkins, Adrian M; McCreesh, Nicky

    2016-03-31

    One year of mobile phone location data from Senegal is analysed to determine the characteristics of journeys that result in an overnight stay, and are thus relevant for malaria transmission. Defining the home location of each person as the place of most frequent calls, it is found that approximately 60% of people who spend nights away from home have regular destinations that are repeatedly visited, although only 10% have 3 or more regular destinations. The number of journeys involving overnight stays peaks at a distance of 50 km, although roughly half of such journeys exceed 100 km. Most visits only involve a stay of one or two nights away from home, with just 4% exceeding one week. A new agent-based migration model is introduced, based on a gravity model adapted to represent overnight journeys. Each agent makes journeys involving overnight stays to either regular or random locations, with journey and destination probabilities taken from the mobile phone dataset. Preliminary simulations show that the agent-based model can approximately reproduce the patterns of migration involving overnight stays.
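The gravity-model journey generator described above can be sketched as follows. This is a minimal illustration only: the populations, distances, and the inverse-square distance exponent are assumptions for the sketch, not the probabilities calibrated from the mobile phone dataset.

```python
import random

def destination_probs(populations, distances, beta=2.0):
    """Gravity-model destination probabilities: weight of destination j is
    proportional to pop_j / dist_ij**beta (beta=2 is an illustrative choice)."""
    weights = [p / d**beta for p, d in zip(populations, distances)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_journey(populations, distances, rng=None):
    """Draw one overnight-journey destination index for an agent."""
    rng = rng or random.Random(0)
    probs = destination_probs(populations, distances)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical example: a large city 50 km away, a town 30 km away,
# a village 20 km away.
probs = destination_probs([500_000, 50_000, 5_000], [50.0, 30.0, 20.0])
```

In an agent-based setting, each agent would hold a small set of regular destinations sampled this way, plus occasional random destinations, with journey frequency and stay duration drawn from the empirical distributions reported above.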

  2. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    DTIC Science & Technology

    2016-11-22

    structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression...X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301...better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, in compressive sensing (Xiao et al., 2011). The

  3. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and the Makabe-Morimura approximation are extremely poor compared with this approximation.
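The baseline idea can be illustrated by comparing the exact Poisson CDF with two standard normal approximations. Note this sketch shows only the plain continuity-corrected approximation and the square-root variance-stabilizing transformation; it does not reproduce Kao's transformation or the power transformation evaluated in the paper.

```python
import math

def poisson_cdf(x, lam):
    """Exact P(X <= x) for X ~ Poisson(lam), by direct summation."""
    term = math.exp(-lam)
    total = term
    for k in range(1, x + 1):
        term *= lam / k
        total += term
    return total

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_approx(x, lam):
    """Plain normal approximation with continuity correction."""
    return phi((x + 0.5 - lam) / math.sqrt(lam))

def sqrt_approx(x, lam):
    """Square-root transformation: sqrt(X) is roughly N(sqrt(lam), 1/4),
    so the standardized statistic is 2*(sqrt(x + 1/2) - sqrt(lam))."""
    return phi(2.0 * (math.sqrt(x + 0.5) - math.sqrt(lam)))

exact = poisson_cdf(10, 10.0)
```

For moderate lam both approximations land within a couple of percentage points of the exact value, which is precisely the kind of gap the improved transformations in the paper aim to close.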

  4. A Systematic Theoretical Study of UC6: Structure, Bonding Nature, and Spectroscopy.

    PubMed

    Du, Jiguang; Jiang, Gang

    2017-11-20

    The study of uranium carbides has received renewed attention in recent years due to the potential use of these compounds as fuels in new generations of nuclear reactors. The isomers of the UC6 cluster were determined by DFT and ab initio methods. The structures obtained using SC-RECP for U were generally consistent with those obtained using an all-electron basis set (ZORA-SARC). The CCSD(T) calculations indicated that two isomers had similar energies and may coexist in laser evaporation experiments. The nature of the U-C bonds in the different isomers was examined via a topological analysis of the electron density, and the results indicated that the U-C bonds are predominantly closed-shell (ionic) interactions with a certain degree of covalent character in all cases, particularly in the linear species. The IR and UV-vis spectra of the isomers were theoretically simulated to provide information that can be used to identify the isomers of UC6 in future experiments.

  5. What has research over the past two decades revealed about the adverse health effects of recreational cannabis use?

    PubMed

    Hall, Wayne

    2015-01-01

    To examine changes in the evidence on the adverse health effects of cannabis since 1993. A comparison of the evidence in 1993 with the evidence and interpretation of the same health outcomes in 2013. Research in the past 20 years has shown that driving while cannabis-impaired approximately doubles car crash risk and that around one in 10 regular cannabis users develop dependence. Regular cannabis use in adolescence approximately doubles the risks of early school-leaving and of cognitive impairment and psychoses in adulthood. Regular cannabis use in adolescence is also associated strongly with the use of other illicit drugs. These associations persist after controlling for plausible confounding variables in longitudinal studies. This suggests that cannabis use is a contributory cause of these outcomes but some researchers still argue that these relationships are explained by shared causes or risk factors. Cannabis smoking probably increases cardiovascular disease risk in middle-aged adults but its effects on respiratory function and respiratory cancer remain unclear, because most cannabis smokers have smoked or still smoke tobacco. The epidemiological literature in the past 20 years shows that cannabis use increases the risk of accidents and can produce dependence, and that there are consistent associations between regular cannabis use and poor psychosocial outcomes and mental health in adulthood. © 2014 Society for the Study of Addiction.

  6. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.

  7. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.

  8. Evolutionary Games of Multiplayer Cooperation on Graphs

    PubMed Central

    Arranz, Jordi; Traulsen, Arne

    2016-01-01

    There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has been usually tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946

  9. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
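The Tikhonov-regularized minimization at the core of this formulation can be sketched generically. This is an illustrative ridge-style solve on random data, not the paper's Sinc-Galerkin discretization or its quasi-Newton/trust-region algorithm; the matrix and vector here are stand-ins for the discretized forward operator and observed data.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A q - b||^2 + lam * ||q||^2 via the regularized
    normal equations (A^T A + lam I) q = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Hypothetical stand-ins for a discretized forward problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x0 = tikhonov_solve(A, b, 0.0)    # ordinary least squares
x1 = tikhonov_solve(A, b, 10.0)   # regularized solution, shrunk toward zero
```

Sweeping lam and plotting residual norm ||A q - b|| against solution norm ||q|| on log-log axes traces out the L-curve mentioned above; the corner of that curve is a common heuristic for choosing the regularization parameter.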

  10. Zeroth order regular approximation approach to electric dipole moment interactions of the electron.

    PubMed

    Gaul, Konstantin; Berger, Robert

    2017-07-07

    A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  11. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    NASA Astrophysics Data System (ADS)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

    A quasi-relativistic two-component approach for an efficient calculation of P ,T -odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  12. Range-Separated Brueckner Coupled Cluster Doubles Theory

    NASA Astrophysics Data System (ADS)

    Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2014-04-01

    We introduce a range-separation approximation to coupled cluster doubles (CCD) theory that successfully overcomes limitations of regular CCD when applied to the uniform electron gas. We combine the short-range ladder channel with the long-range ring channel in the presence of a Brueckner renormalized one-body interaction and obtain ground-state energies with an accuracy of 0.001 a.u./electron across a wide range of density regimes. Our scheme is particularly useful in the low-density and strongly correlated regimes, where regular CCD has serious drawbacks. Moreover, we cure the infamous overcorrelation of approaches based on ring diagrams (i.e., the particle-hole random phase approximation). Our energies are further shown to have appropriate basis set and thermodynamic limit convergence, and overall this scheme promises energetic properties for realistic periodic and extended systems which existing methods do not possess.

  13. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  14. Particle dynamics around time conformal regular black holes via Noether symmetries

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Umair Shahzad, M.

    Time conformal regular black hole (RBH) solutions admitting the time conformal factor e^(εg(t)), where g(t) is an arbitrary function of time and ε is the perturbation parameter, are considered. The approximate Noether symmetries technique is used to find the function g(t), which comes out to be t/α. The dynamics of particles around RBHs are discussed through symmetry generators, which provide the approximate energy as well as the angular momentum of the particles. In addition, we analyze the motion of neutral and charged particles around two well-known RBHs: the charged RBH using the Fermi-Dirac distribution and the Kehagias-Sfetsos asymptotically flat RBH. We obtain the innermost stable circular orbit and the corresponding approximate energy and angular momentum. The behavior of the effective potential, effective force, and escape velocity of the particles in the presence/absence of a magnetic field for different values of angular momentum near the horizons is also analyzed. The stable and unstable regions of particles near the horizons due to the effects of angular momentum and the magnetic field are also explained.

  15. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  16. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  17. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. 
This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  18. The Berry phase and the phase of the determinant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braverman, Maxim

    2014-04-15

    We show that under very general assumptions the adiabatic approximation of the phase of the zeta-regularized determinant of the imaginary-time Schrödinger operator with periodic Hamiltonian is equal to the Berry phase.

  19. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  20. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.

  1. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  2. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  3. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  4. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  5. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    PubMed

    Fei, Juntao; Lu, Cheng

    2018-04-01

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops where the firing weights and output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure has combined the advantages of internal feedback NN and external feedback NN, it can acquire the internal state information while the output signal is also captured, thus the newly designed DLRNN can achieve better approximation performance compared with the regular NNs without feedback loops or the regular RNNs with a single feedback loop. The new proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to get favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a z-axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology can achieve good tracking property, and the comparisons of the approximation performance between radial basis function NN, RNN, and DLRNN show that the DLRNN can accurately estimate the unknown dynamics with a fast speed while the internal states of DLRNN are more stable.

  6. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
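The spectral-regularization idea above hinges on the proximal operator of the nuclear norm, which soft-thresholds the singular values of the matrix parameter. A minimal sketch of that core step (the paper's full estimation algorithm is more involved):

```python
import numpy as np

def singular_value_threshold(B, tau):
    """Proximal operator of tau * ||.||_* (nuclear norm): soft-threshold the
    singular values of B.  Shrinking small singular values to zero is what
    encourages a low-rank matrix parameter."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```

Larger tau zeroes out more singular values, tracing a regularization path from the unconstrained estimate down to a rank-one (and eventually zero) matrix.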

  7. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
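The regularization strategies compared above differ in how they filter small singular values when inverting an ill-conditioned matrix. A minimal sketch of TSVD and Tikhonov regularized pseudoinverses in their generic linear-algebra form (not the LS_NUFFT implementation; L1-regularization has no closed form of this kind and is omitted):

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular values,
    discarding the noise-amplifying small ones entirely."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

def tikhonov_pinv(A, lam):
    """Tikhonov-regularized pseudoinverse: damp the inverse singular values
    smoothly with filter factors s / (s^2 + lam)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s ** 2 + lam)) @ U.T
```

With k equal to the full rank (or lam -> 0) both reduce to the ordinary Moore-Penrose pseudoinverse; the regularization only matters when A is ill-conditioned.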

  8. Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1994-01-01

    Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.

  9. A sensor network system for the health monitoring of the Parkview bridge deck.

    DOT National Transportation Integrated Search

    2010-01-31

    Bridges are a critical component of the transportation infrastructure. There are approximately 600,000 bridges in the United States, according to the Federal Highway Administration. Four billion vehicles traverse these bridges daily. Regular inspec...

  10. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because of the limited approximation capability of using fixed basis functions, the performance of reflectance estimation obtained by traditional linear models will not be optimal. We propose an approach based on the regularized local linear model. Our approach performs efficiently and knowledge of the spectral power distribution of the illuminant and the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
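A minimal sketch of the regularized local linear idea: fit a ridge-regularized linear map from camera responses to reflectances using only the training pairs nearest to the query. The neighborhood rule, variable names, and parameters are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def local_ridge_reflectance(train_resp, train_refl, query, k=10, lam=1e-3):
    """Estimate a reflectance spectrum from a camera response.

    train_resp: (n, c) training camera responses
    train_refl: (n, w) corresponding reflectance spectra
    query:      (c,)   camera response to invert
    """
    # pick the k training responses nearest to the query
    d = np.linalg.norm(train_resp - query, axis=1)
    idx = np.argsort(d)[:k]
    C, R = train_resp[idx], train_refl[idx]
    # ridge solution of R ≈ C @ M:  M = (C'C + lam*I)^(-1) C'R
    M = np.linalg.solve(C.T @ C + lam * np.eye(C.shape[1]), C.T @ R)
    return query @ M
```

Because the map is learned directly from response/reflectance pairs, no knowledge of the illuminant's spectral power distribution or the camera's spectral sensitivities is required, which mirrors the property claimed above.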

  11. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  12. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
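The guided filter that replaces Gaussian smoothing here can be illustrated in one dimension (the guided image filter of He et al.; using it on a displacement-field component with the image as guide is a sketch of the idea, and the parameter values are illustrative):

```python
import numpy as np

def box_mean(x, r):
    """Mean of x over a sliding window of 2r+1 samples (edge-padded)."""
    pad = np.pad(x, r, mode='edge')
    c = np.cumsum(np.concatenate(([0.0], pad)))
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def guided_filter_1d(guide, signal, r=4, eps=1e-2):
    """1-D guided filter: smooth `signal` while following edges of `guide`.
    Locally the output is an affine function a*guide + b, so discontinuities
    present in the guide survive, unlike with Gaussian smoothing."""
    mI, mp = box_mean(guide, r), box_mean(signal, r)
    corr = box_mean(guide * signal, r)
    var = box_mean(guide * guide, r) - mI * mI
    a = (corr - mI * mp) / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * guide + box_mean(b, r)
```

In the registration setting, `signal` would be one component of the estimated displacement field and `guide` a structural image (or supervoxel map), so the regularization adapts to anatomical boundaries.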

  13. A finite element method with overlapping meshes for free-boundary axisymmetric plasma equilibria in realistic geometries

    NASA Astrophysics Data System (ADS)

    Heumann, Holger; Rapetti, Francesca

    2017-04-01

    Existing finite element implementations for the computation of free-boundary axisymmetric plasma equilibria approximate the unknown poloidal flux function by standard lowest order continuous finite elements with discontinuous gradients. As a consequence, the location of critical points of the poloidal flux, that are of paramount importance in tokamak engineering, is constrained to nodes of the mesh leading to undesired jumps in transient problems. Moreover, recent numerical results for the self-consistent coupling of equilibrium with resistive diffusion and transport suggest the necessity of higher regularity when approximating the flux map. In this work we propose a mortar element method that employs two overlapping meshes. One mesh with Cartesian quadrilaterals covers the vacuum chamber domain accessible by the plasma and one mesh with triangles discretizes the region outside. The two meshes overlap in a narrow region. This approach gives the flexibility to achieve easily and at low cost higher order regularity for the approximation of the flux function in the domain covered by the plasma, while preserving accurate meshing of the geometric details outside this region. The continuity of the numerical solution in the region of overlap is weakly enforced by a mortar-like mapping.

  14. Bonding nature and electron delocalization of An(COT)2, An = Th, Pa, U.

    PubMed

    Páez-Hernández, Dayán; Murillo-López, Juliana A; Arratia-Pérez, Ramiro

    2011-08-18

    A systematic study of a series of An(COT)(2) compounds, where An = Th, Pa, U, and COT represents cyclooctatetraene, has been performed using relativistic density functional theory. The ZORA Hamiltonian was applied for the inclusion of relativistic effects, taking into account all of the electrons in the optimization and explicitly including spin-orbit coupling effects. Time-dependent density functional theory (TDDFT) was used to calculate the excitation energies with the GGA SAOP functional, and the electronic transitions were analyzed using double group irreducible representations. The calculated excitation energies correlate closely with the increase in ring delocalization along the actinide series. These results show that, for these complexes, the increase in delocalization, as indicated by ELF bifurcation and NICS analyses, leads to a shift in the maximum absorption wavelength in the visible region. Also, delocalization in the COT ring increases along the actinide series, so the systems become more aromatic because of a modulation induced by the actinides. © 2011 American Chemical Society

  15. Understanding cage effects in imidazolium ionic liquids by 129Xe NMR: MD simulations and relativistic DFT calculations.

    PubMed

    Saielli, Giacomo; Bagno, Alessandro; Castiglione, Franca; Simonutti, Roberto; Mauri, Michele; Mele, Andrea

    2014-12-04

    (129)Xe NMR has recently been employed to probe the local structure of ionic liquids (ILs). However, no theoretical investigation addressing the dependence of the xenon chemical shift on the cage structure of the IL has yet been reported. We therefore present a study of the chemical shift of (129)Xe in two ionic liquids, [bmim][Cl] and [bmim][PF6], by a combination of classical MD simulations and relativistic DFT calculations of the xenon shielding constant. The bulk structure of the two ILs is investigated by means of radial distribution functions, paying special attention to the local structure, volume, and charge distribution of the cage surrounding the xenon atom. Relativistic DFT calculations, based on the ZORA formalism, on clusters extracted from the trajectory files of the two systems yield an average relative chemical shift in good agreement with the experimental data. Our results demonstrate the importance of the cage volume and the average charge surrounding the xenon nucleus in the IL cage as the factors determining the effective shielding.

  16. Determination of the turbulence integral model parameters for a case of a coolant angular flow in regular rod-bundle

    NASA Astrophysics Data System (ADS)

    Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph

    2017-11-01

    Research results on the dependence of the “k-ε” turbulence integral model (TIM) parameters on the angle of coolant flow in a regular smooth cylindrical rod-bundle are presented. The TIM is intended for determining effective momentum and heat transport coefficients in the volume-averaged heat and mass transfer equations for regular rod structures in an anisotropic porous media approximation. The TIM equations are obtained by volume-averaging the “k-ε” turbulence model equations over a periodic cell of the rod-bundle. The water flow across the rod-bundle at angles from 15 to 75 degrees was simulated with the ANSYS CFX code. As a result, the dependence of the TIM parameters on the flow angle was obtained.

  17. Dynamics of Urban Informal Labor Supply in the United States

    PubMed Central

    Gunter, Samara R.

    2016-01-01

    Objective This study provides the first panel data estimates of informal work in the US and explores relationships between informal- and regular-sector participation among urban parents of young children. Methods I examine determinants of informal-sector participation in five waves of data from the Fragile Families and Child Wellbeing Study using probit, pooled Tobit, and fixed effects OLS models. Results Approximately 53 percent of urban fathers and 32 percent of urban mothers with young children pursue informal work over a nine-year period. Informal work most often occurs in conjunction with regular work. Workers who work in both sectors in the same year are more likely to be of non-minority race, to have higher education (mothers only), to own credit cards, and to work in skilled white- or blue-collar occupations. Workers who ever participate in only the informal sector are more likely to be younger, to have health limitations, and to have never worked in the regular sector. Informal participation spells are shorter than regular-sector participation spells and are associated with changes in regular-sector participation and occupation but not most other life events. Conclusion Consistent with past work, informal work among parents of young children is widespread across socioeconomic groups. Transitions in and out of the informal sector are strongly related to changes in regular-sector employment and occupation. The results suggest that regular-sector participation provides access to informal work opportunities. PMID:28439143

  18. Dynamics of Urban Informal Labor Supply in the United States.

    PubMed

    Gunter, Samara R

    2017-03-01

    This study provides the first panel data estimates of informal work in the US and explores relationships between informal- and regular-sector participation among urban parents of young children. I examine determinants of informal-sector participation in five waves of data from the Fragile Families and Child Wellbeing Study using probit, pooled Tobit, and fixed effects OLS models. Approximately 53 percent of urban fathers and 32 percent of urban mothers with young children pursue informal work over a nine-year period. Informal work most often occurs in conjunction with regular work. Workers who work in both sectors in the same year are more likely to be of non-minority race, to have higher education (mothers only), to own credit cards, and to work in skilled white- or blue-collar occupations. Workers who ever participate in only the informal sector are more likely to be younger, to have health limitations, and to have never worked in the regular sector. Informal participation spells are shorter than regular-sector participation spells and are associated with changes in regular-sector participation and occupation but not most other life events. Consistent with past work, informal work among parents of young children is widespread across socioeconomic groups. Transitions in and out of the informal sector are strongly related to changes in regular-sector employment and occupation. The results suggest that regular-sector participation provides access to informal work opportunities.

  19. Regularization techniques for backward-in-time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

    Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.

  20. Analytic Regularity and Polynomial Approximation of Parametric and Stochastic Elliptic PDEs

    DTIC Science & Technology

    2010-05-31


  1. Regular and chaotic dynamics of non-spherical bodies. Zeldovich's pancakes and emission of very long gravitational waves

    NASA Astrophysics Data System (ADS)

    Bisnovatyi-Kogan, G. S.; Tsupko, O. Yu.

    2015-10-01

    In this paper we review a recently developed approximate method for investigation of dynamics of compressible ellipsoidal figures. Collapse and subsequent behaviour are described by a system of ordinary differential equations for time evolution of semi-axes of a uniformly rotating, three-axis, uniform-density ellipsoid. First, we apply this approach to investigate dynamic stability of non-spherical bodies. We solve the equations that describe, in a simplified way, the Newtonian dynamics of a self-gravitating non-rotating spheroidal body. We find that, after loss of stability, a contraction to a singularity occurs only in a pure spherical collapse, and deviations from spherical symmetry prevent the contraction to the singularity through a stabilizing action of nonlinear non-spherical oscillations. The development of instability leads to the formation of a regularly or chaotically oscillating body, in which dynamical motion prevents the formation of the singularity. We find regions of chaotic and regular pulsations by constructing a Poincaré diagram. A real collapse occurs after damping of the oscillations because of energy losses, shock wave formation or viscosity. We use our approach to investigate approximately the first stages of collapse during the large-scale structure formation. The theory of this process started from ideas of Ya. B. Zeldovich, concerning the formation of strongly non-spherical structures during nonlinear stages of the development of gravitational instability, known as 'Zeldovich's pancakes'. In this paper the collapse of non-collisional dark matter and the formation of pancake structures are investigated approximately. Violent relaxation, mass and angular momentum losses are taken into account phenomenologically. We estimate an emission of very long gravitational waves during the collapse, and discuss the possibility of gravitational lensing and polarization of the cosmic microwave background by these waves.

  2. Slow relaxation in weakly open rational polygons.

    PubMed

    Kokshenev, Valery B; Vicentini, Eduardo

    2003-07-01

    The interplay between the regular (piecewise-linear) and irregular (vertex-angle) boundary effects in nonintegrable rational polygonal billiards (of m equal sides) is discussed. Decay dynamics in polygons (of perimeter P(m) and small opening Δ) is analyzed through the late-time survival probability S(m) ∼ t^(−δ). Two distinct slow relaxation channels are established. The primary universal channel exhibits relaxation of regular sliding orbits, with δ = 1. The secondary channel is given by δ > 1 and becomes open when m > P(m)/Δ. It originates from vertex order-disorder dual effects and is due to relaxation of chaotic-like excitations.

  3. Quasinormal modes of gravitational perturbation around regular Bardeen black hole surrounded by quintessence

    NASA Astrophysics Data System (ADS)

    Saleh, Mahamat; Thomas, Bouetou Bouetou; Kofane, Timoleon Crepin

    2018-04-01

    In this paper, quasinormal modes of gravitational perturbation are investigated for the regular Bardeen black hole surrounded by quintessence. Considering the metric of the Bardeen spacetime surrounded by quintessence, we derive the perturbation equation for gravitational perturbation using the Regge-Wheeler gauge. The third-order Wentzel-Kramers-Brillouin (WKB) approximation method is used to evaluate the quasinormal frequencies. The behaviors of the black hole potential and the quasinormal modes are plotted explicitly. The results show that, due to the presence of quintessence, the gravitational perturbation around the black hole damps more slowly and oscillates more slowly.

  4. International Intercomparison of Regular Transmittance Scales

    NASA Astrophysics Data System (ADS)

    Eckerle, K. L.; Sutter, E.; Freeman, G. H. C.; Andor, G.; Fillinger, L.

    1990-01-01

    An intercomparison of the regular spectral transmittance scales of NIST, Gaithersburg, MD (USA); PTB, Braunschweig (FRG); NPL, Teddington, Middlesex (UK); and OMH, Budapest (H) was accomplished using three sets of neutral glass filters with transmittances ranging from approximately 0.92 to 0.001. The difference between the results from the reference spectrophotometers of the laboratories was generally smaller than the total uncertainty of the interchange. The relative total uncertainty ranges from 0.05% to 0.75% for transmittances from 0.92 to 0.001. The sample-induced error was large - contributing 40% or more of the total except in a few cases.

  5. Japanese Listeners' Perceptions of Phonotactic Violations

    ERIC Educational Resources Information Center

    Fais, Laurel; Kajikawa, Sachiyo; Werker, Janet; Amano, Shigeaki

    2005-01-01

    The canonical form for Japanese words is (Consonant)Vowel(Consonant)Vowel…. However, a regular process of high vowel devoicing between voiceless consonants and word-finally after voiceless consonants results in consonant clusters and word-final consonants, apparent violations of that phonotactic pattern. We investigated Japanese…

  6. Applying the Index of Watershed Integrity to the Western Balkan Region

    EPA Science Inventory

    In 2014, the western Balkans’ heaviest recorded rains triggered extensive flooding affecting approximately 29,600 km², or the equivalent of 75% of the study area. Rapid urbanization and the increasing regularity of late-summer droughts in the region likely exacerbated these...

  7. Nonlinear second order evolution inclusions with noncoercive viscosity term

    NASA Astrophysics Data System (ADS)

    Papageorgiou, Nikolaos S.; Rădulescu, Vicenţiu D.; Repovš, Dušan D.

    2018-04-01

    In this paper we deal with a second order nonlinear evolution inclusion, with a nonmonotone, noncoercive viscosity term. Using a parabolic regularization (approximation) of the problem and a priori bounds that permit passing to the limit, we prove that the problem has a solution.

  8. Fighting Violence without Violence.

    ERIC Educational Resources Information Center

    Rowicki, Mark A.; Martin, William C.

    Violence is becoming the number one problem in United States schools. Approximately 20 percent of high school students regularly carry guns and other weapons. Several nonviolent measures are appropriate to reduce violence in schools; but only the implementation of multiple ideas and measures, not "quick fix" solutions, will curb…

  9. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

    A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
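The least-squares Taylor-fit construction of a meshless Laplacian can be sketched as follows for a scattered 2-D point cloud; this is the generic idea, not the paper's renormalised formulations:

```python
import numpy as np

def meshless_laplacian(points, values, i, radius):
    """Estimate the Laplacian of a sampled field at point i from scattered
    2-D neighbours by least-squares fitting a local quadratic Taylor model."""
    p0, f0 = points[i], values[i]
    d = np.linalg.norm(points - p0, axis=1)
    mask = (d > 0) & (d <= radius)          # neighbours within the radius
    dx, dy = (points[mask] - p0).T
    df = values[mask] - f0
    # Taylor basis: f - f0 ≈ fx*dx + fy*dy + fxx*dx²/2 + fyy*dy²/2 + fxy*dx*dy
    A = np.column_stack([dx, dy, 0.5 * dx ** 2, 0.5 * dy ** 2, dx * dy])
    coef, *_ = np.linalg.lstsq(A, df, rcond=None)
    return coef[2] + coef[3]                # Laplacian = fxx + fyy
```

Because the fit recovers all first and second derivatives, a quadratic field is differentiated exactly regardless of how irregular the neighbour arrangement is, which is the property that makes such operators attractive for moving point clouds.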

  10. Weak-noise limit of a piecewise-smooth stochastic differential equation.

    PubMed

    Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram

    2013-11-01

    We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.

  11. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, the analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
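The "analytical generalized soft thresholding equation" that split Bregman schemes rely on reduces, in the scalar case, to the familiar shrinkage operator:

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding (shrinkage): the elementwise closed-form solution of
        min_z  gamma * |z| + 0.5 * (z - x)^2,
    used in each split Bregman iteration to handle the L1-type terms."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

In a TGV-regularized model the same formula is applied to vector- or tensor-valued auxiliary variables with the absolute value replaced by a pointwise norm.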

  12. Effects of Cereal, Fruit and Vegetable Fibers on Human Fecal Weight and Transit Time: A Comprehensive Review of Intervention Trials

    PubMed Central

    de Vries, Jan; Birkett, Anne; Hulshof, Toine; Verbeke, Kristin; Gibes, Kernon

    2016-01-01

    Cereal fibers are known to increase fecal weight and speed transit time, but far less data are available on the effects of fruits and vegetable fibers on regularity. This study provides a comprehensive review of the impact of these three fiber sources on regularity in healthy humans. We identified English-language intervention studies on dietary fibers and regularity and performed weighted linear regression analyses for fecal weight and transit time. Cereal and vegetable fiber groups had comparable effects on fecal weight; both contributed to it more than fruit fibers. Less fermentable fibers increased fecal weight to a greater degree than more fermentable fibers. Dietary fiber did not change transit time in those with an initial time of <48 h. In those with an initial transit time ≥48 h, transit time was reduced by approximately 30 min per gram of cereal, fruit or vegetable fibers, regardless of fermentability. Cereal fibers have been studied more than any other kind in relation to regularity. This is the first comprehensive review comparing the effects of the three major food sources of fiber on bowel function and regularity since 1993. PMID:26950143

  13. Effects of Cereal, Fruit and Vegetable Fibers on Human Fecal Weight and Transit Time: A Comprehensive Review of Intervention Trials.

    PubMed

    de Vries, Jan; Birkett, Anne; Hulshof, Toine; Verbeke, Kristin; Gibes, Kernon

    2016-03-02

    Cereal fibers are known to increase fecal weight and speed transit time, but far less data are available on the effects of fruits and vegetable fibers on regularity. This study provides a comprehensive review of the impact of these three fiber sources on regularity in healthy humans. We identified English-language intervention studies on dietary fibers and regularity and performed weighted linear regression analyses for fecal weight and transit time. Cereal and vegetable fiber groups had comparable effects on fecal weight; both contributed to it more than fruit fibers. Less fermentable fibers increased fecal weight to a greater degree than more fermentable fibers. Dietary fiber did not change transit time in those with an initial time of <48 h. In those with an initial transit time ≥48 h, transit time was reduced by approximately 30 min per gram of cereal, fruit or vegetable fibers, regardless of fermentability. Cereal fibers have been studied more than any other kind in relation to regularity. This is the first comprehensive review comparing the effects of the three major food sources of fiber on bowel function and regularity since 1993.
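A weighted least-squares line fit of the kind used to pool intervention-trial results can be sketched as follows (generic method only; the study's actual model specification and weighting scheme are not reproduced here):

```python
import numpy as np

def weighted_linear_fit(x, y, w):
    """Weighted least-squares fit of y ≈ a + b*x, where w holds per-study
    weights (e.g. reflecting trial size or precision).  Returns (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a, b
```

In this context x would be grams of fiber and y the observed change in fecal weight or transit time, with the fitted slope b playing the role of figures such as "approximately 30 min per gram".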

  14. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to approximate the regular wave solution as well as the Jost function and its zeros. This work continues and markedly improves upon the results of Chadan and other related studies. Several worked examples are given with illustrations and comparisons with existing methods.

  15. Hadamard States for the Klein-Gordon Equation on Lorentzian Manifolds of Bounded Geometry

    NASA Astrophysics Data System (ADS)

    Gérard, Christian; Oulghazi, Omar; Wrochna, Michał

    2017-06-01

    We consider the Klein-Gordon equation on a class of Lorentzian manifolds with Cauchy surface of bounded geometry, which is shown to include examples such as exterior Kerr, Kerr-de Sitter spacetime and the maximal globally hyperbolic extension of the Kerr outer region. In this setup, we give an approximate diagonalization and a microlocal decomposition of the Cauchy evolution using a time-dependent version of the pseudodifferential calculus on Riemannian manifolds of bounded geometry. We apply this result to construct all pure regular Hadamard states (and associated Feynman inverses), where regular refers to the state's two-point function having Cauchy data given by pseudodifferential operators. This allows us to conclude that there is a one-parameter family of elliptic pseudodifferential operators that encodes both the choice of (pure, regular) Hadamard state and the underlying spacetime metric.

  16. Regular source of primary care and emergency department use of children in Victoria.

    PubMed

    Turbitt, Erin; Freed, Gary Lee

    2016-03-01

    The aim of this paper was to study the prevalence of a regular source of primary care for Victorian children attending one of four emergency departments (EDs) and to determine associated characteristics, including ED use. Responses were collected via an electronic survey from parents attending EDs with their child (≤9 years of age) for a lower-urgency condition. Single, multiple choice, and Likert scale responses were analysed using bivariate and logistic regression tests. Of the 1146 parents who provided responses, 80% stated their child has a regular source of primary care. Of these, care is mostly received by a general practitioner (GP) (95%) in GP group practices (71%). Approximately 20% have changed where their child receives primary care in the last year. No associations were observed between having a regular source of primary care and frequency of ED attendance in the past 12 months, although parents whose child did not have a regular source of primary care were more likely to view the ED as a more convenient place to receive care than the primary care provider (39% without regular source vs. 18% with regular source; P < 0.0001). Children were less likely to have a regular source of primary care if their parents were younger, had a lower household income, lower education, and were visiting a hospital with a lower Socio-Economic Indexes for Areas (SEIFA) rank. Policy options to improve continuity of care for children may require investigation. Increasing the prevalence of regular source of primary care for children may in turn reduce ED visits. © 2015 The Authors. Journal of Paediatrics and Child Health © 2015 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  17. The impact of comorbid cannabis and methamphetamine use on mental health among regular ecstasy users.

    PubMed

    Scott, Laura A; Roxburgh, Amanda; Bruno, Raimondo; Matthews, Allison; Burns, Lucy

    2012-09-01

    Residual effects of ecstasy use induce neurotransmitter changes that make it biologically plausible that extended use of the drug may induce psychological distress. However, there has been only mixed support for this in the literature. The presence of polysubstance use is a confounding factor. The aim of this study was to investigate whether regular cannabis and/or regular methamphetamine use confers additional risk of poor mental health and high levels of psychological distress, beyond regular ecstasy use alone. Three years of data from a yearly, cross-sectional, quantitative survey of Australian regular ecstasy users were examined. Participants were divided into four groups according to whether they regularly (at least monthly) used ecstasy only (n=936), ecstasy and weekly cannabis (n=697), ecstasy and weekly methamphetamine (n=108) or ecstasy, weekly cannabis and weekly methamphetamine (n=180). Self-reported mental health problems and the Kessler Psychological Distress Scale (K10) were examined. Approximately one-fifth of participants self-reported at least one mental health problem, most commonly depression and anxiety. The addition of regular cannabis and/or methamphetamine use substantially increases the likelihood of self-reported mental health problems, particularly with regard to paranoia, over regular ecstasy use alone. Regular cannabis use remained significantly associated with self-reported mental health problems even when other differences between groups were accounted for. Regular cannabis and methamphetamine use was also associated with earlier initiation to ecstasy use. These findings suggest that patterns of drug use can help identify at-risk groups that could benefit from targeted approaches in education and interventions. Given that early initiation to substance use was more common in those with regular cannabis and methamphetamine use, and given that this group had a higher likelihood of mental health problems, work around delaying onset of initiation should continue to be a priority. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Chemistry in a Large, Multidisciplinary Laboratory.

    ERIC Educational Resources Information Center

    Lingren, Wesley E.; Hughson, Robert C.

    1982-01-01

    Describes a science facility built at Seattle Pacific University for approximately 70 percent of the capital cost of a conventional science building. The building serves seven disciplines on a regular basis. The operation of the multidisciplinary laboratory, special features, laboratory security, and student experience/reactions are highlighted.…

  19. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  20. Robust Principal Component Analysis Regularized by Truncated Nuclear Norm for Identifying Differentially Expressed Genes.

    PubMed

    Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun

    2017-09-01

    Identifying differentially expressed genes from the thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method in the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that nuclear norm minimization shrinks all singular values, so it may not be the best solution to approximate the rank function. The truncated nuclear norm is defined as the sum of some smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm, which is named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as the sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
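
    The distinction between the nuclear norm and the truncated nuclear norm can be sketched on a 2x2 example. This is a pure-Python sketch, not the TRPCA algorithm itself: closed-form singular values of a 2x2 matrix stand in for a general SVD routine.

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 matrix via the eigenvalues of A^T A."""
    (a, b), (c, d) = A
    g11, g12, g22 = a*a + c*c, a*b + c*d, b*b + d*d   # Gram matrix A^T A
    tr, det = g11 + g22, g11*g22 - g12*g12
    disc = math.sqrt(max(tr*tr - 4*det, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(lam1), math.sqrt(max(lam2, 0.0))

def nuclear_norm(A):
    """Sum of all singular values."""
    s1, s2 = singular_values_2x2(A)
    return s1 + s2

def truncated_nuclear_norm(A):
    """Sum of all but the largest singular value (r = 1 for a 2x2 matrix)."""
    _, s2 = singular_values_2x2(A)
    return s2

# A rank-1 matrix: its nuclear norm is positive, while its truncated
# nuclear norm vanishes, so the truncated norm tracks the rank better.
A = [[1.0, 2.0], [2.0, 4.0]]
print(nuclear_norm(A), truncated_nuclear_norm(A))
```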

  1. Approximate matching of regular expressions.

    PubMed

    Myers, E W; Miller, W

    1989-01-01

    Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N² log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
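
    The flavor of the underlying alignment dynamic program can be sketched on a toy special case, not the paper's full algorithm: a "regular expression" restricted to an alternation of literal branches, scored with unit edit costs. The helper names below are illustrative, and the full method handles arbitrary regular expressions and real-valued scores.

```python
def edit_distance(a, b):
    """Classic O(len(a)*len(b)) alignment DP with unit indel/substitution
    costs, kept in two rows of space."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete ca
                           cur[j - 1] + 1,            # insert cb
                           prev[j - 1] + (ca != cb))) # (mis)match
        prev = cur
    return prev[-1]

def approx_match_alternation(a, r):
    """Toy 'regex' matcher: r is an alternation of literal branches,
    e.g. 'cat|car|cart'. The answer is the branch (a sequence matching r)
    whose alignment with a scores best."""
    return min((edit_distance(a, branch), branch) for branch in r.split("|"))

print(approx_match_alternation("caart", "cat|car|cart"))
```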

  2. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most of machine learning approaches have stemmed from the application of minimizing the mean squared distance principle, based on the computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, the quadratic error functionals demonstrated many weaknesses including high sensitivity to contaminating factors and dimensionality curse. Therefore, a lot of recent applications in machine learning exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
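
    The piece-wise quadratic idea can be sketched as follows: on each of a few intervals, replace the target potential by a quadratic that agrees with it at the interval ends, and trim to a constant beyond the last threshold. This is a minimal sketch of the construction, with hypothetical thresholds, shown for the L1 potential f(x) = |x|.

```python
def pqsq_potential(f, thresholds):
    """Piece-wise quadratic approximation p(x) of an even potential f(x).
    On each interval [r_{k-1}, r_k) use p(x) = b_k + a_k * x**2, with
    a_k, b_k chosen so that p agrees with f at the interval ends; beyond
    the last threshold p is constant (trimming)."""
    rs = [0.0] + list(thresholds)
    pieces = []
    for r0, r1 in zip(rs, rs[1:]):
        a = (f(r1) - f(r0)) / (r1**2 - r0**2)
        b = f(r0) - a * r0**2
        pieces.append((r1, a, b))

    def p(x):
        x = abs(x)
        for r1, a, b in pieces:
            if x < r1:
                return b + a * x * x
        return f(rs[-1])  # trimmed: constant past the last threshold
    return p

# Approximate f(x) = |x| with three quadratic pieces; the approximation
# is exact at the thresholds and bounded (trimmed) beyond the last one.
p = pqsq_potential(abs, [0.5, 1.0, 2.0])
print(p(0.5), p(1.0), p(2.0), p(5.0))
```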

  3. Audio-Tutorial Programming with Exceptional Children.

    ERIC Educational Resources Information Center

    Hofmeister, Alan

    The impetus for this study developed from a search for intervention procedures applicable to children with learning difficulties in the regular grades. It was noted that, when certain aspects of the curriculum which involved extensive repetition were being taught to pupils, approximately 90 percent of the interactions between teachers and pupils…

  4. Teaching the Value of Science

    ERIC Educational Resources Information Center

    Shumow, Lee; Schmidt, Jennifer A.

    2015-01-01

    Why and under what conditions might students value their science learning? To find out, the authors observed approximately 400 science classes. They found that although several teachers were amazingly adept at regularly promoting the value of science, many others missed out on important opportunities to promote the value of science. The authors…

  5. Making the Fitness Connection

    ERIC Educational Resources Information Center

    Brock, Sheri J.; Fittipaldi-Wert, Jeanine

    2005-01-01

    Children's fitness levels are decreasing at an alarming rate. The Centers for Disease Control has determined that approximately 33% of children do not regularly engage in vigorous physical activity (CDC, 2002). As a result, childhood obesity has increased 100% since 1980 in the United States due to physical inactivity (CDC, 2004). A well-planned…

  6. Generative Themes and At-Risk Students

    ERIC Educational Resources Information Center

    Thelin, William H.; Taczak, Kara

    2007-01-01

    At the University of Akron, the administration decided to segregate the students previously called "provisional" from the "regular" population. As an open-access institution, the university directly admits only approximately 15 percent of the students to a program of study. The vast majority of students start in University College and transfer to…

  7. Non-Aggressive Isolated and Rejected Students: School Social Work Interventions to Help Them

    ERIC Educational Resources Information Center

    Margolin, Sylvia

    2007-01-01

    Approximately half of children and adolescents who are socially unpopular are not aggressive. Some voluntarily isolate themselves from the peer group, whereas others are intentionally excluded and often victimized. Although school social workers regularly provide services to this population, there is little reported research on effective…

  8. Why Adolescent Problem Gamblers Do Not Seek Treatment

    ERIC Educational Resources Information Center

    Ladouceur, Robert; Blaszczynski, Alexander; Pelletier, Amelie

    2004-01-01

    Prevalence studies indicate that approximately 40% of adolescents participate in regular gambling with rates of problem gambling up to four times greater than that found in adult populations. However, it appears that few adolescents actually seek treatment for such problems. The purpose of this study was to explore potential reasons why…

  9. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  10. Discrete Morse flow for Ricci flow and porous medium equation

    NASA Astrophysics Data System (ADS)

    Ma, Li; Witt, Ingo

    2018-06-01

    In this paper, we study the discrete Morse flow for the Ricci flow on the American football, which is the 2-sphere with the north and south poles removed and equipped with a metric g0 of constant scalar curvature, and for the porous medium equation on a bounded regular domain in the plane. We show that under suitable assumptions on the initial metric g(0) one has a weak approximate discrete Morse flow for the approximated Ricci flow and porous medium equation on any time interval.

  11. Strong solutions and instability for the fitness gradient system in evolutionary games between two populations

    NASA Astrophysics Data System (ADS)

    Xu, Qiuju; Belmonte, Andrew; deForest, Russ; Liu, Chun; Tan, Zhong

    2017-04-01

    In this paper, we study a fitness gradient system for two populations interacting via a symmetric game. The population dynamics are governed by a conservation law, with a spatial migration flux determined by the fitness. By applying the Galerkin method, we establish the existence, regularity and uniqueness of global solutions to an approximate system, which retains most of the interesting mathematical properties of the original fitness gradient system. Furthermore, we show that a Turing instability occurs for equilibrium states of the fitness gradient system, and its approximations.

  12. Numerical modeling and analytical evaluation of light absorption by gold nanostars

    NASA Astrophysics Data System (ADS)

    Zarkov, Sergey; Akchurin, Georgy; Yakunin, Alexander; Avetisyan, Yuri; Akchurin, Garif; Tuchin, Valery

    2018-04-01

    In this paper, the regularity of local light absorption by a model of gold nanostars (AuNSts) is studied by numerical simulation. The mutual diffraction influence of individual geometric fragments of AuNSts is analyzed. A comparison is made with an approximate analytical approach for estimating the average bulk density of absorbed power and the total power absorbed by individual geometric fragments of AuNSts. It is shown that the results of the approximate analytical estimate are in qualitative agreement with the numerical calculations of the light absorption by AuNSts.

  13. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's function using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential, this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.

  14. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.

  15. Gene Expression Data to Mouse Atlas Registration Using a Nonlinear Elasticity Smoother and Landmark Points Constraints

    PubMed Central

    Lin, Tungyou; Guyader, Carole Le; Dinov, Ivo; Thompson, Paul; Toga, Arthur; Vese, Luminita

    2013-01-01

    This paper proposes a numerical algorithm for image registration using energy minimization and nonlinear elasticity regularization. Application to the registration of gene expression data to a neuroanatomical mouse atlas in two dimensions is shown. We apply a nonlinear elasticity regularization to allow larger and smoother deformations, and further enforce optimality constraints on the landmark points distance for better feature matching. To overcome the difficulty of minimizing the nonlinear elasticity functional due to the nonlinearity in the derivatives of the displacement vector field, we introduce a matrix variable to approximate the Jacobian matrix and solve for the simplified Euler-Lagrange equations. By comparison with image registration using linear regularization, experimental results show that the proposed nonlinear elasticity model also needs fewer numerical corrections such as regridding steps for binary image registration, it renders better ground truth, and produces larger mutual information; most importantly, the landmark points distance and L2 dissimilarity measure between the gene expression data and corresponding mouse atlas are smaller compared with the registration model with biharmonic regularization. PMID:24273381

  16. Anticipated HIV Stigma and Delays in Regular HIV Testing Behaviors Among Sexually-Active Young Gay, Bisexual, and Other Men Who Have Sex with Men and Transgender Women.

    PubMed

    Gamarel, Kristi E; Nelson, Kimberly M; Stephenson, Rob; Santiago Rivera, Olga J; Chiaramonte, Danielle; Miller, Robin Lin

    2018-02-01

    Young gay, bisexual and other men who have sex with men (YGBMSM) and young transgender women are disproportionately affected by HIV/AIDS. The success of biomedical prevention strategies is predicated on regular HIV testing; however, there has been limited uptake of testing among YGBMSM and young transgender women. Anticipated HIV stigma, that is, expecting rejection as a result of seroconversion, may serve as a significant barrier to testing. A cross-sectional sample of YGBMSM (n = 719, 95.5%) and young transgender women (n = 33, 4.4%) ages 15-24 were recruited to participate in a one-time survey. Approximately one-third of youth had not tested within the last 6 months. In a multivariable model, anticipated HIV stigma and reporting a non-gay identity were associated with an increased odds of delaying regular HIV testing. Future research and interventions are warranted to address HIV stigma, in order to increase regular HIV testing among YGBMSM and transgender women.

  17. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper, we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
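
    The logarithmic-barrier idea can be sketched on a one-dimensional toy problem, not the PET reconstruction itself: minimize (x - 2)^2 subject to 0 <= x <= 1, following the minimizers of the barrier objective as the barrier parameter shrinks.

```python
import math

def barrier_min(mu, lo=1e-12, hi=1.0 - 1e-12, iters=200):
    """Minimizer of phi(x) = (x - 2)**2 - mu*(log(x) + log(1 - x)) on (0, 1).
    phi is strictly convex there, so we bisect its increasing derivative."""
    def dphi(x):
        return 2.0 * (x - 2.0) - mu * (1.0 / x - 1.0 / (1.0 - x))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Central path: as the barrier parameter mu shrinks, the unconstrained
# barrier minimizer approaches the constrained optimum x* = 1 of
# minimize (x - 2)**2 subject to 0 <= x <= 1.
for mu in (1.0, 0.1, 0.01, 1e-4):
    print(mu, round(barrier_min(mu), 5))
```

    A full interior-point solver replaces the bisection with Newton (or, as in the abstract, PCG) steps on a multivariate subproblem, but the barrier-parameter schedule plays the same role.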

  18. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J. M.; Liu, Y.; Li, W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of plasma radiation toroidal symmetry, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. And the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize a regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.

  19. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  20. 13-Moment System with Global Hyperbolicity for Quantum Gas

    NASA Astrophysics Data System (ADS)

    Di, Yana; Fan, Yuwei; Li, Ruo

    2017-06-01

    We point out that the quantum Grad's 13-moment system (Yano in Physica A 416:231-241, 2014) lacks global hyperbolicity, and even worse, the thermodynamic equilibrium is not an interior point of the hyperbolicity region of the system. To remedy this problem, by fully considering Grad's expansion, we split the expansion into the equilibrium part and the non-equilibrium part, and propose a regularization for the system with the help of the new hyperbolic regularization theory developed in Cai et al. (SIAM J Appl Math 75(5):2001-2023, 2015) and Fan et al. (J Stat Phys 162(2):457-486, 2016). This provides us with a new model which is hyperbolic for all admissible thermodynamic states, and meanwhile preserves the approximate accuracy of the original system. It should be noted that this procedure is not a trivial application of the hyperbolic regularization theory.

  1. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergencies of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both these ambiguities from first principles, by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, for treating at once the IR and ultra-violet (UV) divergencies. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  2. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude in improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases, exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.

  3. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE PAGES

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    2016-10-19

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  4. Flexible binding simulation by a novel and improved version of virtual-system coupled adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi

    2016-10-01

    Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to the study of Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. With the improved VAUS, larger biological systems become amenable to study.

  5. Natural Gas Use On Minibuses, Engaged In The Carriage Of Passengers And Baggage On The Regular Routes, As A Measure For Decrease In Harmful Environment Effects

    NASA Astrophysics Data System (ADS)

    Chikishev, E.; Chikisheva, A.; Anisimov, I.; Chainikov, D.

    2017-01-01

    The paper considers increasing the use of compressed natural gas as a motor fuel for diesel minibuses in order to reduce emissions of harmful substances in exhaust gases. For the Russian company LTD "WTC Automobilist", which carries passengers and baggage on regular minibus routes, the fleet's natural gas requirements are calculated. A mini CNG refuelling station (RS) with optimal performance is proposed, and the approximate payback period of installing natural gas equipment on all of the company's buses, together with the commissioning period of the mini CNG RS, is calculated.

  6. The gravitational potential of axially symmetric bodies from a regularized Green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical computation but also for generating approximations, in particular for geometrically thin discs and rings.

  7. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581; Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394; Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885; Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167; Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.

  8. SPILC: An expert student advisor

    NASA Technical Reports Server (NTRS)

    Read, D. R.

    1990-01-01

    The Lamar University Computer Science Department serves about 350 undergraduate C.S. majors, and 70 graduate majors. B.S. degrees are offered in Computer Science and Computer and Information Science, and an M.S. degree is offered in Computer Science. In addition, the Computer Science Department plays a strong service role, offering approximately sixteen service course sections per long semester. The department has eight regular full-time faculty members, including the Department Chairman and the Undergraduate Advisor, and from three to seven part-time faculty members. Due to the small number of regular faculty members and the resulting very heavy teaching loads, undergraduate advising has become a difficult problem for the department. There is a one week early registration period and a three-day regular registration period once each semester. The Undergraduate Advisor's regular teaching load of two classes, 6 - 8 semester hours, per semester, together with the large number of majors and small number of regular faculty, cause long queues and short tempers during these advising periods. The situation is aggravated by the fact that entering freshmen are rarely accompanied by adequate documentation containing the facts necessary for proper counselling. There has been no good method of obtaining necessary facts and documenting both the information provided by the student and the resulting advice offered by the counsellors.

  9. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicate that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start towards ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
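
    The rank-restoring effect of a norm penalty can be sketched on a toy least-squares problem. A plain Tikhonov penalty stands in for the paper's norm-of-activation term, and the numbers are hypothetical:

```python
def tikhonov_solve_1row(a, b, lam):
    """Minimize (a.x - b)^2 + lam*||x||^2 for a single-row system a.x = b.
    Closed form: x = a * b / (a.a + lam)."""
    denom = sum(ai * ai for ai in a) + lam
    return [ai * b / denom for ai in a]

# a.x = 2 with a = (1, 1) has infinitely many exact solutions (rank deficiency);
# the penalty makes the problem full-rank and selects the minimum-norm solution (1, 1).
x = tikhonov_solve_1row([1.0, 1.0], 2.0, lam=1e-9)
```

    Without the `lam` term the normal equations are singular; with it, the minimizer is unique for any data.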

  10. Direct Recordings of Pitch Responses from Human Auditory Cortex

    PubMed Central

    Griffiths, Timothy D.; Kumar, Sukhbinder; Sedley, William; Nourski, Kirill V.; Kawasaki, Hiroto; Oya, Hiroyuki; Patterson, Roy D.; Brugge, John F.; Howard, Matthew A.

    2010-01-01

    Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept. PMID:20605456

  11. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  12. Approximal caries increment in adolescents in a low caries prevalence area in Sweden after a 3.5-year school-based fluoride varnish programme with Bifluorid 12 and Duraphat.

    PubMed

    Bergström, Eva-Karin; Birkhed, Dowen; Granlund, Christina; Sköld, Ulla Moberg

    2014-10-01

    To evaluate approximal caries increment among 12- to 16-year-olds in a low caries prevalence area in Sweden after a 3.5-year school-based fluoride (F) varnish programme with Bifluorid 12 and Duraphat. The design was an RCT with 1365 adolescents, divided into the following four groups: Group 1, Bifluorid 12, two applications/year; Group 2, Duraphat, two applications/year; Group 3, Bifluorid 12, four applications/year; and Group 4, no F varnish at school. 1143 children (84%) completed the study. Approximal caries was registered on bitewing radiographs. There were no statistically significant differences in caries prevalence among the groups either at baseline or after 3.5 years. The caries increment was 1.34 ± 2.99 (mean ± SD) for Group 1, 1.24 ± 2.84 for Group 2, 1.07 ± 2.66 for Group 3 and 1.25 ± 2.75 for Group 4, with no statistically significant differences either between Bifluorid 12 and Duraphat at the same frequency of F varnish applications or between the F groups and the control group. In an area with low caries prevalence in Sweden, the supplementary caries-preventive effect of school-based F varnish applications, added to regular use of F toothpaste at home and regular caries prevention at the Public Dental Clinics, appears to be nonsignificant regarding approximal caries increment. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Beyond computer literacy: supporting youth's positive development through technology.

    PubMed

    Bers, Marina Umaschi

    2010-01-01

    In a digital era in which technology plays a role in most aspects of a child's life, having the competence and confidence to use computers might be a necessary step, but not a goal in itself. Developing character traits that will serve children to use technology in a safe way to communicate and connect with others, and providing opportunities for children to make a better world through the use of their computational skills, is just as important. The Positive Technological Development framework (PTD), a natural extension of the computer literacy and the technological fluency movements that have influenced the world of educational technology, adds psychosocial, civic, and ethical components to the cognitive ones. PTD examines the developmental tasks of a child growing up in our digital era and provides a model for developing and evaluating technology-rich youth programs. The explicit goal of PTD programs is to support children in the positive uses of technology to lead more fulfilling lives and make the world a better place. This article introduces the concept of PTD and presents examples of the Zora virtual world program for young people that the author developed following this framework.

  14. Prevalence of Sufficient Physical Activity among Parents Attending a University

    ERIC Educational Resources Information Center

    Sabourin, Sharon; Irwin, Jennifer

    2008-01-01

    Objective: The benefits of regular physical activity are well documented. However, approximately half of all university students are insufficiently active, and no research to date exists on the activity behavior of university students who are also parents. Participants and Methods: Using an adapted version of the Godin Leisure Time Exercise…

  15. Factors Contributing to the Uptake and Maintenance of Regular Exercise Behaviour in Emerging Adults

    ERIC Educational Resources Information Center

    Langdon, Jody; Johnson, Chad; Melton, Bridget

    2017-01-01

    Objective: To identify the influence of parental autonomy support, basic need satisfaction and motivation on emerging adults' physical activity level and exercise behaviours. Design: Cross-sectional survey. Setting: This study convenience-sampled approximately 435 college students identified as emerging adults--aged 18-25 years, who did not have a…

  16. Guys and Games: Practicing 21st Century Workplace Skills in the Great Indoors

    ERIC Educational Resources Information Center

    King, Elizabeth M.

    2011-01-01

    While research indicates that an increasing number of males are experiencing a sense of disaffiliation with traditional education (Kleinfeld, 2006; Steinkuehler & King, 2009), nearly all teenage boys and young adult men (approximately 99%) regularly engage in playing video games of some sort (Roberts, Foehr & Rideout, 2008). This is an interesting…

  17. A Model Program of Comprehensive Educational Services for Students With Learning Problems.

    ERIC Educational Resources Information Center

    Union Township Board of Education, NJ.

    Programs are described for learning-disabled or mentally-handicapped elementary and secondary students in regular and special classes in Union, New Jersey, and approximately 58 instructional episodes involving student-made objects for understanding technology are presented. In part one, components of the model program such as the multi-learning…

  18. A Simple Method for Deriving the Confidence Regions for the Penalized Cox’s Model via the Minimand Perturbation†

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive confidence regions for regularized estimators of Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method better approximates the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
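
    The perturbation idea — re-minimizing a randomly re-weighted objective and reading confidence limits off quantiles of the perturbed minimizers — can be sketched on a toy quadratic objective. This stands in for the penalized Cox partial likelihood; all names and numbers are hypothetical:

```python
import random

def perturbation_ci(data, level=0.95, n_rep=2000, seed=0):
    """Minimand-perturbation sketch: re-minimize sum_i w_i*(x - d_i)^2 with
    random positive weights (minimizer = weighted mean) and take quantiles
    of the perturbed estimates as an interval."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_rep):
        w = [rng.expovariate(1.0) for _ in data]   # positive, mean-1 weights
        estimates.append(sum(wi * di for wi, di in zip(w, data)) / sum(w))
    estimates.sort()
    lo = estimates[int(n_rep * (1 - level) / 2)]
    hi = estimates[int(n_rep * (1 + level) / 2)]
    return lo, hi

lo, hi = perturbation_ci([4.0, 5.0, 6.0] * 10)   # sample centered at 5
```

    Unlike a sandwich formula, the same recipe yields an interval for any functional of the minimizer, including coefficients estimated at zero.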

  20. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach to medical image reconstruction has been sparsity regularization, which assumes the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is one widely used system, owing to its capability for sparsely approximating piecewise-smooth functions such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. The proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.
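
    Under an orthonormal system, the l1-regularized minimization decouples into elementwise soft-thresholding of the analysis coefficients. A minimal sketch of that shrinkage step only (the paper's adaptive frame construction is not shown; coefficients are hypothetical):

```python
def soft_threshold(coeffs, t):
    """Proximal operator of t*||.||_1: shrink each coefficient toward zero
    by t, zeroing anything smaller than t in magnitude."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

# with an orthonormal transform, one shrinkage of the analysis coefficients
# solves the l1 problem exactly
shrunk = soft_threshold([3.0, -0.5, 1.0, -2.0], t=1.0)
```

    Iterative schemes (e.g. ISTA) apply this same operator between data-consistency steps when the system is not orthonormal.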

  1. Relativistic nuclear magnetic resonance J-coupling with ultrasoft pseudopotentials and the zeroth-order regular approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Timothy F. G., E-mail: tim.green@materials.ox.ac.uk; Yates, Jonathan R., E-mail: jonathan.yates@materials.ox.ac.uk

    2014-06-21

    We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is allowed by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, ¹J(P-Ag) and ²J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.

  2. On the "Optimal" Choice of Trial Functions for Modelling Potential Fields

    NASA Astrophysics Data System (ADS)

    Michel, Volker

    2015-04-01

    There are many trial functions available (e.g., on the sphere) which can be used for modelling a potential field. Among them are orthogonal polynomials such as spherical harmonics, and radial basis functions such as spline or wavelet basis functions. Their pros and cons have been widely discussed in recent decades. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), which is able to choose trial functions of different kinds and combine them into a stable approximation of a potential field. One main advantage of the RFMP is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. The additional use of spline basis functions allows a stable handling of scattered data grids, and the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (such as downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the RFMP provides.
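
    The greedy selection at the heart of a matching pursuit can be sketched as follows. This is plain, unregularized matching pursuit over a hypothetical two-atom dictionary, not the RFMP itself (which adds a regularization term to the selection criterion and mixes full basis systems):

```python
def matching_pursuit(signal, atoms, n_iter):
    """Plain matching pursuit: greedily pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        best, best_ip = None, 0.0
        for name, atom in atoms.items():
            ip = sum(r * a for r, a in zip(residual, atom))
            if abs(ip) > abs(best_ip):
                best, best_ip = name, ip
        if best is None:            # residual orthogonal to every atom
            break
        coeffs[best] = coeffs.get(best, 0.0) + best_ip
        residual = [r - best_ip * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# toy "mixed dictionary": two orthonormal atoms standing in for, say, a
# spherical harmonic and a spline basis function
atoms = {"harmonic": [1.0, 0.0, 0.0], "spline": [0.0, 1.0, 0.0]}
coeffs, residual = matching_pursuit([3.0, 4.0, 0.0], atoms, n_iter=2)
```

    Because the selection compares all atoms at every step, the expansion freely mixes basis types, which is the property the RFMP exploits.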

  3. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal, among linear operators, with respect to the squared error averaged over the original and the observed signals. The kernel WF (KWF), extended directly from the WF, has the problem that additive noise has to be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, a huge computational cost is incurred in that case. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments to denoise images and estimate errors. We also apply the KWF to classification, since it can provide an approximated result of the maximum a posteriori classifier, which provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. We further conducted experiments of binary and multiclass classification and of classification in the presence of noise.

  4. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
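
    The accept/reject core of an MCMC sampler can be sketched in a few lines. This fixed-dimension random-walk Metropolis toy (standard-normal target, hypothetical settings) omits the reversible jumps, parallel tempering, and Voronoi-cell parameterization described above:

```python
import math
import random

def metropolis(logpost, x0, n, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, posterior ratio), else stay put."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        u = rng.random() or 1e-300        # guard against log(0)
        if math.log(u) < lpp - lp:        # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# toy "posterior": standard normal log-density (up to an additive constant)
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n=5000)
```

    Histogramming `samples` approximates the posterior itself, so credible intervals come directly from quantiles rather than from a regularization-biased point estimate.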

  5. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization is applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method is developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
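
    A semismooth Newton iteration can be sketched on a scalar toy problem: ordinary Newton steps, with an element of the generalized (Clarke) derivative used at the kink of the pointwise maximum. The equation below is hypothetical, not the elasto-plastic torsion problem:

```python
def semismooth_newton(F, dF, x0, tol=1e-12, maxit=50):
    """Semismooth Newton: Newton's method where dF returns an element of
    the generalized derivative at points where F is not differentiable."""
    x = x0
    for _ in range(maxit):
        fx = F(x)
        if abs(fx) < tol:
            break
        x = x - fx / dF(x)
    return x

# toy nonsmooth equation: max(x, 0) + x - 2 = 0, whose solution is x = 1
F = lambda x: max(x, 0.0) + x - 2.0
dF = lambda x: 2.0 if x > 0 else 1.0    # generalized derivative element
x_star = semismooth_newton(F, dF, -3.0)
```

    Despite the kink at x = 0, the iteration converges in a handful of steps, which is the local superlinear behaviour that motivates the method in function space.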

  6. Regular threshold-energy increase with charge for neutral-particle emission in collisions of electrons with oligonucleotide anions.

    PubMed

    Tanabe, T; Noda, K; Saito, M; Starikov, E B; Tateno, M

    2004-07-23

    Electron-DNA anion collisions were studied using an electrostatic storage ring with a merging electron-beam technique. The rate of neutral particles emitted in collisions started to increase from definite threshold energies, which increased regularly with ion charges in steps of about 10 eV. These threshold energies were almost independent of the length and sequence of DNA, but depended strongly on the ion charges. Neutral particles came from breaks of DNAs, rather than electron detachment. The step of the threshold energy increase approximately agreed with the plasmon excitation energy. It is deduced that plasmon excitation is closely related to the reaction mechanism. Copyright 2004 The American Physical Society

  7. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
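
    The smoothing step that the article generalizes can be sketched in 1-D: demons force vectors are regularized by Gaussian convolution, and a locally adaptive variant would let the kernel width vary with position, driven by the image content. A minimal sketch with a fixed kernel and hypothetical data:

```python
import math

def gaussian_kernel(sigma, radius=3):
    """Normalized discrete Gaussian weights on [-radius, radius]."""
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [wi / s for wi in w]

def smooth(force, sigma):
    """Demons regularization step: Gaussian convolution of a 1-D force
    field, with replicate (clamped) boundary handling."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(force)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(force) - 1)  # clamp at borders
            acc += w * force[idx]
        out.append(acc)
    return out

spike = [0.0] * 9
spike[4] = 1.0                    # a single large force vector
smoothed = smooth(spike, sigma=1.0)
```

    Making `sigma` a function of position is the essence of locally adaptive regularization: strong smoothing in homogeneous regions, weak smoothing where image detail should drive the deformation.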

  8. Modified Dispersion Relations: from Black-Hole Entropy to the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Garattini, Remo

    2012-07-01

    Quantum Field Theory is plagued by divergences in the attempt to calculate physical quantities. Standard techniques of regularization and renormalization are used to keep this problem under control. In this paper we would like to use a different scheme, based on Modified Dispersion Relations (MDR), to remove the infinities appearing in the one-loop approximation, in contrast to what happens in conventional approaches. In particular, we apply the MDR regularization to the computation of the entropy of a Schwarzschild black hole on one side and the Zero Point Energy (ZPE) of the graviton on the other side. The graviton ZPE is connected to the cosmological constant by means of the Wheeler-DeWitt equation.

  9. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space …

  10. Sudden Cardiac Death During Sports Activities in the General Population.

    PubMed

    Narayanan, Kumar; Bougouin, Wulfran; Sharifzadehgan, Ardalan; Waldmann, Victor; Karam, Nicole; Marijon, Eloi; Jouven, Xavier

    2017-12-01

    Regular exercise reduces cardiovascular and overall mortality. Participation in sports is an important determinant of cardiovascular health and fitness. Regular sports activity is associated with a smaller risk of sudden cardiac death (SCD). However, there is a small risk of sports-related SCD. Sports-related SCD accounts for approximately 5% of total SCD. SCD among athletes comprises only a fraction of all sports-related SCD. Sport-related SCD has a male predominance and an average age of affliction of 45 to 50 years. Survival is better than for other SCD. This review summarizes links between sports and SCD and discusses current knowledge and controversies. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
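    The GCV function itself can be written down exactly through the SVD of the blurring operator; the contribution of the article is to avoid this cubic-cost decomposition via Lanczos bidiagonalization and Gauss quadrature. A small dense sketch of the quantity being approximated (the toy operator and data are our own):

```python
import numpy as np

def gcv_score(lam, U, s, b):
    """GCV(lam) = ||(I - A A_lam^+) b||^2 / trace(I - A A_lam^+)^2, via the SVD."""
    f = s ** 2 / (s ** 2 + lam)              # Tikhonov filter factors
    Utb = U.T @ b
    resid = np.sum(((1.0 - f) * Utb) ** 2) + (b @ b - Utb @ Utb)
    return resid / (b.size - f.sum()) ** 2

rng = np.random.default_rng(0)
n = 80
# Toy 1-D blurring matrix (Gaussian PSF), a stand-in for an imaging operator
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = (np.abs(i - n / 2) < 10).astype(float)
b = A @ x_true + 0.01 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
lams = np.logspace(-8, 0, 60)
scores = np.array([gcv_score(l, U, s, b) for l in lams])
lam_best = lams[np.argmin(scores)]           # data-driven regularization parameter
```

    The Lanczos/Gauss-quadrature machinery of the paper replaces the explicit SVD with cheap upper and lower bounds on the trace term.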

  12. Surface waves on multilayer hyperbolic metamaterials: Operator approach to effective medium approximation

    NASA Astrophysics Data System (ADS)

    Popov, Vladislav; Lavrinenko, Andrei V.; Novitsky, Andrey

    2018-03-01

    In this paper, we elaborate on the operator effective medium approximation developed recently in Popov et al. [Phys. Rev. B 94, 085428 (2016), 10.1103/PhysRevB.94.085428] to get insight into the surface polariton excitation at the interface of a multilayer hyperbolic metamaterial (HMM). In particular, we find that HMMs with bilayer unit cells support the TE- and TM-polarized surface waves beyond the Maxwell Garnett approximation due to the spatial dispersion interpreted as effective magnetoelectric coupling. The latter is also responsible for the dependence of surface wave propagation on the order of layers in the unit cell. Elimination of the magnetoelectric coupling in three-layer unit cells complying with inversion symmetry restores the qualitative regularity of the Maxwell Garnett approximation, as well as strongly suppresses the influence of the order of layers in the unit cell.
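    As a baseline for the discussion above, the local (Maxwell Garnett) effective-medium limit of a deep-subwavelength multilayer reduces to arithmetic and harmonic means of the layer permittivities; hyperbolic dispersion corresponds to principal permittivities of opposite sign. The permittivity values below are representative placeholders, not taken from the paper:

```python
import numpy as np

def effective_permittivity(eps, fill):
    """Local effective-medium (Maxwell Garnett) limit of a periodic multilayer.

    eps  : permittivities of the layers in one unit cell
    fill : volume fractions of the layers (should sum to 1)
    """
    eps = np.asarray(eps, dtype=complex)
    fill = np.asarray(fill, dtype=float)
    eps_par = np.sum(fill * eps)             # in-plane: arithmetic mean
    eps_perp = 1.0 / np.sum(fill / eps)      # out-of-plane: harmonic mean
    return eps_par, eps_perp

# Representative values: a metal (eps ~ -10 + 0.5i) and a dielectric (eps = 2.25)
eps_par, eps_perp = effective_permittivity([-10.0 + 0.5j, 2.25], [0.5, 0.5])
hyperbolic = eps_par.real * eps_perp.real < 0
```

    This zeroth-order description misses exactly the magnetoelectric-coupling (spatial dispersion) effects that the paper shows are responsible for the additional surface waves and the dependence on layer order.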

  13. Relatedness in spatially structured populations with empty sites: An approach based on spatial moment equations.

    PubMed

    Lion, Sébastien

    2009-09-07

    Taking into account the interplay between spatial ecological dynamics and selection is a major challenge in evolutionary ecology. Although inclusive fitness theory has proven to be a very useful tool to unravel the interactions between spatial genetic structuring and selection, applications of the theory usually rely on simplifying demographic assumptions. In this paper, I attempt to bridge the gap between spatial demographic models and kin selection models by providing a method to compute approximations for relatedness coefficients in a spatial model with empty sites. Using spatial moment equations, I provide an approximation of nearest-neighbour relatedness on random regular networks, and show that this approximation performs much better than the ordinary pair approximation. I discuss the connection between the relatedness coefficients I define and those used in population genetics, and sketch some potential extensions of the theory.

  14. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  15. 78 FR 51741 - Notice of Application for Withdrawal and Opportunity for Public Meeting; California

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-21

    ... approximately 541 acres of National Forest System lands in the Shasta-Trinity National Forest for a period of 20..., Shasta-Trinity National Forest Headquarters, 530-226-2500 during regular business hours, 8 a.m. to 4:30 p... County, California. The above-described lands being National Forest System lands, the Secretary shall...

  16. NCI/DCCPS R21 Program Announcements | DCCPS/NCI/NIH

    Cancer.gov

    The Division of Cancer Control and Population Sciences funds a large portfolio of grants and contracts. The portfolio currently includes approximately 800 grants valued at nearly $450 million. Here we provide a listing of funding opportunities that are currently accepting applications. Please visit this page regularly as new funding opportunities are added upon approval by NCI.

  17. NCI/DCCPS R03 Program Announcements | DCCPS/NCI/NIH

    Cancer.gov

    The Division of Cancer Control and Population Sciences funds a large portfolio of grants and contracts. The portfolio currently includes approximately 800 grants valued at nearly $450 million. Here we provide a listing of funding opportunities that are currently accepting applications. Please visit this page regularly as new funding opportunities are added upon approval by NCI.

  18. Alternation of regular and chaotic dynamics in a simple two-degree-of-freedom system with nonlinear inertial coupling.

    PubMed

    Sigalov, G; Gendelman, O V; AL-Shudeifat, M A; Manevitch, L I; Vakakis, A F; Bergman, L A

    2012-03-01

    We show that nonlinear inertial coupling between a linear oscillator and an eccentric rotator can lead to very interesting interchanges between regular and chaotic dynamical behavior. Indeed, we show that this model demonstrates rather unusual behavior from the viewpoint of nonlinear dynamics. Specifically, at a discrete set of values of the total energy, the Hamiltonian system exhibits non-conventional nonlinear normal modes, whose shape is determined by phase locking of rotatory and oscillatory motions of the rotator at integer ratios of characteristic frequencies. Considering the weakly damped system, resonance capture of the dynamics into the vicinity of these modes brings about regular motion of the system. For energy levels far from these discrete values, the motion of the system is chaotic. Thus, the succession of resonance captures and escapes by a discrete set of the normal modes causes a sequence of transitions between regular and chaotic behavior, provided that the damping is sufficiently small. We begin from the Hamiltonian system and present a series of Poincaré sections manifesting the complex structure of the phase space of the considered system with inertial nonlinear coupling. Then an approximate analytical description is presented for the non-conventional nonlinear normal modes. We confirm the analytical results by numerical simulation and demonstrate the alternate transitions between regular and chaotic dynamics mentioned above. The origin of the chaotic behavior is also discussed.

  19. Regular scattering patterns from near-cloaking devices and their implications for invisibility cloaking

    NASA Astrophysics Data System (ADS)

    Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng

    2013-04-01

    In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. ‘Blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure, but yield only near-cloaks. Studies in the literature have developed various mechanisms to achieve high-accuracy approximate near-cloaking devices, and also, from a practical viewpoint, to nearly cloak arbitrary content. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattering wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, an intrinsic defect of the ‘blow-up’ construction; this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and thereby achieve a near-cloak of an arbitrary order of accuracy.

  20. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C{sup k}-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
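    The finite-difference derivative approximation mentioned in the last sentence can be as simple as second-order central differences on the regular grid (a generic sketch, not the authors' implementation):

```python
import numpy as np

def central_diff(f, h):
    """Second-order accurate first derivative of samples f on a regular grid."""
    d = np.empty_like(f)
    d[1:-1] = (f[2:] - f[:-2]) / (2.0 * h)   # interior: central difference
    d[0] = (f[1] - f[0]) / h                 # boundaries: one-sided difference
    d[-1] = (f[-1] - f[-2]) / h
    return d

h = 0.01
x = np.arange(0.0, 1.0, h)
f = x ** 2
df = central_diff(f, h)                      # exact for quadratics in the interior
```

    Second derivatives for curvature-based feature detection follow the same pattern with the stencil (f[i-1] - 2 f[i] + f[i+1]) / h².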

  1. A Varifold Approach to Surface Approximation

    NASA Astrophysics Data System (ADS)

    Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon

    2017-11-01

    We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.

  2. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method, which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  3. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
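    The firm-shrinkage operator named above interpolates between soft thresholding and the identity. The sketch below applies it as the proximal step in gradient descent on the logistic loss; the parameterization (lam, mu) and the synthetic data are our own, so this illustrates the operator rather than reproducing the paper's exact algorithm:

```python
import numpy as np

def firm_threshold(z, lam, mu):
    """Firm shrinkage: zeroes entries below lam, keeps entries above mu
    untouched, and interpolates linearly in between (requires mu > lam)."""
    a = np.abs(z)
    shrunk = np.sign(z) * (a - lam) * mu / (mu - lam)
    return np.where(a <= lam, 0.0, np.where(a >= mu, z, shrunk))

def sparse_logistic(X, y, lam=0.05, mu=1.0, step=0.5, iters=300):
    """Proximal gradient descent: logistic loss + weakly convex sparsity penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / y.size
        w = firm_threshold(w - step * grad, step * lam, mu)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = (X @ w_true + 0.1 * rng.standard_normal(200) > 0).astype(float)
w = sparse_logistic(X, y)
```

    Unlike soft thresholding, entries larger than mu are passed through unchanged, which is why firm shrinkage avoids the systematic bias of the $\ell_1$ penalty on large coefficients.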

  4. Nonlinear image registration with bidirectional metric and reciprocal regularization

    PubMed Central

    Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng

    2017-01-01

    Nonlinear registration is an important technique to align two different images and widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, where a reciprocal regularizer is introduced to assume that the deformation between two images is an exact diffeomorphism. In detail, first, we adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Secondly, we slack these two deformations into two independent variables and introduce a reciprocal regularizer to assure the deformations being the exact diffeomorphism. Then, we utilize an alternating iterative strategy to decouple the model into two minimizing subproblems, where a new closed form for the approximate velocity of deformation is calculated. Finally, we compare our proposed algorithm on two data sets of real brain MR images with two relative and conventional methods. The results validate that our proposed method improves accuracy and robustness of registration, as well as the gained bidirectional deformations are actually reciprocal. PMID:28231342

  5. Incompressible flow simulations on regularized moving meshfree grids

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2017-11-01

    A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.

  6. Smoking Status and Exercise in relation to PTSD Symptoms: A Test among Trauma-Exposed Adults

    PubMed Central

    Vujanovic, Anka A.; Farris, Samantha G.; Harte, Christopher B.; Smits, Jasper A. J.; Zvolensky, Michael J.

    2013-01-01

    The present investigation examined the interactive effect of cigarette smoking status (i.e., regular smoking versus non-smoking) and weekly exercise (i.e., weekly metabolic equivalent) in terms of posttraumatic stress (PTSD) symptom severity among a community sample of trauma-exposed adults. Participants included 86 trauma-exposed adults (58.1% female; Mage = 24.3). Approximately 59.7% of participants reported regular (≥ 10 cigarettes per day) daily smoking over the past year. The interactive effect of smoking status by weekly exercise was significantly associated with hyperarousal and avoidance symptom cluster severity (p ≤ .05). These effects were evident above and beyond number of trauma types and gender, as well as the respective main effects of smoking status and weekly exercise. Follow-up tests indicated support for the moderating role of exercise on the association between smoking and PTSD symptoms, such that the highest levels of PTSD symptoms were observed among regular smokers reporting low weekly exercise levels. Theoretical and clinical implications of the findings are discussed. PMID:24273598

  7. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
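    At the heart of the solver above is an FFT-based Poisson solve. In the simplest fully periodic setting (without the regularized Green's functions and mixed open/periodic solutions derived by the authors) this reduces to a division by -|k|² in Fourier space:

```python
import numpy as np

def poisson_fft_periodic(rho, L):
    """Solve  laplacian(u) = rho  on a periodic [0, L)^2 grid via the FFT."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    rho_hat = np.fft.fft2(rho)
    # Spectral inverse Laplacian; the zero mode is fixed to zero mean
    u_hat = np.where(k2 > 0, -rho_hat / np.where(k2 > 0, k2, 1.0), 0.0)
    return np.real(np.fft.ifft2(u_hat))

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sin(X) * np.sin(Y)                  # analytic solution: u = -rho / 2
u = poisson_fft_periodic(rho, L)
```

    The article replaces this singular spectral kernel with regularized Green's functions to obtain arbitrary order of convergence and to handle unbounded (open) directions.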

  8. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason.
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. Powerful tools for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1.
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber–Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results.
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  9. Detecting regular sound changes in linguistics as events of concerted evolution

    DOE PAGES

    Hruschka, Daniel J.; Branford, Simon; Smith, Eric D.; ...

    2014-12-18

    Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.

  10. Detecting regular sound changes in linguistics as events of concerted evolution.

    PubMed

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8% but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.

  12. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors of the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstruction images that meet the quality requirement becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  13. Detecting regular sound changes in linguistics as events of concerted evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hruschka, Daniel J.; Branford, Simon; Smith, Eric D.

    Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.

  14. On the persistence of spatiotemporal oscillations generated by invasion

    NASA Astrophysics Data System (ADS)

    Kay, A. L.; Sherratt, J. A.

    1999-10-01

    Many systems in biology and chemistry are oscillatory, with a stable, spatially homogeneous steady state which consists of periodic temporal oscillations in the interacting species, and such systems have been extensively studied on infinite or semi-infinite spatial domains. We consider the effect of a finite domain, with zero-flux boundary conditions, on the behaviour of solutions to oscillatory reaction-diffusion equations after invasion. We begin by considering numerical simulations of various oscillatory predator-prey systems. We conclude that when regular spatiotemporal oscillations are left in the wake of invasion, these die out, beginning with a decrease in the spatial frequency of the oscillations at one boundary, which then propagates across the domain. The long-time solution in this case is purely temporal oscillations, corresponding to the limit cycle of the kinetics. Contrastingly, when irregular spatiotemporal oscillations are left in the wake of invasion, they persist, even in very long time simulations. To study this phenomenon in more detail, we consider the λ-ω class of reaction-diffusion systems. Numerical simulations show that these systems also exhibit die-out of regular spatiotemporal oscillations and persistence of irregular spatiotemporal oscillations. Exploiting the mathematical simplicity of the λ-ω form, we derive analytically an approximation to the transition fronts in r and θ_x which occur during the die-out of the regular oscillations. We then use this approximation to describe how the die-out occurs, and to derive a measure of its rate, as a function of parameter values. We discuss applications of our results to ecology, calcium signalling and chemistry.
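In the absence of diffusion, the λ-ω class reduces to simple kinetics whose polar form is r' = rλ(r), θ' = ω(r). A minimal sketch (the specific choices λ(r) = 1 − r² and ω(r) = 1 are illustrative assumptions, not the paper's parameters) shows a trajectory approaching the stable limit cycle at r = 1, the "purely temporal oscillations" the abstract describes as the long-time solution:

```python
import math

# Toy lambda-omega kinetics (no diffusion): u' = L(r)u - W(r)v, v' = W(r)u + L(r)v,
# with r = sqrt(u^2 + v^2). Assumed forms: L(r) = 1 - r^2 (stable limit cycle at
# r = 1) and constant frequency W(r) = 1.
def lam(r):
    return 1.0 - r * r

def omega(r):
    return 1.0

def integrate(u, v, dt=0.01, steps=2000):
    """Forward-Euler integration of the kinetics; returns the final (u, v)."""
    for _ in range(steps):
        r = math.hypot(u, v)
        du = lam(r) * u - omega(r) * v
        dv = omega(r) * u + lam(r) * v
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = integrate(0.1, 0.0)   # start well inside the limit cycle
r_final = math.hypot(u, v)   # amplitude should approach 1
```

The same structure with an added diffusion term and zero-flux boundaries is what the paper simulates on a finite domain.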

  15. Spin squeezing as an indicator of quantum chaos in the Dicke model.

    PubMed

    Song, Lijun; Yan, Dong; Ma, Jian; Wang, Xiaoguang

    2009-04-01

    We study spin squeezing, an intrinsic quantum property, in the Dicke model without the rotating-wave approximation. We show that the spin squeezing can reveal the underlying chaotic and regular structures in phase space given by a Poincaré section, namely, it acts as an indicator of quantum chaos. Spin squeezing vanishes after a very short time for an initial coherent state centered in a chaotic region, whereas it persists over a longer time for the coherent state centered in a regular region of the phase space. We also study the distribution of the mean spin directions when quantum dynamics takes place. Finally, we discuss relations among spin squeezing, bosonic quadrature squeezing, and two-qubit entanglement in the dynamical processes.

  16. The numerical calculation of laminar boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.; Steger, J. L.

    1974-01-01

    Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.

  17. Symplectic geometry spectrum regression for prediction of noisy time series

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie

    2016-05-01

    We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher-order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (the Lorenz and Rössler series), and then tested on three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body).
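The regularization idea above, damping higher-order spectrum components, can be illustrated with an ordinary Fourier decomposition standing in for the symplectic geometry spectrum (a simplification of mine; the component ordering by frequency and the Butterworth-style weights are assumptions, not SGSR itself):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a small sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning real parts (input is assumed real)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def damp_higher_orders(x, cutoff=3, p=4):
    """Reconstruct x with higher-order components damped by the weights
    w = 1 / (1 + (order/cutoff)^(2p)); low orders pass almost unchanged."""
    X = dft(x)
    N = len(X)
    Xd = [X[k] / (1.0 + (min(k, N - k) / cutoff) ** (2 * p)) for k in range(N)]
    return idft(Xd)

# Low-order sine plus a high-order "noise" component; after damping,
# the output should be close to the pure low-order sine.
N = 16
x = [math.sin(2 * math.pi * n / N) + 0.5 * math.sin(2 * math.pi * 6 * n / N)
     for n in range(N)]
y = damp_higher_orders(x)
```

The design point mirrors the abstract: rather than truncating components outright, smooth damping of the higher orders regularizes the reconstruction.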

  18. Compulsory School Attendance: What Research Says and What It Means for State Policy

    ERIC Educational Resources Information Center

    Whitehurst, Grover J.; Whitfield, Sarah

    2012-01-01

    During his 2012 State of the Union address, President Barack Obama offered several recommendations on education policy, including one specifying that all states increase the age of compulsory school attendance to 18. Approximately 25 percent of public school students in the U.S. don't obtain a regular high school diploma, a tragedy for them and a…

  19. Determining the Effectiveness of Bilingual Programs on Third Grade State Exam Scores

    ERIC Educational Resources Information Center

    Vela, Adriana; Jones, Don; Mundy, Marie-Anne; Isaacson, Carrie

    2017-01-01

    This ex-post-facto quasi-experimental research design was conducted by selecting a convenience sample of approximately 2,000 3rd grade ELLs who took the regular reading and math English STAAR test during the 2014-15 school year in an urban southern Texas school district. This study was conducted using a quantitative research method of data…

  20. Solutions of differential equations with regular coefficients by the methods of Richmond and Runge-Kutta

    NASA Technical Reports Server (NTRS)

    Cockrell, C. R.

    1989-01-01

    Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential permittivity profiles. These two approximate solutions are also compared with the exact solutions.
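For reference, the mechanics of a classical fourth-order Runge-Kutta integration (generic RK4, not Richmond's method, and checked here on a scalar toy problem rather than the layered-permittivity field equation) can be validated against a known exact solution:

```python
import math

def rk4(f, x0, y0, x_end, n):
    """Classical fourth-order Runge-Kutta integration of y' = f(x, y)
    from x0 to x_end in n steps; returns the final y."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Toy problem with a known exact solution: y' = y, y(0) = 1, exact y(1) = e.
approx = rk4(lambda x, y: y, 0.0, 1.0, 1.0, 100)
exact = math.e
```

With 100 steps the O(h⁴) global error is far below 10⁻⁶, which is the kind of agreement with exact solutions the abstract's comparison relies on.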

  1. Oral Health Knowledge, Attitudes, and Behaviors: Investigation of an Educational Intervention Strategy with At-Risk Females

    ERIC Educational Resources Information Center

    Rustvold, Susan Romano

    2012-01-01

    A self-perpetuating cycle of poor health literacy and poor oral health knowledge and behavior affects approximately 90 million people in the United States, most especially those from low-income groups and other at-risk populations such as those with addiction. Poor oral health can result from lack of access to regular preventive dental…

  2. Comparing the Rigor of Compressed Format Courses to Their Regular Semester Counterparts

    ERIC Educational Resources Information Center

    Lutes, Lyndell; Davies, Randall

    2013-01-01

    This study compared workloads of undergraduate courses taught in 16-week and 8-week sessions. A statistically significant difference in workload was found between the two. Based on survey data from approximately 29,000 students, on average students spent about 17 minutes more per credit per week on 16-week courses than on similar 8-week courses.…

  3. Looking Back and Ahead: 20 Years of Technologies for Language Learning

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2016-01-01

    Over the last 20 years Robert Godwin-Jones has written 48 columns on "Emerging Technologies"; an additional six columns have been written by guest columnists. Several topics have been re-examined at regular intervals of approximately five years, namely digital literacy (Vol. 4, Num. 2; Vol. 10, Num. 2; Vol. 14, Num. 3; Vol. 19, Num. 3)…

  4. Cyclic growth in Atlantic region continental crust

    NASA Technical Reports Server (NTRS)

    Goodwin, A. M.

    1986-01-01

    Atlantic region continental crust evolved in successive stages under the influence of regular, approximately 400 Ma-long tectonic cycles. Data point to a variety of operative tectonic processes ranging from widespread ocean floor consumption (Wilson cycle) to entirely ensialic (Ampferer-style subduction or simple crustal attenuation-compression). Different processes may have operated concurrently in some or different belts. Resolving this remains the major challenge.

  5. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p) provide a good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
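The hypothesis-testing idea can be sketched for the classical (non-biorthogonal) Haar case: under the null of a locally constant Poisson intensity, the split of a pair total n = a + b is Binomial(n, 1/2), so an exact two-sided binomial test decides whether each Haar detail is kept or zeroed (the one-level, even-length restriction and the α level are simplifications of mine, not the Bi-Haar scheme):

```python
import math

def binom_two_sided_p(k, n):
    """Exact two-sided p-value for k successes under Binomial(n, 1/2)
    (null hypothesis: a balanced split, i.e. locally constant intensity)."""
    if n == 0:
        return 1.0
    cdf = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    sf = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * min(cdf, sf))

def haar_ht_denoise(counts, alpha=0.01):
    """One-level Haar hypothesis-test denoising of Poisson counts
    (assumes an even-length input). Pairs whose split is consistent with
    the null are smoothed; significantly unbalanced pairs are kept."""
    out = []
    for i in range(0, len(counts), 2):
        a, b = counts[i], counts[i + 1]
        n = a + b
        if binom_two_sided_p(a, n) < alpha:
            out += [a, b]            # significant structure: keep the detail
        else:
            out += [n / 2, n / 2]    # null retained: zero the Haar detail
    return out

# A flat-but-noisy pair is smoothed; a genuine jump is preserved.
out = haar_ht_denoise([10, 12, 50, 3])
```

This controls the per-pair false positive rate at α, the same role the prefixed false positive rate plays in the Bi-Haar construction above.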

  6. CFD analysis of turbopump volutes

    NASA Technical Reports Server (NTRS)

    Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken

    1993-01-01

    An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.

  7. Making waves round a structured cloak: lattices, negative refraction and fringes

    PubMed Central

    Colquitt, D. J.; Jones, I. S.; Movchan, N. V.; Movchan, A. B.; Brun, M.; McPhedran, R. C.

    2013-01-01

    Using the framework of transformation optics, this paper presents a detailed analysis of a non-singular square cloak for acoustic, out-of-plane shear elastic and electromagnetic waves. Analysis of wave propagation through the cloak is presented and accompanied by numerical illustrations. The efficacy of the regularized cloak is demonstrated and an objective numerical measure of the quality of the cloaking effect is provided. It is demonstrated that the cloaking effect persists over a wide range of frequencies. As a demonstration of the effectiveness of the regularized cloak, a Young's double slit experiment is presented. The stability of the interference pattern is examined when a cloaked and uncloaked obstacle are successively placed in front of one of the apertures. This novel link with a well-known quantum mechanical experiment provides an additional method through which the quality of cloaks may be examined. In the second half of the paper, it is shown that an approximate cloak may be constructed using a discrete lattice structure. The efficiency of the approximate lattice cloak is analysed and a series of illustrative simulations presented. It is demonstrated that effective cloaking may be obtained by using a relatively simple lattice structure, particularly, in the low-frequency regime. PMID:24062625

  8. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  9. Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Chen, Y.; Tan, K.; Du, P.

    2018-04-01

    Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace with Outlier Removal Anomaly Detector (UNRSORAD) and Local Summation UNRSORAD (LSUNRSORAD), are proposed, based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, each testing pixel is approximated by a linear combination of the surrounding data. The existence of outliers in the dual window affects detection accuracy, so the proposed detectors remove outlier pixels that differ significantly from the majority of pixels. In order to make full use of the various local spatial distributions of the pixels neighboring the pixel under test, we adopt a local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
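The core background-representation step can be sketched as ridge-regularized least squares: a test spectrum y is approximated by a linear combination of neighbor spectra, with weights w = (BᵀB + λI)⁻¹Bᵀy, and the residual norm serves as the anomaly score. This omits the outlier removal and sliding dual window of the actual detectors, and the λ value and toy spectra below are my assumptions:

```python
import math

def solve(M, b):
    """Solve the small linear system M w = b by Gaussian elimination."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    w = [0.0] * n
    for c in range(n - 1, -1, -1):
        w[c] = (A[c][n] - sum(A[c][k] * w[k] for k in range(c + 1, n))) / A[c][c]
    return w

def anomaly_score(y, neighbors, lam=1e-3):
    """Residual norm of the ridge-regularized representation of spectrum y
    by the neighbor spectra: high residual = poorly represented = anomalous."""
    k = len(neighbors)
    G = [[sum(a * b for a, b in zip(neighbors[i], neighbors[j]))
          + (lam if i == j else 0.0) for j in range(k)] for i in range(k)]
    rhs = [sum(a * b for a, b in zip(neighbors[i], y)) for i in range(k)]
    w = solve(G, rhs)
    recon = [sum(w[i] * neighbors[i][d] for i in range(k)) for d in range(len(y))]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, recon)))

# Two similar background spectra (3 bands): a background-like pixel scores
# low, a spectrally distinct pixel scores high.
neighbors = [[1.0, 1.0, 1.0], [1.1, 0.9, 1.0]]
bg = anomaly_score([1.05, 0.95, 1.0], neighbors)
anom = anomaly_score([1.0, 1.0, -2.0], neighbors)
```

Thresholding this score over the residual image is what turns the representation idea into a detector.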

  10. XTE J1946+274: An Enigmatic X-Ray Pulsar

    NASA Technical Reports Server (NTRS)

    Wilson, Colleen A.; Finger, Mark H.; Coe, M. J.; Negueruela, Ignacio; Six, N. Frank (Technical Monitor)

    2002-01-01

    XTE J1946+274 = GRO J1944+26 is a 15.8-s X-ray pulsar discovered simultaneously by the Rossi X-ray Timing Explorer (RXTE) and the Burst and Transient Source Experiment (BATSE) in September 1998. Follow-up optical/IR observations resulted in the discovery of a Be star companion. Our pulse timing analysis of BATSE and RXTE data indicates that the orbital period is approximately 169 days. Since its discovery in 1998, XTE J1946+274 has undergone 13 outbursts. These outbursts are not regularly spaced. They occur approximately twice per orbit and are not locked in orbital phase, unlike most Be/X-ray transient systems. A possible explanation for this is a global one-armed oscillation or density perturbation propagating rapidly in the Be star's disk. We will investigate radial velocity variations in the central peak of the H-alpha line to look for evidence of such a perturbation. From 2001 March-September, we regularly monitored XTE J1946+274 with the RXTE PCA. We will demonstrate that the spectrum appears to be varying with orbital phase, based on the 2001 and 1998 RXTE PCA observations. We will also present histories of pulsed frequency and flux.

  11. An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.

    PubMed

    Ilias, Miroslav; Saue, Trond

    2007-02-14

    The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.

  12. Diffusion of strongly magnetized cosmic ray particles in a turbulent medium

    NASA Technical Reports Server (NTRS)

    Ptuskin, V. S.

    1985-01-01

    Cosmic ray (CR) propagation in a turbulent medium is usually considered in the diffusion approximation. Here, the diffusion equation is obtained for strongly magnetized particles in the general form. The influence of a large-scale random magnetic field on CR propagation in the interstellar medium is discussed. Cosmic rays are assumed to propagate in a medium with a regular field H and an ensemble of random MHD waves. The energy density of waves on scales smaller than the free path l of CR particles is small. The collision integral of the general form which describes interaction between relativistic particles and waves in the quasilinear approximation is used.

  13. Big strokes in small persons.

    PubMed

    Adams, Robert J

    2007-11-01

    Sickle cell disease (SCD) is understood on a genetic and a molecular level better than most diseases. Young children with SCD are at a very high risk of stroke. The molecular pathologic abnormalities of SCD lead to microvascular occlusion and intravascular hemolytic anemia. Microvascular occlusion is related to painful episodes and probably causes microcirculatory problems in the brain. The most commonly recognized stroke syndrome in children with SCD is large-artery infarction. These "big strokes" are the result of a vascular process involving the large arteries of the circle of Willis leading to territorial infarctions from perfusion failure or possibly artery-to-artery embolism. We can detect children who are developing cerebral vasculopathy using transcranial Doppler ultrasonography (TCD) and can provide effective intervention. Transcranial Doppler ultrasonography measures blood flow velocity in the large arteries of the circle of Willis. Velocity is generally increased by the severe anemia in these patients, and it becomes elevated in a focal manner when stenosis reduces the arterial diameter. Children with SCD who are developing high stroke risk can be detected months to years before the stroke using TCD. Healthy adults have a middle cerebral artery velocity of approximately 60 cm/s, whereas children without anemia have velocities of approximately 90 cm/s. In SCD, the mean is approximately 130 cm/s. Two independent studies have demonstrated that the risk of stroke in children with SCD increases with TCD velocity. The Stroke Prevention Trial in Sickle Cell Anemia (STOP) (1995-2000) was halted prematurely when it became evident that regular blood transfusions produced a marked (90%) reduction in first stroke. Children were selected for STOP if they had 2 TCD studies with velocities of 200 cm/s or greater. Children not undergoing transfusion had a stroke risk of 10% per year, which was reduced to less than 1% per year by regular blood transfusions. 
Stroke risk in all children with SCD is approximately 0.5% to 1.0% per year. On the basis of STOP, if the patient meets the high-risk TCD criteria, regular blood transfusions are recommended. A second study was performed (2000-2005) to attempt withdrawal of transfusion in selected children in a randomized controlled study. Children with initially abnormal TCD velocities (≥200 cm/s) treated with regular blood transfusion for 30 months or more, which resulted in reduction of the TCD to less than 170 cm/s, were eligible for randomization into STOP II. Half continued transfusion and half had cessation of transfusion. This trial was halted early for safety reasons. There was an unacceptably high rate of TCD reversion back to high risk (≥200 cm/s), as well as 2 strokes in children who discontinued transfusion. There are no evidence-based guidelines for the discontinuation of transfusion in children once they have been identified as having high risk based on TCD. The current situation is undesirable because of the long-term effects of transfusion, including iron overload. Iron overload has recently become easier to manage with the introduction of an oral iron chelator. The inflammatory environment known to exist in SCD and the known effect of plasma free hemoglobin, released by hemolysis, of reducing available nitric oxide may contribute to the development of cerebrovascular disease. Further research may lead to more targeted therapies. We can reduce many of the big strokes that occur in these small persons by aggressively screening patients at a young age (and periodically throughout the childhood risk period) and interrupting the process with regular blood transfusions.

  14. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail for the specified degree of ambiguity, with the total number of sought medium parameters of order n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
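The central idea, fitting an approximation to the inverse operator from simulated forward-model pairs, can be sketched with a linear model standing in for the neural network. The forward operator, sample size, and noise level below are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical scalar forward operator d = F(m) = 2.0*m + 1.0 plus small noise
# (a stand-in for the geophysical forward model).
def forward(m):
    return 2.0 * m + 1.0 + random.gauss(0.0, 0.01)

# Simulate training pairs (d, m) from random models, then fit the inverse
# map m ≈ a*d + c by ordinary least squares -- a linear stand-in for
# training a network to approximate the inverse operator.
ms = [random.uniform(0.0, 10.0) for _ in range(500)]
ds = [forward(m) for m in ms]
dbar = sum(ds) / len(ds)
mbar = sum(ms) / len(ms)
a = (sum((d - dbar) * (m - mbar) for d, m in zip(ds, ms))
     / sum((d - dbar) ** 2 for d in ds))
c = mbar - a * dbar

# Apply the fitted inverse to a new datum: the true model m = 4.0
# produces d = 9.0 in the noise-free forward map.
m_est = a * 9.0 + c
```

In the paper the same scheme operates on grid models with thousands of parameters and a genuine network, with the continuity modulus controlling how detailed a parameterization the data can support.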

  15. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity-promoting convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse Reconstruction by Separable Approximation (SpaRSA) is employed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries (Db6 wavelets, Sym4 wavelets and cubic B-spline functions) can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
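As an illustration of the kind of l1-regularized problem SpaRSA solves, the sketch below uses the closely related (and simpler) iterative shrinkage-thresholding algorithm (ISTA) on a toy problem; the transfer matrix and sparse force vector are invented stand-ins, not the paper's experimental data:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (componentwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by iterative
    shrinkage-thresholding, a simple relative of SpaRSA."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy force-identification setup: A plays the role of the transfer matrix and
# x_true a sparse impact-force coefficient vector (names are illustrative).
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50); x_true[[5, 20, 33]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=100)

x_hat = ista(A, y, lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of significant components
```

The l1 term drives most coefficients exactly to zero, which is how the number of active basis functions is determined adaptively rather than fixed in advance.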

  16. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption, caused by the need to transmit scattered training examples from various sensor nodes to a central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost and number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm obtains approximately the same prediction accuracy as the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it converges with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  17. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

A language model (LM) assigns probabilities to word sequences and provides the basis for word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularizing the RNN-LM and applies it to continuous speech recognition. We penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
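For a Gaussian weight prior, the regularized cross-entropy objective described above reduces to cross-entropy plus an L2 penalty weighted by the prior precision. The minimal sketch below uses logistic regression as a stand-in for the (much larger) RNN-LM; the data, hyperparameter value, and learning rate are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.0, 0.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

alpha = 0.1                  # prior precision (the Gaussian hyperparameter)
w = np.zeros(5)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))            # predicted probabilities
    # MAP gradient: cross-entropy data term plus the Gaussian-prior penalty.
    grad = X.T @ (p - y) / len(y) + alpha * w
    w -= 0.5 * grad
print(w)                     # MAP weights, shrunk toward zero by the prior
```

In the paper, alpha itself is not fixed by hand but estimated by maximizing the marginal likelihood, and the Hessian needed for that step is approximated from a small set of salient outer-products.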

  18. Psychoacoustic Testing of Modulated Blade Spacing for Main Rotors

    NASA Technical Reports Server (NTRS)

    Edwards, Bryan; Booth, Earl R., Jr. (Technical Monitor)

    2002-01-01

Psychoacoustic testing of simulated helicopter main rotor noise is described, and the subjective results are presented. The objective of these tests was to evaluate the potential acoustic benefits of main rotors with modulated (uneven) blade spacing. Sound simulations were prepared for six main rotor configurations. A baseline 4-blade main rotor with regular blade spacing was based on the Bell Model 427 helicopter. A 5-blade main rotor with regular spacing was designed to approximate the performance of the 427, but at reduced tip speed. Four modulated rotors - one with "optimum" spacing and three alternate configurations - were derived from the 5-blade regular-spacing rotor. The sounds were played to two subjects at a time, with care taken in speaker selection and placement to ensure that the sounds were identical for each subject. A total of 40 subjects participated. For each rotor configuration, the listeners were asked to evaluate the sounds in terms of noisiness. The test results indicate little to no "annoyance" benefit for the modulated blade spacing. In general, the subjects preferred the sound of the 5-blade regular-spaced rotor over any of the modulated ones. A conclusion is that modulated blade spacing is not a promising design feature for reducing the annoyance of helicopter main rotors.

  19. Metal alkyls programmed to generate metal alkylidenes by α-H abstraction: prognosis from NMR chemical shift† †Electronic supplementary information (ESI) available: Experimental and computational details, NMR spectra, results of NMR calculations and NCS analysis, graphical representation of shielding tensors, molecular orbital diagrams of selected compounds, optimized structures for all calculated species. See DOI: 10.1039/c7sc05039a

    PubMed Central

    Gordon, Christopher P.; Yamamoto, Keishi; Searles, Keith; Shirase, Satoru

    2018-01-01

    Metal alkylidenes, which are key organometallic intermediates in reactions such as olefination or alkene and alkane metathesis, are typically generated from metal dialkyl compounds [M](CH2R)2 that show distinctively deshielded chemical shifts for their α-carbons. Experimental solid-state NMR measurements combined with DFT/ZORA calculations and a chemical shift tensor analysis reveal that this remarkable deshielding originates from an empty metal d-orbital oriented in the M–Cα–Cα′ plane, interacting with the Cα p-orbital lying in the same plane. This π-type interaction inscribes some alkylidene character into Cα that favors alkylidene generation via α-H abstraction. The extent of the deshielding and the anisotropy of the alkyl chemical shift tensors distinguishes [M](CH2R)2 compounds that form alkylidenes from those that do not, relating the reactivity to molecular orbitals of the respective molecules. The α-carbon chemical shifts and tensor orientations thus predict the reactivity of metal alkyl compounds towards alkylidene generation. PMID:29675237

  20. First example of a high-level correlated calculation of the indirect spin-spin coupling constants involving tellurium: tellurophene and divinyl telluride.

    PubMed

    Rusakov, Yury Yu; Krivdin, Leonid B; Østerstrøm, Freja F; Sauer, Stephan P A; Potapov, Vladimir A; Amosova, Svetlana V

    2013-08-21

    This paper documents the very first example of a high-level correlated calculation of spin-spin coupling constants involving tellurium taking into account relativistic effects, vibrational corrections and solvent effects for medium sized organotellurium molecules. The (125)Te-(1)H spin-spin coupling constants of tellurophene and divinyl telluride were calculated at the SOPPA and DFT levels, in good agreement with experimental data. A new full-electron basis set, av3z-J, for tellurium derived from the "relativistic" Dyall's basis set, dyall.av3z, and specifically optimized for the correlated calculations of spin-spin coupling constants involving tellurium was developed. The SOPPA method shows a much better performance compared to DFT, if relativistic effects calculated within the ZORA scheme are taken into account. Vibrational and solvent corrections are next to negligible, while conformational averaging is of prime importance in the calculation of (125)Te-(1)H spin-spin couplings. Based on the performed calculations at the SOPPA(CCSD) level, a marked stereospecificity of geminal and vicinal (125)Te-(1)H spin-spin coupling constants originating in the orientational lone pair effect of tellurium has been established, which opens a new guideline in organotellurium stereochemistry.

  1. Nonlinear d10-ML2 Transition-Metal Complexes

    PubMed Central

    Wolters, Lando P; Bickelhaupt, F Matthias

    2013-01-01

    We have investigated the molecular geometries of a series of dicoordinated d10-transition-metal complexes ML2 (M=Co−, Rh−, Ir−, Ni, Pd, Pt, Cu+, Ag+, Au+; L=NH3, PH3, CO) using relativistic density functional theory (DFT) at ZORA-BLYP/TZ2P. Not all complexes have the expected linear ligand–metal–ligand (L–M–L) angle: this angle varies from 180° to 128.6° as a function of the metal as well as the ligands. Our main objective is to present a detailed explanation why ML2 complexes can become bent. To this end, we have analyzed the bonding mechanism in ML2 as a function of the L–M–L angle using quantitative Kohn–Sham molecular orbital (MO) theory in combination with an energy decomposition analysis (EDA) scheme. The origin of bent L–M–L structures is π backdonation. In situations of strong π backdonation, smaller angles increase the overlap of the ligand’s acceptor orbital with a higher-energy donor orbital on the metal-ligand fragment, and therefore favor π backdonation, resulting in additional stabilization. The angle of the complexes thus depends on the balance between this additional stabilization and increased steric repulsion that occurs as the complexes are bent. PMID:24551547

  2. Spin-Forbidden Reactions: Adiabatic Transition States Using Spin-Orbit Coupled Density Functional Theory.

    PubMed

    Gaggioli, Carlo Alberto; Belpassi, Leonardo; Tarantelli, Francesco; Harvey, Jeremy N; Belanzoni, Paola

    2018-04-06

A spin-forbidden chemical reaction involves a change in the total electronic spin state from reactants to products. The mechanistic study is challenging because such a reaction does not occur on a single diabatic potential energy surface (PES), but rather on two (or multiple) spin diabatic PESs. One possible approach is to calculate the so-called "minimum energy crossing point" (MECP) between the diabatic PESs, which however is not a stationary point. Inclusion of spin-orbit coupling between spin states (SOC approach) allows the reaction to occur on a single adiabatic PES, on which a transition state (TS SOC) as well as an activation free energy can be calculated. This Concept article summarizes a previously published application in which, for the first time, SOC effects, using the spin-orbit ZORA Hamiltonian within the density functional theory (DFT) framework, are included and account for the mechanism of a spin-forbidden reaction in gold chemistry. The merits of the MECP and TS SOC approaches and the accuracy of the results are compared, considering both our recent calculations on molecular oxygen addition to gold(I)-hydride complexes and new calculations for the prototype spin-forbidden N2O and N2Se dissociation reactions.

  3. Mathematical formalisms based on approximated kinetic representations for modeling genetic and metabolic pathways.

    PubMed

Alves, Rui; Vilaprinyo, Ester; Hernández-Bermejo, Benito; Sorribas, Albert

    2008-01-01

There is a renewed interest in obtaining a systemic understanding of metabolism, gene expression and signal transduction processes, driven by the recent research focus on Systems Biology. From a biotechnological point of view, such a systemic understanding of how a biological system is designed to work can facilitate the rational manipulation of specific pathways in different cell types to achieve specific goals. Due to the intrinsic complexity of biological systems, mathematical models are a central tool for understanding and predicting the integrative behavior of those systems. Particularly, models are essential for a rational development of biotechnological applications and in understanding a system's design from an evolutionary point of view. Mathematical models can be obtained using many different strategies. In each case, their utility will depend upon the properties of the mathematical representation and on the possibility of obtaining meaningful parameters from available data. In practice, there are several issues at stake when one has to decide which mathematical model is most appropriate for the study of a given problem. First, one needs a model that can represent the aspects of the system one wishes to study. Second, one must choose a mathematical representation that allows an accurate analysis of the system with respect to different aspects of interest (for example, robustness of the system, dynamical behavior, optimization of the system with respect to some production goal, parameter value determination, etc.). Third, before choosing between alternative and equally appropriate mathematical representations for the system, one should compare representations with respect to the ease of automating model set-up, simulation, and analysis of results. Fourth, one should also consider how to facilitate model transference and re-usability by other researchers and for distinct purposes. 
Finally, one factor that is important for all four aspects is the regularity in the mathematical structure of the equations because it facilitates computational manipulation. This regularity is a mark of kinetic representations based on approximation theory. The use of approximation theory to derive mathematical representations with regular structure for modeling purposes has a long tradition in science. In most applied fields, such as engineering and physics, those approximations are often required to obtain practical solutions to complex problems. In this paper we review some of the more popular mathematical representations that have been derived using approximation theory and are used for modeling in molecular systems biology. We will focus on formalisms that are theoretically supported by the Taylor Theorem. These include the Power-law formalism, the recently proposed (log)linear and Lin-log formalisms as well as some closely related alternatives. We will analyze the similarities and differences between these formalisms, discuss the advantages and limitations of each representation, and provide a tentative "road map" for their potential utilization for different problems.
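As a concrete instance of the power-law (S-system) formalism discussed above, each rate is a product of power laws, dX_i/dt = a_i ∏_j X_j^g_ij − b_i ∏_j X_j^h_ij. The two-variable pathway and parameter values below are purely illustrative:

```python
import numpy as np

def s_system(x, a, g, b, h):
    # dX_i/dt = a_i * prod_j x_j^g_ij - b_i * prod_j x_j^h_ij
    return a * np.prod(x ** g, axis=1) - b * np.prod(x ** h, axis=1)

# Illustrative two-variable pathway: X1 is produced at a constant rate and
# degraded with kinetic order 0.5; X2 is produced from X1 and degraded linearly.
a = np.array([2.0, 1.0]); g = np.array([[0.0, 0.0], [0.5, 0.0]])
b = np.array([1.0, 1.0]); h = np.array([[0.5, 0.0], [0.0, 1.0]])

x = np.array([1.0, 1.0])
dt = 0.001
for _ in range(40000):           # explicit Euler integration to steady state
    x = x + dt * s_system(x, a, g, b, h)

# Analytic steady state: 2 = X1^0.5 gives X1 = 4, then X2 = X1^0.5 = 2.
print(x)
```

The regular structure is visible here: the entire model is specified by the rate constants (a, b) and the kinetic-order matrices (g, h), which is precisely what makes automated set-up and analysis of such representations convenient.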

  4. A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2016-02-01

Large-eddy simulation (LES) resolves only the large-scale part of turbulent flows, using a scale separation based on a filtering operation. Solving the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model's ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS) and filtered DNS in the case of classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.

  5. The Capra Research Program for Modelling Extreme Mass Ratio Inspirals

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-02-01

Suppose a small compact object (black hole or neutron star) of mass m orbits a large black hole of mass M ≫ m. This system emits gravitational waves (GWs) that have a radiation-reaction effect on the particle's motion. EMRIs (extreme-mass-ratio inspirals) of this type will be important GW sources for LISA. Fully analyzing these GWs, and detecting weaker sources also present in the LISA data stream, will require highly accurate EMRI GW templates. In this article I outline the "Capra" research program to try to model EMRIs and calculate their GWs ab initio, assuming only that m ≪ M and that the Einstein equations hold. Because m ≪ M, the timescale for the particle's orbit to shrink is too long for a practical direct numerical integration of the Einstein equations, and because this orbit may be deep in the large black hole's strong-field region, a post-Newtonian approximation would be inaccurate. Instead, we treat the EMRI spacetime as a perturbation of the large black hole's "background" (Schwarzschild or Kerr) spacetime and use the methods of black-hole perturbation theory, expanding in the small parameter m/M. The particle's motion can be described either as the result of a radiation-reaction "self-force" acting in the background spacetime or as geodesic motion in a perturbed spacetime. Several different lines of reasoning lead to the (same) basic O(m/M) "MiSaTaQuWa" equations of motion for the particle. In particular, the MiSaTaQuWa equations can be derived by modelling the particle as either a point particle or a small Schwarzschild black hole. The latter is conceptually elegant, but the former is technically much simpler and (surprisingly for a nonlinear field theory such as general relativity) still yields correct results. Modelling the small body as a point particle, its own field is singular along the particle worldline, so it is difficult to formulate a meaningful "perturbation" theory or equations of motion there. 
Detweiler and Whiting found an elegant decomposition of the particle's metric perturbation into a singular part which is spherically symmetric at the particle and a regular part which is smooth (and non-symmetric) at the particle. If we assume that the singular part (being spherically symmetric at the particle) exerts no force on the particle, then the MiSaTaQuWa equations follow immediately. The MiSaTaQuWa equations involve gradients of a (curved-spacetime) Green function, integrated over the particle's entire past worldline. These expressions are not amenable to direct use in practical computations. By carefully analysing the singularity structure of each term in a spherical-harmonic expansion of the particle's field, Barack and Ori found that the self-force can be written as an infinite sum of modes, each of which can be calculated by (numerically) solving a set of wave equations in 1+1 dimensions, summing the gradients of the resulting fields at the particle position, and then subtracting certain analytically calculable "regularization parameters". This "mode-sum" regularization scheme has been the basis for much further research, including explicit numerical calculations of the self-force in a variety of situations, initially for Schwarzschild spacetime and more recently extending to Kerr spacetime. Recently Barack and Golbourn developed an alternative "m-mode" regularization scheme. This regularizes the physical metric perturbation by subtracting from it a suitable "puncture function" approximation to the Detweiler-Whiting singular field. The residual is then decomposed into a Fourier sum over azimuthal (e^(imφ)) modes, and the resulting equations are solved numerically in 2+1 dimensions. Vega and Detweiler have developed a related scheme that uses the same puncture-function regularization but then solves the regularized perturbation equation numerically in 3+1 dimensions, avoiding a mode-sum decomposition entirely. 
A number of research projects are now using these puncture-function regularization schemes, particularly for calculations in Kerr spacetime. Most Capra research to date has used 1st order perturbation theory, with the particle moving on a fixed (usually geodesic) worldline. Much current research is devoted to generalizing this to allow the particle worldline to be perturbed by the self-force, and to obtain approximation schemes which remain valid over long (EMRI-inspiral) timescales. To obtain the very high accuracies needed to fully exploit LISA's observations of the strongest EMRIs, 2nd order perturbation theory will probably also be needed; both this and long-time approximations remain frontiers for future Capra research.

  6. Some astrophysical processes around magnetized black hole

    NASA Astrophysics Data System (ADS)

    Kološ, M.; Tursunov, A.; Stuchlík, Z.

    2018-01-01

We study the dynamics of charged test particles in the vicinity of a black hole immersed in an asymptotically uniform external magnetic field. A real magnetic field around a black hole will be far from completely regular and uniform, so a uniform magnetic field is used as a linear approximation. Ionized particle acceleration, charged particle oscillations and synchrotron radiation of moving charged particles have been studied.

  7. Measuring Regularity of Human Postural Sway Using Approximate Entropy and Sample Entropy in Patients with Ehlers-Danlos Syndrome Hypermobility Type

    ERIC Educational Resources Information Center

    Rigoldi, Chiara; Cimolin, Veronica; Camerota, Filippo; Celletti, Claudia; Albertini, Giorgio; Mainardi, Luca; Galli, Manuela

    2013-01-01

    Ligament laxity in Ehlers-Danlos syndrome hypermobility type (EDS-HT) patients can influence the intrinsic information about posture and movement and can have a negative effect on the appropriateness of postural reactions. Several measures have been proposed in literature to describe the planar migration of CoP over the base of support, and the…

  8. Protective Services Chili Cookoff Full of Good Food, Good Fun, and “Tough Choices” | Poster

    Cancer.gov

    This year’s Protective Services Chili Cookoff was marked by blazing-hot chili and fiery competition as 15 entrants contended for the attention and adoration of approximately 150 hungry attendees. The competition is a fixture at NCI at Frederick and regularly features an array of culinary concoctions, from which the visitors rank their choices for first, second, and third place.

  9. Technology versus Teachers in the Early Literacy Classroom: An Investigation of the Effectiveness of the Istation Integrated Learning System

    ERIC Educational Resources Information Center

    Putman, Rebecca S.

    2017-01-01

    Guided by Vygotsky's social learning theory, this study reports a 24-week investigation on whether regular use of Istation®, an integrated learning system used by approximately 4 million students in the United States, had an effect on the early literacy achievement of children in twelve kindergarten classrooms. A mixed-method, quasi-experimental…

  10. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto one about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.

  11. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  12. The unsaturated flow in porous media with dynamic capillary pressure

    NASA Astrophysics Data System (ADS)

    Milišić, Josipa-Pina

    2018-05-01

In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit as the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.

  13. Chromatic refraction with global ozone monitoring by occultation of stars. I. Description and scintillation correction.

    PubMed

    Dalaudier, F; Kan, V; Gurvich, A S

    2001-02-20

    We describe refractive and chromatic effects, both regular and random, that occur during star occultations by the Earth's atmosphere. The scintillation that results from random density fluctuations, as well as the consequences of regular chromatic refraction, is qualitatively described. The resultant chromatic scintillation will produce random features on the Global Ozone Monitoring by Occultation of Stars (GOMOS) spectrometer, with an amplitude comparable with that of some of the real absorbing features that result from atmospheric constituents. A correction method that is based on the use of fast photometer signals is described, and its efficiency is discussed. We give a qualitative (although accurate) description of the phenomena, including numerical values when needed. Geometrical optics and the phase-screen approximation are used to keep the description simple.

  14. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.

  15. A noninvasive measure of negative-feedback strength, approximate entropy, unmasks strong diurnal variations in the regularity of LH secretion.

    PubMed

    Liu, Peter Y; Iranmanesh, Ali; Keenan, Daniel M; Pincus, Steven M; Veldhuis, Johannes D

    2007-11-01

    The secretion of anterior-pituitary hormones is subject to negative feedback. Whether negative feedback evolves dynamically over 24 h is not known. Conventional experimental paradigms to test this concept may induce artifacts due to nonphysiological feedback. These limitations might be overcome by a noninvasive methodology to quantify negative feedback continuously over 24 h without disrupting the axis. The present study exploits a recently validated model-free regularity statistic, approximate entropy (ApEn), which monitors feedback changes with high sensitivity and specificity (both >90%; Pincus SM, Hartman ML, Roelfsema F, Thorner MO, Veldhuis JD. Am J Physiol Endocrinol Metab 273: E948-E957, 1999). A time-incremented moving window of ApEn was applied to LH time series obtained by intensive (10-min) blood sampling for four consecutive days (577 successive measurements) in each of eight healthy men. Analyses unveiled marked 24-h variations in ApEn with daily maxima (lowest feedback) at 1100 +/- 1.7 h (mean +/- SE) and minima (highest feedback) at 0430 +/- 1.9 h. The mean difference between maximal and minimal 24-h LH ApEn was 0.348 +/- 0.018, which differed (P < 0.001) from all three comparison series: randomly shuffled versions of the same LH time series, simulated pulsatile data, and assay noise. Analyses artificially limited to 24-h rather than 96-h data yielded reproducibility coefficients of 3.7-9.0% for ApEn maxima and minima. In conclusion, a feedback-sensitive regularity statistic unmasks strong and consistent 24-h rhythmicity of the orderliness of unperturbed pituitary-hormone secretion. These outcomes suggest that ApEn may have general utility in probing dynamic mechanisms mediating feedback in other endocrine systems.
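    ApEn itself is straightforward to compute. A minimal pure-Python sketch of ApEn(m, r) follows; the pattern length m = 2, the absolute tolerance r, and the two synthetic test series are illustrative assumptions, not the study's LH data or parameter choices. A perfectly periodic series scores near zero (maximal regularity), while an irregular series scores higher.

```python
import math

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a series u (self-matches included)."""
    n = len(u)
    def phi(mm):
        patterns = [u[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for p in patterns:
            # Fraction of patterns within tolerance r (Chebyshev distance).
            c = sum(1 for q in patterns
                    if max(abs(a - b) for a, b in zip(p, q)) <= r)
            total += math.log(c / len(patterns))
        return total / len(patterns)
    return phi(m) - phi(m + 1)

regular = [1.0, 2.0] * 30          # perfectly periodic: near-zero ApEn

state = 1                          # small fixed LCG for a reproducible
irregular = []                     # irregular series (no external deps)
for _ in range(60):
    state = (1103515245 * state + 12345) % (2 ** 31)
    irregular.append(state / (2 ** 31))

assert approx_entropy(regular) < 0.05
assert approx_entropy(irregular) > approx_entropy(regular)
```

    The moving-window analysis in the study amounts to evaluating this statistic on successive overlapping segments of the hormone series.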

  16. New tip design and shock wave pattern of electrohydraulic probes for endoureteral lithotripsy.

    PubMed

    Vorreuther, R

    1993-02-01

    A new tip design of a 3.3F electrohydraulic probe for endoureteral lithotripsy was evaluated in comparison to a regular probe. The peak pressure, as well as the slope of the shock front, depend solely on the voltage. Increasing the capacity leads merely to broader pulses. A laser-like short high-pressure pulse has a greater impact on stone disintegration than a corresponding broader low-pressure pulse of the same energy. Using the regular probe, only positive pressures were obtained. Pressure distribution around the regular tip was approximately spherical, whereas the modified probe tip "beamed" the shock wave to a great extent. In addition, a negative-pressure half-cycle was added to the initial positive peak pressure, which resulted in a higher maximal pressure amplitude. The directed shock wave had a greater depth of penetration into a model stone. Thus, the ability of the new probe to destroy harder stones especially should be greater. The trauma to the ureter was reduced when touching the wall tangentially. No difference in the effect of the two probes was seen when placing the probe directly on the mucosa.

  17. Viscous damping and spring force calculation of regularly perforated MEMS microstructures in the Stokes' approximation

    PubMed Central

    Homentcovschi, Dorel; Murray, Bruce T.; Miles, Ronald N.

    2013-01-01

    There are a number of applications for microstructure devices consisting of a regular pattern of perforations, and many of these utilize fluid damping. For the analysis of viscous damping and for calculating the spring force in some cases, it is possible to take advantage of the regular hole pattern by assuming periodicity. Here a model is developed to determine these quantities based on the solution of the Stokes' equations for the air flow. Viscous damping is directly related to thermal-mechanical noise. As a result, the design of perforated microstructures with minimal viscous damping is of real practical importance. A method is developed to calculate the damping coefficient in microstructures with periodic perforations. The result can be used to minimize squeeze film damping. Since micromachined devices have finite dimensions, the periodic model for the perforated microstructure has to be associated with the calculation of some frame (edge) corrections. Analysis of the edge corrections has also been performed. Results from analytical formulas and numerical simulations match very well with published measured data. PMID:24058267

  18. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.

  19. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.

  20. Estimating the need for dental sedation. 3. Analysis of factors contributing to non-attendance for dental treatment in the general population, across 12 English primary care trusts.

    PubMed

    Goodwin, M; Pretty, I A

    2011-12-23

    This is the third paper in a series of four examining a tool which could be used to determine sedation need among patients. The aim of this paper was to assess the reasons why people do not attend the dentist regularly, in order to understand the potential need for sedation services among both attending and non-attending patients. A large telephone survey conducted across 12 primary care trusts (PCTs) found that 17% of participants did not attend the dentist regularly. One of the top reasons given for non-attendance that could be considered a barrier was fear/anxiety. The figure reached in paper 2 (2011; 211: E11) stated that approximately 5% of attending patients will, at some time, need sedation services. However, the data from this survey suggest that anxiety accounts for 16% of people who do not attend the dentist regularly. If non-attending patients with high levels of anxiety were included, the sedation need could be assumed to rise to 6.9% across the entire population.

  1. Misclassification rates for current smokers misclassified as nonsmokers.

    PubMed

    Wells, A J; English, P B; Posner, S F; Wagenknecht, L E; Perez-Stable, E J

    1998-10-01

    This paper provides misclassification rates for current cigarette smokers who report themselves as nonsmokers. Such rates are important in determining smoker misclassification bias in the estimation of relative risks in passive smoking studies. True smoking status, either occasional or regular, was determined for individual current smokers in 3 existing studies of nonsmokers by inspecting the cotinine levels of body fluids. The new data, combined with an approximately equal amount in the 1992 Environmental Protection Agency (EPA) report on passive smoking and lung cancer, yielded misclassification rates that not only had lower standard errors but also were stratified by sex and US minority/majority status. The misclassification rates for the important category of female smokers misclassified as never smokers were, respectively, 0.8%, 6.0%, 2.8%, and 15.3% for majority regular, majority occasional, US minority regular, and US minority occasional smokers. Misclassification rates for males were mostly somewhat higher. The new information supports EPA's conclusion that smoker misclassification bias is small. Also, investigators are advised to pay attention to the minority/majority status of cohorts when correcting for smoker misclassification bias.

  2. Viscous damping and spring force calculation of regularly perforated MEMS microstructures in the Stokes' approximation.

    PubMed

    Homentcovschi, Dorel; Murray, Bruce T; Miles, Ronald N

    2013-10-15

    There are a number of applications for microstructure devices consisting of a regular pattern of perforations, and many of these utilize fluid damping. For the analysis of viscous damping and for calculating the spring force in some cases, it is possible to take advantage of the regular hole pattern by assuming periodicity. Here a model is developed to determine these quantities based on the solution of the Stokes' equations for the air flow. Viscous damping is directly related to thermal-mechanical noise. As a result, the design of perforated microstructures with minimal viscous damping is of real practical importance. A method is developed to calculate the damping coefficient in microstructures with periodic perforations. The result can be used to minimize squeeze film damping. Since micromachined devices have finite dimensions, the periodic model for the perforated microstructure has to be associated with the calculation of some frame (edge) corrections. Analysis of the edge corrections has also been performed. Results from analytical formulas and numerical simulations match very well with published measured data.

  3. Gravitational self-force meets the post-Newtonian approximation in extreme-mass ratio inspiral of binary black holes

    NASA Astrophysics Data System (ADS)

    Detweiler, Steven

    2010-02-01

    Post-Newtonian analysis, numerical relativity and, now, perturbation-based gravitational self-force analysis are all being used to describe various aspects of black hole binary systems. Recent comparisons between self-force analysis, with m1 ≪ m2, and post-Newtonian analysis, with v/c ≪ 1, show excellent agreement in their common domain of validity. This lends credence to the two very different regularization procedures which are invoked in these approximations. When self-force analysis is able to create gravitational waveforms from extreme mass-ratio inspiral, then unprecedented cross-cultural comparisons of these three distinct approaches to understanding gravitational waves will reveal the strengths and weaknesses of each.

  4. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification.
    Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm was computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in computational speed and localization error.

  5. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks that randomly drops units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
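    The linear-network ensemble-averaging property is easy to check numerically: with Bernoulli(p) gating, the expected dropout activation of a linear unit equals the deterministic activation with weights scaled by p. A small Monte Carlo sketch (the weights, inputs, and rate below are illustrative assumptions):

```python
import random

def dropout_linear(w, x, p, rng):
    """One stochastic forward pass of a linear unit with Bernoulli(p) gating."""
    return sum(wi * xi * (1 if rng.random() < p else 0)
               for wi, xi in zip(w, x))

w, x, p = [0.5, -1.0, 2.0], [1.0, 2.0, 3.0], 0.5
rng = random.Random(0)

n = 20000
mc_mean = sum(dropout_linear(w, x, p, rng) for _ in range(n)) / n
expected = p * sum(wi * xi for wi, xi in zip(w, x))   # deterministic, p-scaled

# The ensemble average of the dropout network matches the p-scaled weights.
assert abs(mc_mean - expected) < 0.15
```

    In the non-linear logistic case this exact equality becomes the normalized-geometric-mean approximation analyzed in the paper.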

  6. Environmental Assessment Improvements to Silver Flag Training Area at Tyndall Air Force Base, Florida

    DTIC Science & Technology

    2013-08-02

    existing dirt trail located approximately 1,800 ft east of the cantonment area at its nearest point. The existing dirt trail is drivable by small... vehicles such as pickup trucks but is not regularly maintained as a road. The trail is located mostly within forested uplands; its southernmost portion... is wetland. The width of the trail is approximately 10 ft. The new road that would be constructed under Alternative 3b would be approximately 1,850

  7. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
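    The core idea, replacing the thresholding step with a projection onto a known nonzero support, can be sketched as a projected Landweber iteration: take a gradient step on the data-fidelity term, then zero every entry outside the support. The matrix, support set, and step size below are toy assumptions, not the LCAMP operator or MRI data.

```python
def matvec(M, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in M]

def matvec_t(M, v):
    return [sum(M[i][j] * v[i] for i in range(len(M)))
            for j in range(len(M[0]))]

A = [[1.0, 0.0, 0.5, 0.2],
     [0.0, 1.0, 0.3, 0.4],
     [0.5, 0.2, 1.0, 0.0]]
support = {1, 3}                       # known nonzero locations
x_true = [0.0, 2.0, 0.0, -1.0]
y = matvec(A, x_true)                  # noiseless measurements

x = [0.0] * 4
step = 0.8
for _ in range(500):
    r = [yi - axi for yi, axi in zip(y, matvec(A, x))]
    g = matvec_t(A, r)
    # Gradient step, then projection onto the support constraint
    # (this replaces the soft-thresholding stage).
    x = [(xj + step * gj) if j in support else 0.0
         for j, (xj, gj) in enumerate(zip(x, g))]

assert all(abs(a - b) < 1e-3 for a, b in zip(x, x_true))
```

    Because the projection is parameter-free, no regularization weight or threshold level has to be tuned, which is the practical advantage the abstract highlights.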

  8. Elementary Basic Skills Program State Report, Elementary and Secondary Education Act, Title I. Part I--Regular Term 1972-73; Part II--Summer Term 1972.

    ERIC Educational Resources Information Center

    Maryland State Dept. of Education, Baltimore. Div. of Compensatory, Urban, and Supplementary Programs.

    The Elementary Secondary Education Act Title I Elementary Basic Skills program operated in 72 schools during fiscal year 1973. There were approximately 23,443 identified Title I pupils who received the services of the program. The first major program objective pertains directly to reading comprehension and anticipates a gain of ten school months…

  9. On the Anticipatory Aspects of the Four Interactions: what the Known Classical and Semi-Classical Solutions Teach us

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lusanna, Luca

    2004-08-19

    The four (electromagnetic, weak, strong and gravitational) interactions are described by singular Lagrangians and by the Dirac-Bergmann theory of Hamiltonian constraints. As a consequence, a subset of the original configuration variables are gauge variables, not determined by the equations of motion. Only at the Hamiltonian level is it possible to separate the gauge variables from the deterministic physical degrees of freedom, the Dirac observables, and to formulate a well-posed Cauchy problem for them both in special and general relativity. Then the requirement of causality dictates the choice of retarded solutions at the classical level. However, both the problems of the classical theory of the electron, leading to the choice of (1/2) (retarded + advanced) solutions, and the regularization of quantum field theory, leading to the Feynman propagator, introduce anticipatory aspects. The determination of the relativistic Darwin potential as a semi-classical approximation to the Liénard-Wiechert solution for particles with Grassmann-valued electric charges, regularizing the Coulomb self-energies, shows that these anticipatory effects live beyond the semi-classical approximation (tree level) in the form of radiative corrections, at least for the electromagnetic interaction. Talk and 'best contribution' at The Sixth International Conference on Computing Anticipatory Systems CASYS'03, Liège, August 11-16, 2003.

  10. Decrease in early right alpha band phase synchronization and late gamma band oscillations in processing syntax in music.

    PubMed

    Ruiz, María Herrojo; Koelsch, Stefan; Bhattacharya, Joydeep

    2009-04-01

    The present study investigated the neural correlates associated with the processing of music-syntactical irregularities as compared with regular syntactic structures in music. Previous studies reported an early (approximately 200 ms) right anterior negative component (ERAN) by traditional event-related-potential analysis during music-syntactical irregularities, yet little is known about the underlying oscillatory and synchronization properties of brain responses, which are supposed to play a crucial role in general cognition, including music perception. First we showed that the ERAN was primarily represented by low-frequency (<8 Hz) brain oscillations. Further, we found that music-syntactical irregularities, as compared with music-syntactical regularities, were associated with (i) an early decrease in alpha band (9-10 Hz) phase synchronization between right fronto-central and left temporal brain regions, and (ii) a late (approximately 500 ms) decrease in gamma band (38-50 Hz) oscillations over fronto-central brain regions. These results indicate a weaker degree of long-range integration when the musical expectancy is violated. In summary, our results reveal neural mechanisms of music-syntactic processing that operate at different levels of cortical integration, ranging from an early decrease in long-range alpha phase synchronization to late local gamma oscillations.

  11. Efficiency of synaptic transmission of single-photon events from rod photoreceptor to rod bipolar dendrite.

    PubMed

    Schein, Stan; Ahmad, Kareem M

    2006-11-01

    A rod transmits absorption of a single photon by what appears to be a small reduction in the small number of quanta of neurotransmitter (Q(count)) that it releases within the integration period (approximately 0.1 s) of a rod bipolar dendrite. Due to the quantal and stochastic nature of release, discrete distributions of Q(count) for darkness versus one isomerization of rhodopsin (R*) overlap. We suggested that release must be regular to narrow these distributions, reduce overlap, reduce the rate of false positives, and increase transmission efficiency (the fraction of R* events that are identified as light). Unsurprisingly, higher quantal release rates (Q(rates)) yield higher efficiencies. Focusing here on the effect of small changes in Q(rate), we find that a slightly higher Q(rate) yields greatly reduced efficiency, due to a necessarily fixed quantal-count threshold. To stabilize efficiency in the face of drift in Q(rate), the dendrite needs to regulate the biochemical realization of its quantal-count threshold with respect to its Q(count). These considerations reveal the mathematical role of calcium-based negative feedback and suggest a helpful role for spontaneous R*. In addition, to stabilize efficiency in the face of drift in degree of regularity, efficiency should be approximately 50%, similar to measurements.
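    The overlap argument can be made concrete with a toy calculation: compare a Poisson release count (dark mean of 40 quanta per window, one R* removing 6) against a more regular binomial count with the same means but roughly half the variance. All numbers here are illustrative assumptions, not fitted rod parameters; the point is only that narrower count distributions separate darkness from a single R* better.

```python
import math

def poisson_cdf(k, lam):
    return sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
               for i in range(k + 1))

def binom_cdf(k, n, p):
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

threshold = 36                     # report "light" when the count is <= 36
dark_mean, signal_mean = 40, 34    # one R* removes ~6 quanta per window

# Poisson release: variance equals the mean.
fp_pois = poisson_cdf(threshold, dark_mean)     # false positives in darkness
tp_pois = poisson_cdf(threshold, signal_mean)   # efficiency for one R*

# More regular release: binomial with the same means, roughly half the variance.
fp_reg = binom_cdf(threshold, 80, 0.5)   # mean 40, variance 20
tp_reg = binom_cdf(threshold, 68, 0.5)   # mean 34, variance 17

# Regular (sub-Poisson) release improves both error rates simultaneously.
assert tp_pois > fp_pois
assert fp_reg < fp_pois and tp_reg > tp_pois
```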

  12. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
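    The truncated nuclear norm is simply the nuclear norm minus the r largest singular values, i.e. the sum of the smallest ones; penalizing it leaves the dominant singular values free, which is why it tracks the rank better. For a 2×2 matrix the singular values have a closed form, which makes the definition easy to check (the matrix and r below are illustrative, not from the paper's experiments):

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of [[a, b], [c, d]] via the eigenvalues of A^T A."""
    t = a * a + b * b + c * c + d * d          # trace of A^T A
    det = (a * d - b * c) ** 2                 # determinant of A^T A
    disc = math.sqrt(max(t * t - 4.0 * det, 0.0))
    return (math.sqrt((t + disc) / 2.0), math.sqrt((t - disc) / 2.0))

def truncated_nuclear_norm(a, b, c, d, r):
    """Sum of all singular values except the r largest."""
    svals = sorted(singular_values_2x2(a, b, c, d), reverse=True)
    return sum(svals[r:])

s1, s2 = singular_values_2x2(3.0, 0.0, 0.0, 1.0)   # diag(3, 1): svals 3 and 1
assert abs(s1 - 3.0) < 1e-12 and abs(s2 - 1.0) < 1e-12
# Nuclear norm is 4; truncating the single largest value leaves 1.
assert abs(truncated_nuclear_norm(3.0, 0.0, 0.0, 1.0, 1) - 1.0) < 1e-12
```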

  13. Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines

    PubMed Central

    Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram

    2014-01-01

    When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002

  14. The use of salinity contrast for density difference compensation to improve the thermal recovery efficiency in high-temperature aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    van Lopik, Jan H.; Hartog, Niels; Zaadnoordijk, Willem Jan

    2016-08-01

    The efficiency of heat recovery in high-temperature (>60 °C) aquifer thermal energy storage (HT-ATES) systems is limited due to the buoyancy of the injected hot water. This study investigates the potential to improve the efficiency through compensation of the density difference by increased salinity of the injected hot water for a single injection-recovery well scheme. The proposed method was tested through numerical modeling with SEAWATv4, considering seasonal HT-ATES with four consecutive injection-storage-recovery cycles. Recovery efficiencies for the consecutive cycles were investigated for six cases with three simulated scenarios: (a) regular HT-ATES, (b) HT-ATES with density difference compensation using saline water, and (c) theoretical regular HT-ATES without free thermal convection. For the reference case, in which 80 °C water was injected into a high-permeability aquifer, regular HT-ATES had an efficiency of 0.40 after four consecutive recovery cycles. The density difference compensation method resulted in an efficiency of 0.69, approximating the theoretical case (0.76). Sensitivity analysis showed that the net efficiency increase by using the density difference compensation method instead of regular HT-ATES is greater for higher aquifer hydraulic conductivity, larger temperature difference between injection water and ambient groundwater, smaller injection volume, and larger aquifer thickness. This means that density difference compensation allows the application of HT-ATES in thicker, more permeable aquifers and at higher temperatures than would be considered for regular HT-ATES systems.

  15. Born approximation for scattering by evanescent waves: Comparison with exact scattering by an infinite fluid cylinder

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.

    2004-05-01

    In some situations, evanescent waves can be an important component of the acoustic field within the sea bottom. For this reason (as well as to advance the understanding of scattering processes) it can be helpful to examine the modifications to scattering theory resulting from evanescence. Modifications to ray theory were examined in prior work [P. L. Marston, J. Acoust. Soc. Am. 113, 2320 (2003)]. The new research concerns the modifications to the low-frequency Born approximation and their confirmation by comparison with the exact two-dimensional scattering by a fluid cylinder. In the case of a circular cylinder having the same density as the surroundings but a compressibility contrast with them, the Born approximation with a nonevanescent incident wave gives only monopole scattering. When the cylinder has a density contrast and the same compressibility as the surroundings, the regular Born approximation gives only dipole scattering (with the dipole oriented along the incident wavevector). In both cases, when the Born approximation is modified to include the evanescence of the incident wave, an additional dipole scattering term is evident. In each case the new dipole is oriented along the decay axis of the evanescent wave. [Research supported by ONR.]

  16. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
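
    The optimization described above can be illustrated with a small sketch: a diagonal (mean-field) Gaussian q(x) = N(m, diag(s²)) is fitted to a Poisson likelihood with a Gaussian prior by gradient ascent on the lower bound. This is a simplified stand-in for the paper's alternating direction maximization; the forward matrix A, the prior precision alpha, the step sizes, and the diagonal-covariance restriction are all assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch only: fit q(x) = N(m, diag(s^2)) to the posterior of
# y_i ~ Poisson(exp((A x)_i)) with prior x ~ N(0, alpha^{-1} I) by gradient
# ascent on the evidence lower bound (ELBO). All problem data are synthetic.

rng = np.random.default_rng(0)
n, p = 30, 5
A = 0.3 * rng.normal(size=(n, p))
x_true = rng.normal(size=p)
y = rng.poisson(np.exp(A @ x_true))
alpha = 1.0  # prior precision

def elbo(m, log_s):
    s2 = np.exp(2 * log_s)
    mu = A @ m
    var = (A ** 2) @ s2                             # Var[(Ax)_i] under diagonal q
    like = np.sum(y * mu - np.exp(mu + 0.5 * var))  # E_q[log p(y|x)] + const
    kl = 0.5 * np.sum(alpha * (m ** 2 + s2) - 1.0 - np.log(alpha * s2))
    return like - kl

def grads(m, log_s):
    s2 = np.exp(2 * log_s)
    lam = np.exp(A @ m + 0.5 * ((A ** 2) @ s2))     # E_q[exp((Ax)_i)]
    g_m = A.T @ (y - lam) - alpha * m
    g_ls = -((A ** 2).T @ lam) * s2 - alpha * s2 + 1.0  # d ELBO / d log_s
    return g_m, g_ls

m, log_s = np.zeros(p), np.zeros(p)
for _ in range(2000):
    g_m, g_ls = grads(m, log_s)
    m += 1e-2 * g_m
    log_s += 1e-3 * g_ls
```

    With small enough steps the bound increases monotonically in practice; the final elbo(m, log_s) should exceed its value at the initialization.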

  17. Core surface magnetic field evolution 2000-2010

    NASA Astrophysics Data System (ADS)

    Finlay, C. C.; Jackson, A.; Gillet, N.; Olsen, N.

    2012-05-01

    We present new dedicated core surface field models spanning the decade from 2000.0 to 2010.0. These models, called gufm-sat, are based on CHAMP, Ørsted and SAC-C satellite observations along with annual differences of processed observatory monthly means. A spatial parametrization of spherical harmonics up to degree and order 24 and a temporal parametrization of sixth-order B-splines with 0.25 yr knot spacing is employed. Models were constructed by minimizing an absolute deviation measure of misfit along with measures of spatial and temporal complexity at the core surface. We investigate traditional quadratic or maximum entropy regularization in space, and second or third time derivative regularization in time. Entropy regularization allows the construction of models with approximately constant spectral slope at the core surface, avoiding both the divergence characteristic of the crustal field and the unrealistic rapid decay typical of quadratic regularization at degrees above 12. We describe in detail aspects of the models that are relevant to core dynamics. Secular variation and secular acceleration are found to be of lower amplitude under the Pacific hemisphere where the core field is weaker. Rapid field evolution is observed under the eastern Indian Ocean associated with the growth and drift of an intense low latitude flux patch. We also find that the present axial dipole decay arises from a combination of subtle changes in the southern hemisphere field morphology.

  18. Wave drift damping acting on multiple circular cylinders (model tests)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kinoshita, Takeshi; Sunahara, Shunji; Bao, W.

    1995-12-31

    The wave drift damping for the slow drift motion of a four-column platform is experimentally investigated. The estimation of the damping force on the slow drift motion of moored floating structures in ocean waves is one of the most important topics. Bao et al. calculated the interaction of multiple circular cylinders based on potential flow theory, and showed that the wave drift damping is significantly influenced by the interaction between cylinders. This calculation method assumes that the slow drift motion can be approximately replaced by a steady current, that is, structures in slow drift motion are supposed to be equivalent to ones in both regular waves and slow current. To validate the semi-analytical solutions of Bao et al., experiments were carried out. At first, the added resistance due to waves acting on a structure composed of multiple (four) vertical circular cylinders fixed to a slowly moving carriage was measured in regular waves. Next, the added resistance of the structure moored by linear springs to the slowly moving carriage was measured in regular waves. Furthermore, to validate the assumption that the slow drift motion can be replaced by a steady current, free decay tests in still water and in regular waves were compared with simulations of the slow drift motion using the wave drift damping coefficient obtained from the added resistance tests.

  19. Reasons for and costs of hospitalization for pediatric asthma: a prospective 1-year follow-up in a population-based setting.

    PubMed

    Korhonen, K; Reijonen, T M; Remes, K; Malmström, K; Klaukka, T; Korppi, M

    2001-12-01

    The aims of this study were to examine the frequency of, and the reasons for, emergency hospitalization for asthma among children. In addition, the costs of hospital treatment, preventive medication, and productivity losses of the caregivers were evaluated in a population-based setting during 1 year. Data on purchases of regular asthma medication were obtained from the Social Insurance Institution. In total, 106 (2.3/1000) children aged up to 15 years were admitted 136 times for asthma exacerbation to the Kuopio University Hospital in 1998. This represented approximately 5% of all children with asthma in the area. The trigger for the exacerbation was respiratory infection in 63% of the episodes, allergen exposure in 24%, and unknown in 13%. The age-adjusted risk for admittance was 5.3% in children on inhaled steroids, 5.8% in those on cromones, and 7.9% in those with no regular medication for asthma. The mean direct cost for an admission was $1,209 (median $908; range $454-6,812) and the indirect cost was $358 ($316; $253-1,139). The cost of regular medication for asthma was, on average, $272 per admitted child on maintenance. The annual total cost as a result of asthma rose eight-fold if a child on regular medication was admitted for asthma.

  20. [Health status, health perception, and health promotion behaviors of low-income community dwelling elderly].

    PubMed

    Lee, Tae-Wha; Ko, Il-Sun; Lee, Kyung-Ja; Kang, Kyeong-Hwa

    2005-04-01

    The purpose of the study was to investigate the health status (present illness, ADL and IADL), health perception, and health promotion behaviors of low-income elderly who are receiving the visiting nurse service in the community. The sample of the study was 735 elderly over 65 years old with basic livelihood security, who were conveniently selected from 245 public health centers nation-wide. Data collection was done using a structured questionnaire through interviews by visiting nurses. The average number of present illnesses in the study subjects was 4.18. The average scores of ADL and IADL were 15.90±3.39 and 9.77±2.97 respectively, which indicates a relatively independent everyday life. However, 64.2% of the subjects perceived their health status as 'not healthy'. In terms of health promotion behaviors, 77.8% of the subjects had ceased smoking, 83.9% stopped drinking, 56.4% had a regular diet, 45.8% received regular physical check-ups during the past two years, and 66% received flu shots. Approximately 50% of the subjects were practicing 3-4 health promotion behaviors. Significant factors associated with health promotion behaviors were ADL, IADL and self-efficacy. Health promotion programs which focus on regular diet, exercise, and regular physical check-ups should be developed to improve independence of everyday life and quality of life among low-income elderly.

  1. The attachment of collagenous ligament to stereom in primary spines of the sea-urchin, Eucidaris tribuloides.

    PubMed

    Smith, D S; Del Castillo, J; Morales, M; Luke, B

    1990-01-01

    The similar proximal and distal attachments to the stereom of the primary spine ligament in the echinoid Eucidaris tribuloides are described, from thin sections and SEM studies of frozen and fractured spine articulations and of ligaments from decalcified material. The orthogonal structure of the general stereom is modified at the attachment zones, where bundles of collagen cylinders enter approximately hexagonally arranged channels. Straps of collagen extend in parallel series between adjacent bundles via regularly placed ports, and collagen loops, rather than non-striated 'tendons', pass over skeletal trabeculae. The regular pattern of collagen straps is most evident at the proximal and distal attachment zones. Mechanical features of the non-adhesive mode of attachment are considered, together with similarities and differences between the insertion of muscle cells and of mutable collagenous tissue (ligament) in echinoderms.

  2. Formation of volatile N-nitrosamines from food products, especially fish, under simulated gastric conditions.

    PubMed

    Groenen, P J; Luten, J B; Dhont, J H; de Cock-Bethbeder, M W; Prins, L A; Vreeken, J W

    1982-01-01

    Most food products do not form volatile nitrosamines under the simulated gastric conditions employed in the present study. Fish and other seafood products, however, regularly form nitrosodimethylamine (NDMA), sometimes in amounts of tens of micrograms per 'portion'. These results corroborate the tentative conclusions of a previous report from this laboratory. An attempt has been made to assess the influences of fish species, method of processing (freezing, smoking, canning, marinating, boiling, frying) and degree of freshness, but no particular type of product can be singled out as being a regular source of exceptional NDMA formation. If the model system employed is a valid approximation to the conditions obtaining in the human stomach, these studies suggest that the amounts of NDMA formed in vivo from certain fish samples might far exceed those already present in food products before consumption.

  3. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.

    PubMed

    Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N

    2015-04-21

    Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  4. Polling Places, Pharmacies, and Public Health: Vote & Vax 2012

    PubMed Central

    Moore, Ryan T.; Benson, William; Anderson, Lynda A.

    2015-01-01

    US national elections, which draw sizable numbers of older voters, take place during flu-shot season and represent an untapped opportunity for large-scale delivery of vaccinations. In 2012, Vote & Vax deployed a total of 1585 clinics in 48 states; Washington, DC; Guam; Puerto Rico; and the US Virgin Islands. Approximately 934 clinics were located in pharmacies, and 651 were near polling places. Polling place clinics delivered significantly more vaccines than did pharmacies (5710 vs 3669). The delivery of vaccines was estimated at 9379, and approximately 45% of the recipients identified their race/ethnicity as African American or Hispanic. More than half of the White Vote & Vax recipients and more than two thirds of the non-White recipients were not regular flu shot recipients. PMID:25879150

  5. Polling places, pharmacies, and public health: Vote & Vax 2012.

    PubMed

    Shenson, Douglas; Moore, Ryan T; Benson, William; Anderson, Lynda A

    2015-06-01

    US national elections, which draw sizable numbers of older voters, take place during flu-shot season and represent an untapped opportunity for large-scale delivery of vaccinations. In 2012, Vote & Vax deployed a total of 1585 clinics in 48 states; Washington, DC; Guam; Puerto Rico; and the US Virgin Islands. Approximately 934 clinics were located in pharmacies, and 651 were near polling places. Polling place clinics delivered significantly more vaccines than did pharmacies (5710 vs 3669). The delivery of vaccines was estimated at 9379, and approximately 45% of the recipients identified their race/ethnicity as African American or Hispanic. More than half of the White Vote & Vax recipients and more than two thirds of the non-White recipients were not regular flu shot recipients.

  6. Differential privacy based on importance weighting

    PubMed Central

    Ji, Zhanglong

    2014-01-01

    This paper analyzes a novel method for publishing data while still protecting privacy. The method is based on computing weights that make an existing dataset, for which there are no confidentiality issues, analogous to the dataset that must be kept private. The existing dataset may be genuine but public already, or it may be synthetic. The weights are importance sampling weights, but to protect privacy, they are regularized and have noise added. The weights allow statistical queries to be answered approximately while provably guaranteeing differential privacy. We derive an expression for the asymptotic variance of the approximate answers. Experiments show that the new mechanism performs well even when the privacy budget is small, and when the public and private datasets are drawn from different populations. PMID:24482559
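
    The mechanism described above can be caricatured in a few lines: reweight a public dataset toward the private population, regularize the weights by clipping so no single record has unbounded influence, and add Laplace noise before releasing an answer. This is a minimal illustration, not the paper's exact mechanism; the weight model, the clip bound B, and epsilon are invented, and the rigorous sensitivity and differential-privacy argument is omitted.

```python
import numpy as np

# Illustration only: answer a query about a private population (here taken to
# be N(1, 1)) using a public dataset drawn from N(0, 1.2). The densities are
# assumed known purely for this sketch; in practice weights are estimated.

rng = np.random.default_rng(2)
public = rng.normal(0.0, 1.2, size=2000)   # public data, no confidentiality issue

def log_normal_pdf(x, mean, sd):
    return -0.5 * ((x - mean) / sd) ** 2 - np.log(sd * np.sqrt(2.0 * np.pi))

# Importance weights w(x) ~ p_private(x) / p_public(x)
w = np.exp(log_normal_pdf(public, 1.0, 1.0) - log_normal_pdf(public, 0.0, 1.2))
B = 5.0
w = np.clip(w, 0.0, B)   # "regularization": bounds any one record's influence

eps = 1.0
query = (public < 2.0).astype(float)        # fraction of the population below 2
answer = np.sum(w * query) / np.sum(w)      # weighted (importance-sampling) answer
noisy_answer = answer + rng.laplace(0.0, B / (eps * np.sum(w)))
```

    The clipped weights keep the released statistic close to the private population's value (here about 0.84) while the added noise protects individual records.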

  7. Learning Representation and Control in Markov Decision Processes

    DTIC Science & Technology

    2013-10-21

    π. Figure 3 shows that Drazin bases outperform the other bases on a two-room MDP. However, a drawback of Drazin bases is that they are...stochastic matrices. One drawback of diffusion wavelets is that it can generate a large number of overcomplete bases, which need to be effectively...proposed in [52], overcoming some of the drawbacks of LARS-TD. An approximate linear programming for finding l1 regularized solutions of the Bellman

  8. Strategy for chemotherapeutic delivery using a nanosized porous metal-organic framework with a central composite design

    PubMed Central

    Li, Yingpeng; Li, Xiuyan; Guan, Qingxia; Zhang, Chunjing; Xu, Ting; Dong, Yujing; Bai, Xinyu; Zhang, Weiping

    2017-01-01

    Background Enhancing drug delivery is an ongoing endeavor in pharmaceutics, especially when the efficacy of chemotherapy for cancer is concerned. In this study, we prepared and evaluated nanosized HKUST-1 (nanoHKUST-1), nanosized metal-organic drug delivery framework, loaded with 5-fluorouracil (5-FU) for potential use in cancer treatment. Materials and methods NanoHKUST-1 was prepared by reacting copper (II) acetate [Cu(OAc)2] and benzene-1,3,5-tricarboxylic acid (H3BTC) with benzoic acid (C6H5COOH) at room temperature (23.7°C±2.4°C). A central composite design was used to optimize 5-FU-loaded nanoHKUST-1. Contact time, ethanol concentration, and 5-FU:material ratios were the independent variables, and the entrapment efficiency of 5-FU was the response parameter measured. Powder X-ray diffraction, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and nitrogen adsorption were used to determine the morphology of nanoHKUST-1. In addition, 5-FU release studies were conducted, and the in vitro cytotoxicity was evaluated. Results Entrapment efficiency and drug loading were 9.96% and 40.22%, respectively, while the small-angle X-ray diffraction patterns confirmed a regular porous structure. The SEM and TEM images of the nanoHKUST-1 confirmed the presence of round particles (diameter: approximately 100 nm) and regular polygon arrays of mesoporous channels of approximately 2–5 nm. The half-maximal lethal concentration (LC50) of the 5-FU-loaded nanoHKUST-1 was approximately 10 µg/mL. Conclusion The results indicated that nanoHKUST-1 is a potential vector worth developing as a cancer chemotherapeutic drug delivery system. PMID:28260892

  9. Strategy for chemotherapeutic delivery using a nanosized porous metal-organic framework with a central composite design.

    PubMed

    Li, Yingpeng; Li, Xiuyan; Guan, Qingxia; Zhang, Chunjing; Xu, Ting; Dong, Yujing; Bai, Xinyu; Zhang, Weiping

    2017-01-01

    Enhancing drug delivery is an ongoing endeavor in pharmaceutics, especially when the efficacy of chemotherapy for cancer is concerned. In this study, we prepared and evaluated nanosized HKUST-1 (nanoHKUST-1), nanosized metal-organic drug delivery framework, loaded with 5-fluorouracil (5-FU) for potential use in cancer treatment. NanoHKUST-1 was prepared by reacting copper (II) acetate [Cu(OAc)2] and benzene-1,3,5-tricarboxylic acid (H3BTC) with benzoic acid (C6H5COOH) at room temperature (23.7°C±2.4°C). A central composite design was used to optimize 5-FU-loaded nanoHKUST-1. Contact time, ethanol concentration, and 5-FU:material ratios were the independent variables, and the entrapment efficiency of 5-FU was the response parameter measured. Powder X-ray diffraction, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and nitrogen adsorption were used to determine the morphology of nanoHKUST-1. In addition, 5-FU release studies were conducted, and the in vitro cytotoxicity was evaluated. Entrapment efficiency and drug loading were 9.96% and 40.22%, respectively, while the small-angle X-ray diffraction patterns confirmed a regular porous structure. The SEM and TEM images of the nanoHKUST-1 confirmed the presence of round particles (diameter: approximately 100 nm) and regular polygon arrays of mesoporous channels of approximately 2-5 nm. The half-maximal lethal concentration (LC50) of the 5-FU-loaded nanoHKUST-1 was approximately 10 µg/mL. The results indicated that nanoHKUST-1 is a potential vector worth developing as a cancer chemotherapeutic drug delivery system.

  10. Evolutionary modification of T-brain (tbr) expression patterns in sand dollar.

    PubMed

    Minemura, Keiko; Yamaguchi, Masaaki; Minokawa, Takuya

    2009-10-01

    The sand dollars are a group of irregular echinoids that diverged from the regular sea urchins approximately 200 million years ago. We isolated two orthologs of T-brain (tbr), Smtbr and Pjtbr, from the indirect developing sand dollar Scaphechinus mirabilis and the direct developing sand dollar Peronella japonica, respectively. The expression patterns of Smtbr and Pjtbr during early development were examined by whole mount in situ hybridization. The expression of Smtbr was first detected in micromere descendants at the early blastula stage, similar to tbr expression in regular sea urchins. However, unlike in regular sea urchins, Smtbr expression at the middle blastula stage was detected in micromere-descendant cells and a subset of macromere-descendant cells. At the gastrula stage, expression of Smtbr was detected in part of the archenteron as well as in primary mesenchyme cells. A similar pattern of tbr expression was observed in early Peronella embryos. A comparison of tbr expression patterns between sand dollars and other echinoderm species suggested that broader expression in the endomesoderm is an ancestral character of echinoderms. In addition to the endomesoderm, Pjtbr expression was detected in the apical organ, the animal-most part of the ectoderm.

  11. Investigation of wall-bounded turbulence over regularly distributed roughness

    NASA Astrophysics Data System (ADS)

    Placidi, Marco; Ganapathisubramani, Bharathram

    2012-11-01

    The effects of regularly distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of Planar (high resolution l+ ~ 30) and Stereoscopic Particle Image Velocimetry (PIV) experiments in a wind tunnel. An adequate description of how to best characterise a rough wall, especially one where the density of roughness elements is sparse, is yet to be developed. In this study, rough surfaces consisting of regularly and uniformly distributed LEGO® blocks are used. Twelve different patterns are adopted in order to systematically examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area) and plan solidity (λp, plan area of roughness elements per unit wall-parallel area), on the turbulence structure. The Karman number, Reτ , is approximately 4000 across the different cases. Spanwise 3D vector fields at two different wall-normal locations (top of the canopy and within the log-region) are also compared to examine the spanwise homogeneity of the flow across different surfaces. In the talk, a detailed analysis of mean and rms velocity profiles, Reynolds stresses, and quadrant decomposition for the different patterns will be presented.
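
    For concreteness, the two solidity parameters can be computed for one tile of a hypothetical regular array. The element and tile dimensions below are invented (roughly LEGO-brick scale) and are not the study's actual twelve patterns.

```python
# Purely illustrative: frontal and plan solidity for one repeating tile of a
# regular array of cuboid roughness elements. Dimensions are assumptions.
w, h, L = 0.008, 0.0096, 0.032   # element width, element height, tile size (m)

lambda_f = (w * h) / (L * L)     # frontal area of elements per unit wall area
lambda_p = (w * w) / (L * L)     # plan area of elements per unit wall area
```

    Varying w, h, and the tile spacing L independently is what lets a study sweep λf and λp separately, as done with the twelve patterns above.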

  12. Testing for the Endogenous Nature between Women's Empowerment and Antenatal Health Care Utilization: Evidence from a Cross-Sectional Study in Egypt

    PubMed Central

    Hussein, Mohamed Ali

    2014-01-01

    Women's relative lack of decision-making power and their unequal access to employment, finances, education, basic health care, and other resources are considered to be the root causes of their ill-health and that of their children. The main purpose of this paper is to examine the interactive relation between women's empowerment and the use of maternal health care. Two model specifications are tested. One assumes no correlation between empowerment and antenatal care while the second specification allows for correlation. Both the univariate and the recursive bivariate probit models are tested. The data used in this study is EDHS 2008. Factor Analysis Technique is also used to construct some of the explanatory variables such as the availability and quality of health services indicators. The findings show that women's empowerment and receiving regular antenatal care are simultaneously determined and the recursive bivariate probit is a better approximation to the relationship between them. Women's empowerment has significant and positive impact on receiving regular antenatal care. The availability and quality of health services do significantly increase the likelihood of receiving regular antenatal care. PMID:25140310

  13. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques are needed to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as a prior information of the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm as a combination of the Bregman iteration and the proximal forward backward operator splitting method is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data which is based on Born approximation, 2) computing interval velocities from RMS velocities via Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
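
    The second experiment rests on the classical Dix relation, which can be sketched directly; the travel times and RMS velocities below are invented for illustration.

```python
import numpy as np

# Sketch of the Dix conversion: interval velocity of layer k from RMS
# velocities, v_int_k = sqrt((t_k V_k^2 - t_{k-1} V_{k-1}^2) / (t_k - t_{k-1})).

def dix_interval_velocities(t, v_rms):
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    return np.sqrt(np.diff(t * v ** 2) / np.diff(t))

t = np.array([0.2, 0.5, 0.9])               # two-way times (s)
v_rms = np.array([1500.0, 1800.0, 2100.0])  # RMS velocities (m/s)
v_int = dix_interval_velocities(t, v_rms)   # two interval velocities (m/s)
```

    Because the formula differences noisy quantities, small errors in the RMS velocities are strongly amplified in the interval velocities; this instability is precisely what makes a regularized inversion attractive for the Dix problem.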

  14. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using a limited-memory BFGS (l-BFGS) quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.

  15. Efficacy of 3 toothbrush treatments on plaque removal in orthodontic patients assessed with digital plaque imaging: a randomized controlled trial.

    PubMed

    Erbe, Christina; Klukowska, Malgorzata; Tsaknaki, Iris; Timm, Hans; Grender, Julie; Wehrbein, Heinrich

    2013-06-01

    Good oral hygiene is a challenge for orthodontic patients because food readily becomes trapped around the brackets and under the archwires, and appliances are an obstruction to mechanical brushing. The purpose of this study was to compare plaque removal efficacy of 3 toothbrush treatments in orthodontic subjects. This was a replicate-use, single-brushing, 3-treatment, examiner-blind, randomized, 6-period crossover study with washout periods of approximately 24 hours between visits. Forty-six adolescent and young adult patients with fixed orthodontics from a university clinic in Germany were randomized, based on computer-generated randomization, to 1 of 3 treatments: (1) oscillating-rotating electric toothbrush with a specially designed orthodontic brush head (Oral-B Triumph, OD17; Procter & Gamble, Cincinnati, Ohio); (2) the same electric toothbrush handle with a regular brush head (EB25; Procter & Gamble); and (3) a regular manual toothbrush (American Dental Association, Chicago, Ill). The primary outcome was the plaque score change from baseline, which we determined using digital plaque image analysis. Forty-five subjects completed the study. The differences in mean plaque removal (95% confidence interval) between the electric toothbrush with an orthodontic brush head (6% [4.4%-7.6%]) or a regular brush head (3.8% [2.2%-5.3%]) and the manual toothbrush were significant (P <0.001). Plaque removal with the electric toothbrush with the orthodontic brush head was superior (2.2%; P = 0.007) to the regular brush head. No adverse events were seen. The electric toothbrush, with either brush head, demonstrated significantly greater plaque removal over the manual brush. The orthodontic brush head was superior to the regular head. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  16. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in this problem is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas, which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides medium- and high-spatial-resolution information from spacecraft and allows hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General research methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for the discrete orthogonal transformations are determined.
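
    As background, the elementary building block of the Haar transforms discussed above is the single-level split into averages and details. This is standard material, sketched here for orientation; it is not the paper's specific regularized summation scheme.

```python
import numpy as np

# One level of the orthonormal Haar transform: split a signal of even length
# into approximation (averages) and detail (differences) coefficients.
# The transform is orthogonal, hence exactly invertible and energy-preserving.

def haar_step(x):
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return a, d

def haar_inverse_step(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
x_rec = haar_inverse_step(a, d)   # reconstructs x exactly (orthogonality)
```

    Recursing haar_step on the approximation coefficients yields the full multilevel decomposition used in image processing pipelines.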

  17. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and Euler's method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact rate, proportional to the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
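
    A minimal sketch of this kind of estimation, assuming a plain SIR model, forward-Euler integration, synthetic noisy data, and a scalar Tikhonov penalty (the paper's finite-element treatment and the real CDC data are not reproduced):

```python
import numpy as np

# Recover the contact rate beta of an SIR model from noisy infected-fraction
# data by regularized least squares over a parameter grid. All parameters,
# the noise level, and the regularization weight are invented for this sketch.

def sir_infected(beta, gamma=0.5, S0=0.99, I0=0.01, dt=0.1, steps=200):
    S, I = S0, I0
    traj = np.empty(steps)
    for k in range(steps):
        dS = -beta * S * I              # susceptibles lost to infection
        dI = beta * S * I - gamma * I   # new infections minus recoveries
        S += dt * dS
        I += dt * dI
        traj[k] = I
    return traj

beta_true = 1.5
rng = np.random.default_rng(1)
data = sir_infected(beta_true) + rng.normal(0.0, 1e-3, size=200)  # noisy "observations"

lam, beta_prior = 1e-3, 1.0   # Tikhonov weight and prior guess
grid = np.linspace(0.5, 3.0, 501)
cost = [np.sum((sir_infected(b) - data) ** 2) + lam * (b - beta_prior) ** 2
        for b in grid]
beta_hat = float(grid[int(np.argmin(cost))])
```

    The penalty term stabilizes the fit when the data are sparse or noisy, at the price of a small bias of the estimate toward the prior guess.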

  18. Dependency links can hinder the evolution of cooperation in the prisoner's dilemma game on lattices and networks.

    PubMed

    Wang, Xuwen; Nie, Sen; Wang, Binghong

    2015-01-01

    Networks with dependency links are more vulnerable to attacks. Recent research has also demonstrated that interdependent groups support the spreading of cooperation. We study prisoner's dilemma games on spatial networks with dependency links, in which a fraction of individual pairs is selected to depend on each other. Dependent individuals can gain an extra payoff whose value lies between the payoff of mutual cooperation and the temptation to defect. This mechanism thus reflects a dependency relation that is stronger than ordinary mutual cooperation, but not strong enough to cause defection within the dependent pair. We show that the dependence of individuals hinders, promotes, and never affects cooperation on regular ring networks, the square lattice, and random and scale-free networks, respectively. The results for the square lattice and the regular ring networks are corroborated by the pair approximation.

  19. Settling velocity of microplastic particles of regular shapes.

    PubMed

    Khatmullina, Liliya; Isachenko, Igor

    2017-01-30

    The terminal settling velocity of around 600 microplastic particles of three regular shapes, ranging from 0.5 to 5 mm, was measured in a series of sinking experiments: polycaprolactone spheres and short cylinders of equal height and diameter (material density 1131 kg m⁻³), and long cylinders cut from fishing lines (1130-1168 kg m⁻³) of different diameters (0.15-0.71 mm). Settling velocities ranging from 5 to 127 mm s⁻¹ were compared with several semi-empirical predictions developed for natural sediments, showing reasonable consistency with the observations except for the long cylinders, for which a new approximation is proposed. The effect of a particle's shape on its settling velocity is highlighted, indicating the need for further experiments with real marine microplastics of different shapes and for a sound parameterization of microplastic settling, required for proper modeling of transport in the water column. Copyright © 2016 Elsevier Ltd. All rights reserved.
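
    For orientation, the simplest settling law that the semi-empirical predictions reduce to for very small particles is the Stokes formula sketched below; the seawater density and viscosity are assumed typical values, not the paper's. Note that for a 0.5 mm sphere the resulting particle Reynolds number already exceeds 1, which is precisely why corrections beyond Stokes' law are needed in this size range.

```python
def stokes_settling_velocity(d, rho_p, rho_f=1027.0, mu=1.08e-3, g=9.81):
    """Stokes terminal velocity (m/s) of a sphere of diameter d (m) and
    density rho_p (kg/m^3); valid only for particle Reynolds number << 1.
    rho_f and mu are assumed typical seawater values (kg/m^3, Pa*s)."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

def reynolds_number(v, d, rho_f=1027.0, mu=1.08e-3):
    """Particle Reynolds number, used to check whether Stokes' law applies."""
    return rho_f * v * d / mu

# A 0.5 mm polycaprolactone sphere (1131 kg/m^3) in seawater:
v = stokes_settling_velocity(0.5e-3, 1131.0)   # on the order of 10 mm/s
re = reynolds_number(v, 0.5e-3)                # > 1: outside the Stokes regime
```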

  20. Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brambilla, N.; Prosperi, G.M.

    1992-08-01

    We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than resorting to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen so as to reproduce, in the instantaneous approximation, a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.

  1. Is the Filinov integral conditioning technique useful in semiclassical initial value representation methods?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spanner, Michael; Batista, Victor S.; Brumer, Paul

    2005-02-22

    The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior were easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].

  2. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0,T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0,T; H^1) and L^∞(0,T; L^2), thus improving related known results. It is also shown that the averaged error ū − w̄^α is of higher order, O(α^1.5), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than they approximate the exact (unfiltered) flow velocities.

  3. Improving Social Engagement and Initiations between Children with Autism Spectrum Disorder and Their Peers in Inclusive Settings

    PubMed Central

    Koegel, Lynn Kern; Vernon, Ty; Koegel, Robert L.; Koegel, Brittany L.; Paullin, Anne W.

    2013-01-01

    Children with Asperger’s Disorder often have difficulty with peer relationships and socialization. The current study assessed whether peer social interactions would improve in school settings if an intervention was designed around the interests of the children with Asperger’s Disorder. Three children who were fully included in regular education classes but did not interact with peers prior to intervention participated in this research. Social lunch clubs, open to both the study participants and their typical peers, were implemented twice weekly during regular lunchtime periods. Results showed that all three children increased their time engaged with peers as a result of the clubs. While their initiations improved greatly over baseline levels and approached those of their peers, they often remained below the level of most of their peers. Implications for improving peer social interactions for children with Asperger’s Disorder are discussed. PMID:25328380

  4. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.

    Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  5. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE PAGES

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.; ...

    2015-04-21

    Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  6. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for the numerical modeling of light fields in a turbid-medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on eliminating the anisotropic part of the solution and replacing the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part ensures fast convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in accuracy provided by synthetic iterations allows the two-stream approximation to be used for determining the regular part. This approach permits the proposed method to be generalized to an arbitrary 3D geometry of the medium.

  7. Acute effects of caffeine in volunteers with different patterns of regular consumption.

    PubMed

    Hewlett, Paul; Smith, Andrew

    2006-04-01

    The effects of caffeine on mood and performance are well established. One explanation of these effects is that caffeine removes negative effects induced by prior caffeine withdrawal. This was tested here by comparing the effects of caffeine in withdrawn consumers and non-consumers (who by definition were not withdrawn). The present study aimed to determine whether caffeine withdrawal influenced mood and performance by comparing regular consumers who had been withdrawn from caffeine overnight with non-consumers. Following this, the effects of acute caffeine challenges were compared in withdrawn consumers and non-consumers. In addition, comparisons were made between those with higher and lower caffeine consumption. One hundred seventy-six volunteers participated in the study. Regular caffeine consumption was assessed by questionnaire, which showed that 56 of the sample did not regularly consume caffeinated beverages. Volunteers were instructed to abstain from caffeine overnight and then completed a baseline session measuring mood and a range of cognitive functions at 08.00 the next day. Following this, approximately half of the volunteers were given 1 mg/kg caffeine in a milkshake or water (in the 'no caffeine' condition they were given just the milkshake or water) and the test battery was repeated one hour later. A second test battery was carried out at 12.00 and a second caffeine challenge at 13.00. A final test session was carried out at 15.00. The baseline data revealed little evidence of effects of caffeine withdrawal on performance and mood. In contrast to this, caffeine produced a number of significant improvements in performance. There were some differences in the effects of caffeine on regular and non-consumers, with caffeine tending to reduce reaction time in regular consumers while the opposite was true for non-consumers. The present results show little evidence of effects of caffeine withdrawal on performance.
In contrast, caffeine challenge produced improvements in aspects of performance and these were often not modified by regular caffeine consumption patterns. The differences in effects of caffeine that were observed between non-consumers and regular consumers were in functions that were unaffected by caffeine withdrawal. These findings show that the observed beneficial effects of caffeine cannot be interpreted in terms of a reversal of caffeine withdrawal. Copyright (c) 2006 John Wiley & Sons, Ltd.

  8. Relations between breast and cervical cancer prevention behaviour of female students at a school of health and their healthy life style.

    PubMed

    Malak, Arzu Tuna; Yilmaz, Derya; Tuna, Aslan; Gümüs, Aysun Babacan; Turgay, Ayse San

    2010-01-01

    Regular breast self-examination (BSE) and pap smear tests are two of the positive health behaviors for improving, promoting and protecting the health of adolescent girls. The present quasi-experimental research was carried out with the purpose of analyzing the relations between the breast and cervical cancer prevention behavior of female students at a School of Health and their healthy lifestyle. The research was conducted at Canakkale Onsekiz Mart University School of Health between November 2008 and February 2009. A total of 77 female students attending the first and second grades were included in the sample. Education pertinent to the matter was provided and an evaluation was made three months later. A knowledge evaluation form for breast and gynecological examination and the Healthy Life-Style Behavior Scale (HPLP) were used in data collection. Numbers and percentages, the McNemar-Bowker test, the t test and the Mann-Whitney U test were used in the evaluation. Despite the information they had received, not all of the students performed regular breast self-examination (BSE) prior to the education. For 24.7% (n=19) the reason for not doing regular BSE was having no symptoms, and for 29.9% (n=23) it was thinking that they would not have breast cancer. The reason for not having a pap smear test was virginity. Three months after the education, the knowledge level scores of the students had increased approximately three and a half times (from 23.8±9.8 to 81.2±8.0). The rate of performing regular BSE was 88.3% after three months; however, no pap smear tests were performed, probably because the test was taboo. When the rate of performing regular BSE three months after the education was compared with HPLP scores, the scores of those performing it regularly and those not performing it regularly were found to be close, and no statistically significant difference was detected (p > 0.05). In conclusion, consultancy service units should be established to understand the barriers perceived by adolescent girls who do not undergo regular health screening, to develop appropriate strategic plans to remove these hindrances in Muslim societies, and to enhance the motivation of youth through continuous education.

  9. Cooperative SIS epidemics can lead to abrupt outbreaks

    NASA Astrophysics Data System (ADS)

    Ghanbarnejad, Fakhteh; Chen, Li; Cai, Weiran; Grassberger, Peter

    2015-03-01

    In this paper, we study the spreading of two cooperative SIS epidemics in mean-field approximations and within an agent-based framework, investigating the dynamics on different topologies such as Erdős-Rényi networks and regular lattices. We show that the cooperativity of two diseases can lead to strongly first-order outbreaks, while the dynamics may still present some scaling laws typical of second-order phase transitions. We discuss how topological network features may be related to this interesting hybrid behavior.
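
    A minimal mean-field caricature (not the authors' exact model) already reproduces the abrupt, first-order character of cooperative outbreaks: if the prevalence of one disease multiplies the effective transmission rate of the other by (1 + c·prevalence), the symmetric system becomes bistable for sufficiently strong cooperation, so a large initial seed jumps to a high-prevalence state while a small seed dies out. All parameter values below are illustrative assumptions.

```python
def cooperative_sis(beta, mu, c, a0, b0, dt=0.01, steps=20000):
    """Toy mean-field model of two cooperating SIS diseases A and B:
    the prevalence of one disease boosts the effective transmission
    rate of the other by a factor (1 + c * prevalence). Forward-Euler
    integration; returns the final prevalences (a, b)."""
    a, b = a0, b0
    for _ in range(steps):
        da = beta * (1.0 + c * b) * a * (1.0 - a) - mu * a
        db = beta * (1.0 + c * a) * b * (1.0 - b) - mu * b
        a, b = a + dt * da, b + dt * db
    return a, b

# Below the single-disease threshold (beta < mu) but with strong cooperation,
# the outcome depends abruptly on the initial seed: bistability.
abrupt, _ = cooperative_sis(beta=0.8, mu=1.0, c=4.0, a0=0.5, b0=0.5)
dies, _ = cooperative_sis(beta=0.8, mu=1.0, c=4.0, a0=0.01, b0=0.01)
```

    For these symmetric parameters the nontrivial fixed point solves 0.8(1 + 4a)(1 − a) = 1, i.e. a = (3 + √5)/8 ≈ 0.654, while the disease-free state remains locally stable because beta < mu; coexisting stable states are the hallmark of a first-order (abrupt) transition.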

  10. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    NASA Astrophysics Data System (ADS)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    The dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type, governed by a set of differential-algebraic equations (DAEs), is studied. The classical multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, with one and two degrees of freedom, respectively. The approximate analytical solutions have been verified by numerical simulations.

  11. Decay, excitation, and ionization of lithium Rydberg states by blackbody radiation

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, V. D.; Glukhov, I. L.

    2010-09-01

    Details of the interaction between blackbody radiation and neutral lithium atoms were studied in the temperature range T = 100-2000 K. The rates of thermally induced decay, excitation and ionization were calculated for the S-, P- and D-series of Rydberg states in the Fues' model potential approach. Quantitative regularities were determined for the states with the maximal rates of blackbody-radiation-induced processes. Approximation formulas were proposed for the analytical representation of the depopulation rates.

  12. Semi-inner-products in Banach Spaces with Applications to Regularized Learning, Sampling, and Sparse Approximation

    DTIC Science & Technology

    2016-03-13

    Related publication: dynamics of percept formation modeled as operant (selectionist) process, Cognitive Neurodynamics (08 2013), doi: 10.1007/s11571-013-9262-0, Jun Zhang. The work is rooted in human cognitive principles for categorization; the execution plan includes three specific topics ("Aims"), the first being to apply RKBS theory to

  13. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989

  14. RESONANT AMPLIFICATION OF TURBULENCE BY THE BLAST WAVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zankovich, A. M.; Kovalenko, I. G., E-mail: ilya.g.kovalenko@gmail.com

    2015-02-10

    We discuss the idea of whether spherical blast waves can amplify, by a nonlocal resonant hydrodynamic mechanism, inhomogeneities formed by turbulence or phase segregation in the interstellar medium. We consider the problem of a blast-wave-turbulence interaction in the Linear Interaction Approximation. Mathematically, this is an eigenvalue problem for finding the structure and amplitude of eigenfunctions describing the response of the shock-wave flow to forced oscillations by external perturbations in the ambient interstellar medium. Linear analysis shows that the blast wave can amplify density and vorticity perturbations over a wide range of length scales, with amplification coefficients of up to 20 that increase with length scale. There also exist resonant harmonics for which the gain becomes formally infinite in the linear approximation. Their orbital wavenumbers lie in the range of macro- (l ∼ 1), meso- (l ∼ 20), and microscopic (l > 200) scales. Since the resonance width is narrow (typically Δl < 1), resonance should select and amplify discrete isolated harmonics. We speculate on a possible explanation of the observed regular filamentary structure of regularly shaped round supernova remnants such as SNR 1572, 1006, or 0509-67.5. The resonant mesoscales found (l ≈ 18) are surprisingly close to the observed scales (l ≈ 15) of the ripples in the shell surface of SNR 0509-67.5.

  15. Exact Markov chain and approximate diffusion solution for haploid genetic drift with one-way mutation.

    PubMed

    Hössjer, Ola; Tyvand, Peder A; Miloh, Touvia

    2016-02-01

    The classical Kimura solution of the diffusion equation is investigated for a haploid random mating (Wright-Fisher) model, with one-way mutations and the initial value specified by the founder population. The validity of the transient diffusion solution is checked by exact Markov chain computations, using a Jordan decomposition of the transition matrix. The conclusion is that the one-way diffusion model mostly works well, although the rate of convergence depends on the initial allele frequency and the mutation rate. The diffusion approximation is poor for mutation rates so low that the non-fixation boundary is regular. When this happens we perturb the diffusion solution around the non-fixation boundary and obtain a more accurate approximation that takes quasi-fixation of the mutant allele into account. The main application is to quantify how fast a specific genetic variant of the infinite alleles model is lost. We also discuss extensions of the quasi-fixation approach to other models with small mutation rates. Copyright © 2015 Elsevier Inc. All rights reserved.
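
    The exact Markov chain mentioned in the abstract can be sketched as follows for a haploid Wright-Fisher model with one-way mutation: each generation the allele count is resampled binomially, with the success probability reduced by the mutation rate. The population size, mutation rate and founder frequency below are illustrative, not the paper's.

```python
from math import comb

import numpy as np

def wright_fisher_matrix(n, u):
    """Transition matrix of the haploid Wright-Fisher chain with one-way
    mutation: from count i of the focal allele among n gene copies, the
    next generation's count is Binomial(n, p) with p = (i/n) * (1 - u),
    i.e. each copy mutates away with probability u. State 0 (loss of the
    allele) is absorbing."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        p = (i / n) * (1.0 - u)
        for j in range(n + 1):
            P[i, j] = comb(n, j) * p ** j * (1.0 - p) ** (n - j)
    return P

n, u = 20, 0.01
P = wright_fisher_matrix(n, u)

# Founder population fixes the initial allele frequency at 1/2.
dist = np.zeros(n + 1)
dist[n // 2] = 1.0
dist_10 = dist @ np.linalg.matrix_power(P, 10)  # distribution after 10 generations
mean_count = dist_10 @ np.arange(n + 1)         # decays exactly as 10 * (1 - u)**t
```

    Eigendecomposition of P (the Jordan decomposition used in the paper for the general, possibly defective case) then gives the full transient solution against which the diffusion approximation can be checked.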

  16. Approximate matching of structured motifs in DNA sequences.

    PubMed

    El-Mabrouk, Nadia; Raffinot, Mathieu; Duchesne, Jean-Eudes; Lajoie, Mathieu; Luc, Nicolas

    2005-04-01

    Several methods have been developed for identifying more or less complex RNA structures in a genome. All these methods are based on the search for conserved primary and secondary sub-structures. In this paper, we present a simple formal representation of a helix, which combines sequence and folding constraints, as a constrained regular expression. This representation allows us to develop a well-founded algorithm that searches for all approximate matches of a helix in a genome. The algorithm is based on an alignment graph constructed from several copies of a pushdown automaton, arranged one on top of another. This is a first attempt to take advantage of the possibilities of pushdown automata in the context of approximate matching. The worst-case time complexity is O(krpn), where k is the error threshold, n the size of the genome, p the size of the secondary expression, and r its number of union symbols. We then extend the algorithm to search for pseudo-knots and secondary structures containing an arbitrary number of helices.
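
    For the plain-sequence special case (no folding constraints), approximate matching reduces to the classical Sellers dynamic program sketched below, which reports every end position where the pattern occurs within k edits in O(mn) time; the paper's pushdown-automaton alignment graph generalizes this idea to helices. The example sequences are illustrative.

```python
def approx_matches(pattern, text, k):
    """Sellers' dynamic program for approximate string matching: return all
    end positions j such that `pattern` matches a substring of `text`
    ending at j with at most k edits (substitution, insertion, deletion)."""
    m = len(pattern)
    prev = list(range(m + 1))      # column for the empty text prefix
    hits = []
    for j, ch in enumerate(text):
        cur = [0] * (m + 1)        # row 0 stays 0: a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == ch else 1
            cur[i] = min(prev[i] + 1,         # gap in the pattern
                         cur[i - 1] + 1,      # gap in the text
                         prev[i - 1] + cost)  # match / substitution
        if cur[m] <= k:
            hits.append(j)         # pattern ends here with <= k edits
        prev = cur
    return hits

exact = approx_matches("ACGT", "TTACGTTT", k=0)  # exact occurrence ends at index 5
fuzzy = approx_matches("ACGT", "TTACTTT", k=1)   # "ACT" matches with one deletion
```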

  17. Drop impact upon micro- and nanostructured superhydrophobic surfaces.

    PubMed

    Tsai, Peichun; Pacheco, Sergio; Pirat, Christophe; Lefferts, Leon; Lohse, Detlef

    2009-10-20

    We experimentally investigate drop impact dynamics onto different superhydrophobic surfaces, consisting of regular polymeric micropatterns and rough carbon nanofibers, with similar static contact angles. The main control parameters are the Weber number We and the roughness of the surface. At small We, i.e., small impact velocity, the impact evolutions are similar for both types of substrates, exhibiting the Fakir state, complete bouncing, partial rebouncing, trapping of an air bubble, jetting, and sticky vibrating water balls. At large We, splashing impacts emerge, forming several satellite droplets, which are more pronounced for the multiscale rough carbon nanofiber jungles. The results imply that multiscale surface roughness at the nanoscale plays a minor role in the impact events for small We ≲ 120 but an important one for large We ≳ 120. Finally, we find the effect of ambient air pressure to be negligible in the explored parameter regime We ≲ 150.

  18. Extended Hansen solubility approach: naphthalene in individual solvents.

    PubMed

    Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M

    1981-11-01

    A multiple regression method using the Hansen partial solubility parameters, δD, δP, and δH, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved", i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solution) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values; in some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of the chemical structures of the solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.
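
    The regression step of such an approach can be sketched as an ordinary least-squares fit of log solubility against a design matrix built from the partial solubility parameters. The data below are synthetic (generated from a known linear model and then recovered), since the literature solubilities are not reproduced here, and the particular choice of regressor columns is an illustrative simplification of the published model.

```python
import numpy as np

def fit_solubility(A, y):
    """Ordinary least squares for log solubility against Hansen-parameter
    regressors (design matrix A: one row per solvent)."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic demonstration, NOT literature data: generate log solubilities
# from a known linear model in (dD, dP, dH) and recover its coefficients.
rng = np.random.default_rng(1)
n_solv = 12
dD = rng.uniform(14.0, 20.0, n_solv)   # dispersion parameter (illustrative range)
dP = rng.uniform(0.0, 12.0, n_solv)    # polar parameter
dH = rng.uniform(0.0, 20.0, n_solv)    # hydrogen-bonding parameter
A = np.column_stack([np.ones(n_solv), dD, dP, dH])
true_coef = np.array([-2.0, 0.15, -0.05, -0.08])
log_solubility = A @ true_coef
coef = fit_solubility(A, log_solubility)
```

    With noise-free synthetic data the fit recovers the generating coefficients exactly; with real solubility measurements the same machinery yields the regression coefficients and residuals used to judge the approach.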

  19. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches, combined with a variety of model-order and model-complexity reductions, were investigated. Singular perturbation methods were attempted first and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated; it also fails because the aerodynamic effects (ignored in the zeroth-order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law and results in a guidance solution for the entire flight regime, including both atmospheric and exoatmospheric flight phases.

  20. Weak Galerkin method for the Biot’s consolidation model

    DOE PAGES

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    2017-08-23

    In this study, we develop a weak Galerkin (WG) finite element method for Biot’s consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Finally, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  1. Damageable contact between an elastic body and a rigid foundation

    NASA Astrophysics Data System (ADS)

    Campo, M.; Fernández, J. R.; Silva, A.

    2009-02-01

    In this work, the contact problem between an elastic body and a rigid obstacle is studied, including the development of material damage which results from internal compression or tension. The variational problem is formulated as a first-kind variational inequality for the displacements coupled with a parabolic partial differential equation for the damage field. The existence of a unique local weak solution is stated. Then, a fully discrete scheme is introduced using the finite element method to approximate the spatial variable and an Euler scheme to discretize the time derivatives. Error estimates are derived on the approximate solutions, from which the linear convergence of the algorithm is deduced under suitable regularity conditions. Finally, three two-dimensional numerical simulations are performed to demonstrate the accuracy and the behaviour of the scheme.

  2. Weak Galerkin method for the Biot’s consolidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    In this study, we develop a weak Galerkin (WG) finite element method for Biot’s consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Finally, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  3. Nonlinear Solver Approaches for the Diffusive Wave Approximation to the Shallow Water Equations

    NASA Astrophysics Data System (ADS)

    Collier, N.; Knepley, M.

    2015-12-01

    The diffusive wave approximation to the shallow water equations (DSW) is a doubly-degenerate, nonlinear, parabolic partial differential equation used to model overland flows. Despite its challenges, the DSW equation has been extensively used to model the overland flow component of various integrated surface/subsurface models. The equation's complications become increasingly problematic when ponding occurs, a feature which becomes pervasive when solving on large domains with realistic terrain. In this talk I discuss the various forms and regularizations of the DSW equation and highlight their effect on the solvability of the nonlinear system. In addition to this analysis, I present results of a numerical study which tests the applicability of a class of composable nonlinear algebraic solvers recently added to the Portable, Extensible, Toolkit for Scientific Computation (PETSc).

  4. The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs

    NASA Astrophysics Data System (ADS)

    Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.

    2017-12-01

    The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.

  5. Use of genomic recursions and algorithm for proven and young animals for single-step genomic BLUP analyses--a simulation study.

    PubMed

    Fragomeni, B O; Lourenco, D A L; Tsuruta, S; Masuda, Y; Aguilar, I; Misztal, I

    2015-10-01

    The purpose of this study was to examine the accuracy of genomic selection via single-step genomic BLUP (ssGBLUP) when the direct inverse of the genomic relationship matrix (G) is replaced by an approximation of G(-1) based on recursions for young genotyped animals conditioned on a subset of proven animals, termed algorithm for proven and young animals (APY). With an efficient implementation, the cost of this algorithm is cubic in the number of proven animals and linear in the number of young animals. Ten duplicate data sets mimicking a dairy cattle population were simulated. In a first scenario, genomic information for 20k genotyped bulls, divided into 7k proven and 13k young bulls, was generated for each replicate. In a second scenario, 5k genotyped cows with phenotypes were included in the analysis as young animals. Accuracies (average for the 10 replicates) in regular EBV were 0.72 and 0.34 for proven and young animals, respectively. When genomic information was included, they increased to 0.75 and 0.50. No differences between genomic EBV (GEBV) obtained with the regular G(-1) and the approximated G(-1) via the recursive method were observed. In the second scenario, accuracies in GEBV (0.76, 0.51 and 0.59 for proven bulls, young males and young females, respectively) were also higher than those in EBV (0.72, 0.35 and 0.49). Again, no differences between GEBV with regular G(-1) and with recursions were observed. With the recursive algorithm, the number of iterations to achieve convergence was reduced from 227 to 206 in the first scenario and from 232 to 209 in the second scenario. Cows can be treated as young animals in APY without reducing the accuracy. The proposed algorithm can be implemented to reduce computing costs and to overcome current limitations on the number of genotyped animals in the ssGBLUP method. © 2015 Blackwell Verlag GmbH.
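    The APY construction can be sketched in a few lines of linear algebra. The numpy fragment below is our own illustration (the function name, toy matrix, and core/young split are hypothetical, not the authors' software): the approximated G(-1) is assembled from the inverse of the proven-by-proven block plus a diagonal of conditional variances for the young animals, which is exactly why the cost is cubic in proven animals and linear in young animals. With a single young animal the diagonal entry is the exact Schur complement, so the APY inverse coincides with the direct inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def apy_ginverse(G, core):
    """Approximate inverse of G via the APY recursion (toy sketch):
    young animals are conditioned on a 'core' (proven) subset, and only
    a diagonal M of conditional variances is retained for them."""
    n = G.shape[0]
    young = [i for i in range(n) if i not in core]
    Gcc = G[np.ix_(core, core)]
    Gcy = G[np.ix_(core, young)]
    Gcc_inv = np.linalg.inv(Gcc)          # cubic only in the number of core animals
    # conditional variance of each young animal given the core (linear in #young)
    m = np.array([G[j, j] - G[j, core] @ Gcc_inv @ G[core, j] for j in young])
    Minv = np.diag(1.0 / m)
    P = Gcc_inv @ Gcy                     # regression of young animals on the core
    Ginv = np.zeros_like(G)
    Ginv[np.ix_(core, core)] = Gcc_inv + P @ Minv @ P.T
    Ginv[np.ix_(core, young)] = -P @ Minv
    Ginv[np.ix_(young, core)] = -Minv @ P.T
    Ginv[np.ix_(young, young)] = Minv
    return Ginv

# toy positive-definite "genomic relationship" matrix
A = rng.standard_normal((6, 6))
G = A @ A.T + 6 * np.eye(6)

# with a single young animal, M is the exact Schur complement,
# so the APY inverse equals the direct inverse
Ginv = apy_ginverse(G, core=[0, 1, 2, 3, 4])
```

    With many young animals the same formula is applied while M is kept diagonal, which is where both the approximation and the computational savings come from.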

  6. Regular exercise prevents the development of hyperglucocorticoidemia via adaptations in the brain and adrenal glands in male Zucker diabetic fatty rats.

    PubMed

    Campbell, Jonathan E; Király, Michael A; Atkinson, Daniel J; D'souza, Anna M; Vranic, Mladen; Riddell, Michael C

    2010-07-01

    We determined the effects of voluntary wheel running on the hypothalamic-pituitary-adrenal (HPA) axis, and the peripheral determinants of glucocorticoid action, in male Zucker diabetic fatty (ZDF) rats. Six-week-old euglycemic ZDF rats were divided into Basal, Sedentary, and Exercise groups (n = 8-9 per group). Basal animals were immediately killed, whereas Sedentary and Exercising rats were monitored for 10 wk. Basal (i.e., approximately 0900 AM in the resting state) glucocorticoid levels increased 2.3-fold by week 3 in Sedentary rats, where they remained elevated for the duration of the study. After an initial elevation in basal glucocorticoid levels at week 1, Exercise rats maintained low glucocorticoid levels from week 3 through week 10. Hyperglycemia was evident in Sedentary animals by week 7, whereas Exercising animals maintained euglycemia throughout. At the time of death, the Sedentary group had approximately 40% lower glucocorticoid receptor (GR) content in the hippocampus, compared with the Basal and Exercise groups (P < 0.05), suggesting that the former group had impaired negative feedback regulation of the HPA axis. Both Sedentary and Exercise groups had elevated ACTH compared with Basal rats, indicating that central drive of the axis was similar between groups. However, Sedentary, but not Exercise, animals had elevated adrenal ACTH receptor and steroidogenic acute regulatory protein content compared with the Basal animals, suggesting that regular exercise protects against elevations in glucocorticoids by a downregulation of adrenal sensitivity to ACTH. GR and 11beta-hydroxysteroid dehydrogenase type 1 content in skeletal muscle and liver were similar between groups; however, GR content in adipose tissue was elevated in the Sedentary group compared with the Basal and Exercise (P < 0.05) groups.
Thus, the gradual elevations in glucocorticoid levels associated with the development of insulin resistance in male ZDF rats can be prevented with regular exercise, likely because of adaptations that occur primarily in the adrenal glands.

  7. Regularized Transformation-Optics Cloaking for the Helmholtz Equation: From Partial Cloak to Full Cloak

    NASA Astrophysics Data System (ADS)

    Li, Jingzhi; Liu, Hongyu; Rondi, Luca; Uhlmann, Gunther

    2015-04-01

    We develop a very general theory on the regularized approximate invisibility cloaking for the wave scattering governed by the Helmholtz equation in any space dimension via the approach of transformation optics. There are four major ingredients in our proposed theory: (1) The non-singular cloaking medium is obtained by the push-forward construction through a transformation that blows up a subset K_ε in the virtual space, where ε is an asymptotic regularization parameter. K_ε degenerates to K_0 as ε → +0, and in our theory K_0 could be any convex compact set, any set whose boundary consists of Lipschitz hypersurfaces, or a finite combination of those sets. (2) A general lossy layer with the material parameters satisfying certain compatibility integral conditions is employed right between the cloaked and cloaking regions. (3) The contents being cloaked could also be extremely general, possibly including, at the same time, generic media and sound-soft, sound-hard and impedance-type obstacles, as well as some sources or sinks. (4) In order to achieve a cloaking device of compact size, particularly for the case when K_0 is not "uniformly small", an assembly-by-components (ABC) geometry is developed for both the virtual and physical spaces, and the blow-up construction is based on concatenating different components. Within the proposed framework, we show that the scattered wave field u_ε corresponding to a cloaking problem will converge to u_0 as ε → +0, with u_0 being the scattered wave field corresponding to a sound-hard K_0. The convergence result is used to theoretically justify the approximate full and partial invisibility cloaks, depending on the geometry of K_0. On the other hand, the convergence results are conducted in a much more general setting than what is needed for the invisibility cloaking, so they are of significant mathematical interest for their own sake. As for applications, we construct three types of full and partial cloaks. 
Some numerical experiments are also conducted to illustrate our theoretical results.

  8. A guide to experimental particle physics literature, 1991-1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezhela, V.V.; Filimonov, B.B.; Lugovsky, S.B.

    1996-10-01

    We present an indexed guide to experimental particle physics literature for the years 1991-1996. Approximately 4200 papers are indexed by (1) Beam/Target/Momentum, (2) Reaction/Momentum/Data-Descriptor (including the final state), (3) Particle/Decay, and (4) Accelerator/Experiment/Detector. All indices are cross-referenced to the paper's title and references in the ID/Reference/Title index. The information presented in this guide is also publicly available in a regularly updated DATAGUIDE database on the World Wide Web.

  9. Androgenic Regulation of White Adipose Tissue-Prostate Cancer Interactions

    DTIC Science & Technology

    2015-08-01

    compared to shamed mice but much higher level in ASC from Glipr1-/- than ASC from Glipr1+/+ male mice. Thus, concluding that the castrated Glipr1...mRNA for each ASC to the shamed Glipr1+/+ (ShWT). The amount of Glipr1 mRNA reduced approximately 40% after the castration. The amount of PLF mRNA in...Received 2 June 2011 Accepted 5 July 2011 Available online 12 July 2011 Keywords: Castration Regular diet High- fat diet Epididymal white adipose tissue

  10. Birds of Oregon: A general reference

    USGS Publications Warehouse

    Marshall, David B.; Hunter, Matthew G.; Contreras, Alan

    2003-01-01

    Birds of Oregon is the first complete reference work on Oregon's birds to be published since Gabrielson and Jewett's landmark book in 1940. This comprehensive volume includes individual accounts of the approximately 500 species now known to occur in Oregon (about 150 more than in 1940), including detailed accounts of the 353 species that regularly occur and briefer accounts of another 133 species that are considered vagrants. A separate chapter covers extirpated and questionable species as well as those which have been introduced but have not become established.

  11. Saddlepoint Approximations in Conditional Inference

    DTIC Science & Technology

    1990-06-11

    Then the inverse transform can be written as (%, Y) = (T, q(T, Z)) for some function q. When the transform is not one to one, the domain should be...general regularity conditions described at the beginning of this section hold and that the solution t1 in (9) exists. Denote the inverse transform by (X, Y...density hn(t 0 l z) are desired. Then the inverse transform (Y, ) = (T, q(T, Z)) exists and the variable v in the cumulant generating function K(u, v

  12. RLE Progress Report Number 122.

    DTIC Science & Technology

    1980-01-01

    generator capable of delivering 20 kA of current at 1.5 MV. Both the pipe and the diode region are immersed in the uniform axial magnetic field of a...it decays into a slow space-charge wave and a TM wave of the guide. The dispersion PR No. 122 100 I ____ W/Wp (wi,ki) BEAM FRAME (a) .. ( 3, k3) ka ...to regular operation with well-confined plasmas and plasma currents of approximately as high as 300 kA . We recall that the reference design value of

  13. Nano spray-dried sodium chloride and its effects on the microbiological and sensory characteristics of surface-salted cheese crackers.

    PubMed

    Moncada, Marvin; Astete, Carlos; Sabliov, Cristina; Olson, Douglas; Boeneke, Charles; Aryana, Kayanush J

    2015-09-01

    Reducing the particle size of salt to approximately 1.5 µm would increase its surface area, leading to an increased dissolution rate in saliva and more efficient transfer of ions to taste buds, and hence, perhaps, a saltier perception of foods. This has potential for reducing the salt level in surface-salted foods. Our objective was to develop a salt using a nano spray-drying method, to use the developed nano spray-dried salt in surface-salted cheese cracker manufacture, and to evaluate the microbiological and sensory characteristics of cheese crackers. Sodium chloride solution (3% wt/wt) was sprayed through a nano spray dryer. Particle sizes were determined by dynamic light scattering, and particle shapes were observed by scanning electron microscopy. Approximately 80% of the salt particles produced by the nano spray dryer, when drying a 3% (wt/wt) salt solution, were between 500 and 1,900 nm. Cheese cracker treatments consisted of 3 different salt sizes: regular salt with an average particle size of 1,500 µm; a commercially available Microsized 95 Extra Fine Salt (Cargill Salt, Minneapolis, MN) with an average particle size of 15 µm; and nano spray-dried salt with an average particle size of 1.5 µm, manufactured in our laboratory; and 3 different salt concentrations (1, 1.5, and 2% wt/wt). A balanced incomplete block design was used to conduct consumer analysis of cheese crackers with nano spray-dried salt (1, 1.5, and 2%), Microsized salt (1, 1.5, and 2%), and regular 2% salt (control, as used by industry) using 476 participants at 1 wk and 4 mo. At 4 mo, nano spray-dried salt treatments (1, 1.5, and 2%) had significantly higher preferred saltiness scores than the control (regular 2%). Also, at 4 mo, nano spray-dried salt (1.5 and 2%) had significantly more just-about-right saltiness scores than the control (regular 2%). 
    Consumers' purchase intent increased by 25% for the nano spray-dried salt at 1.5% after they were notified about the 25% reduction in sodium content of the cheese cracker. We detected significantly lower yeast counts for nano spray-dried salt treatments (1, 1.5, and 2%) at 4 mo compared with control (regular) salt (1, 1.5, and 2%). We detected no mold growth in any of the treatments at any time. At 4 mo, we found no significant differences in sensory color, aroma, crunchiness, overall liking, or acceptability scores of cheese crackers using 1.5 and 1% nano spray-dried salt compared with the control. Therefore, 25 to 50% less salt would be suitable for cheese crackers if the particle size of regular salt were reduced 3 log to form nano spray-dried salt. A 3-log reduction in sodium chloride particle size from regular salt to nano spray-dried salt increased saltiness, but a 1-log reduction in salt size from Microsized salt to nano spray-dried salt did not increase saltiness of surface-salted cheese crackers. The use of salt with particle size reduced by nano spray drying is recommended for surface-salted cheese crackers to reduce sodium intake. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Activity of selected oxidizing microbicides against the spores of Clostridium difficile: relevance to environmental control.

    PubMed

    Perez, Justo; Springthorpe, V Susan; Sattar, Syed A

    2005-08-01

    Clostridium difficile is an increasingly common nosocomial pathogen, and its spores are resistant to common environmental surface disinfectants. Many high-level disinfectants (e.g., aldehydes) are unsuitable for environmental decontamination because they need several hours of contact to be sporicidal. This study tested the potential of selected oxidative microbicides to inactivate C. difficile spores on hard surfaces in relatively short contact times at room temperature. The spores of a clinical isolate of C. difficile were tested using disks (1 cm diameter) of brushed stainless steel in a quantitative carrier test. The spores of C. sporogenes and Bacillus subtilis, common surrogates for evaluating sporicides, were included for comparison. The clostridia were grown separately in Columbia broth (CB), and B. subtilis was grown in a 1:10 dilution of CB. Each disk received 10 µL test spores with an added soil load, and the inoculum was dried. One disk each was placed in a glass vial and overlaid with 50 µL test formulation; controls received an equivalent volume of normal saline with 0.1% Tween 80. At the end of the contact time the microbicide was neutralized, the inoculum was recovered from the disks by vortexing, the eluates were membrane-filtered, and the filters were placed on plates of recovery medium. The colony-forming units (CFU) on the plates were recorded after 5 days of incubation. The performance criterion was ≥6 log(10) (≥99.9999%) reduction in the viability titer of the spores. The microbicides tested were domestic bleach with free-chlorine (FC) levels of 1000, 3000, and 5000 mg/L; an accelerated hydrogen peroxide (AHP)-based product with 70,000 mg/L H2O2 (Virox STF); chlorine dioxide (600 mg/L FC); and acidified domestic bleach (5000 mg/L FC). Acidified bleach and the highest concentration of regular bleach tested could inactivate all the spores in ≤10 minutes; Virox STF could do the same in ≤13 minutes. 
    Regular bleach with 3000 mg/L FC required up to 20 minutes to reduce the viability of all the spores tested to undetectable levels; chlorine dioxide and the lowest concentration of regular bleach tested needed approximately 30 minutes for the same level of activity. Acidified bleach, Virox STF, and regular bleach (3000-5000 mg/L FC) could inactivate C. difficile spores on hard environmental surfaces in approximately 10 to 15 minutes under ambient conditions. All of these products are strong oxidizers and should be handled with care for protection of staff, but acidified and regular bleach with high levels of FC also release chlorine gas, which can be hazardous if inhaled by staff or patients.

  15. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. 
    We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list-decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they essentially are equivalent to either a Gowers norm or an L_p norm.

  16. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (the NL-means of Buades, Coll, and Morel and the NL-TV of Gilboa and Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations to Mumford-Shah functionals to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.
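    The nonlocal weight function w that such models rely on can be illustrated on a 1-D toy signal (a minimal sketch assuming simple patch-based Gaussian weights in the spirit of NL-means; the paper's color formulation is richer): weights computed from patch similarity let the average draw on distant but similar pixels while essentially ignoring pixels across a sharp edge.

```python
import numpy as np

rng = np.random.default_rng(3)

def nl_weights(signal, half_patch=2, h=0.5):
    """Nonlocal weights w(x, y) = exp(-||P(x) - P(y)||^2 / h^2) built
    from patch similarity (1-D toy, row-normalized)."""
    n = len(signal)
    padded = np.pad(signal, half_patch, mode="edge")
    patches = np.array([padded[i:i + 2 * half_patch + 1] for i in range(n)])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / h ** 2)
    return w / w.sum(axis=1, keepdims=True)

# piecewise-constant signal plus noise: nonlocal averaging preserves the jump
clean = np.concatenate([np.zeros(32), np.ones(32)])
noisy = clean + 0.1 * rng.standard_normal(64)
denoised = nl_weights(noisy) @ noisy
```

    Because cross-edge weights are nearly zero, the jump in the signal survives the averaging, which is the behaviour that small local neighborhoods struggle to reproduce for textured or repetitive content.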

  17. Differentially Private Empirical Risk Minimization

    PubMed Central

    Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.

    2011-01-01

    Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
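    As a concrete illustration of output perturbation, the sketch below (our own toy code; the noise calibration assumes a 1-Lipschitz loss, for which the L2 sensitivity of L2-regularized ERM is 2/(n·λ)) trains a regularized logistic classifier and then adds noise with a Gamma-distributed norm in a uniformly random direction:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lam, steps=500, lr=0.1):
    """L2-regularized logistic ERM (labels in {-1,+1}) via gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - (y + 1) / 2) / n + lam * w
        w -= lr * grad
    return w

def output_perturbation(w, n, lam, eps):
    """Release w plus noise whose norm is Gamma(d, 2/(n*lam*eps)) in a
    uniformly random direction -- the output-perturbation recipe for
    eps-differential privacy with a 1-Lipschitz loss."""
    d = w.size
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))
    return w + norm * direction

# separable toy data, labels in {-1, +1}
X = np.vstack([rng.normal(+1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = train_logreg(X, y, lam=0.1)
w_priv = output_perturbation(w, n=len(y), lam=0.1, eps=1.0)
acc = np.mean(np.sign(X @ w) == y)
```

    Objective perturbation would instead add a random linear term to the training objective before optimizing; the paper's finding is that this typically costs less accuracy at the same ε.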

  18. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth by means of a regularized damped Gauss-Newton method. We propose an inversion method based on the low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments both on synthetic data sets and on field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
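    The core update of a regularized damped Gauss-Newton method can be sketched on a toy nonlinear model (our own two-parameter exponential with an analytical Jacobian; the paper inverts the conductivity-to-data map of the ground conductivity meter and additionally handles positivity and automatic regularization-parameter estimation):

```python
import numpy as np

t = np.linspace(0, 3, 40)
p_true = np.array([2.0, 1.3])
y = p_true[0] * np.exp(-p_true[1] * t)       # noiseless synthetic data

def model(p):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p):
    # exact analytical Jacobian of the toy forward model
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

def damped_gauss_newton(p0, lam=1e-3, alpha=0.5, iters=100):
    """Tikhonov-regularized, relaxed Gauss-Newton iteration:
    p <- p + alpha * (J^T J + lam I)^{-1} J^T r."""
    p = p0.astype(float)
    for _ in range(iters):
        r = y - model(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p = p + alpha * step
    return p

p_hat = damped_gauss_newton(np.array([1.0, 1.0]))
```

    On noiseless synthetic data the iteration recovers the true parameters; in the ill-posed conductivity problem, the regularization term and the relaxation parameter do the real work.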

  19. Male Alcohol use and unprotected sex with non-regular partners: Evidence from wine shops in Chennai, India

    PubMed Central

    Sivaram, S.; Srikrishnan, A.K.; Latkin, C.; Iriondo-Perez, J.; Go, V.F.; Solomon, S.; Celentano, D.D.

    2008-01-01

    Background: In India, heterosexual transmission accounts for approximately 80% of the spread of HIV, the virus that causes AIDS. Male alcohol use and its putative association with sexual risk are explored to inform HIV prevention interventions. Methods: A survey of 1196 male patrons of wine shops or bars was conducted from August 2002 to January 2003 as part of an ongoing HIV prevention trial in Chennai city in south India. In the analysis, we explored associations between covariates related to sexual behavior and alcohol use and our outcome of unprotected sexual intercourse with non-regular partners among men. Results: Nearly half (43%) of the respondents reported any unprotected sex with non-regular partners, and 24% had four or more recent sexual partners. Over 85% reported using alcohol at least 10 days a month (17% reported drinking every day). During a typical drinking day, 49% reported consuming five or more drinks. Alcohol use before sex was reported by 89% of respondents. Unprotected sex with non-regular partners was significantly higher among unmarried men (OR=3.25), those who reported irregular income (OR=1.38), who used alcohol before sex (OR=1.75), and who had higher numbers of sexual partners (OR=14.5). Conclusions: Our findings suggest that future HIV prevention interventions in India might consider discussing responsible alcohol use and its possible role in sexual risk. These interventions should particularly consider involving unmarried men and weigh the role of structural factors such as access to income in developing prevention messages. PMID:18187270

  20. Prevalence Comparison of Past-year Mental Disorders and Suicidal Behaviours in the Canadian Armed Forces and the Canadian General Population

    PubMed Central

    Zamorski, Mark A.; Boulos, David; Garber, Bryan G.

    2016-01-01

    Objective: Military personnel in Canada and elsewhere have been found to have higher rates of certain mental disorders relative to their corresponding general populations. However, published Canadian data have only adjusted for age and sex differences between the populations. Additional differences in the sociodemographic composition, labour force characteristics, and childhood trauma exposure in the populations could be driving these prevalence differences. Our objective is to compare the prevalence of past-year mental disorders and suicidal behaviours in the Canadian Armed Forces Regular Force with the rates in a representative, matched sample of Canadians in the general population (CGP). Methods: Data sources were the 2013 Canadian Forces Mental Health Survey and the 2012 Canadian Community Health Survey–Mental Health. CGP sample was restricted to match the age range, employment status, and history of chronic conditions of Regular Force personnel. An iterative proportional fitting method was used to approximate the marginal distribution of sociodemographic and childhood trauma variables in both samples. Results: Relative to the matched CGP, Regular Force personnel had significantly higher rates of past-year major depressive episode, generalized anxiety disorder, and suicide ideation. However, lower rates of alcohol use disorder were seen in Regular Force personnel relative to the matched CGP sample. Conclusions: Factors other than differences in sociodemographic composition and history of childhood trauma account for the excess burden of mental disorders and suicidal behaviours in the Canadian Armed Forces. Explanations to explore in future research include occupational trauma, selection effects, and differences in the context of administration of the 2 surveys. PMID:27270741
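    The iterative proportional fitting step used to match the two samples can be sketched on a toy two-way table (hypothetical categories and counts; the actual matching involved several sociodemographic and childhood-trauma variables):

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=50):
    """Iterative proportional fitting ('raking'): repeatedly rescale a
    cross-tabulation until its row and column sums match the target
    marginal distributions."""
    T = table.astype(float).copy()
    for _ in range(iters):
        T *= (row_targets / T.sum(axis=1))[:, None]   # match row marginal
        T *= (col_targets / T.sum(axis=0))[None, :]   # match column marginal
    return T

# toy seed table, e.g. age band x education in one of the samples
seed = np.array([[10.0, 20.0], [30.0, 40.0]])
rows = np.array([40.0, 60.0])     # target marginal for age bands
cols = np.array([55.0, 45.0])     # target marginal for education
fitted = ipf(seed, rows, cols)
```

    Each sweep rescales rows and then columns; for positive tables with consistent targets, the procedure converges quickly to a table with the prescribed marginals while preserving the seed table's interaction structure.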

  1. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. This work aims to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high-resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.
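    The Phillips-Tikhonov step itself can be sketched for a generic discrete ill-posed problem (a minimal sketch with a random toy projection matrix standing in for the KN3 geometry; the a priori profile-shape information used for parameter selection is omitted): minimize ||Ax - b||^2 + λ||Lx||^2 with L the second-difference (Phillips) smoothing operator, solved as one augmented least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50                                   # discretized emissivity profile
m = 19                                   # very limited set of lines of sight
A = rng.random((m, n))                   # toy projection geometry (not KN3's)
x_true = np.exp(-((np.arange(n) - 25) / 8.0) ** 2)   # smooth peaked profile
b = A @ x_true

# Phillips-Tikhonov: augment A with sqrt(lam)*L and solve in the least-squares sense
L = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
lam = 1e-2
x_reg = np.linalg.lstsq(
    np.vstack([A, np.sqrt(lam) * L]),
    np.concatenate([b, np.zeros(n - 2)]),
    rcond=None,
)[0]

x_min_norm = np.linalg.lstsq(A, b, rcond=None)[0]    # unregularized baseline
```

    With only 19 measurements for 50 unknowns, the unregularized minimum-norm solution is rough, while the smoothness penalty steers the fit toward profiles consistent with the prior.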

  2. Comparing methods for modelling spreading cell fronts.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E

    2014-07-21

    Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
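    The simplest of the three closures, the mean-field approximation, reduces a volume-excluding birth process to the logistic ODE dc/dt = r c (1 − c); a minimal forward-Euler sketch with illustrative parameters, not the paper's model:

```python
import numpy as np

def mean_field_density(prolif_rate, c0, t_end, dt=0.01):
    """Mean-field approximation of a volume-excluding birth process:
    lattice occupancy c(t) obeys the logistic ODE dc/dt = r * c * (1 - c),
    integrated here with forward Euler."""
    steps = int(t_end / dt)
    c = np.empty(steps + 1)
    c[0] = c0
    for k in range(steps):
        c[k + 1] = c[k] + dt * prolif_rate * c[k] * (1.0 - c[k])
    return c

# Occupancy grows from a sparse initial seeding toward carrying capacity.
c = mean_field_density(prolif_rate=1.0, c0=0.05, t_end=20.0)
```

    The mean-field closure ignores spatial correlations between neighbouring sites, which is why, as the abstract notes, it is accurate only when migration is fast relative to proliferation.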

  3. Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography

    PubMed Central

    Sánchez, Adrian A.

    2016-01-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
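    For a linear reconstruction, the kind of covariance prediction validated here reduces to a congruence transform, Cov(x) = K Cov(y) Kᵀ; the sketch below checks that prediction against a sample covariance on a toy linear map (K is an arbitrary illustrative matrix, not a reconstruction operator, and TV-penalized IIR is nonlinear, requiring a local linearization in practice):

```python
import numpy as np

# For a linear reconstruction x = K y, the image covariance follows
# directly from the data covariance: Cov(x) = K Cov(y) K^T.
rng = np.random.default_rng(1)
K = rng.standard_normal((4, 6))            # hypothetical 4x6 linear map
cov_y = 0.1 * np.eye(6)                    # uncorrelated data noise
cov_x = K @ cov_y @ K.T                    # predicted image covariance

# Validate against the sample covariance over noise realizations,
# mirroring the validation strategy described in the abstract.
y = rng.multivariate_normal(np.zeros(6), cov_y, size=200000)
x = y @ K.T
cov_sample = np.cov(x, rowvar=False)
```

    The diagonal of cov_x is the pixel variance map; off-diagonal entries quantify noise correlations, including any nonstationarity induced by the object.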

  4. Extinction threshold for spatial forest dynamics with height structure.

    PubMed

    Garcia-Domingo, Josep L; Saldaña, Joan

    2011-05-07

    We present a pair-approximation model for spatial forest dynamics defined on a regular lattice. The model assumes three possible states for a lattice site: empty (gap site), occupied by an immature tree, and occupied by a mature tree, and considers three nonlinearities in the dynamics associated with the processes of light interference, gap expansion, and recruitment. We obtain an expression for the basic reproduction number R0 which, in contrast to the one obtained under the mean-field approach, uses information about the spatial arrangement of individuals close to extinction. Moreover, we analyze the corresponding survival-extinction transition of the forest and the spatial correlations among gaps, immature trees, and mature trees close to this critical point. Predictions of the pair-approximation model are compared with those of a cellular automaton. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. SEVO (Space Environment Viability of Organics) Preliminary Results from Orbit

    NASA Technical Reports Server (NTRS)

    Cook, A.; Ehrenfreund, P.; Mattioda, A.; Quinn, R.; Ricco, A. J.; Bramall, N.; Chittenden, J.; Bryson, K.; Minelli, G.

    2012-01-01

    SEVO (Space Environment Viability of Organics) is one of two astrobiology experiments onboard the NASA Organism/Organics Exposure to Orbital Stresses (O/OREOS) cubesat, launched in November 2010. The satellite is still operational with nominal performance and records data on a regular basis. In the SEVO experiment, four astrobiologically relevant organic thin films are exposed to radiation in low-earth orbit, including the unfiltered solar spectrum from approximately 120 - 2600 nm. The thin films are contained in each of four separate micro-environments: an atmosphere containing CO2, a low relative humidity (approximately 2%) atmosphere, an inert atmosphere representative of interstellar/interplanetary space, and a SiO2 mineral surface to measure the effects of surface catalysis. The UV/Vis spectrum of each sample is monitored in situ, with a spectrometer onboard the satellite.

  6. Noise effects in nonlinear biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert

    2012-01-01

    It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise, even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show via an exact solution of the Gaussian model that the model can yield unphysical results even when such nonphysical fluctuations are rare. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to derive an approximate model that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.

  7. Estimation of noise properties for TV-regularized image reconstruction in computed tomography.

    PubMed

    Sánchez, Adrian A

    2015-09-21

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  8. Estimation of noise properties for TV-regularized image reconstruction in computed tomography

    NASA Astrophysics Data System (ADS)

    Sánchez, Adrian A.

    2015-09-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  9. Examining the association between possessing a regular source of healthcare and adherence with cancer screenings among Haitian households in Little Haiti, Miami-Dade County, Florida

    PubMed Central

    Pang, Hauchie; Cataldi, Mariel; Allseits, Emmanuelle; Ward-Peterson, Melissa; de la Vega, Pura Rodríguez; Castro, Grettel; Acuña, Juan Manuel

    2017-01-01

    Immigrant minorities regularly experience higher incidence and mortality rates of cancer. Frequently, a variety of social determinants create obstacles for those individuals to get the screenings they need. This is especially true for Haitian immigrants, a particularly vulnerable immigrant population in South Florida, who have been identified as having low cancer screening rates. While Haitian immigrants have some of the lowest cancer screening rates in the country, there is little existing literature that addresses barriers to cancer screenings among the population of Little Haiti in Miami-Dade County, Florida. The objective of this study was to evaluate the association between having a regular source of healthcare and adherence to recommended cancer screenings in the Little Haiti population of Miami. This secondary analysis utilized data collected from a random-sample, population-based household survey conducted from November 2011 to December 2012 in a geographic area approximating Little Haiti in Miami-Dade County, Florida. A total of 421 households identified as Haitian. The main exposure of interest was whether households possessed a regular source of care. Three separate outcomes were considered: adherence with colorectal cancer screening, mammogram adherence, and Pap smear adherence. Analysis was limited to households who met the age criteria for each outcome of interest. Bivariate associations were examined using the chi-square test and Fisher exact test. Binary logistic regression was used to estimate unadjusted and adjusted odds ratios (ORs) with 95% confidence intervals (CIs). After adjusting for the head of household's education and household insurance status, households without a regular source of care were significantly less likely to adhere with colorectal cancer screening (OR = 0.33; 95% CI: 0.14–0.80) or mammograms (OR = 0.28; 95% CI: 0.11–0.75). Households with insurance coverage gaps were significantly less likely to adhere with mammograms (OR = 0.40; 95% CI: 0.17–0.97) or Pap smears (OR = 0.28; 95% CI: 0.13–0.58). Our study explored adherence with multiple cancer screenings. We found a strong association between possessing a regular source of care and adherence with both colorectal cancer screening and mammograms. Targeted approaches to improving access to regular care may improve cancer screening adherence among this unique immigrant population. PMID:28796056
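    The reported odds ratios and confidence intervals follow from exponentiating logistic-regression coefficients; a minimal sketch (the coefficient and standard error below are illustrative, not the study's estimates):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for an exposure such as "no regular source of
# care" vs. screening adherence (numbers illustrative only).
or_, lo, hi = odds_ratio_ci(beta=-1.11, se=0.45)
```

    An interval that excludes 1 (as here, since both bounds fall below 1) corresponds to a statistically significant association at the 5% level.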

  10. Practice Patterns Regarding Multidisciplinary Cancer Management and Suggestions for Further Refinement: Results from a National Survey in Korea.

    PubMed

    Lee, Yun-Gyoo; Oh, Sukjoong; Kimm, Heejin; Koo, Dong-Hoe; Kim, Do Yeun; Kim, Bong-Seog; Lee, Seung-Sei

    2017-10-01

    This study was conducted to explore the process and operation of a cancer multidisciplinary team (MDT) after the reimbursement decision in Korea, and to identify ways to overcome the major barriers to effective and sustainable MDTs. Approximately 1,000 cancer specialists, including medical oncologists, surgical oncologists, radiation oncologists, pathologists, and radiologists in general hospitals in Korea were invited to complete the survey. The questionnaire covered the following topics: organizational structure of MDTs, candidates for consulting, the clinical decision-making initiative, and responsibility for dealing with legal disputes. We collected a total of 179 responses (18%) from physicians at institutions where an MDT approach was active. A surgical oncologist (91%), internist (90%), radiologist (89%), radiation oncologist (86%), pathologist (71%), and trainees (20%) regularly participated in MDT operations. Approximately 55% of respondents stated that MDTs met regularly. In cases of a split opinion, the physician in charge (69%) or chairperson (17%) made the final decision, and most (86%) stated they followed the final decision. About 15% and 32% of respondents were "very satisfied" and "satisfied," respectively, with the current MDT's operations. Among 38 institutional representatives, 34% responded that the MDT operation became more active and 18% stated an MDT was newly implemented after the reimbursement decision. The reimbursement decision invigorated MDT operations in almost half of eligible hospitals. Dissatisfaction regarding current MDTs was over 50%, and the high discordance rates regarding risk sharing suggest that it is necessary to revise the current system of MDTs.

  11. Efferent-mediated responses in vestibular nerve afferents of the alert macaque.

    PubMed

    Sadeghi, Soroush G; Goldberg, Jay M; Minor, Lloyd B; Cullen, Kathleen E

    2009-02-01

    The peripheral vestibular organs have long been known to receive a bilateral efferent innervation from the brain stem. However, the functional role of the efferent vestibular system has remained elusive. In this study, we investigated efferent-mediated responses in vestibular afferents of alert behaving primates (macaque monkey). We found that efferent-mediated rotational responses could be obtained from vestibular nerve fibers innervating the semicircular canals after conventional afferent responses were nulled by placing the corresponding canal plane orthogonal to the plane of motion. Responses were type III, i.e., excitatory for rotational velocity trapezoids (peak velocity, 320 degrees/s) in both directions of rotation, consistent with those previously reported in the decerebrate chinchilla. Responses consisted of both fast and slow components and were larger in irregular (approximately 10 spikes/s) than in regular afferents (approximately 2 spikes/s). Following unilateral labyrinthectomy (UL) on the side opposite the recording site, similar responses were obtained. To confirm the vestibular source of the efferent-mediated responses, the ipsilateral horizontal and posterior canals were plugged following the UL. Responses to high-velocity rotations were drastically reduced when the superior canal (SC), the only intact canal, was in its null position, compared with when the SC was pitched 50 degrees upward from the null position. Our findings show that vestibular afferents in alert primates show efferent-mediated responses that are related to the discharge regularity of the afferent, are of vestibular origin, and can be the result of both afferent excitation and inhibition.

  12. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
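    The recompression step described above — QR-factoring the low-rank factors, taking a truncated SVD of the small core, and reassembling — can be sketched as follows (toy sizes, not an actual FMM far-field operator):

```python
import numpy as np

def recompress(U, V, tol=1e-8):
    """Recompress a low-rank product U @ V (e.g. from ACA) with a truncated
    SVD: QR-factor each side, SVD the small core Ru @ Rv^T, and drop
    singular values below tol * s_max. Returns factors of reduced rank."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V.T)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    k = int(np.sum(s > tol * s[0]))        # truncation rank
    U2 = Qu @ (W[:, :k] * s[:k])           # absorb singular values into U2
    V2 = Zt[:k, :] @ Qv.T
    return U2, V2

# ACA-style factors with redundant columns: true rank 3, stored as rank 6.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))
U = np.hstack([A, A])
V = rng.standard_normal((6, 40))
U2, V2 = recompress(U, V)
```

    The SVD is applied only to the small core matrix, so the cost stays proportional to the ACA rank rather than to the full operator dimensions, which is what preserves the asymptotic efficiency of ACA assembly.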

  13. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  14. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  15. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. As a matter of fact, our proposed geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. Compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework offers several important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e. local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.

  16. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
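    A diagonal (Jacobi) preconditioner, the baseline the paper improves upon, slots into conjugate gradients as follows; this is a minimal sketch on a synthetic SPD system, not the paper's shift-variant imaging Hessian:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive-definite
    A, with a diagonal (Jacobi) preconditioner supplied as the elementwise
    inverse of the diagonal of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                     # apply preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a strongly varying diagonal (a crude analogue of
# the shift-variant Hessians that defeat circulant preconditioners).
n = 50
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

    Diagonal preconditioning captures the varying scale of the Hessian but none of its off-diagonal structure, which is why the paper's combined preconditioners converge faster on realistic imaging problems.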

  17. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small-viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  18. Two-level schemes for the advection equation

    NASA Astrophysics Data System (ADS)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; the advection operator is then skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for the approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
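    The explicit Lax-Wendroff scheme discussed above can be sketched in a few lines for u_t + a u_x = 0 on a periodic grid (a finite-difference sketch for brevity, whereas the paper works with finite element approximations):

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One explicit Lax-Wendroff step for u_t + a u_x = 0 on a periodic
    grid; c = a*dt/dx is the Courant number (stable for |c| <= 1)."""
    up = np.roll(u, -1)    # u_{j+1}
    um = np.roll(u, 1)     # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# Advect a smooth periodic profile exactly one full revolution of the
# unit-length domain; the profile should return close to its start.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)
c = 0.5
for _ in range(int(n / c)):    # n/c steps cover one domain length
    u = lax_wendroff_step(u, c)
```

    The scheme is second-order accurate but, being explicit, only conditionally stable, which is the trade-off against the unconditionally stable implicit schemes discussed in the abstract.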

  19. Combination of budesonide/formoterol on demand improves asthma control by reducing exercise-induced bronchoconstriction

    PubMed Central

    Lazarinis, Nikolaos; Jørgensen, Leif; Ekström, Tommy; Bjermer, Leif; Dahlén, Barbro; Pullerits, Teet; Hedlin, Gunilla; Carlsen, Kai-Håkon; Larsson, Kjell

    2014-01-01

    Background In mild asthma exercise-induced bronchoconstriction (EIB) is usually treated with inhaled short-acting β2 agonists (SABAs) on demand. Objective The hypothesis was that a combination of budesonide and formoterol on demand diminishes EIB equally to regular inhalation of budesonide and is more effective than terbutaline inhaled on demand. Methods Sixty-six patients with asthma (>12 years of age) with verified EIB were randomised to terbutaline (0.5 mg) on demand, regular budesonide (400 μg) and terbutaline (0.5 mg) on demand, or a combination of budesonide (200 μg)  + formoterol (6 μg) on demand in a 6-week, double-blind, parallel-group study (ClinicalTrials.gov identifier: NCT00989833). The patients were instructed to perform three to four working sessions per week. The main outcome was EIB 24 h after the last dosing of study medication. Results After 6 weeks of treatment with regular budesonide or budesonide+formoterol on demand the maximum post-exercise forced expiratory volume in 1 s fall, 24 h after the last medication, was 6.6% (mean; 95% CI −10.3 to −3.0) and 5.4% (−8.9 to −1.8) smaller, respectively. This effect was superior to inhalation of terbutaline on demand (+1.5%; −2.1 to +5.1). The total budesonide dose was approximately 2.5 times lower in the budesonide+formoterol group than in the regular budesonide group. The need for extra medication was similar in the three groups. Conclusions The combination of budesonide and formoterol on demand improves asthma control by reducing EIB in the same order of magnitude as regular budesonide treatment despite a substantially lower total steroid dose. Both these treatments were superior to terbutaline on demand, which did not alter the bronchial response to exercise. The results question the recommendation of prescribing SABAs as the only treatment for EIB in mild asthma. PMID:24092567

  20. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation, and scanner blur kernel estimation. In particular, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407

  1. Regularities of Spatial and Temporal Distribution in Earthquakes in the Eastern Pacific Tectonic Belt

    NASA Astrophysics Data System (ADS)

    Maslov, L. A.; Choi, D. R.

    2014-12-01

    Earthquake epicenters in the Eastern Pacific Tectonic Belt (the tectonic margin between the Pacific and the North and South American continents) are distributed symmetrically in latitude, with three minima: around the equator, at 35° N, and at 35° S, Figure 1a. In analysing the data, we looked at two characteristics - occurrence dates and epicenter latitudes. We calculated the power spectrum Sd(f) for occurrence dates and found that this spectrum can be approximated by the function Cf^α, where α<0, Figure 1b. To interpret the data, we have also shown a graph of Ln(f^α), Figure 1c. This graph shows that the exponent α is not a constant but varies with frequency. In addition, we calculated the power spectrum for epicenter latitudes Sl(f), Figure 1d, and found that this spectrum can be similarly approximated by the function Cf^β, where β<0. As with the occurrence dates, we show a graph of Ln(f^β), Figure 1e, which indicates that β also varies with frequency. This result is quite different from the well-known Gutenberg-Richter "frequency-magnitude" relation, represented in bilogarithmic coordinates by a straight line. The coefficients α and β vary approximately from -2.5 to -1.5, depending on the "length" of the calculated spectrum subset used to plot the trend line. Based on the fact that the power spectrum has the form Cf^α with -2.5<α<-1.5, we conclude that a long-time and long-distance correlation exists between earthquakes in the Eastern Pacific Tectonic Belt. In this work, we present an interpretation of these regularities in the spatial and temporal distribution of earthquakes. Earthquake data were taken from http://www.iris.edu/ieb/index.html.
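
    A Cf^α trend-line fit of the kind described above reduces to a least-squares slope in log-log coordinates; a minimal sketch of such a fit (not the authors' code):

```python
import numpy as np

def power_law_exponent(signal, dt=1.0):
    """Least-squares slope of log S(f) vs log f, i.e. the exponent
    alpha in S(f) ~ C * f**alpha, estimated from the periodogram."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    mask = (freqs > 0) & (spec > 0)  # drop the zero-frequency bin before logs
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spec[mask]), 1)
    return slope
```

    Restricting the fit to a sub-band of frequencies, as the abstract notes, changes the estimated exponent when the spectrum is not an exact power law.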

  2. Dependency Links Can Hinder the Evolution of Cooperation in the Prisoner’s Dilemma Game on Lattices and Networks

    PubMed Central

    Wang, Xuwen; Nie, Sen; Wang, Binghong

    2015-01-01

    Networks with dependency links are more vulnerable to attacks. Recent research has also demonstrated that interdependent groups support the spreading of cooperation. We study prisoner's dilemma games on spatial networks with dependency links, in which a fraction of individual pairs is selected to depend on each other. The dependent individuals can gain an extra payoff whose value lies between the payoff of mutual cooperation and the temptation to defect. This mechanism thus reflects that the dependency relation is stronger than ordinary mutual cooperation, but not strong enough to cause defection within the dependency pair. We show that the dependence of individuals hinders cooperation on regular ring networks, promotes it on the square lattice, and has no effect on random and scale-free networks. The results for the square lattice and regular ring networks are corroborated by the pair approximation. PMID:25798579
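
    The kind of spatial game studied above can be sketched (here without the dependency mechanism, which is the paper's contribution) as a weak prisoner's dilemma with synchronous imitation on a square lattice; payoff values and the update rule are illustrative:

```python
import numpy as np

def payoffs(strategies, b):
    """Accumulated payoff per site in a weak prisoner's dilemma
    (R=1, P=S=0, T=b) against the four lattice neighbours.
    strategies: array of 1 (cooperate) / 0 (defect)."""
    pay = np.zeros(strategies.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(strategies, shift, axis=(0, 1))
        # a cooperator earns 1 per cooperating neighbour, a defector earns b
        pay += np.where(strategies == 1, 1.0, b) * nb
    return pay

def imitation_step(strategies, b, rng):
    """Synchronous update: each site copies a randomly chosen neighbour's
    strategy if that neighbour earned a strictly higher payoff."""
    pay = payoffs(strategies, b)
    shift = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    nb_strat = np.roll(strategies, shift, axis=(0, 1))
    nb_pay = np.roll(pay, shift, axis=(0, 1))
    return np.where(nb_pay > pay, nb_strat, strategies)
```

    The pair approximation mentioned in the abstract replaces this explicit lattice with coupled equations for the densities of C-C, C-D, and D-D neighbour pairs.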

  3. Effects of Individual's Self-Examination on Cooperation in Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Guan, Jian-Yue; Sun, Jin-Tu; Wang, Ying-Hai

    We study a spatial evolutionary prisoner's dilemma game on two regular networks: a one-dimensional ring and a two-dimensional square lattice. The individuals located on the sites of the networks can either cooperate with their neighbors or defect. The effects of individual self-examination are introduced. Using Monte Carlo simulations and the pair approximation method, we investigate the average density of cooperators in the stationary state for various values of the payoff parameter b and the time interval Δt. The effect on cooperation of the fraction p of players in the system who use self-examination is also discussed. It is shown that, compared with the case without self-examination, the persistence of cooperation is inhibited when the payoff parameter b is small, and is most strongly inhibited at certain Δt (Δt > 0) or p (p > 0); when b is large, the emergence of cooperation can be remarkably enhanced, most strongly at Δt = 0 or p = 1.

  4. Physiological time-series analysis: what does regularity quantify?

    NASA Technical Reports Server (NTRS)

    Pincus, S. M.; Goldberger, A. L.

    1994-01-01

    Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
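
    ApEn can be computed directly from the formal definition referred to above; a compact O(N²) sketch with the conventional tolerance r = 0.2 × standard deviation of the series:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series:
    ApEn = Phi_m - Phi_{m+1}, where Phi_m is the average log-frequency
    of m-length template matches within tolerance r (Chebyshev
    distance).  Self-matches are included, as in the original
    definition."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # for each template, the fraction of templates within tolerance r
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (dist <= r).sum(axis=1) / n
        return np.log(c).mean()
    return phi(m) - phi(m + 1)
```

    A perfectly regular series yields ApEn near zero, while an irregular one yields a larger value, which is the ordering the statistic is designed to quantify.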

  5. Investigation to improve the resolution and range of a light imaging system for very thick tissues

    NASA Astrophysics Data System (ADS)

    Wist, Abund O.; Moon, Peter; Herr, Steven L.; Fatouros, Panos P.

    1995-05-01

    A high resolution light imaging system has been developed utilizing a HeNe laser (632.8 nm, 32 mW) and a receiver with post-collimation, mounted on an x, y table to scan the object. The image can be either recorded on film or stored in a computer for display on a terminal. Tests show that the system in the regular mode is capable of detecting the spine and soft tissues in anesthetized mice, and of fully transilluminating an adult skull bone with a resolution for details better than one-third of a millimeter. In teeth, all regular carious lesions, including incipient lesions larger than one-third of a millimeter, can be seen at the front or the back of the tooth, none of which could be detected by dental x-ray systems. Applying a new high resolution mode, the resolution in teeth can be increased to less than 0.1 mm. Some difficulty still exists in detecting small lesions on occlusal or approximal surfaces.

  6. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline freeform deformation (p<0.05 in amygdala and cortical gray matter).
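
    Wendland kernels are compactly supported, positive-definite radial basis functions; a sketch of the widely used C2 member (the abstract does not specify which member of the family is employed):

```python
import numpy as np

def wendland_c2(r, support=1.0):
    """Wendland's C2 compactly supported radial basis function,
    phi(r) = (1 - r)^4 (4r + 1) for r in [0, 1] and 0 outside,
    positive definite in up to three dimensions.  `support` rescales
    the radius of the compact support."""
    q = np.asarray(r, dtype=float) / support
    return np.where(q < 1.0, (1.0 - q) ** 4 * (4.0 * q + 1.0), 0.0)
```

    Compact support keeps the interpolation matrices sparse, which is what gives the kernel B-spline-like computational cost while still inducing a proper reproducing-kernel norm.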

  7. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    PubMed

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  8. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce this ill-posedness. However, the region of the PSR is usually hard to determine and can be easily affected by subjective judgment. Hence, we developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
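
    The MLEM core of such a method is the standard multiplicative update for a nonnegative linear inverse problem; a dense toy sketch of that step alone, omitting the SPN forward model and the filter function described in the abstract:

```python
import numpy as np

def mlem(A, y, n_iter=200, x0=None):
    """Classic MLEM multiplicative update for y ~ Poisson(A x):
        x <- x * A^T (y / (A x)) / (A^T 1)
    Preserves nonnegativity at every iteration."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float).copy()
    sens = A.T @ np.ones(m)  # sensitivity vector A^T 1
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x
```

    For consistent nonnegative data the iterates converge to the maximum-likelihood solution, which is why the abstract pairs the update with a filter to stabilize it on noisy, under-determined BLT systems.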

  9. Field measurement of nitromethane from automotive emissions at a busy intersection using proton-transfer-reaction mass spectrometry

    NASA Astrophysics Data System (ADS)

    Inomata, Satoshi; Fujitani, Yuji; Fushimi, Akihiro; Tanimoto, Hiroshi; Sekimoto, Kanako; Yamada, Hiroyuki

    2014-10-01

    Field measurements of seven nitro-organic compounds, including nitromethane, and ten related volatile organic compounds were carried out using proton-transfer-reaction mass spectrometry at a busy intersection in the city of Kawasaki, Japan, from 26 February to 6 March 2011. Among the nitro-organic compounds, nitromethane was usually observed along with air pollutants emitted from automobiles. The mixing ratios of nitromethane varied substantially and sometimes clearly varied at an approximately constant interval. The interval corresponded to the cycle of the traffic signals at the intersection, and the regular peaks of nitromethane concentration were caused by emissions from diesel trucks passing at high speed. In addition to the regular peaks, sharp increases in nitromethane concentration were often observed irregularly from diesel trucks accelerating in front of the measurement site. For the other nitro-organic compounds, such as nitrophenol, nitrocresol, dihydroxynitrobenzene, nitrobenzene, nitrotoluene, and nitronaphthalene, most of the data fluctuated within the detection limits.

  10. Regular and Random Components in Aiming-Point Trajectory During Rifle Aiming and Shooting

    PubMed Central

    Goodman, Simon; Haufler, Amy; Shim, Jae Kun; Hatfield, Bradley

    2009-01-01

    The authors examined the kinematic qualities of the aiming trajectory as related to expertise. Two phases of the trajectory were distinguished. The first phase was a regular approximation to the target accompanied by substantial fluctuations obeying the Weber-Fechner law. During the first phase, shooters did not initiate triggering despite any random closeness of the aiming point (AP) to the target. In the second phase, beginning 0.6-0.8 s before the trigger pull, shooters applied a different control strategy: they waited until the following random fluctuation brought the AP closer to the target and then initiated triggering. This strategy is tenable when the sensitivity of perception is greater than the precision of the motor action, and could be considered a case of stochastic resonance. The strategies used by novices and experts differed only in the values of parameters. The authors present an analytical model explaining the main properties of shooting. PMID:19508963

  11. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, on the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
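
    Median splitting is one natural split-point strategy for a k-d tree decomposition of this kind, since it balances the point count between the two halves exactly; a minimal serial sketch (not the authors' parallel implementation):

```python
import numpy as np

def kd_partition(points, depth=0, max_points=64):
    """Recursively split a point set with a k-d tree, cycling the split
    axis by depth and using the median as the split point.  Returns the
    list of leaf subsets, balanced to within one point."""
    if len(points) <= max_points:
        return [points]
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis], kind="stable")
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    return (kd_partition(left, depth + 1, max_points)
            + kd_partition(right, depth + 1, max_points))
```

    A regular grid decomposition, by contrast, splits space rather than the point count, which is what produces the load imbalance on clustered cosmology data.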

  12. A Pacific population's access to and use of health services in Dunedin.

    PubMed

    Sopoaga, Faafetai; Parkin, Lianne; Gray, Andrew

    2012-10-26

    Pacific peoples in New Zealand (mostly of Samoan, Tongan, or Cook Islands origin) have poor health compared to the total New Zealand population. Understanding their access to and use of health services is important in addressing this. A survey of Pacific peoples in Dunedin obtained information about their access to and use of health services. 372 questionnaires were analysed. Approximately one-quarter of respondents did not have a regular doctor or health service. At least 50% used hospital emergency services for non-urgent illnesses. Nearly two-thirds used a "walk-in" primary care service. A significant proportion of Pacific peoples did not have a regular GP or health service in Dunedin. It was surprising that students were more likely to be in this category, given that student health services should be more affordable. A "walk-in" primary care facility has a role in the delivery of primary care services. Pacific organisations can assist primary care providers to encourage access to and the appropriate use of health services.

  13. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  14. Regularized Stokeslet representations for the flow around a human sperm

    NASA Astrophysics Data System (ADS)

    Ishimoto, Kenta; Gadelha, Hermes; Gaffney, Eamonn; Smith, David; Kirkman-Brown, Jackson

    2017-11-01

    The sperm flagellum does not simply push the sperm. We have established a new theoretical scheme for the dimensional reduction of swimming sperm dynamics, via high-frame-rate digital microscopy of a swimming human sperm cell. This has allowed the reconstruction of the flagellar waveform as a limit cycle in a phase space of PCA modes. With this waveform, boundary element numerical simulation has successfully captured fine-scale sperm swimming trajectories. Further analysis of the flow field around the cell has also demonstrated a pusher-type time-averaged flow, though the instantaneous flow field can vary in a more complicated manner, even temporarily pulling the sperm. Applying PCA to the flow field, we have further found that a small number of PCA modes explain the temporal patterns of the flow, whose core features are well approximated by a few regularized Stokeslets. Such representations provide a methodology for coarse-graining the time-dependent flow around a human sperm and other flagellated microorganisms, for use in developing population-level models that retain individual cell dynamics.
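
    A single regularized Stokeslet of the kind used in these representations can be written down compactly (Cortez-style blob regularization; the regularization parameter eps below is illustrative):

```python
import numpy as np

def regularized_stokeslet(x, x0, f, eps, mu=1.0):
    """Velocity at point x induced by a regularized point force f at x0
    in Stokes flow with viscosity mu.  Unlike the singular Stokeslet,
    the field is finite everywhere, including at the forcing point."""
    x, x0, f = (np.asarray(a, dtype=float) for a in (x, x0, f))
    r = x - x0
    r2 = r @ r
    d3 = (r2 + eps ** 2) ** 1.5  # regularized distance cubed
    coef = 1.0 / (8.0 * np.pi * mu)
    # isotropic (delta_ij) term plus dyadic r r^T term of the Oseen tensor
    return coef * ((r2 + 2.0 * eps ** 2) / d3 * f + (r @ f) / d3 * r)
```

    Superposing a few such Stokeslets with fitted strengths and positions reproduces the leading PCA modes of the measured flow, which is the coarse-graining the abstract describes.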

  15. Thyrotropin secretion in mild and severe primary hypothyroidism is distinguished by amplified burst mass and Basal secretion with increased spikiness and approximate entropy.

    PubMed

    Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D

    2010-02-01

    Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.

  16. Analyzing contraction of full thickness skin grafts in time: Choosing the donor site does matter.

    PubMed

    Stekelenburg, Carlijn M; Simons, Janine M; Tuinebreijer, Wim E; van Zuijlen, Paul P M

    2016-11-01

    In reconstructive burn surgery, full thickness skin grafts (FTSGs) are frequently preferred over split thickness skin grafts because they are known to provide superior esthetic results and less contraction. However, the long-term contraction rate of FTSGs has never been studied. The surface area of FTSGs of consecutive patients was measured during surgery and at regular follow-up (at approximately 1, 6, 13, and 52 weeks postoperatively) by means of 3D stereophotogrammetry. Linear regression analysis was conducted to assess the influence of age, recipient and donor site, and operation indication. 38 FTSGs in 26 patients, with a mean age of 37.4 years (SD 21.9), were evaluated. A significant reduction in remaining surface area to 79.1% was observed after approximately 6 weeks (p=0.002), to 85.9% after approximately 13 weeks (p=0.040), and to 91.5% after approximately 52 weeks (p=0.033). Grafts excised from the trunk showed significantly less contraction than grafts excised from the extremities (94.0% vs. 75.7%, p=0.036). FTSGs showed a significant reduction in surface area, followed by a relaxation phase, but remained significantly smaller. Furthermore, the trunk should be preferred over the extremities as donor site location. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  17. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks in big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultrahigh dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
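
    The adaptive-ridge idea behind such L0 approximations can be sketched for an ordinary linear model: iterating a weighted ridge solve with weights w_j = 1/(b_j² + ε²) makes the quadratic penalty mimic the L0 count. This is a generic illustration of the technique, not the published MATLAB package (a GLM would replace the direct solve with an IRLS step):

```python
import numpy as np

def adaptive_ridge_l0(X, y, lam=1.0, eps=1e-4, n_iter=50):
    """Adaptive-ridge iteration approximating an L0 penalty.
    With w_j = 1/(b_j^2 + eps^2), the penalty sum_j w_j b_j^2 behaves
    like the number of nonzero coefficients, so small coefficients are
    driven to (numerically) zero while large ones are barely shrunk."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        w = 1.0 / (b ** 2 + eps ** 2)
        b = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    return b
```

    Each step is a convex (ridge) problem, which is the "sequential convex optimizations" structure the abstract refers to.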

  18. Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, Remi

    1992-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
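
    The stencil-selection principle behind ENO reconstruction can be sketched in one dimension: grow the stencil toward the side whose divided difference is smaller in magnitude, so it avoids crossing a discontinuity when it can. This is only the classical 1-D idea, not the paper's multivariate finite-element-mesh algorithm:

```python
import numpy as np

def newton_divided_difference(x, f):
    """Highest-order Newton divided difference of the data (x, f)."""
    c = np.asarray(f, dtype=float).copy()
    x = np.asarray(x, dtype=float)
    for j in range(1, len(x)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    return c[-1]

def eno_stencil(x, f, i, k):
    """Grow a k-point stencil around node i, at each step extending to
    the side with the smaller-magnitude divided difference; large
    divided differences flag a nearby discontinuity."""
    left = right = i
    for _ in range(k - 1):
        can_left, can_right = left > 0, right < len(x) - 1
        if can_left and can_right:
            dl = newton_divided_difference(x[left - 1:right + 1], f[left - 1:right + 1])
            dr = newton_divided_difference(x[left:right + 2], f[left:right + 2])
            if abs(dl) <= abs(dr):
                left -= 1
            else:
                right += 1
        elif can_left:
            left -= 1
        else:
            right += 1
    return list(range(left, right + 1))
```

    On a mesh with a jump, the selected stencil stays on the smooth side of the discontinuity, which is what suppresses the oscillations a fixed centered stencil would produce.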

  19. The Hawking-Penrose Singularity Theorem for C 1,1-Lorentzian Metrics

    NASA Astrophysics Data System (ADS)

    Graf, Melanie; Grant, James D. E.; Kunzinger, Michael; Steinbauer, Roland

    2018-06-01

    We show that the Hawking-Penrose singularity theorem, and the generalisation of this theorem due to Galloway and Senovilla, continue to hold for Lorentzian metrics that are of C 1,1-regularity. We formulate appropriate weak versions of the strong energy condition and genericity condition for C 1,1-metrics, and of C 0-trapped submanifolds. By regularisation, we show that, under these weak conditions, causal geodesics necessarily become non-maximising. This requires a detailed analysis of the matrix Riccati equation for the approximating metrics, which may be of independent interest.

  20. Nonequilibrium diffusive gas dynamics: Poiseuille microflow

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail V.; Otto, Jasmine T.

    2018-05-01

    We test the recently developed hierarchy of diffusive moment closures for gas dynamics together with the near-wall viscosity scaling on the Poiseuille flow of argon and nitrogen in a one micrometer wide channel, and compare it against the corresponding Direct Simulation Monte Carlo computations. We find that the diffusive regularized Grad equations with viscosity scaling provide the most accurate approximation to the benchmark DSMC results. At the same time, the conventional Navier-Stokes equations without the near-wall viscosity scaling are found to be the least accurate among the tested closures.

  1. Comprehensive whole-body counter surveys of Miharu-town school children for three consecutive years after the Fukushima NPP accident.

    PubMed

    Hayano, Ryugo S; Tsubokura, Masaharu; Miyazaki, Makoto; Satou, Hideo; Sato, Katsumi; Masaki, Shin; Sakuma, Yu

    2014-01-01

    Comprehensive whole-body counter surveys covering over 93% of the school children between the ages of 6 and 15 in Miharu town, Fukushima Prefecture, have been conducted for three consecutive years, in 2011, 2012 and 2013. Although the results of a questionnaire indicate that approximately 60% of the children have been regularly eating local or home-grown rice, in 2012 and 2013 no child was found to exceed the (137)Cs detection limit of 300 Bq/body.

  2. Sound propagation in a duct of periodic wall structure. [numerical analysis

    NASA Technical Reports Server (NTRS)

    Kurze, U.

    1978-01-01

    A boundary condition, which accounts for the coupling in the sections behind the duct boundary, is given for a sound-absorbing duct with a periodic structure of the wall lining and regular partition walls. The sound field in the duct is suitably described by the method of differences. For locally active walls this yields an explicit approximate solution for the propagation constant. Coupling may be accounted for by the method of differences in a clear manner. Numerical results agree with measurements and yield information which has technical applications.

  3. Numerical solution of inverse scattering for near-field optics.

    PubMed

    Bao, Gang; Li, Peijun

    2007-06-01

    A novel regularized recursive linearization method is developed for a two-dimensional inverse medium scattering problem that arises in near-field optics, which reconstructs the scatterer of an inhomogeneous medium located on a substrate from data accessible through photon scanning tunneling microscopy experiments. Based on multiple frequency scattering data, the method starts from the Born approximation corresponding to weak scattering at a low frequency, and each update is obtained by continuation on the wavenumber from solutions of one forward problem and one adjoint problem of the Helmholtz equation.

  4. The superficial temporal fat pad and its ramifications for temporalis muscle construction in facial approximation.

    PubMed

    Stephan, Carl N; Devine, Matthew

    2009-10-30

    The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because they increase attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue the temporal region of nineteen caucasoid human cadavers (10 females, 9 males; mean age=84 years, s=9 years, range=58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components quantified in nine other cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently undertaken in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.

  5. Development of a Fully Automated, Web-Based, Tailored Intervention Promoting Regular Physical Activity Among Insufficiently Active Adults With Type 2 Diabetes: Integrating the I-Change Model, Self-Determination Theory, and Motivational Interviewing Components

    PubMed Central

    Moreau, Michel; Gagnon, Marie-Pierre

    2015-01-01

    Background: Type 2 diabetes is a major challenge for Canadian public health authorities, and regular physical activity is a key factor in the management of this disease. Given that fewer than half of people with type 2 diabetes in Canada are sufficiently active to meet the recommendations, effective programs targeting the adoption of regular physical activity (PA) are in demand for this population. Many researchers argue that Web-based, tailored interventions targeting PA are a promising and effective avenue for sedentary populations like Canadians with type 2 diabetes, but few have described the detailed development of this kind of intervention. Objective: This paper aims to describe the systematic development of the Web-based, tailored intervention, Diabète en Forme, promoting regular aerobic PA among adult Canadian francophones with type 2 diabetes. This paper can be used as a reference for health professionals interested in developing similar interventions. We also explored the integration of theoretical components derived from the I-Change Model, Self-Determination Theory, and Motivational Interviewing, which is a potential path for enhancing the effectiveness of tailored interventions on PA adoption and maintenance. Methods: The intervention development was based on the program-planning model for tailored interventions of Kreuter et al. An additional step was added to the model to evaluate the intervention's usability prior to the implementation phase. An 8-week intervention was developed. The key components of the intervention include a self-monitoring tool for PA behavior, a weekly action planning tool, and eight tailored motivational sessions based on attitude, self-efficacy, intention, type of motivation, PA behavior, and other constructs and techniques. Usability evaluation, a step added to the program-planning model, helped to make several improvements to the intervention prior to the implementation phase. 
Results The intervention development cost was about CDN $59,700 and took approximately 54 full-time weeks. The intervention officially started on September 29, 2014. Out of 2300 potential participants targeted for the tailored intervention, approximately 530 people visited the website, 170 people completed the registration process, and 83 corresponded to the selection criteria and were enrolled in the intervention. Conclusions Usability evaluation is an essential step in the development of a Web-based tailored intervention in order to make pre-implementation improvements. The effectiveness and relevance of the theoretical framework used for the intervention will be analyzed following the process and impact evaluation. Implications for future research are discussed. PMID:25691346

  6. Development of a fully automated, web-based, tailored intervention promoting regular physical activity among insufficiently active adults with type 2 diabetes: integrating the I-change model, self-determination theory, and motivational interviewing components.

    PubMed

    Moreau, Michel; Gagnon, Marie-Pierre; Boudreau, François

    2015-02-17

    Type 2 diabetes is a major challenge for Canadian public health authorities, and regular physical activity is a key factor in the management of this disease. Given that fewer than half of people with type 2 diabetes in Canada are sufficiently active to meet the recommendations, effective programs targeting the adoption of regular physical activity (PA) are in demand for this population. Many researchers argue that Web-based, tailored interventions targeting PA are a promising and effective avenue for sedentary populations like Canadians with type 2 diabetes, but few have described the detailed development of this kind of intervention. This paper aims to describe the systematic development of the Web-based, tailored intervention, Diabète en Forme, promoting regular aerobic PA among adult Canadian francophones with type 2 diabetes. This paper can be used as a reference for health professionals interested in developing similar interventions. We also explored the integration of theoretical components derived from the I-Change Model, Self-Determination Theory, and Motivational Interviewing, which is a potential path for enhancing the effectiveness of tailored interventions on PA adoption and maintenance. The intervention development was based on the program-planning model for tailored interventions of Kreuter et al. An additional step was added to the model to evaluate the intervention's usability prior to the implementation phase. An 8-week intervention was developed. The key components of the intervention include a self-monitoring tool for PA behavior, a weekly action planning tool, and eight tailored motivational sessions based on attitude, self-efficacy, intention, type of motivation, PA behavior, and other constructs and techniques. Usability evaluation, a step added to the program-planning model, helped to make several improvements to the intervention prior to the implementation phase. 
The intervention development cost was about CDN $59,700 and took approximately 54 full-time weeks. The intervention officially started on September 29, 2014. Out of 2300 potential participants targeted for the tailored intervention, approximately 530 people visited the website, 170 people completed the registration process, and 83 corresponded to the selection criteria and were enrolled in the intervention. Usability evaluation is an essential step in the development of a Web-based tailored intervention in order to make pre-implementation improvements. The effectiveness and relevance of the theoretical framework used for the intervention will be analyzed following the process and impact evaluation. Implications for future research are discussed.

  7. Describing the dynamics of processes consisting simultaneously of Poissonian and non-Poissonian kinetics

    NASA Astrophysics Data System (ADS)

    Eule, S.; Friedrich, R.

    2013-03-01

    Dynamical processes exhibiting non-Poissonian kinetics with nonexponential waiting times are frequently encountered in nature. Examples are biochemical processes like gene transcription which are known to involve multiple intermediate steps. However, often a second process, obeying Poissonian statistics, affects the first one simultaneously, such as the degradation of mRNA in the above example. The aim of the present article is to provide a concise treatment of such random systems which are affected by regular and non-Poissonian kinetics at the same time. We derive the governing master equation and provide a controlled approximation scheme for this equation. The simplest approximation leads to generalized reaction rate equations. For a simple model of gene transcription we solve the resulting equation and show how the time evolution is influenced significantly by the type of waiting time distribution assumed for the non-Poissonian process.
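The two-channel setup described above can be illustrated with a minimal stochastic simulation: production events arrive with gamma-distributed waiting times (a stand-in for multi-step transcription), while degradation is Poissonian with exponential lifetimes. This is a generic sketch, not the authors' model; all parameter values are illustrative.

```python
import random

def simulate(t_end, shape=4, rate=4.0, k_deg=1.0, seed=1):
    """One trajectory of a two-channel process: production events arrive with
    Gamma-distributed waiting times (non-Poissonian, multi-step transcription),
    and each product is assigned an exponential lifetime (Poissonian decay).
    Returns the copy number at time t_end."""
    random.seed(seed)
    t, alive_at_end = 0.0, 0
    while True:
        t += random.gammavariate(shape, 1.0 / rate)  # mean waiting time = shape/rate
        if t > t_end:
            return alive_at_end
        if t + random.expovariate(k_deg) > t_end:    # does this copy survive past t_end?
            alive_at_end += 1

n = simulate(50.0)
```

Varying `shape` at fixed mean waiting time changes how regular the production events are, which is exactly the kind of waiting-time-distribution effect the abstract discusses.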

  8. Neuroendocrine Causes of Amenorrhea—An Update

    PubMed Central

    Fourman, Lindsay T.

    2015-01-01

    Context: Secondary amenorrhea—the absence of menses for three consecutive cycles—affects approximately 3–4% of reproductive age women, and infertility—the failure to conceive after 12 months of regular intercourse—affects approximately 6–10%. Neuroendocrine causes of amenorrhea and infertility, including functional hypothalamic amenorrhea and hyperprolactinemia, constitute a majority of these cases. Objective: In this review, we discuss the physiologic, pathologic, and iatrogenic causes of amenorrhea and infertility arising from perturbations in the hypothalamic-pituitary-adrenal axis, including potential genetic causes. We focus extensively on the hormonal mechanisms involved in disrupting the hypothalamic-pituitary-ovarian axis. Conclusions: A thorough understanding of the neuroendocrine causes of amenorrhea and infertility is critical for properly assessing patients presenting with these complaints. Prompt evaluation and treatment are essential to prevent loss of bone mass due to hypoestrogenemia and/or to achieve the time-sensitive treatment goal of conception. PMID:25581597

  9. Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2016-11-01

    We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
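The core of a GFD discretization is a local weighted least-squares fit of a Taylor expansion over a scattered cloud of neighbors. A minimal sketch follows; the inverse-distance weighting, 16-point cloud, and test function are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def gfd_derivatives(xc, yc, fc, xs, ys, fs):
    """Estimate (fx, fy, fxx, fyy, fxy) at (xc, yc) from scattered neighbor
    values by a weighted least-squares fit of a 2nd-order Taylor expansion."""
    dx, dy = xs - xc, ys - yc
    # Taylor columns: fx, fy, fxx/2, fyy/2, fxy (constant eliminated by fs - fc)
    A = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    w = np.sqrt(1.0 / (dx**2 + dy**2))   # closer neighbors weigh more
    coef, *_ = np.linalg.lstsq(A * w[:, None], (fs - fc) * w, rcond=None)
    return coef

# 16-point cloud around (0.3, -0.2), tested on the quadratic f(x, y) = x^2 + 2 y^2
rng = np.random.default_rng(0)
xs = 0.3 + 0.2 * rng.standard_normal(16)
ys = -0.2 + 0.2 * rng.standard_normal(16)
f = lambda x, y: x**2 + 2 * y**2
d = gfd_derivatives(0.3, -0.2, f(0.3, -0.2), xs, ys, f(xs, ys))
# for this quadratic the fit is exact: d ≈ [0.6, -0.8, 2.0, 4.0, 0.0]
```

Because the expansion is second order, any quadratic field is reproduced exactly, which makes such a test a convenient sanity check on a cloud's conditioning.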

  10. Speed-up of the volumetric method of moments for the approximate RCS of large arbitrary-shaped dielectric targets

    NASA Astrophysics Data System (ADS)

    Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe

    2017-08-01

A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments then becomes efficient because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
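The ray casting classification used above reduces, in two dimensions, to the classic even-odd crossing test; a minimal sketch (the paper's 3D parametric-surface machinery and hierarchical acceleration are omitted):

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray casting: count crossings of a ray from (px, py) toward +x
    with the polygon edges; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):               # edge straddles the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:                     # crossing lies to the right
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon(1, 1, square))   # True: centre is inside
print(point_in_polygon(3, 1, square))   # False: outside
```

The cost is linear in the number of evaluated points times the surface complexity, which is why a hierarchical scheme pays off when classifying a dense grid of parallelepiped centres.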

  11. Pion distribution amplitude from lattice QCD.

    PubMed

    Cloët, I C; Chang, L; Roberts, C D; Schmidt, S M; Tandy, P C

    2013-08-30

A method is explained through which a pointwise accurate approximation to the pion's valence-quark distribution amplitude (PDA) may be obtained from a limited number of moments. In connection with the single nontrivial moment accessible in contemporary simulations of lattice-regularized QCD, the method yields a PDA that is a broad concave function whose pointwise form agrees with that predicted by Dyson-Schwinger equation analyses of the pion. Under leading-order evolution, the PDA remains broad to energy scales in excess of 100 GeV, a feature which signals persistence of the influence of dynamical chiral symmetry breaking. Consequently, the asymptotic distribution φπ(asy)(x) is a poor approximation to the pion's PDA at all such scales that are either currently accessible or foreseeable in experiments on pion elastic and transition form factors. Thus, related expectations based on φπ(asy)(x) should be revised.
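For reference, the asymptotic distribution mentioned here has the standard closed form, and the moments from which a PDA is reconstructed are conventionally defined as follows (this is textbook QCD normalization, not a detail quoted from the paper):

```latex
\varphi_\pi^{\mathrm{asy}}(x) = 6\,x\,(1-x), \qquad
\int_0^1 \varphi_\pi(x)\,\mathrm{d}x = 1, \qquad
\langle \xi^m \rangle = \int_0^1 (2x-1)^m\, \varphi_\pi(x)\,\mathrm{d}x .
```

A "broad concave" PDA is flatter than 6x(1-x) near x = 1/2 while remaining concave on (0, 1), which is why a single nontrivial moment can already discriminate between the two shapes.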

  12. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
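The Phillips-Twomey scheme mentioned above amounts to Tikhonov regularization with a second-difference smoothing matrix. A minimal sketch on a synthetic first-kind Fredholm problem follows; the Gaussian kernel, grid, and regularization parameter are illustrative, not the paper's.

```python
import numpy as np

def phillips_twomey(K, g, lam):
    """Minimize ||K f - g||^2 + lam ||D f||^2, with D the second-difference
    operator (Phillips-Twomey smoothing), via the normal equations."""
    n = K.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n rows of [1, -2, 1]
    return np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ g)

# synthetic ill-posed problem: Gaussian blur of a smooth density
x = np.linspace(0.0, 1.0, 60)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005) * (x[1] - x[0])
f_true = np.exp(-((x - 0.4) ** 2) / 0.01)
g = K @ f_true
f_est = phillips_twomey(K, g, 1e-8)
```

Without the `lam * D.T @ D` term the normal equations are numerically singular; the smoothing penalty is what makes the inversion of noisy backscatter data stable.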

  13. Scattering of clusters of spherical particles—Modeling and inverse problem solution in the Rayleigh-Gans approximation

    NASA Astrophysics Data System (ADS)

    Eliçabe, Guillermo E.

    2013-09-01

In this work, an exact scattering model for a system of clusters of spherical particles, based on the Rayleigh-Gans approximation, has been parameterized in such a way that it can be solved in inverse form using Tikhonov regularization to obtain the morphological parameters of the clusters, namely the average number of particles per cluster, the size of the primary spherical units that form the cluster, and the Discrete Distance Distribution Function, from which the z-average square radius of gyration of the system of clusters is obtained. The methodology is validated through a series of simulated and experimental examples of x-ray and light scattering, which show that it works satisfactorily in non-ideal situations: error in the measurements, error in the model, and the several other non-idealities present in the experimental cases.

  14. Association between obesity and depressive symptoms among U.S. Military active duty service personnel, 2002.

    PubMed

    Kress, Amii M; Peterson, Michael R; Hartzell, Michael C

    2006-03-01

    The association between obesity and depression remains equivocal. The purpose of this study was to describe the prevalence and association of obesity and depressive symptoms among military personnel. A cross-sectional analysis was performed using data (N=10,040) from the U.S. Department of Defense (DoD) Survey of Health-Related Behaviors. Prevalence odds ratios were calculated to describe the association between obesity and depressive symptoms. Approximately 10% of active duty men and 4% of active duty women were obese. The prevalence of depressive symptoms ranged from approximately 16% of overweight men to 49% of obese women. Obese men and women and underweight men had increased odds of depressive symptoms as compared with normal-weight individuals. The DoD should emphasize prevention and regular screening for obesity and depressive symptoms to improve readiness and reduce health care costs and disease burden in this cohort.

  15. Insertable cardiac event recorder in detection of atrial fibrillation after cryptogenic stroke: an audit report.

    PubMed

    Etgen, Thorleif; Hochreiter, Manfred; Mundel, Markus; Freudenberger, Thomas

    2013-07-01

Atrial fibrillation (AF) is the most frequent risk factor in ischemic stroke but often remains undetected. We analyzed the value of an insertable cardiac event recorder in the detection of AF in a 1-year cohort of patients with cryptogenic ischemic stroke. All patients with cryptogenic stroke and eligibility for oral anticoagulation were offered the insertion of a cardiac event recorder. Regular follow-up for 1 year recorded the incidence of AF. Of the 393 patients with ischemic stroke, 65 (16.5%) had a cryptogenic stroke, and in 22 eligible patients, an event recorder was inserted. After 1 year, AF was detected in 6 of 22 patients (27.3%). These preliminary data show that insertion of a cardiac event recorder was feasible in approximately one third of patients with cryptogenic stroke and detected new AF in approximately one quarter of these patients.

  16. The high-resolution cross-dispersed echelle white-pupil spectrometer of the McDonald Observatory 2.7-m telescope

    NASA Technical Reports Server (NTRS)

    Tull, Robert G.; Macqueen, Phillip J.; Sneden, Christopher; Lambert, David L.

    1995-01-01

A new high-resolution cross-dispersed echelle spectrometer has been installed at the coude focus of the McDonald Observatory 2.7-m telescope. Its primary goal was to gather spectra simultaneously over as much of the spectral range 3400 Å to 1 micrometer as practical, at a resolution R ≡ λ/Δλ ≈ 60,000 with a signal-to-noise ratio of approximately 100 for stars down to magnitude 11, using 1-h exposures. In the instrument as built, two exposures are all that are needed to cover the full range. Featuring a white-pupil design, a fused silica prism cross disperser, and a folded Schmidt camera with a Tektronix 2048×2048 CCD used at either of two foci, it has been in regularly scheduled operation since 1992 April. Design details and performance are described.

  17. Sparse Covariance Matrix Estimation by DCA-Based Algorithms.

    PubMed

    Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham

    2017-11-01

This letter proposes a novel approach using ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithms), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed through simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
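As an illustration of the DC idea (not the authors' algorithm), the capped-ℓ1 surrogate min(1, |x_i|/θ) for an ℓ0 term admits a DC split whose DCA subproblems reduce to closed-form soft-thresholding. The sketch below applies it to a toy vector-denoising objective rather than covariance estimation; all parameter values are made up.

```python
import numpy as np

def dca_capped_l1(a, lam=1.0, theta=0.5, n_iter=30):
    """DCA for  min_x ||x - a||^2 + lam * sum_i min(1, |x_i|/theta).
    DC split:  g(x) = ||x - a||^2 + (lam/theta) ||x||_1      (convex)
               h(x) = (lam/theta) * sum_i max(|x_i| - theta, 0)  (convex)
    Each iteration linearizes h at the current point and solves the convex
    subproblem in closed form by soft-thresholding."""
    x = a.copy()
    t = lam / theta
    for _ in range(n_iter):
        y = t * np.sign(x) * (np.abs(x) > theta)               # subgradient of h
        z = a + y / 2.0                                        # center of the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - t / 2.0, 0.0)  # soft-threshold
    return x

a = np.array([3.0, 0.1, -2.0, 0.05])
x = dca_capped_l1(a)
# x = [3.0, 0.0, -2.0, 0.0]: small entries are zeroed, large ones kept unbiased
```

The capped penalty stops charging once |x_i| exceeds θ, so, unlike plain ℓ1 shrinkage, large coefficients are not biased toward zero, which is precisely what makes it a better ℓ0 surrogate.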

  18. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  19. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure.

    PubMed

    Henderson, Peter A; Magurran, Anne E

    2010-05-22

Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance.

  20. Paediatric dentistry education of atraumatic restorative treatment (ART) in Brazilian dental schools.

    PubMed

    Camargo, L B; Fell, C; Bonini, G C; Marquezan, M; Imparato, J C P; Mendes, F M; Raggio, D P

    2011-12-01

To evaluate the degree of knowledge, use and teaching of atraumatic restorative treatment (ART) among paediatric dentistry lecturers in dental schools throughout Brazil. A structured questionnaire was applied, containing questions regarding the use of ART, socio-demographic characteristics and academic degree background. Descriptive analysis and Poisson regression were conducted in order to verify the association between exploratory variables and ART teaching (α=5%). Of the 721 questionnaires sent to dental schools, approximately 40% were returned (n=285). Some 98.2% of the participants teach ART. Concerning dental lecturers who teach ART, in the multiple regression model, considering ART indication (emergency versus restorative treatment), lecturers residing in the Mid-West (PR=1.66; CI:1.13-2.45) and Northeast regions (PR=1.33; CI:1.02-1.72) and lecturers who use ART regularly (PR=3.73; CI:2.11-5.59) teach ART as a restorative treatment. When the question was about the reason for using ART (conservative technique versus other techniques' failures/fast treatment), lecturers with more time elapsed since graduation (TG) (PR=1.30; CI:1.08-1.56) and lecturers who use ART regularly (PR=2.87; CI:1.95-4.22) teach it as a conservative technique. Regarding the patients' age covered by ART (versus without limitation), women (PR=1.26; CI:1.06-1.50) and lecturers who use ART regularly (PR=1.28; CI:1.06-1.54) teach that there is no age restriction. ART is widely taught in Brazilian dental schools, is regularly used in lecturers' clinical practices, and this has positively influenced the appropriate teaching of the technique.

  1. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the auditory scene by exploiting the reciprocity principle that holds between the capture and recreation processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.

  2. Regular exercise and related factors in patients with Parkinson's disease: Applying zero-inflated negative binomial modeling of exercise count data.

    PubMed

    Lee, JuHee; Park, Chang Gi; Choi, Moonki

    2016-05-01

    This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
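The zero-inflated negative binomial model used in the analysis above mixes a point mass at zero (the "structural" non-exercisers) with a negative binomial count distribution. A minimal sketch of the pmf follows, using an integer dispersion parameter r for simplicity; all parameter values are illustrative, not estimates from the study.

```python
import math

def zinb_pmf(y, pi, r, p):
    """P(Y = y) for a zero-inflated negative binomial: a structural zero with
    probability pi, otherwise NB(r, p) with pmf C(y+r-1, y) p^r (1-p)^y.
    (General NB allows non-integer r via the gamma function; integer r here.)"""
    nb = math.comb(y + r - 1, y) * p**r * (1 - p) ** y
    return pi * (y == 0) + (1 - pi) * nb

# zero probability is inflated: 0.3 + 0.7 * 0.5**2 = 0.475
p0 = zinb_pmf(0, pi=0.3, r=2, p=0.5)
total = sum(zinb_pmf(y, pi=0.3, r=2, p=0.5) for y in range(200))
```

The two-part structure mirrors the study's finding: one set of covariates (self-efficacy) predicts whether someone exercises at all, while another (motor symptoms) predicts how often the exercisers exercise.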

  3. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    PubMed Central

    2010-01-01

    Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/. PMID:20089173
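The thermodynamic average over binding registers can be sketched as a Boltzmann-weighted free energy computed with the log-sum-exp trick. The RTA method's actual parametrization is more involved; the energies and kT below are made-up numbers for illustration only.

```python
import math

def thermodynamic_average(register_energies, kT=0.6):
    """Boltzmann-weighted free energy over all binding registers:
    G = -kT * log( sum_i exp(-E_i / kT) ), same units as E (e.g. kcal/mol).
    Shifting by the minimum energy stabilizes the exponentials (log-sum-exp)."""
    m = min(register_energies)
    s = sum(math.exp(-(e - m) / kT) for e in register_energies)
    return m - kT * math.log(s)

# one dominant register at -7.0; suboptimal registers still lower the free energy
energies = [-7.0, -6.5, -5.0, -3.2]
g = thermodynamic_average(energies)
# g ≈ -7.232: about 0.23 units below the best single register
```

Because G is always at least as favorable as the best single register, ignoring suboptimal registers systematically underestimates binding, consistent with the finding that they contribute significantly for roughly 20% of peptides.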

  4. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure

    PubMed Central

    Henderson, Peter A.; Magurran, Anne E.

    2010-01-01

Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance. PMID:20071388

  5. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density-contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density-contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
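Bott's method referenced above iteratively converts the gravity residual into a depth correction via the infinite-slab formula. A planar toy sketch follows; the paper works in spherical coordinates with tesseroids, whereas here a lightly smoothed slab response merely mimics the non-locality of a real forward model, and all numbers are illustrative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward(dz, drho):
    """Toy forward model: Bouguer-slab effect of the relief, lightly smoothed
    to mimic the non-local gravity response (tesseroids in the actual paper)."""
    return np.convolve(2 * np.pi * G * drho * dz, [0.25, 0.5, 0.25], mode="same")

def bott(g_obs, drho, n_iter=100):
    """Bott's scheme: repeatedly convert the gravity residual into a depth
    correction using the infinite-slab relation g = 2*pi*G*drho*dz."""
    dz = np.zeros_like(g_obs)
    for _ in range(n_iter):
        dz += (g_obs - forward(dz, drho)) / (2 * np.pi * G * drho)
    return dz

x = np.arange(81.0)
dz_true = 5000.0 * np.exp(-((x - 40.0) / 5.0) ** 2)   # 5 km anomalous Moho bump
g_obs = forward(dz_true, 400.0)                        # density contrast 400 kg/m^3
dz_est = bott(g_obs, 400.0)
```

Because the slab factor is a good preconditioner for the smoothed forward model, the iteration converges without ever forming or inverting a sensitivity matrix, which is the source of the method's low computational footprint.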

  6. Evolution of the regions of the 3D particle motion in the regular polygon problem of (N+1) bodies with a quasi-homogeneous potential

    NASA Astrophysics Data System (ADS)

    Fakis, Demetrios; Kalvouridis, Tilemahos

    2017-09-01

    The regular polygon problem of (N+1) bodies deals with the dynamics of a small body, natural or artificial, in the force field of N big bodies, the ν=N-1 of which have equal masses and form an imaginary regular ν-gon, while the Nth body with a different mass is located at the center of mass of the system. In this work, instead of considering Newtonian potentials and forces, we assume that the big bodies create quasi-homogeneous potentials, in the sense that we insert into the inverse-square Newtonian law of gravitation an inverse-cube corrective term, aiming to approximate various phenomena due to the shape of the primaries or to the radiation emitted by them. Based on this new consideration, we apply a general methodology in order to investigate, by means of the zero-velocity surfaces, the regions where 3D motions of the small body are allowed, their evolutions and parametric variations, their topological bifurcations, as well as the existing trapping domains of the particle. Here we note that this process is a fundamental step of great importance in the study of many dynamical systems characterized by a Jacobian-type integral of motion on the long road of searching for solutions of any kind.
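
    The quasi-homogeneous force law described above is straightforward to evaluate; a minimal sketch (with a hypothetical normalization and hypothetical parameter values) of the potential felt by the small body:

```python
import numpy as np

# Sketch: quasi-homogeneous potential of the ring configuration.  Peripheral
# bodies of unit mass sit on a regular nu-gon of unit radius; a central body
# of mass m0 sits at the origin.  Each primary contributes an inverse-square
# Newtonian term plus an inverse-cube corrective term e/r^3.
nu, m0, e = 7, 2.0, 0.01
theta = 2 * np.pi * np.arange(nu) / nu
primaries = np.c_[np.cos(theta), np.sin(theta), np.zeros(nu)]
masses = np.ones(nu)

def potential(p):
    """U(p) = -sum_i m_i (1/r_i + e/r_i**3), central body included."""
    r_per = np.linalg.norm(primaries - p, axis=1)
    r_cen = np.linalg.norm(p)
    return -(np.sum(masses * (1 / r_per + e / r_per**3))
             + m0 * (1 / r_cen + e / r_cen**3))

# The nu-fold symmetry of the configuration: rotating a field point by
# 2*pi/nu about the z-axis leaves the potential unchanged.
p = np.array([0.3, 0.4, 0.2])
c, s = np.cos(2 * np.pi / nu), np.sin(2 * np.pi / nu)
p_rot = np.array([c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]])
```

    Zero-velocity surfaces then follow from a Jacobi-type integral: for a given Jacobi constant, motion is allowed only where the effective potential permits a nonnegative squared velocity.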

  7. Energy drinks, soft drinks, and substance use among United States secondary school students.

    PubMed

    Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2014-01-01

    Examine energy drink/shot and regular and diet soft drink use among United States secondary school students in 2010-2011, and associations between such use and substance use. We used self-reported data from cross-sectional surveys of nationally representative samples of 8th-, 10th-, and 12th-grade students and conducted multivariate analyses examining associations between beverage and substance use, controlling for individual and school characteristics. Approximately 30% of students reported consuming energy drinks or shots; more than 40% reported daily regular soft drink use, and about 20% reported daily diet soft drink use. Beverage consumption was strongly and positively associated with past 30-day alcohol, cigarette, and illicit drug use. The observed associations between energy drinks and substance use were significantly stronger than those between regular or diet soft drinks and substance use. This correlational study indicates that adolescent consumption of energy drinks/shots is widespread and that energy drink users report heightened risk for substance use. This study does not establish causation between the behaviors. Education for parents and prevention efforts among adolescents should include education on the masking effects of caffeine in energy drinks on alcohol- and other substance-related impairments, and recognition that some groups (such as high sensation-seeking youth) may be particularly likely to consume energy drinks and to be substance users.

  8. Energy drinks, soft drinks, and substance use among US secondary school students

    PubMed Central

    Terry-McElrath, Yvonne M.; O’Malley, Patrick M.; Johnston, Lloyd D.

    2014-01-01

    Objectives Examine energy drink/shot and regular and diet soft drink use among US secondary school students in 2010–2011, and associations between such use and substance use. Methods We used self-reported data from cross-sectional surveys of nationally representative samples of 8th, 10th, and 12th grade students and conducted multivariate analyses examining associations between beverage and substance use controlling for individual and school characteristics. Results Approximately 30% of students reported consuming energy drinks or shots; more than 40% reported daily regular soft drink use, and about 20% reported daily diet soft drink use. Beverage consumption was strongly and positively associated with past 30-day alcohol, cigarette, and illicit drug use. The observed associations between energy drinks and substance use were significantly stronger than those between regular or diet soft drinks and substance use. Conclusions This correlational study indicates that adolescent consumption of energy drinks/shots is wide-spread, and that energy drink users report heightened risk for substance use. This study does not establish causation between the behaviors. Education for parents and prevention efforts among adolescents should include education on the masking effects of caffeine in energy drinks on alcohol- and other substance-related impairments, and recognition that some groups (such as high sensation-seeking youth) may be particularly likely to consume energy drinks and to be substance users. PMID:24481080

  9. Civic engagement among orphans and non-orphans in five low- and middle-income countries.

    PubMed

    Gray, Christine L; Pence, Brian W; Messer, Lynne C; Ostermann, Jan; Whetten, Rachel A; Thielman, Nathan M; O'Donnell, Karen; Whetten, Kathryn

    2016-10-11

    Communities and nations seeking to foster social responsibility in their youth are interested in understanding factors that predict and promote youth involvement in public activities. Orphans and separated children (OSC) are a vulnerable population whose numbers are increasing, particularly in resource-poor settings. Understanding whether and how OSC are engaged in civic activities is important for community and world leaders who need to provide care for OSC and ensure their involvement in sustainable development. The Positive Outcomes for Orphans study (POFO) is a multi-country, longitudinal cohort study of OSC randomly sampled from institution-based care and from family-based care, and of non-OSC sampled from the same study regions. Participants represent six sites in five low- and middle-income countries. We examined civic engagement activities and government trust among subjects ≥16 years old at 90-month follow-up (approximately 7.5 years after baseline). We calculated prevalences and estimated the association between key demographic variables and prevalence of regular volunteer work using multivariable Poisson regression, with sampling weights to account for the complex sampling design. Among the 1,281 POFO participants aged ≥16 years who were assessed at 90-month follow-up, 45% participated in regular community service or volunteer work; two-thirds of those volunteers did so on a strictly voluntary basis. While government trust was fairly high, at approximately 70% for each level of government, participation in voting was only 15% among those who were ≥18 years old. We did not observe significant associations between demographic characteristics and regular volunteer work, with the exception of large variation by study site. As the world's leaders grapple with the many competing demands of global health, economic security, and governmental stability, the participation of today's youth in community and governance is essential for sustainability. This study provides a first step in understanding the degree to which OSC from different care settings across multiple low- and middle-income countries are engaged in their communities.

  10. Impact of regular physical activity on blood glucose control and cardiovascular risk factors in adolescents with type 2 diabetes mellitus--a multicenter study of 578 patients from 225 centres.

    PubMed

    Herbst, A; Kapellen, T; Schober, E; Graf, C; Meissner, T; Holl, R W

    2015-05-01

    Regular physical activity (RPA) is a major therapeutic recommendation in children and adolescents with type 2 diabetes mellitus (T2DM). We evaluated the association between frequency of RPA and metabolic control, cardiovascular risk factors, and treatment regimens. The Pediatric Quality Initiative (DPV), including data from 225 centers in Germany and Austria, provided anonymous data of 578 patients (10-20 yr; mean 15.7 ± 2.1 yr; 61.9% girls) with T2DM. Patients were grouped by the frequency of their self-reported RPA per week: RPA 0, none; RPA 1, 1-2×/wk; RPA 2, >2×/wk. The frequency of RPA ranged from 0 to 9×/wk (mean 1.1×/wk ±1.5). 55.7% of the patients reported no RPA (58.1% of the girls). Hemoglobin A1c (HbA1c) differed significantly among RPA groups (p < 0.002), being approximately 0.8 percentage points lower in RPA 2 compared to RPA 0. Body mass index (BMI-SDS) was higher in the groups with less frequent RPA (p < 0.00001). Multiple regression analysis revealed a negative association between RPA and HbA1c (p < 0.0001) and between RPA and BMI-SDS (p < 0.01). The association between RPA and high density lipoprotein (HDL)-cholesterol was positive (p < 0.05), while there was no association with total cholesterol, low density lipoprotein (LDL)-cholesterol, or triglycerides. Approximately 80% of the patients received pharmacological treatment (oral antidiabetic drugs and/or insulin) without differences between RPA groups. More than half of the adolescents with T2DM did not perform RPA. Increasing physical activity was associated with a lower HbA1c, a lower BMI-SDS, and a higher HDL-cholesterol, but not with a difference in treatment regimen. These results suggest that regular exercise is a justified therapeutic recommendation for children and adolescents with T2DM. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Symmetric Positive 4th Order Tensors & Their Estimation from Diffusion Weighted MRI⋆

    PubMed Central

    Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C.; Shepherd, Timothy M.

    2009-01-01

    In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. It is now well known that this 2nd-order approximation fails to approximate complex local tissue structures, such as fiber crossings. In this paper we employ a 4th order symmetric positive semi-definite (PSD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data guaranteeing the PSD property. There have been several published articles in the literature on higher order tensor approximations of the diffusivity function but none of them guarantee the positive semi-definite constraint, which is a fundamental constraint since negative values of the diffusivity coefficients are not meaningful. In our methods, we parameterize the 4th order tensors as a sum of squares of quadratic forms by using the so-called Gram matrix method from linear algebra and its relation to Hilbert's theorem on ternary quartics. This parametric representation is then used in a nonlinear least-squares formulation to estimate the PSD tensors of order 4 from the data. We define a metric for the higher-order tensors and employ it for regularization across the lattice. Finally, performance of this model is depicted on synthetic data as well as real DW-MRI from an isolated rat hippocampus. PMID:17633709
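
    The positivity guarantee rests on a simple algebraic fact that can be sketched directly (an illustration with hypothetical variable names, not the authors' estimation code): any quartic form written as v(g)ᵀ G v(g) with a Gram matrix G = L Lᵀ is automatically nonnegative, because it equals ‖Lᵀ v(g)‖².

```python
import numpy as np

# Sum-of-squares parameterization of a 4th-order diffusivity function:
# d(g) = v(g)^T G v(g), with G = L L^T positive semi-definite and v(g)
# the six quadratic monomials in the gradient direction g.
rng = np.random.default_rng(1)
L = rng.normal(size=(6, 6))
G = L @ L.T                      # Gram matrix, PSD by construction

def monomials(g):
    x, y, z = g
    return np.array([x * x, y * y, z * z, x * y, x * z, y * z])

def diffusivity(g):
    v = monomials(g)
    return v @ G @ v             # quartic form in g, nonnegative for every g

# Check nonnegativity on many random unit gradient directions
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
vals = np.array([diffusivity(g) for g in dirs])
```

    In the paper the entries of L (equivalently G) become the unknowns of the nonlinear least-squares fit, so the estimated tensor satisfies the PSD constraint by construction rather than by post-hoc projection.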

  12. Accurate low-dose iterative CT reconstruction from few projections by Generalized Anisotropic Total Variation minimization for industrial CT.

    PubMed

    Debatin, Maurice; Hesser, Jürgen

    2015-01-01

    Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing both the dose (i.e., the pulse length of the CT system) and the number of projections used for reconstruction. In this paper, a novel generalized Anisotropic Total Variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation, Adaptive weighted Total Variation (AwTV), and Filtered Backprojection. The novel regularization function uses a priori information about the Gradient Magnitude Distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g. for a SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the number of projections and the pulse length (and hence the acquisition time and the dose) can be reduced by a factor of approximately 3.5 when AwTV is used and by a factor of approximately 6.7 when our proposed algorithm is used.

  13. A comparison of 3 wound measurement techniques: effects of pressure ulcer size and shape.

    PubMed

    Bilgin, Mehtap; Güneş, Ulkü Yapucu

    2013-01-01

    The aim of this study was to examine the levels of agreement among 3 techniques used in wound measurement comparing more spherical versus irregularly shaped wounds. The design of this study is evaluative research. Sixty-five consecutive patients with 80 pressure ulcers of various sizes referred from a university hospital in Izmir, Turkey, were evaluated. The 80 pressure ulcers identified on the 65 participants were divided into 2 groups based on pressure ulcer shape and wound surface area. Twenty-four of the 80 ulcers (30%) were characterized as irregularly shaped and greater than 10 cm². Fifty-six were regularly shaped (approximating a circle) and less than 10 cm². Pressure ulcer areas were measured using 3 techniques: measurement with a ruler (wound area was calculated by measuring and multiplying the greatest length by the greatest width perpendicular to the greatest length), wound tracing using graduated acetate paper, and digital planimetry. The level of agreement among the techniques was explored using the intraclass correlation coefficient (ICC). Strong agreement was observed among the techniques when assessing small, more regularly shaped wounds (ICC = 0.95). Modest agreement was achieved when measuring larger, irregularly shaped wounds (ICC = 0.70). Each of these techniques is adequate for measuring surface areas of smaller wounds with an approximately circular shape. Measurement of pressure ulcer area via the ruler method tended to overestimate surface area in larger and more irregularly shaped wounds when compared to acetate and digital planimetry. We recommend digital planimetry or acetate tracing for measurement of larger and more irregularly shaped pressure ulcers in the clinical setting.
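
    The ruler method's tendency to overestimate has a simple geometric explanation (a sketch with hypothetical dimensions, not the study's data): for a wound approximated by an ellipse with greatest length L and greatest perpendicular width W, the true area is π·L·W/4, while the ruler method reports L·W, an overestimate by a factor of 4/π (about 27%).

```python
import math

# Ruler method vs. true elliptical area for a hypothetical wound.
L, W = 6.0, 4.0                            # length and width, cm
ruler_area = L * W                         # ruler method: length x width
ellipse_area = math.pi * L * W / 4         # true area of the ellipse
overestimate = ruler_area / ellipse_area   # = 4/pi for any ellipse
```

    Irregular shapes deviate further from the ellipse, which is consistent with the larger disagreement the study observed for large, irregularly shaped wounds.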

  14. Sri Lanka drops leading condom.

    PubMed

    1984-01-01

    Sri Lanka's Family Planning Association has stopped selling its Preethi Regular condom, the backbone of its social marketing program for nearly a decade. Last year nearly 7 times as many Preethi condoms were sold as all other brands combined. The decision was reported to be caused by budget constraints following the International Planned Parenthood Federation's (IPPF) new policy of limiting the number of Preethi Regular condoms supplied to Sri Lanka. IPPF's Asian Regional Officer reported that the Preethi condom is a costly product, and that as many as needed of a US Agency for International Development (USAID) supplied product will be sent to Sri Lanka. The Contraceptive Retail Sales (CRS) program has devised a new sales strategy, based partly on the introduction of a high-priced condom to fill the gap left by the discontinuation of the Preethi Regular. The new Preethi Gold condom is expected to help the project become more financially self-reliant while taking advantage of Preethi's marketplace popularity. Preethi Gold is manufactured by the Malaysia Rubber Company and costs the project US $4.85/gross. It is sold for US $.14 for 3, about 3 times the price of a Preethi Regular. The project is also pushing the Panther condom, donated to IPPF by USAID. 2 Panther condoms sell for about 3.6U, about the cost of Preethi Regulars. The project also sells Moonbeam, Rough Rider, and Stimula condoms, the latter 2 at full commercial prices. A smooth transfer of demand from Preethi to Panther had been desired, but by the end of 1983 some retailers were hesitating to make the product switch because some Preethi Regulars were still available. Total condom sales in 1983 were down by nearly 590,000 from the approximately 6,860,000 sold in 1982. Total condom sales for the 1st quarter of 1984 were slightly over 1,218,000 pieces, compared to about 1,547,000 for the same quarter in 1983, a decline of 21%. The Family Planning Association is gearing up to reverse the downward trend. Panther sales increased from 38,000 condoms in the 1st quarter of 1983 to 462,000 in the same period of 1984. The project is intensifying its market coverage by increasing the number of sales divisions from 5 to 7 to help maintain sales momentum for the new product.

  15. Role of Vancomycin as a Component of Oral Nonabsorbable Antibiotics for Microbial Suppression in Leukemic Patients

    PubMed Central

    Bender, John F.; Schimpff, Stephen C.; Young, Viola Mae; Fortner, Clarence L.; Brouillet, Mary D.; Love, Lillian J.; Wiernik, Peter H.

    1979-01-01

    A total of 38 adult patients with acute leukemia who were undergoing remission induction chemotherapy in regular patient rooms were randomly allocated to one of two oral nonabsorbable antibiotic regimens for infection prophylaxis (gentamicin, vancomycin, and nystatin [GVN] or gentamicin and nystatin [GN]) to evaluate whether vancomycin was a necessary component. The patient populations in both groups were comparable. Tolerance to GVN was lower than to GN, but compliance was approximately equal (>85% in both groups). Patients receiving vancomycin demonstrated greater overall alimentary tract microbial suppression; however, acquisition of potential pathogens was approximately equal in both groups. The incidence of bacteremia, as well as the overall incidence of infection as related to the number of days at various granulocyte levels, was also approximately equal in both groups. Group D Streptococcus species were poorly suppressed by GN compared with GVN, although no patient developed an infection with these organisms. Colonization by newly acquired gram-negative bacilli was significantly less in the GN group (GN, 3 colonizations; GVN, 13 colonizations; P < 0.01). It is concluded that vancomycin may be safely eliminated from the GVN regimen provided microbiological data are monitored to detect resistant organisms. PMID:464573

  16. Two-dimensional self-organization of an ordered Au silicide nanowire network on a Si(110)-16 x 2 surface.

    PubMed

    Hong, Ie-Hong; Yen, Shang-Chieh; Lin, Fu-Shiang

    2009-08-17

    A well-ordered two-dimensional (2D) network consisting of two crossed Au silicide nanowire (NW) arrays is self-organized on a Si(110)-16 x 2 surface by the direct-current heating of approximately 1.5 monolayers of Au on the surface at 1100 K. Such a highly regular crossbar nanomesh exhibits both a perfect long-range spatial order and a high integration density over a mesoscopic area, and these two self-ordering crossed arrays of parallel-aligned NWs have distinctly different sizes and conductivities. NWs are fabricated with widths and pitches as small as approximately 2 and approximately 5 nm, respectively. The difference in the conductivities of two crossed-NW arrays opens up the possibility for their utilization in nanodevices of crossbar architecture. Scanning tunneling microscopy/spectroscopy studies show that the 2D self-organization of this perfect Au silicide nanomesh can be achieved through two different directional electromigrations of Au silicide NWs along different orientations of two nonorthogonal 16 x 2 domains, which are driven by the electrical field of direct-current heating. Prospects for this Au silicide nanomesh are also discussed.

  17. Energy expenditure, heart rate response, and metabolic equivalents (METs) of adults taking part in children's games.

    PubMed

    Fischer, S L; Watts, P B; Jensen, R L; Nelson, J

    2004-12-01

    The need for physical activity can be seen in the low numbers participating in regular physical activity as well as in the increasing prevalence of certain diseases such as Type II diabetes (especially in children), cardiovascular diseases, and some cancers. With the increase in preventable diseases that are caused in part by a sedentary lifestyle, a closer look needs to be taken at the role of family interaction as a means of increasing physical activity for both adults and children. Because of the many benefits of physical activity in relation to health, a family approach to achieving recommended levels of physical activity may be quite applicable. Forty volunteers were recruited from the community (20 adults and 20 children). The volunteers played 2 games: soccer and nerfball. Data were collected over 10 minutes (5 min per game). Expired air analysis was used to calculate energy expenditure and metabolic equivalents (METs). Descriptive statistics were calculated along with a regression analysis to determine differences between the 2 games, and an ANCOVA to determine any significant effects of age, child age, gender, and physical activity level on the results. For both games, average heart rate was approximately 88% of maximum, average METs were approximately 6, and average energy expenditure was approximately 40 kcal. This study showed that adults can achieve recommended physical activity levels through these specific activities if sustained for approximately 20 min.
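
    The reported figures are roughly consistent with the standard MET-to-energy conversion. A back-of-envelope check, assuming a 70 kg adult (the abstract does not state body mass):

```python
# Standard conversion: 1 MET = 3.5 mL O2/kg/min, and ~5 kcal per litre of O2,
# giving kcal/min = METs * 3.5 * mass_kg / 200.
mets, mass_kg = 6.0, 70.0
kcal_per_min = mets * 3.5 * mass_kg / 200
kcal_per_game = kcal_per_min * 5          # 5 minutes of play per game
```

    At 6 METs a 70 kg adult expends about 7.35 kcal/min, or roughly 37 kcal over a 5-minute game, in line with the reported ~40 kcal.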

  18. Low energy, left-right symmetry restoration in SO(N) GUTS

    NASA Technical Reports Server (NTRS)

    Holman, R.

    1982-01-01

    A general n-step symmetry breaking pattern of SO(4K+2) down to SU_C(3) × SU_L(2) × U_Y(1), which uses regular subgroups only, does not allow low-energy left-right symmetry restoration. In these theories, the smallest mass scale at which such restoration is possible is approximately one billion GeV, as in the SO(10) case. The unification mass in SO(4K+2) GUTS must be at least as large as that in SU(5). These results assume standard values of the Weinberg angle and the strong coupling constant.

  19. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    PubMed

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that a more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distributions, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
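
    The simulation setting can be sketched with a small stochastic SIS model on a weighted regular random graph (an illustration of the model only, not the paper's edge-based mean-field solution; all parameter values are hypothetical):

```python
import numpy as np

# Stochastic SIS on a k-regular random graph with heterogeneous edge weights.
# Infection passes along an edge per step with probability 1 - exp(-beta * w).
rng = np.random.default_rng(2)
n, k = 200, 4

# k-regular random graph via the configuration model (self-loops/multi-edges
# would need retries in general; kept simple here).
stubs = np.repeat(np.arange(n), k)
rng.shuffle(stubs)
edges = stubs.reshape(-1, 2)
w = rng.exponential(size=len(edges))       # heterogeneous edge weights
w *= len(edges) / w.sum()                  # normalize mean weight to 1

beta, mu, steps = 0.3, 0.2, 400
state = np.zeros(n, bool)
state[rng.choice(n, 20, replace=False)] = True   # initial infecteds
for _ in range(steps):
    inf_prob = np.zeros(n)
    for (a, b), wt in zip(edges, w):
        p = 1 - np.exp(-beta * wt)
        if state[a]: inf_prob[b] = 1 - (1 - inf_prob[b]) * (1 - p)
        if state[b]: inf_prob[a] = 1 - (1 - inf_prob[a]) * (1 - p)
    new_inf = (~state) & (rng.random(n) < inf_prob)
    recover = state & (rng.random(n) < mu)
    state = (state | new_inf) & ~recover
prevalence = state.mean()   # steady-state fraction infected
```

    Rerunning with a homogeneous weight distribution (all weights equal to 1) at the same mean weight is how one would reproduce the paper's comparison of prevalence across weight distributions.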

  20. Smectic phases in hard particle mixtures: Koda's theory

    NASA Astrophysics Data System (ADS)

    Vesely, Franz J.

    Mixtures of parallel linear particles and spheres tend to demix upon compression. The linear species usually concentrates in regular layers, thus forming a smectic phase. With increasing concentration of spheres this 'smectic demixing' transition occurs at ever lower packing densities. For the specific case of hard spherocylinders and spheres Koda et al. [T. Koda, M. Numajiri, S. Ikeda, J. Phys. Soc. Jpn. 65, 3551 (1996)] have explained the layering effect in terms of a second virial approximation to the free energy. We extend this approach from spherocylinders to other linear particles, namely fused spheres, ellipsoids and sphero-ellipsoids.

  1. Boltzmann equation and hydrodynamics beyond Navier-Stokes.

    PubMed

    Bobylev, A V

    2018-04-28

    We consider in this paper the problem of derivation and regularization of higher (in Knudsen number) equations of hydrodynamics. The author's approach, based on successive changes of hydrodynamic variables, is presented in more detail for the Burnett level. The complete theory is briefly discussed for the linearized Boltzmann equation. It is shown that the best results in this case can be obtained by using the 'diagonal' equations of hydrodynamics. Rigorous estimates of the accuracy of the Navier-Stokes and Burnett approximations are also presented. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  2. On the singular perturbations for fractional differential equation.

    PubMed

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivatives. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We make use of three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method.

  3. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
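
    The underlying hypergraph-product construction can be sketched from two small classical parity-check matrices (this is the standard CSS form of the construction; the tiny example codes are arbitrary placeholders, not the (3,4)-regular Gallager codes used in the paper):

```python
import numpy as np

# Hypergraph product of classical codes H1 (r1 x n1) and H2 (r2 x n2):
#   Hx = [ H1 (x) I_n2 | I_r1 (x) H2^T ]
#   Hz = [ I_n1 (x) H2 | H1^T (x) I_r2 ]
# These satisfy the CSS commutation condition Hx Hz^T = 0 (mod 2).
H1 = np.array([[1, 1, 0], [0, 1, 1]])
H2 = np.array([[1, 0, 1], [1, 1, 0]])
r1, n1 = H1.shape
r2, n2 = H2.shape

Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
commute = (Hx @ Hz.T) % 2   # all-zero matrix: X and Z checks commute
```

    The commutation holds because Hx Hzᵀ = H1 ⊗ H2ᵀ + H1 ⊗ H2ᵀ ≡ 0 (mod 2), independently of the choice of H1 and H2.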

  4. Further Exploration of Post-Flare Giant Arches

    NASA Astrophysics Data System (ADS)

    West, Matthew; Seaton, Daniel B.; Dennis, Brian R.; Feng, Li

    2017-08-01

    Recent observations from the SWAP EUV imager on-board PROBA2 and SXI X-ray observations from the GOES satellite have shown that post-flare giant arches and regular post-flare loops are one and the same. However, it is still not clear how certain loop systems are able to sustain prolonged growth to heights of approximately 400,000 km (>0.5 solar radii). In this presentation we further explore the energy deposition rate above post-flare loop systems through high-energy RHESSI observations. We also explore the difference between the loop systems through a multi-wavelength epoch analysis.

  5. Magnetic field effects on peristaltic flow of blood in a non-uniform channel

    NASA Astrophysics Data System (ADS)

    Latha, R.; Rushi Kumar, B.

    2017-11-01

    The objective of this paper is to explore the effect of MHD on the peristaltic transport of blood in a non-uniform channel under the long-wavelength approximation with low (zero) Reynolds number. Blood is modeled as an incompressible, viscous, electrically conducting fluid. Explicit expressions for the axial velocity and axial pressure gradient are derived using long-wavelength assumptions with slip and regularity conditions. It is determined that the pressure gradient diminishes as the couple stress parameter increases and decreases as the magnetic parameter increases. We additionally examine the effects of the embedded parameters through graphs.

  6. Around Marshall

    NASA Image and Video Library

    1999-09-30

    Through the Marshall Space Flight Center (MSFC) Education Department, over 400 MSFC employees have volunteered to support educational programs during regular work hours. Project LASER (Learning About Science, Engineering, and Research) provides support for mentor/tutor requests, education tours, classroom presentations, and curriculum development. This program is available to teachers and students living within commuting distance of NASA/MSFC in Huntsville, Alabama (approximately a 50-mile radius). This image depicts students viewing their reflections in an x-ray mirror with Marshall optics engineer Vince Huegele at the Discovery Laboratory, an onsite MSFC facility that provides hands-on educational workshop sessions and learning activities for teachers and students.

  7. Registration of segmented histological images using thin plate splines and belief propagation

    NASA Astrophysics Data System (ADS)

    Kybic, Jan

    2014-03-01

    We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.

  8. The fine structure of the sperm of the round goby (Neogobius melanostomus)

    USGS Publications Warehouse

    Allen, Jeffrey D.; Walker, Glenn K.; Nichols, Susan J.; Sorenson, Dorothy

    2004-01-01

    The fine structural details of the spermatozoon of the round goby are presented for the first time in this study. Scanning and transmission electron microscopic examination of testis reveals an anacrosomal spermatozoon with a slightly elongate head and uniformly compacted chromatin. The midpiece contains a single, spherical mitochondrion. Two perpendicularly oriented centrioles lie in a deep, eccentric nuclear fossa with no regularly observed connection to the nucleus. The flagellum develops bilateral fins soon after emerging from the fossa; each extends approximately 1 μm from the axoneme and persists for nearly the length of the flagellum.

  9. Approximate Global Convergence and Quasi-Reversibility for a Coefficient Inverse Problem with Backscattering Data

    DTIC Science & Technology

    2011-04-01

    Assume that geodesic lines generated by the eikonal equation corresponding to the function c(x) are regular, i.e. any two points in R^3 can be... source x0 is located far from Ω, then similarly with (107) Δl(x, x0) ≈ 0 in Ω. The function l(x, x0) satisfies the eikonal equation [38] |∇x l(x, x0)|... called the "inverse kinematic problem", which aims to recover the function c(x) from the eikonal equation, assuming that the function l(x, x0) is known for

  10. Is the party over? Cannabis and juvenile psychiatric disorder: the past 10 years.

    PubMed

    Rey, Joseph M; Martin, Andrés; Krabman, Peter

    2004-10-01

    To critically review cannabis research during the past 10 years in relation to rates of use, behavioral problems, and mental disorders in young people. Studies published in English between 1994 and 2004 were identified through systematic searches of literature databases. The material was selectively reviewed focusing on child and adolescent data. In the 27 years between 1976 and 2002, approximately half of all 12th graders had been exposed to cannabis in the United States. There is growing evidence that early and regular marijuana use is associated with later increases in depression, suicidal behavior, and psychotic illness and may bring forward the onset of schizophrenia. Most of the recent data reject the view that marijuana is used to self-medicate psychotic or depressive symptoms. Research on treatment is very limited. Research on the mental health effects of cannabis has increased dramatically. Although doubts still remain about the role of cannabis in the causation of juvenile psychiatric disorder, the weight of the evidence points in the direction of early and regular cannabis use having substantial negative effects on psychosocial functioning and psychopathology.

  11. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2015-06-01

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improved Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. To do so, we incorporate the training signals into the Tikhonov regularization framework and perform a Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may easily be derived. This is the key to introducing and proving the formula for calculating the signal recovery error. We show that the average recovery error is approximately inversely proportional to the number of acquired samples.
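    The closed-form recovery described above can be sketched in a few lines: project the training signals onto a PCA basis, then solve a Tikhonov-regularized least-squares problem for the expansion coefficients from a handful of samples. The training set, sample positions, component count, and regularization weight below are all illustrative stand-ins, not the J-PET values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: 500 example detector signals, 80 time samples
    # each (random walks standing in for real scintillator pulses).
    train = np.cumsum(rng.standard_normal((500, 80)), axis=1)

    # PCA decomposition of the training signals (known for its compaction property).
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    B = Vt[:4].T                        # keep 4 principal components (80 x 4)

    # Only a few voltage-domain samples are acquired.
    idx = np.array([5, 15, 25, 35, 45, 55, 65, 75])
    A = B[idx]                          # measurement matrix at sampled times

    def recover(y, lam=1e-3):
        """Closed-form Tikhonov solution: min_c ||A c - y||^2 + lam ||c||^2."""
        c = np.linalg.solve(A.T @ A + lam * np.eye(B.shape[1]),
                            A.T @ (y - mean[idx]))
        return mean + B @ c             # reconstruct the whole signal

    true = train[0]
    rec = recover(true[idx])
    print("relative recovery error:",
          np.linalg.norm(rec - true) / np.linalg.norm(true))
    ```

    Because the solution is a single linear solve, no iterative processing is needed, matching the closed-form character claimed in the abstract.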

  12. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  13. Regularized variational theories of fracture: A unified approach

    NASA Astrophysics Data System (ADS)

    Freddi, Francesco; Royer-Carfagni, Gianni

    2010-08-01

    The fracture pattern in stressed bodies is defined through the minimization of a two-field pseudo-spatial-dependent functional, with a structure similar to that proposed by Bourdin-Francfort-Marigo (2000) as a regularized approximation of a parent free-discontinuity problem, but now considered as an autonomous model per se. Here, this formulation is altered by combining it with structured deformation theory, to model that when the material microstructure is loosened and damaged, peculiar inelastic (structured) deformations may occur in the representative volume element at the price of surface energy consumption. This approach unifies various theories of failure because, by simply varying the form of the class of admissible structured deformations, different-in-type responses can be captured, incorporating the idea of cleavage, deviatoric, combined cleavage-deviatoric and masonry-like fractures. Remarkably, this latter formulation rigorously avoids material overlapping in the cracked zones. The model is numerically implemented using a standard finite-element discretization and adopts an alternate minimization algorithm, adding an inequality constraint to impose crack irreversibility (fixed crack model). Numerical experiments for some paradigmatic examples are presented and compared for various possible versions of the model.

  14. The effect of loss of immunity on noise-induced sustained oscillations in epidemics.

    PubMed

    Chaffee, J; Kuske, R

    2011-11-01

    The effect of loss of immunity on sustained population oscillations about an endemic equilibrium is studied via a multiple scales analysis of a SIRS model. The analysis captures the key elements supporting the nearly regular oscillations of the infected and susceptible populations, namely, the interaction of the deterministic and stochastic dynamics together with the separation of time scales of the damping and the period of these oscillations. The derivation of a nonlinear stochastic amplitude equation describing the envelope of the oscillations yields two criteria providing explicit parameter ranges where they can be observed. These conditions are similar to those found for other applications in the context of coherence resonance, in which noise drives nearly regular oscillations in a system that is quiescent without noise. In this context the criteria indicate how loss of immunity and other factors can lead to a significant increase in the parameter range for prevalence of the sustained oscillations, without any external driving forces. Comparison of the power spectral densities of the full model and the approximation confirms that the multiple scales analysis captures nonlinear features of the oscillations.

  15. Work and health, a blind spot in curative healthcare? A pilot study.

    PubMed

    Lötters, Freek J B; Foets, Marleen; Burdorf, Alex

    2011-09-01

    Most workers with musculoskeletal disorders on sick leave consult regular healthcare before entering a specific work rehabilitation program. However, it remains unclear to what extent regular healthcare contributes to timely return to work (RTW); moreover, several studies have indicated that it might postpone RTW. There is therefore a need to establish the influence of regular healthcare on RTW as an outcome: "Does visiting a regular healthcare provider influence the duration of sickness absence and recurrent sick leave due to musculoskeletal disorders?" A cohort of workers on sick leave for 2-6 weeks due to non-specific musculoskeletal disorders was followed for 12 months. The main outcomes for the present analysis were the duration of sickness absence until 100% return to work and recurrent sick leave after initial RTW. Cox regression analyses were conducted with visits to a general practitioner, physical therapist, or medical specialist during the sick-leave period as independent variables. Each regression model was adjusted for variables known to influence healthcare utilization, such as age, sex, diagnostic group, pain intensity, functional disability, general health perception, severity of complaints, job control, and physical load at work. Patients visiting a medical specialist reported higher pain intensity and more functional limitations, and also had a worse health perception at the start of the sick-leave period, compared with those not visiting a specialist. Visiting a medical specialist delayed return to work significantly (HR = 2.10; 95% CI 1.43-3.07). After approximately 8 weeks on sick leave, workers visiting a physical therapist returned to work faster than other workers. A recurrent episode of sick leave during the follow-up was initiated by higher pain intensity and more functional limitations at the moment of full return to work. Visiting a primary healthcare provider during the sickness absence period did not influence the occurrence of a new sick-leave period. Despite the adjustment for severity of the musculoskeletal disorder, visiting a medical specialist was associated with a delayed full return to work. More attention to the factor 'labor' in regular healthcare is warranted, especially for those patients experiencing substantial functional limitations due to musculoskeletal disorders.

  16. A comparative study of the microstructures observed in statically cast and continuously cast Bi-In-Sn ternary eutectic alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, S.; Soda, H.; McLean, A.

    2000-01-01

    A ternary eutectic alloy with a composition of 57.2 pct Bi, 24.8 pct In, and 18 pct Sn was continuously cast into wire of 2 mm diameter with casting speeds of 14 and 79 mm/min using the Ohno Continuous Casting (OCC) process. The microstructures obtained were compared with those of statically cast specimens. Extensive segregation of massive Bi blocks, Bi complex structures, and tin-rich dendrites was found in specimens that were statically cast. Decomposition of γSn by a eutectoid reaction was confirmed based on microstructural evidence. Ternary eutectic alloy with a cooling rate of approximately 1 C/min formed a double binary eutectic. The double binary eutectic consisted of regions of BiIn and decomposed γSn in the form of a dendrite cell structure and regions of Bi and decomposed γSn in the form of a complex-regular cell. The Bi complex-regular cells, which are a ternary eutectic constituent, existed either along the boundaries of the BiIn-decomposed γSn dendrite cells or at the front of elongated dendrite cell structures. In the continuously cast wires, primary Sn dendrites coupled with a small Bi phase were uniformly distributed within the Bi-In alloy matrix. Neither massive Bi phase, Bi complex-regular cells, nor BiIn eutectic dendrite cells were observed, resulting in a more uniform microstructure in contrast to the heavily segregated structures of the statically cast specimens.

  17. Regularized quasinormal modes for plasmonic resonators and open cavities

    NASA Astrophysics Data System (ADS)

    Kamandar Dezfouli, Mohsen; Hughes, Stephen

    2018-03-01

    Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystal slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.

  18. Evaluation of Fermented Sausages Manufactured with Reduced-fat and Functional Starter Cultures on Physicochemical, Functional and Flavor Characteristics

    PubMed Central

    Yoo, Seung Seok

    2014-01-01

    Fermented foods with probiotics having functional properties may provide beneficial effects on health. These effects are varied, depending on the type of lactic acid bacteria (LAB). Different probiotic LAB might have different functional properties. Thus, this study was performed to evaluate the quality of fermented sausages manufactured with functional starter cultures (Lactobacillus plantarum 115 and 167, and Pediococcus damnosus L12) and different fat levels, and to determine the optimum condition for the manufacture of these products. Medium-fat (~15%) fermented sausages reduced the drying time and cholesterol contents, as compared to regular-fat counterparts. In proximate analysis, the moisture and protein contents of regular-fat products were lower than those of the medium-fat products with reduced fat content. The regular-fat products also had a lighter color and less redness. Approximately 35 volatile compounds were identified in functional fermented sausages, and hexanal, trans-caryophyllene, and tetradecanal were the major volatile compounds. The selected mixed starter culture showed the potential of replacing the commercial starter culture (LK30 plus) in flavor profiles. However, medium-fat fermented sausage containing the selected mixed starter culture tended to be less acceptable than its high-fat counterpart, due to an excess dry ring that developed on the surface. These results indicate that the use of combinations of L. plantarum 115 and 167, and P. damnosus L12 as a starter culture will prove useful for manufacturing fermented sausages. PMID:26761176

  19. Solving the Rational Polynomial Coefficients Based on L Curve

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.

    2018-05-01

    The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in the field of photogrammetry and remote sensing. The least-squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular and the normal equation becomes ill-conditioned; the obtained solutions are then extremely unstable or even wrong. Tikhonov regularization can effectively solve such ill-conditioned equations. In this paper, we solve the ill-conditioned equations by the regularization method and determine the regularization parameter by the L-curve. The results of the experiments on aerial frame photos show that the first-order RPC model with equal denominators has the highest accuracy. A high-order RPC model is not necessary when processing frame images, as the RPC model and the projective model are almost the same. The result shows that the first-order RPC model is basically consistent with the rigorous sensor model of photogrammetry. Orthorectification results from both the first-order RPC model and the Camera Model (ERDAS 9.2 platform) are similar to each other, with maximum residuals in X and Y of 0.8174 feet and 0.9272 feet, respectively. This result shows that the RPC model can be used in aerial photogrammetric processing as a replacement sensor model.
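    A minimal sketch of the procedure the abstract describes, Tikhonov regularization with the parameter chosen at the L-curve corner, might look as follows. The design matrix here is a synthetic ill-conditioned stand-in, not an actual RPC normal equation, and the corner is located by a simple finite-difference curvature estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic ill-conditioned least-squares problem: column scales span nine
    # orders of magnitude, mimicking an over-parameterized RPC fit.
    A = rng.standard_normal((100, 10)) @ np.diag(10.0 ** -np.arange(10))
    x_true = rng.standard_normal(10)
    b = A @ x_true + 1e-3 * rng.standard_normal(100)

    def tikhonov(lam):
        """Regularized normal-equation solution (A^T A + lam I) x = A^T b."""
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

    # L-curve: log residual norm vs. log solution norm over a lambda sweep.
    lams = np.logspace(-12, 0, 60)
    xs = [tikhonov(l) for l in lams]
    rho = np.log([np.linalg.norm(A @ x - b) for x in xs])   # residual norms
    eta = np.log([np.linalg.norm(x) for x in xs])           # solution norms

    # Pick the corner as the point of maximum curvature (finite differences).
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    lam_star = lams[np.argmax(np.abs(kappa))]
    print("L-curve corner lambda:", lam_star)
    ```

    In practice, production implementations locate the corner more robustly (e.g. by spline-fitting the curve), but the residual-norm/solution-norm trade-off shown here is the essence of the L-curve criterion.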

  20. A regularized auxiliary particle filtering approach for system state estimation and battery life prediction

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wang, Wilson; Ma, Fai

    2011-07-01

    System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
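    The first of the two RAPF steps, regularizing the empirical density so that resampled particles are redrawn from a continuous distribution, can be illustrated with a one-dimensional kernel-jitter resampler. The bandwidth rule and the toy weights below are illustrative choices, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def regularized_resample(particles, weights):
        """Resample, then jitter each survivor with a Gaussian kernel.

        Drawing from a kernel-smoothed (continuous) version of the empirical
        posterior diversifies duplicated particles -- the regularization step
        of a regularized (auxiliary) particle filter.
        """
        n = len(particles)
        idx = rng.choice(n, size=n, p=weights)       # multinomial resampling
        survivors = particles[idx]
        # Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel.
        h = 1.06 * survivors.std() * n ** (-1 / 5)
        return survivors + h * rng.standard_normal(n)

    # Toy posterior sharply peaked near zero: plain resampling would leave many
    # exact duplicates; the regularized version keeps all particles distinct.
    particles = np.linspace(-1.0, 1.0, 200)
    weights = np.exp(-50.0 * particles**2)
    weights /= weights.sum()
    new = regularized_resample(particles, weights)
    print("distinct particles after resampling:", np.unique(new).size)
    ```

    This addresses the diversity-loss problem mentioned above; the second RAPF step (smoothing diffuse proposals via rejection/resampling) would build on the same particle set.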

  1. [Investigation of the Safety of and Patient Satisfaction with iEat®, the Support Food for the Recovery of Eating Function in Patients with Carcinomatosis - Related Gastrointestinal Passage Disorder].

    PubMed

    Matsuoka, Mio; Shinoki, Keiji; Makari, Yoichi; Iijima, Shohei

    2015-12-01

    iEat®, a support food for the recovery of eating function, is food that can be easily masticated with little power and has suitable fluidity owing to enzyme processing, while retaining a normal appearance. We provided iEat® to 5 patients with carcinomatosis-related gastrointestinal passage disorder who could take fluid foods and investigated the safety of iEat® and patient satisfaction with the food. We provided regular diets for lunch on the first and 7th day, and provided iEat® from the 2nd to the 6th day. The safety of iEat® was evaluated based on the presence and grade of abdominal pain, diarrhea, sense of abdominal distension, nausea, and vomiting, according to the Common Terminology Criteria for Adverse Events (CTCAE v4.0, JCOG). The patients assessed their satisfaction by using 6 grades of taste, appearance, amount, difficulty of intake, and overall valuation. One patient could not continue the study because of vomiting from overeating of iEat®. In the other patients, iEat® induced approximately the same adverse events as did the regular diets. All of the patients expressed better satisfaction with iEat® than with the regular diets. Although patient management for overeating is necessary, iEat® might provide good quality of life in terms of eating satisfaction to patients with carcinomatosis-related gastrointestinal passage disorder.

  2. Continuous time limits of the utterance selection model

    NASA Astrophysics Data System (ADS)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network by keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  3. Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen

    1997-01-01

    In the second year, we continued to build upon and improve the scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data changes but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique, at present, is limited to use with regular grids, we are pursuing possible algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation.
We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences of the images using the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the image and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.

  4. Effects of a free school breakfast programme on school attendance, achievement, psychosocial function, and nutrition: a stepped wedge cluster randomised trial

    PubMed Central

    2010-01-01

    Background Approximately 55,000 children in New Zealand do not eat breakfast on any given day. Regular breakfast skipping has been associated with poor diets, higher body mass index, and adverse effects on children's behaviour and academic performance. Research suggests that regular breakfast consumption can improve academic performance, nutrition and behaviour. This paper describes the protocol for a stepped wedge cluster randomised trial of a free school breakfast programme. The aim of the trial is to determine the effects of the breakfast intervention on school attendance, achievement, psychosocial function, dietary habits and food security. Methods/Design Sixteen primary schools in the North Island of New Zealand will be randomised in a sequential stepped wedge design to a free before-school breakfast programme consisting of non-sugar coated breakfast cereal, milk products, and/or toast and spreads. Four hundred children aged 5-13 years (approximately 25 per school) will be recruited. Data collection will be undertaken once each school term over the 2010 school year (February to December). The primary trial outcome is school attendance, defined as the proportion of students achieving an attendance rate of 95% or higher. Secondary outcomes are academic achievement (literacy, numeracy, self-reported grades), sense of belonging at school, psychosocial function, dietary habits, and food security. A concurrent process evaluation seeks information on parents', schools' and providers' perspectives of the breakfast programme. Discussion This randomised controlled trial will provide robust evidence of the effects of a school breakfast programme on students' attendance, achievement and nutrition. Furthermore the study provides an excellent example of the feasibility and value of the stepped wedge trial design in evaluating pragmatic public health intervention programmes. 
Trial Registration Number Australian New Zealand Clinical Trials Registry (ANZCTR) - ACTRN12609000854235 PMID:21114862

  5. Pituitary-hormone secretion by thyrotropinomas.

    PubMed

    Roelfsema, Ferdinand; Kok, Simon; Kok, Petra; Pereira, Alberto M; Biermasz, Nienke R; Smit, Jan W; Frolich, Marijke; Keenan, Daniel M; Veldhuis, Johannes D; Romijn, Johannes A

    2009-01-01

    Hormone secretion by somatotropinomas, corticotropinomas and prolactinomas exhibits increased pulse frequency, basal and pulsatile secretion, accompanied by greater disorderliness. Increased concentrations of growth hormone (GH) or prolactin (PRL) are observed in about 30% of thyrotropinomas, leading to acromegaly or disturbed sexual functions beyond thyrotropin (TSH)-induced hyperthyroidism. Regulation of non-TSH pituitary hormones in this context is not well understood. We therefore evaluated TSH, GH and PRL secretion in 6 patients, using up-to-date analytical and mathematical tools, by 24-h blood sampling at 10-min intervals in a clinical research laboratory. The profiles were analyzed with a new deconvolution method, approximate entropy, cross-approximate entropy, cross-correlation and cosinor regression. TSH burst frequency and basal and pulsatile secretion were increased in patients compared with controls. TSH secretion patterns in patients were more irregular, but the diurnal rhythm was preserved at a higher mean with a 2.5 h phase delay. Although only one patient had clinical acromegaly, GH secretion and IGF-I levels were increased in two other patients, and all three had a significant cross-correlation between GH and TSH. PRL secretion was increased in one patient, but all patients had a significant cross-correlation with TSH and showed decreased PRL regularity. Cross-ApEn synchrony between TSH and GH did not differ between patients and controls, but TSH and PRL synchrony was reduced in patients. We conclude that TSH secretion by thyrotropinomas shares many characteristics of other pituitary hormone-secreting adenomas. In addition, abnormalities in GH and PRL secretion exist, ranging from decreased (joint) regularity to overt hypersecretion, although not always clinically obvious, suggesting tumoral transformation of thyrotrope lineage cells.

  6. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  7. Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3d Building Models

    NASA Astrophysics Data System (ADS)

    Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.

    2016-06-01

    Semantic 3D city models are increasingly used as a data source in planning and analysis processes of cities. They represent a virtual copy of reality and serve as a common information base for examining urban questions. A significant advantage of virtual city models is that important indicators such as the volume of buildings, topological relationships between objects, and other geometric as well as thematic information can be derived. Knowledge of the exact building volume is an essential basis for estimating building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings must not contain any topological or geometrical errors. The reality, however, shows that city models very often contain errors such as missing surfaces, duplicated faces, and misclosures. To overcome these errors, Steuer et al. (2015) presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels are inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method at very high resolutions. In this paper we present an extension of the algorithm using an octree approach that limits the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced, while preserving the robustness against geometrical and topological errors.
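    The basic voxel-counting idea (before the octree speed-up) can be sketched as follows: lay a regular grid of voxel centers over the bounding box and count those inside the solid. The sphere used here is a stand-in for the point-in-building test, which in practice would be a robust containment test against the (possibly defective) mesh.

    ```python
    import numpy as np

    def voxel_volume(inside, bbox_min, bbox_max, n=64):
        """Approximate the volume of a solid by counting voxel centers,
        within the bounding box, that satisfy the `inside` predicate."""
        lo = np.asarray(bbox_min, dtype=float)
        hi = np.asarray(bbox_max, dtype=float)
        step = (hi - lo) / n
        # Voxel centers along each axis: lo + (k + 0.5) * step for k = 0..n-1.
        axes = [l + (np.arange(n) + 0.5) * s for l, s in zip(lo, step)]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        return inside(X, Y, Z).sum() * np.prod(step)

    # Stand-in solid: the unit sphere, whose exact volume 4*pi/3 lets us check
    # the approximation quality of the voxel count.
    vol = voxel_volume(lambda x, y, z: x**2 + y**2 + z**2 <= 1.0,
                       (-1, -1, -1), (1, 1, 1), n=64)
    print("approximate volume:", vol, " exact:", 4 * np.pi / 3)
    ```

    The octree refinement in the paper avoids evaluating the predicate on the full n³ grid by subdividing only near surfaces, which is exactly where the uniform grid above wastes most of its tests.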

  8. Seasonal and interannual temperature variations in the tropical stratosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, G.C.

    1994-09-20

    Temperature variations in the tropical lower and middle stratosphere are influenced by at least five distinct driving forces. These are (1) the mechanism of the regular seasonal cycle, (2) the quasi-biennial oscillation (QBO) in zonal winds, (3) the semiannual zonal wind oscillation (SAO) at higher levels, (4) El Niño-Southern Oscillation (ENSO) effects driven by the underlying troposphere, and (5) radiative effects, including volcanic aerosol heating. Radiosonde measurements of temperatures from a number of tropical stations, mostly in the western Pacific region, are used in this paper to examine the characteristic annual and interannual temperature variability in the stratosphere below the 10-hPa pressure level (approximately 31 km) over a time period of 17 years, chosen to eliminate or at least minimize the effect of volcanic eruptions. Both annual and interannual variations are found to show a fairly distinct transition between the lower and the middle stratosphere at about the 35-hPa level (approximately 23 km). The lower stratosphere, below this transition level, is strongly influenced by the ENSO cycle as well as by the QBO. The overall result of the interaction is to modulate the amplitude of the normal stratospheric seasonal cycle and to impose a biennial component on it, so that alternate seasonal cycles are stronger or weaker than normal. Additional modulation by the ENSO cycle occurs at its quasi-period of 3-5 years, giving rise to a complex net behavior. In the middle stratosphere above the transition level, there is no discernible ENSO influence, and departures from the regular semiannual seasonal cycle are dominated by the QBO. Recent ideas on the underlying physical mechanisms governing these variations are discussed, as is the relationship of the radiosonde measurements to recent satellite remote-sensing observations. 37 refs., 8 figs., 1 tab.

  9. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  10. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
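    The LU-reuse argument can be sketched with plain linear algebra. The stand-in below uses implicit Euler rather than the actual CRAM pole expansion (whose coefficients are not reproduced here), but the cost structure is the same: every identical substep solves with the same matrix, so the factorization from the first substep is reused for all the rest.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def decay_substeps(A, n0, t, substeps):
    """Advance dn/dt = A n over time t using `substeps` identical
    implicit Euler steps (a stand-in for one pole of a CRAM-like
    rational approximation).  Because every substep uses the same
    matrix (I - A*dt), its LU decomposition is formed once and reused."""
    dt = t / substeps
    M = np.eye(len(n0)) - dt * np.asarray(A, float)
    lu, piv = lu_factor(M)              # factor once on the first substep ...
    n = np.asarray(n0, float)
    for _ in range(substeps):
        n = lu_solve((lu, piv), n)      # ... and reuse it for every substep
    return n
```

For a two-nuclide decay chain with rates 1.0 and 0.5, the result converges to the analytic Bateman solution as the substep size shrinks.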

  11. Polarimetry of Pinctada fucata nacre indicates myostracal layer interrupts nacre structure.

    PubMed

    Metzler, Rebecca A; Jones, Joshua A; D'Addario, Anthony J; Galvez, Enrique J

    2017-02-01

    The inner layer of many bivalve and gastropod molluscs consists of iridescent nacre, a material that is structured like a brick wall with bricks consisting of crystalline aragonite and mortar of organic molecules. Myostracal layers formed during shell growth at the point of muscle attachment to the shell can be found interspersed within the nacre structure. Little has been done to examine the effect the myostracal layer has on subsequent nacre structure. Here we present data on the structure of the myostracal and nacre layers from a bivalve mollusc, Pinctada fucata. Scanning electron microscope imaging shows the myostracal layer consists of regular crystalline blocks. The nacre before the layer consists of tablets approximately 400 nm thick, while after the myostracal layer the tablets are approximately 500 nm thick. A new technique, imaging polarimetry, indicates that the aragonite crystals within the nacre following the myostracal layer have greater orientation uniformity than before the myostracal layer. The results presented here suggest a possible interaction between the myostracal layer and subsequent shell growth.

  12. Polarimetry of Pinctada fucata nacre indicates myostracal layer interrupts nacre structure

    NASA Astrophysics Data System (ADS)

    Metzler, Rebecca A.; Jones, Joshua A.; D'Addario, Anthony J.; Galvez, Enrique J.

    2017-02-01

    The inner layer of many bivalve and gastropod molluscs consists of iridescent nacre, a material that is structured like a brick wall with bricks consisting of crystalline aragonite and mortar of organic molecules. Myostracal layers formed during shell growth at the point of muscle attachment to the shell can be found interspersed within the nacre structure. Little has been done to examine the effect the myostracal layer has on subsequent nacre structure. Here we present data on the structure of the myostracal and nacre layers from a bivalve mollusc, Pinctada fucata. Scanning electron microscope imaging shows the myostracal layer consists of regular crystalline blocks. The nacre before the layer consists of tablets approximately 400 nm thick, while after the myostracal layer the tablets are approximately 500 nm thick. A new technique, imaging polarimetry, indicates that the aragonite crystals within the nacre following the myostracal layer have greater orientation uniformity than before the myostracal layer. The results presented here suggest a possible interaction between the myostracal layer and subsequent shell growth.

  13. Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation

    NASA Technical Reports Server (NTRS)

    Liandrat, J.; Tchamitchian, PH.

    1990-01-01

    The Burgers equation with a small viscosity term and with initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method makes extensive use of the localization properties of the wavelets in physical and Fourier space. Moreover, the authors take advantage of the fact that the linear operators involved have constant coefficients. Finally, the algorithm can be considered a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it significantly reduces the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and some comparison with more classical numerical algorithms.

  14. Nonlinear Analysis of Auscultation Signals in TCM Using the Combination of Wavelet Packet Transform and Sample Entropy.

    PubMed

    Yan, Jian-Jun; Wang, Yi-Qin; Guo, Rui; Zhou, Jin-Zhuan; Yan, Hai-Xia; Xia, Chun-Ming; Shen, Yong

    2012-01-01

    Auscultation signals are nonstationary in nature. Wavelet packet transform (WPT) has currently become a very useful tool in analyzing nonstationary signals. Sample entropy (SampEn) has recently been proposed to act as a measurement for quantifying regularity and complexity of time series data. WPT and SampEn were combined in this paper to analyze auscultation signals in traditional Chinese medicine (TCM). SampEns for WPT coefficients were computed to quantify the signals from qi- and yin-deficient, as well as healthy, subjects. The complexity of the signal can be evaluated with this scheme in different time-frequency resolutions. First, the voice signals were decomposed into approximated and detailed WPT coefficients. Then, SampEn values for approximated and detailed coefficients were calculated. Finally, SampEn values with significant differences in the three kinds of samples were chosen as the feature parameters for the support vector machine to identify the three types of auscultation signals. The recognition accuracy rates were higher than 90%.
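    A compact SampEn estimator (a simplified sketch, not the paper's implementation; the tolerance r is taken as 0.2 times the standard deviation, a common convention) shows the expected behavior that a regular signal scores lower than noise:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts template
    pairs of length m within Chebyshev distance r and A counts pairs of
    length m + 1.  Self-matches are excluded."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    n = len(x)

    def n_matches(mm):
        # overlapping templates of length mm (same count for m and m+1)
        t = np.array([x[i:i + mm] for i in range(n - m - 1)])
        c = 0
        for i in range(len(t) - 1):
            d = np.abs(t[i + 1:] - t[i]).max(axis=1)
            c += int((d <= r).sum())
        return c

    b, a = n_matches(m), n_matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)
```

A smooth sine wave yields a much lower SampEn than white noise of the same length, which is the property exploited for discriminating the auscultation signal classes.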

  15. Quantum Enhanced Inference in Markov Logic Networks

    NASA Astrophysics Data System (ADS)

    Wittek, Peter; Gogolin, Christian

    2017-04-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
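    The MCMC inference step can be illustrated on the smallest possible Markov network: two binary variables and one pairwise factor. This toy Gibbs sampler (illustrative only; the factor weight J is hypothetical) recovers the exact agreement probability e^J / (e^J + 1):

```python
import math
import random

def gibbs_agreement(J=1.0, sweeps=20000, seed=0):
    """Gibbs-sample a two-variable binary Markov network with a single
    pairwise factor phi(x1, x2) = exp(J * [x1 == x2]) and estimate
    P(x1 == x2).  A toy stand-in for MCMC inference in an MLN-generated
    Markov network."""
    rng = random.Random(seed)
    x = [0, 0]
    agree = 0
    p_same = math.exp(J) / (math.exp(J) + 1.0)   # conditional P(x_i = x_other)
    for _ in range(sweeps):
        for i in (0, 1):
            other = x[1 - i]
            x[i] = other if rng.random() < p_same else 1 - other
        agree += (x[0] == x[1])
    return agree / sweeps
```

The mixing time of exactly this kind of chain is what the quantum protocols discussed above aim to improve.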

  16. Approximate Entropy Values Demonstrate Impaired Neuromotor Control of Spontaneous Leg Activity in Infants with Myelomeningocele

    PubMed Central

    Smith, Beth A.; Teulier, Caroline; Sansom, Jennifer; Stergiou, Nicholas; Ulrich, Beverly D.

    2012-01-01

    Purpose One obstacle to providing early intervention to infants with myelomeningocele (MMC) is the challenge of quantifying impaired neuromotor control of movements early in life. Methods We used the nonlinear analysis tool Approximate Entropy (ApEn) to analyze periodicity and complexity of supine spontaneous lower extremity movements of infants with MMC and typical development (TD) at 1, 3, 6 and 9 months of age. Results Movements of infants with MMC were more regular and repeatable (lower ApEn values) than movements of infants with TD indicating less adaptive and flexible movement patterns. For both groups ApEn values decreased with age, and the movements of infants with MMC were less complex than movements of infants with TD. Further, for infants with MMC, lesion level and age of walking onset correlated negatively with ApEn values. Conclusions Our study begins to demonstrate the feasibility of ApEn to identify impaired neuromotor control in infants with MMC. PMID:21829116

  17. Robust and Efficient Biomolecular Clustering of Tumor Based on ${p}$ -Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the Kmeans clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets including two benchmark data sets and three higher dimensional data sets from the cancer genome atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. Especially, it is experimentally proved that the proposed method is more efficient for processing higher dimensional data with good robustness, stability, and superior time performance.
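    The PSVD optimization itself is not reproduced here, but its p = 2 special case is the classical truncated SVD, for which the Eckart-Young theorem gives the best rank-k approximation in the Frobenius norm. A minimal sketch:

```python
import numpy as np

def low_rank_approx(X, k):
    """Best rank-k approximation of X in the Frobenius norm via
    truncated SVD (the p = 2 special case of a Schatten-p approach)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # keep the k leading singular triplets; scale columns of U by s
    return U[:, :k] * s[:k] @ Vt[:k]
```

The residual equals the root of the sum of squared discarded singular values; PSVD replaces this quadratic loss with an Lp error and a Schatten-p regularizer to gain robustness to outliers.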

  18. Behavior of susceptible-infected-susceptible epidemics on heterogeneous networks with saturation

    NASA Astrophysics Data System (ADS)

    Joo, Jaewook; Lebowitz, Joel L.

    2004-06-01

    We investigate saturation effects in susceptible-infected-susceptible models of the spread of epidemics in heterogeneous populations. The structure of interactions in the population is represented by networks with connectivity distribution P(k), including scale-free (SF) networks with power-law distributions P(k) ~ k^(-γ). Considering cases where the transmission of infection between nodes depends on their connectivity, we introduce a saturation function C(k) which reduces the infection transmission rate λ across an edge going from a node with high connectivity k. A mean-field approximation with the neglect of degree-degree correlation then leads to a finite threshold λc > 0 for SF networks with 2 < γ ⩽ 3. We also find, in this approximation, the fraction of infected individuals among those with degree k for λ close to λc. We investigate via computer simulation the contact process on a heterogeneous regular lattice and compare the results with those obtained from mean-field theory with and without neglect of degree-degree correlations.
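    The degree-based mean-field equations described above can be iterated directly. In this sketch (a standard heterogeneous mean-field integration; degree range, exponent, and time step are illustrative), prevalence vanishes below the threshold λc = &lt;k&gt;/&lt;k²&gt; (for C(k) = 1) and is sustained above it:

```python
import numpy as np

def sis_mean_field(degrees, probs, lam, C=None, steps=5000, dt=0.1):
    """Degree-based mean-field dynamics for SIS on a network with degree
    distribution P(k), optionally damping transmission from high-degree
    nodes with a saturation function C(k) (C = None means C(k) = 1).
    Returns the steady-state prevalence."""
    k = np.asarray(degrees, float)
    p = np.asarray(probs, float)
    c = np.ones_like(k) if C is None else np.asarray(C, float)
    mean_k = (p * k).sum()
    rho = np.full_like(k, 0.5)        # initial infected fraction per degree class
    for _ in range(steps):
        # probability that a random edge points at an infected (saturated) node
        theta = (p * k * c * rho).sum() / mean_k
        rho += dt * (-rho + lam * k * (1.0 - rho) * theta)
    return float((p * rho).sum())     # overall prevalence
```

A saturating C(k) that decays with k suppresses the &lt;k²&gt; divergence, which is how the finite threshold for 2 &lt; γ ⩽ 3 arises in the paper's analysis.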

  19. Nonlinear Analysis of Auscultation Signals in TCM Using the Combination of Wavelet Packet Transform and Sample Entropy

    PubMed Central

    Yan, Jian-Jun; Wang, Yi-Qin; Guo, Rui; Zhou, Jin-Zhuan; Yan, Hai-Xia; Xia, Chun-Ming; Shen, Yong

    2012-01-01

    Auscultation signals are nonstationary in nature. Wavelet packet transform (WPT) has currently become a very useful tool in analyzing nonstationary signals. Sample entropy (SampEn) has recently been proposed to act as a measurement for quantifying regularity and complexity of time series data. WPT and SampEn were combined in this paper to analyze auscultation signals in traditional Chinese medicine (TCM). SampEns for WPT coefficients were computed to quantify the signals from qi- and yin-deficient, as well as healthy, subjects. The complexity of the signal can be evaluated with this scheme in different time-frequency resolutions. First, the voice signals were decomposed into approximated and detailed WPT coefficients. Then, SampEn values for approximated and detailed coefficients were calculated. Finally, SampEn values with significant differences in the three kinds of samples were chosen as the feature parameters for the support vector machine to identify the three types of auscultation signals. The recognition accuracy rates were higher than 90%. PMID:22690242

  20. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  1. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1987-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  2. Free fall and the equivalence principle revisited

    NASA Astrophysics Data System (ADS)

    Pendrill, Ann-Marie

    2017-11-01

    Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton’s law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field. In addition, Newton’s third law of action and reaction causes the Earth to accelerate towards the falling object, bringing in a mass dependence in the time required for an object to reach the ground, in spite of the equivalence between inertial and gravitational mass. These aspects are rarely discussed in textbooks when the motion of everyday objects is discussed. Although these effects are extremely small, it may still be important for teachers to make assumptions and approximations explicit, to be aware of small corrections, and also to be prepared to estimate their size. Even if the corrections are not part of regular teaching, some students may reflect on them, and their questions deserve to be taken seriously.
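    The mass dependence is easy to estimate numerically. Using the relative acceleration G(M + m)/r² in the uniform-field approximation (the drop height and masses below are illustrative), the fractional reduction in fall time is approximately m/(2M):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # radius of the Earth, m

def fall_time(m, h=10.0):
    """Time for an object of mass m to fall height h, including the
    Earth's acceleration toward the object: the relative acceleration
    is G(M + m)/r^2, treated as uniform over the drop height h."""
    a = G * (M_EARTH + m) / R_EARTH ** 2
    return (2.0 * h / a) ** 0.5
```

Even for a 10^15 kg object the correction is below one part in 10^10, which is why it never appears in everyday measurements, yet it is a well-defined consequence of Newton's third law.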

  3. Quantum Enhanced Inference in Markov Logic Networks.

    PubMed

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  4. Quantum Enhanced Inference in Markov Logic Networks

    PubMed Central

    Wittek, Peter; Gogolin, Christian

    2017-01-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning. PMID:28422093

  5. Transient motion of mucus plugs in respiratory airways

    NASA Astrophysics Data System (ADS)

    Zamankhan, Parsa; Hu, Yingying; Helenbrook, Brian; Takayama, Shuichi; Grotberg, James B.

    2011-11-01

    Airway closure occurs in lung diseases such as asthma, cystic fibrosis, or emphysema which have an excess of mucus that forms plugs. The reopening process involves displacement of mucus plugs in the airways by the airflow of respiration. Mucus is a non-Newtonian fluid with a yield stress; therefore its behavior can be approximated by a Bingham fluid constitutive equation. In this work the reopening process is approximated by simulation of a transient Bingham fluid plug in a 2D channel. The governing equations are solved by an Arbitrary Lagrangian Eulerian (ALE) finite element method through an in-house code. The constitutive equation for the Bingham fluid is implemented through a regularization method. The effects of the yield stress on the flow features and wall stresses are discussed with applications to potential injuries to the airway epithelial cells which form the wall. The minimum driving pressure for the initiation of the motion is computed and its value is related to the mucus properties and the plug shape. Supported by HL84370 and HL85156.
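    The abstract does not name the regularization used; one common choice for Bingham constitutive laws is the Papanastasiou form, sketched below (parameter values are hypothetical). It caps the effective viscosity at small shear rates so the yield stress no longer makes the equations singular:

```python
import math

def papanastasiou_stress(gamma_dot, mu=1.0, tau_y=0.5, m=1000.0):
    """Regularized Bingham shear stress (Papanastasiou form): the
    effective viscosity mu + tau_y * (1 - exp(-m*|g|)) / |g| stays
    bounded (by mu + m*tau_y) as the shear rate g approaches zero,
    while recovering tau_y + mu*g at large shear rates."""
    if gamma_dot == 0.0:
        return 0.0
    g = abs(gamma_dot)
    eta_eff = mu + tau_y * (1.0 - math.exp(-m * g)) / g
    return eta_eff * gamma_dot
```

Larger m approaches the ideal Bingham law more closely but stiffens the numerics, the usual trade-off in such regularizations.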

  6. Quantification of fetal heart rate regularity using symbolic dynamics

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.

    2007-03-01

    Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short-term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2^8 = 256 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. 
Using symbolic dynamics to examine the cardiovascular system may thus lead to new insight with respect to fetal development.
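    The binary coding step can be sketched in a few lines (a simplified reading of the procedure; ties are coded as 0 here, which is an assumption): beat-to-beat changes become bits, and an 8-beat sliding window yields one of the 2^8 possible patterns.

```python
def rr_binary_words(rr, word_len=8):
    """Code each beat-to-beat change as 1 (RR interval lengthens) or 0
    (shortens or unchanged), then slide a window over the symbol string
    to collect overlapping word_len-bit patterns as integers in
    [0, 2**word_len)."""
    symbols = [1 if b > a else 0 for a, b in zip(rr, rr[1:])]
    return [
        int("".join(str(s) for s in symbols[i:i + word_len]), 2)
        for i in range(len(symbols) - word_len + 1)
    ]
```

Counting how often each integer occurs gives the pattern-set histograms whose gestational changes are analyzed above.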

  7. Implementation of the updated 2015 Commission for Hospital Hygiene and Infection Prevention (KRINKO) recommendations "Prevention and control of catheter-associated urinary tract infections" in the hospitals in Frankfurt/Main, Germany.

    PubMed

    Heudorf, Ursel; Grünewald, Miriam; Otto, Ulla

    2016-01-01

    The Commission for Hospital Hygiene and Infection Prevention (KRINKO) updated the recommendations for the prevention of catheter-associated urinary tract infections in 2015. This article will describe the implementation of these recommendations in Frankfurt's hospitals in autumn, 2015. In two non-ICU wards of each of Frankfurt's 17 hospitals, inspections were performed using a checklist based on the new KRINKO recommendations. In one large hospital, a total of 5 wards were inspected. The inspections covered the structure and process quality (operating instructions, training, indication, the placement and maintenance of catheters) and the demonstration of the preparation for insertion of a catheter using an empty bed and an imaginary patient, or insertion in a model. Operating instructions were available in all hospital wards; approximately half of the wards regularly performed training sessions. The indications were largely in line with the recommendations of the KRINKO. Alternatives to urinary tract catheters were available and were used more often than the urinary tract catheters themselves (15.9% vs. 13.5%). In accordance with the recommendations, catheters were placed without antibiotic prophylaxis or the instillation of antiseptic or antimicrobial substances or catheter flushing solutions. The demonstration of catheter placement was conscientiously performed. Need for improvement was seen in the daily documentation and the regular verification of continuing indication for a urinary catheter, as well as the omission of regular catheter change. Overall, the recommendations of the KRINKO on the prevention of catheter-associated urinary tract infections were adequately implemented. However, it cannot be ruled out that in situations with time pressure and staff shortage, the handling of urinary tract catheters may be of lower quality than that observed during the inspections, when catheter insertion was done by two nurses. 
Against this background, a sufficient number of qualified staff and regular ward rounds by the hygiene staff appear recommendable.

  8. Lifestyle and health status in a sample of Swedish women four years after pregnancy: a comparison of women with a history of normal pregnancy and women with a history of gestational diabetes mellitus.

    PubMed

    Persson, Margareta; Winkvist, Anna; Mogren, Ingrid

    2015-03-13

    Despite the recommendations to continue the regime of healthy food and physical activity (PA) postpartum for women with previous gestational diabetes mellitus (GDM), the scientific evidence reveals that these recommendations may not be complied with. This study compared lifestyle and health status in women whose pregnancy was complicated by GDM with women who had a normal pregnancy and delivery. The inclusion criteria were women with GDM (ICD-10: O24.4 A and O24.4B) and women with uncomplicated pregnancy and delivery in 2005 (ICD-10: O80.0). A random sample of women fulfilling the criteria (n = 882) were identified from the Swedish Medical Birth Register. A questionnaire was sent by mail to eligible women approximately four years after the pregnancy. A total of 444 women (50.8%) agreed to participate, 111 diagnosed with GDM in their pregnancy and 333 with normal pregnancy/delivery. Women with previous GDM were significantly older, reported higher body weight and less PA before the index pregnancy. No major differences between the groups were noticed regarding lifestyle at the follow-up. Overall, few participants fulfilled the national recommendations of PA and diet. At the follow-up, 19 participants had developed diabetes, all with previous GDM. Women with previous GDM reported significantly poorer self-rated health (SRH), higher level of sick-leave and more often using medication on a regular basis. However, a history of GDM or having overt diabetes mellitus showed no association with poorer SRH in the multivariate analysis. Irregular eating habits, no regular PA, overweight/obesity, and regular use of medication were associated with poorer SRH in all participants. Suboptimal levels of PA, and fruit and vegetable consumption were found in a sample of women with a history of GDM as well as for women with normal pregnancy approximately four years after index pregnancy. 
Women with previous GDM seem to increase their PA after childbirth, but still they perform their PA at lower intensity than women with a history of normal pregnancy. Having GDM at index pregnancy or being diagnosed with overt diabetes mellitus at follow-up did not demonstrate associations with poorer SRH four years after delivery.

  9. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
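To make the object of study concrete, here is a minimal sketch of the lasso-penalized negative log partial likelihood that the oracle inequalities concern. The function and variable names are my own, not from the paper; ties in event times are not treated specially.

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, time, event):
    """Negative log partial likelihood for right-censored survival data.

    Each observed event contributes its linear predictor minus the log of
    the summed risks over subjects still at risk. These summands are
    neither iid nor Lipschitz, which is the difficulty the paper addresses.
    """
    eta = X @ beta
    order = np.argsort(-time)            # sort subjects by decreasing time
    eta, event = eta[order], event[order]
    # cumulative log-sum-exp over the risk set {j : t_j >= t_i}
    log_risk = np.logaddexp.accumulate(eta)
    return -np.sum(event * (eta - log_risk))

def lasso_cox_objective(beta, X, time, event, lam):
    """Lasso-penalized Cox objective: averaged partial likelihood + l1 penalty."""
    n = len(time)
    return neg_log_partial_likelihood(beta, X, time, event) / n \
        + lam * np.abs(beta).sum()
```

At `beta = 0` every subject has unit risk, so the partial likelihood reduces to the log of the product of risk-set sizes, which gives a quick sanity check on the implementation.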

  10. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses. PMID:24516328

  11. Chaos Versus Noisy Periodicity: Alternative Hypotheses for Childhood Epidemics

    NASA Astrophysics Data System (ADS)

    Olsen, L. F.; Schaffer, W. M.

    1990-08-01

    Whereas case rates for some childhood diseases (chickenpox) often vary according to an almost regular annual cycle, the incidence of more efficiently transmitted infections such as measles is more variable. Three hypotheses have been proposed to account for such fluctuations. (i) Irregular dynamics result from random shocks to systems with stable equilibria. (ii) The intrinsic dynamics correspond to biennial cycles that are subject to stochastic forcing. (iii) Aperiodic fluctuations are intrinsic to the epidemiology. Comparison of real world data and epidemiological models suggests that measles epidemics are inherently chaotic. Conversely, the extent to which chickenpox outbreaks approximate a yearly cycle depends inversely on the population size.
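The epidemiological models compared with the data in this line of work are seasonally forced SEIR models. Below is a minimal Euler-integration sketch of such a model; the parameter values are illustrative of those commonly used for measles (annual rates for contacts, births/deaths, latency, and recovery), not the exact values from the paper.

```python
import numpy as np

def forced_seir(years=5.0, dt=1e-4, b0=1800.0, b1=0.28,
                m=0.02, a=35.84, g=100.0):
    """Euler integration of a seasonally forced SEIR model.

    S, E, I are fractions of the population; b0 is the mean contact
    rate, b1 the seasonal amplitude, m the birth/death rate, a the
    latency rate, and g the recovery rate (all per year).
    """
    n = int(years / dt)
    t = 0.0
    # start near the endemic equilibrium of the unforced model
    s, e, i = 0.056, 5.3e-4, 1.9e-4
    infectives = np.empty(n)
    for k in range(n):
        b = b0 * (1.0 + b1 * np.cos(2.0 * np.pi * t))  # seasonal contact rate
        ds = m * (1.0 - s) - b * s * i
        de = b * s * i - (m + a) * e
        di = a * e - (m + g) * i
        s += dt * ds
        e += dt * de
        i += dt * di
        t += dt
        infectives[k] = i
    return infectives
```

Running this with a large seasonal amplitude produces the recurrent, irregular outbreaks characteristic of measles-like dynamics, while small amplitudes give nearly annual cycles.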

  12. Pseudodynamic systems approach based on a quadratic approximation of update equations for diffuse optical tomography.

    PubMed

    Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish

    2011-08-01

We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with pseudotime integration appears to yield not only faster convergence but also muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.

  13. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

The inverse diffusion curve problem focuses on the automatic creation of diffusion curve images that resemble user-provided color fields. This problem is challenging because the 1D curves have a nonlinear and global impact on the resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  14. Study of application of ERTS-A imagery to fracture related mine safety hazards in the coal mining industry

    NASA Technical Reports Server (NTRS)

    Wier, C. E.; Wobber, F. J. (Principal Investigator); Russell, O. R.; Amato, R. V.

    1973-01-01

    The author has identified the following significant results. The 70mm black and white infrared photography acquired in March 1973 at an approximate scale of 1:115,000 permits the identification of areas of mine subsidence not readily evident on other films. This is largely due to the high contrast rendition of water and land by this film and the excessive surface moisture conditions prevalent in the area at the time of photography. Subsided areas consist of shallow depressions which have impounded water. Patterns with a regularity indicative of the room and pillar configuration used in subsurface coal mining are evident.

  15. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method within a moving least-squares framework. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, the leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  16. On the Singular Perturbations for Fractional Differential Equation

    PubMed Central

    Atangana, Abdon

    2014-01-01

The goal of this paper is to examine a possible extension of the singular perturbation differential equation to the concept of the fractional-order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357
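The classical (integer-order) regular perturbation method mentioned above can be illustrated on a toy ODE; this is my own worked example, not one from the paper. For y' = -y + eps*y^2 with y(0) = 1, expanding y = y0 + eps*y1 + O(eps^2) gives y0' + y0 = 0 and y1' + y1 = y0^2, which solve to y0 = exp(-t) and y1 = exp(-t) - exp(-2t), while the exact (Bernoulli) solution is available for comparison.

```python
import math

def perturbation_approx(t, eps):
    """First-order regular perturbation series for y' = -y + eps*y^2, y(0) = 1."""
    y0 = math.exp(-t)                    # O(1) term:   y0' + y0 = 0, y0(0) = 1
    y1 = math.exp(-t) - math.exp(-2 * t)  # O(eps) term: y1' + y1 = y0^2, y1(0) = 0
    return y0 + eps * y1

def exact(t, eps):
    """Exact solution via the Bernoulli substitution u = 1/y."""
    return 1.0 / (eps + (1.0 - eps) * math.exp(t))
```

The truncation error of the first-order series is O(eps^2) uniformly on bounded time intervals, which is the defining feature of a *regular* (as opposed to singular) perturbation problem.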

  17. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861

  18. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
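The variational principle with an added L1 term can be sketched numerically for the simplest case, a free particle in 1D. The loop below is a heuristic projected proximal-gradient stand-in of my own devising (the paper itself uses a splitting orthogonality constraint scheme), so function names, parameters, and the projection step are assumptions for illustration only.

```python
import numpy as np

def compressed_mode_1d(n=128, mu=20.0, lr=0.05, steps=3000, seed=0):
    """Heuristic sketch of one compressed mode for a 1D free particle.

    Approximately minimizes psi^T H psi + (1/mu)*||psi||_1 subject to
    ||psi||_2 = 1, where H is the dimensionless discrete Laplacian
    (tridiagonal -1, 2, -1) with zero boundary conditions. Each step:
    gradient descent on the quadratic energy, soft-thresholding for the
    l1 term, then renormalization onto the unit sphere.
    """
    H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    rng = np.random.default_rng(seed)
    psi = rng.standard_normal(n)
    psi /= np.linalg.norm(psi)
    for _ in range(steps):
        psi = psi - lr * (2.0 * H @ psi)   # descend the quadratic energy
        shrink = lr / mu                    # proximal step for (1/mu)*||.||_1
        psi = np.sign(psi) * np.maximum(np.abs(psi) - shrink, 0.0)
        nrm = np.linalg.norm(psi)
        if nrm > 0.0:
            psi /= nrm                      # keep unit norm
    return psi
```

Smaller `mu` penalizes the l1 norm more strongly and produces more tightly localized modes, mirroring the trade-off between localization and accuracy of the approximated spectrum described in the article.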

  19. Project LASER Volunteer, Marshall Space Flight Center Education Program

    NASA Technical Reports Server (NTRS)

    1999-01-01

Through the Marshall Space Flight Center (MSFC) Education Department, over 400 MSFC employees have volunteered to support educational programs during regular work hours. Project LASER (Learning About Science, Engineering, and Research) provides support for mentor/tutor requests, education tours, classroom presentations, and curriculum development. The program is available to teachers and students living within commuting distance of NASA/MSFC in Huntsville, Alabama (approximately a 50-mile radius). This image depicts students viewing their reflections in an x-ray mirror with Marshall optics engineer Vince Huegele at the Discovery Laboratory, an onsite MSFC facility that provides hands-on workshop sessions and learning activities for teachers and students.

  20. Hybrid near-optimal aeroassisted orbit transfer plane change trajectories

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Duckeman, Gregory A.

    1994-01-01

In this paper, a hybrid methodology is used to determine optimal open-loop controls for the atmospheric portion of the aeroassisted plane change problem. The method is hybrid in the sense that it combines the features of numerical collocation with the analytically tractable portions of the problem that result when the two-point boundary value problem is cast in the form of a regular perturbation problem. Various levels of approximation are introduced by eliminating particular collocation parameters, and their effect on problem complexity and the required number of nodes is discussed. The results include plane changes of 10, 20, and 30 degrees for a given vehicle.
