Sample records for fdh regularization scheme

  1. γ5 in the four-dimensional helicity scheme

    NASA Astrophysics Data System (ADS)

    Gnendiger, C.; Signer, A.

    2018-05-01

    We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
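    For context, the γ5 problem that drives this scheme dependence can be stated in two standard relations; the sketch below follows common conventions (the sign of the trace identity varies between references) and is illustrative rather than a restatement of the paper's own definitions.

    ```latex
    % In strictly four dimensions gamma_5 anticommutes with every gamma^mu
    % and the chiral trace is non-vanishing:
    \{\gamma_5,\gamma^\mu\} = 0, \qquad
    \operatorname{tr}\left(\gamma_5\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma\right)
      = -4i\,\epsilon^{\mu\nu\rho\sigma}.
    % In d = 4 - 2*eps dimensions these two properties are inconsistent.
    % The 't Hooft-Veltman prescription therefore splits the algebra:
    \{\gamma_5,\bar{\gamma}^{\mu}\} = 0 \quad (\text{4-dimensional part}), \qquad
    [\gamma_5,\hat{\gamma}^{\mu}] = 0 \quad (\text{remaining } (d-4)\text{-dimensional part}),
    % at the price of spurious terms that schemes such as fdh/fdf must
    % compensate by finite renormalizations.
    ```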

  2. Selenium-Dependent Biogenesis of Formate Dehydrogenase in Campylobacter jejuni Is Controlled by the fdhTU Accessory Genes

    PubMed Central

    Shaw, Frances L.; Mulholland, Francis; Le Gall, Gwénaëlle; Porcelli, Ida; Hart, Dave J.; Pearson, Bruce M.

    2012-01-01

    The food-borne bacterial pathogen Campylobacter jejuni efficiently utilizes organic acids such as lactate and formate for energy production. Formate is rapidly metabolized via the activity of the multisubunit formate dehydrogenase (FDH) enzyme, of which the FdhA subunit is predicted to contain a selenocysteine (SeC) amino acid. In this study we investigated the function of the cj1500 and cj1501 genes of C. jejuni, demonstrate that they are involved in selenium-controlled production of FDH, and propose the names fdhT and fdhU, respectively. Insertional inactivation of fdhT or fdhU in C. jejuni resulted in the absence of FdhA and FdhB protein expression, reduced fdhABC RNA levels, the absence of FDH enzyme activity, and the lack of formate utilization, as assessed by 1H nuclear magnetic resonance. The fdhABC genes are transcribed from a single promoter located two genes upstream of fdhA, and the decrease in fdhABC RNA levels in the fdhU mutant is mediated at the posttranscriptional level. FDH activity and the ability to utilize formate were restored by genetic complementation with fdhU and by supplementation of the growth media with selenium dioxide. Disruption of SeC synthesis by inactivation of the selA and selB genes also resulted in the absence of FDH activity, which could not be restored by selenium supplementation. Comparative genomic analysis suggests a link between the presence of selA and fdhTU orthologs and the predicted presence of SeC in FdhA. The fdhTU genes encode accessory proteins required for FDH expression and activity in C. jejuni, possibly by contributing to acquisition or utilization of selenium. PMID:22609917

  3. A Sulfurtransferase Is Essential for Activity of Formate Dehydrogenases in Escherichia coli*

    PubMed Central

    Thomé, Rémi; Gust, Alexander; Toci, René; Mendel, Ralf; Bittner, Florian; Magalon, Axel; Walburger, Anne

    2012-01-01

    L-Cysteine desulfurases provide sulfur to several metabolic pathways in the form of persulfides on specific cysteine residues of an acceptor protein for the eventual incorporation of sulfur into an end product. IscS is one of the three Escherichia coli L-cysteine desulfurases. It interacts with FdhD, a protein essential for the activity of formate dehydrogenases (FDHs), which are iron/molybdenum/selenium-containing enzymes. Here, we address the role played by this interaction in the activity of FDH-H (FdhF) in E. coli. The interaction of IscS with FdhD results in a sulfur transfer between IscS and FdhD in the form of persulfides. Substitution of the strictly conserved residue Cys-121 of FdhD impairs both sulfur transfer from IscS to FdhD and FdhF activity. Furthermore, inactive FdhF produced in the absence of FdhD contains both metal centers, albeit with the molybdenum cofactor at a reduced level. Finally, FdhF activity is sulfur-dependent, as it shows reversible sensitivity to cyanide treatment. Conclusively, FdhD is a sulfurtransferase between IscS and FdhF and is thereby essential to yield FDH activity. PMID:22194618

  4. Degradation of the stress-responsive enzyme formate dehydrogenase by the RING-type E3 ligase Keep on Going and the ubiquitin 26S proteasome system.

    PubMed

    McNeilly, Daryl; Schofield, Andrew; Stone, Sophia L

    2018-02-01

    KEG is involved in mediating the proteasome-dependent degradation of FDH, a stress-responsive enzyme. The UPS may function to suppress FDH-mediated stress responses under favorable growth conditions. Formate dehydrogenase (FDH) has been studied in bacteria and yeasts for the purpose of industrial application of NADH co-factor regeneration. In plants, FDH is regarded as a universal stress protein involved in responses to various abiotic and biotic stresses. Here we show that FDH abundance is regulated by the ubiquitin proteasome system (UPS). FDH is ubiquitinated in planta and degraded by the 26S proteasome. Interaction assays identified FDH as a potential substrate for the RING-type ubiquitin ligase Keep on Going (KEG). KEG is capable of attaching ubiquitin to FDH in in vitro assays, and the turnover of FDH was increased when co-expressed with functional KEG in planta, suggesting that KEG contributes to FDH degradation. Consistent with a role in regulating FDH abundance, transgenic plants overexpressing KEG were more sensitive to the inhibitory effects of formate. In addition, FDH is a phosphoprotein, and dephosphorylation was found to increase the stability of FDH in degradation assays. Based on results from this and previous studies, we propose a model in which KEG mediates the ubiquitination and subsequent degradation of phosphorylated FDH and, in response to unfavourable growth conditions, a reduction in FDH phosphorylation levels may prevent turnover, allowing the stabilized FDH to facilitate stress responses.

  5. Efficient CO2-Reducing Activity of NAD-Dependent Formate Dehydrogenase from Thiobacillus sp. KNK65MA for Formate Production from CO2 Gas

    PubMed Central

    Cho, Dae Haeng; Kim, Min Hoo; Lee, Sang Hyun; Jung, Kwang Deog; Kim, Yong Hwan

    2014-01-01

    NAD-dependent formate dehydrogenase (FDH) from Candida boidinii (CbFDH) has been widely used in various CO2-reduction systems, but its practical applications are often impeded by low CO2-reducing activity. In this study, we demonstrated superior CO2-reducing properties of FDH from Thiobacillus sp. KNK65MA (TsFDH) for production of formate from CO2 gas. To discover FDHs that reduce CO2 more efficiently than the reference enzyme CbFDH, five FDHs were selected on the basis of their biochemical properties and their CO2-reducing activities were evaluated. All FDHs including CbFDH showed better CO2-reducing activities at acidic pH than at neutral pH, and four FDHs were more active than CbFDH in the CO2 reduction reaction. In particular, the FDH from Thiobacillus sp. KNK65MA (TsFDH) exhibited the highest CO2-reducing activity and had a dramatic preference for the reduction reaction, i.e., an 84.2-fold higher ratio of CO2 reduction to formate oxidation in catalytic efficiency (kcat/KB) compared to CbFDH. Formate was produced from CO2 gas using TsFDH and CbFDH, and TsFDH showed a 5.8-fold higher formate production rate than CbFDH. A sequence and structural comparison showed that FDHs with relatively high CO2-reducing activities have elongated N- and C-terminal loops. The experimental results demonstrate that TsFDH can be an alternative to CbFDH as a biocatalyst in CO2 reduction systems. PMID:25061666

  6. Molecular and biochemical characterization of two tungsten- and selenium-containing formate dehydrogenases from Eubacterium acidaminophilum that are associated with components of an iron-only hydrogenase.

    PubMed

    Graentzdoerffer, Andrea; Rauh, David; Pich, Andreas; Andreesen, Jan R

    2003-01-01

    Two gene clusters encoding similar formate dehydrogenases (FDH) were identified in Eubacterium acidaminophilum. Each cluster is composed of one gene coding for a catalytic subunit (fdhA-I, fdhA-II) and one for an electron-transferring subunit (fdhB-I, fdhB-II). Both fdhA genes contain a TGA codon for selenocysteine incorporation, and the encoded proteins harbor five putative iron-sulfur clusters in their N-terminal region. Both FdhB subunits resemble the N-terminal region of FdhA at the amino acid level and contain five putative iron-sulfur clusters. Four genes thought to encode the subunits of an iron-only hydrogenase are located upstream of FDH gene cluster I. By sequence comparison, HymA and HymB are predicted to contain one and four iron-sulfur clusters, respectively, the latter protein also containing binding sites for FMN and NAD(P). Thus, HymA and HymB seem to represent electron-transferring subunits, and HymC the putative catalytic subunit, containing motifs for four iron-sulfur clusters and one H-cluster specific for Fe-only hydrogenases. HymD has six predicted transmembrane helices and might be an integral membrane protein. Viologen-dependent FDH activity was purified from serine-grown cells of E. acidaminophilum, and the purified protein complex contained four subunits: FdhA and FdhB, encoded by FDH gene cluster II, and HymA and HymB, identified after determination of their N-terminal sequences. Thus, this complex might represent the simplest type of formate hydrogen lyase. The purified formate dehydrogenase fraction contained iron, tungsten, a pterin cofactor, and zinc, but no molybdenum. FDH-II had a two-fold higher K(m) for formate (0.37 mM) than FDH-I and also catalyzed CO(2) reduction to formate. Reverse transcription (RT)-PCR pointed to increased expression of FDH-II in serine-grown cells, supporting the isolation of this FDH isoform. The fdhA-I gene was expressed as an inactive protein in Escherichia coli. The in-frame UGA codon for selenocysteine incorporation was read in the heterologous system only as a stop codon, although its potential SECIS element exhibited quite high similarity to that of E. coli FDH.

  7. Does the Texas First Dental Home Program Improve Parental Oral Care Knowledge and Practices?

    PubMed

    Thompson, Charmaine L; McCann, Ann L; Schneiderman, Emet D

    2017-03-15

    This study evaluated the effectiveness of the Texas Medicaid First Dental Home (FDH) program by comparing the oral health knowledge, practices, and opinions of participating vs. non-participating parents. A 29-question survey (English and Spanish) was developed and administered to 165 parents of children under three years old (FDH=49, non-FDH=116) who visited qualifying Medicaid clinics in Texas. Mann-Whitney U tests showed that FDH parents scored higher on overall knowledge (P=0.001) and practice scores (P<0.001). FDH parents responded correctly more often than non-FDH parents about the recommended amount of toothpaste for toddlers (P<0.001). More FDH parents knew tap water was a potential source of fluoride (P<0.001). FDH parents also scored marginally higher on knowing when a child should have the first dental visit (P=0.051). More non-FDH parents let their child go to sleep with a bottle, sippy cup, or pacifier (P<0.001). FDH visits are having a positive impact on Texas parents by increasing their oral healthcare knowledge and improving their practices. This is the first step towards improving the oral health of children.
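    The group comparisons in this record rest on the Mann-Whitney U test, which suits ordinal survey scores. A minimal sketch in Python, using hypothetical score arrays rather than the study's data:

    ```python
    # Hypothetical knowledge scores for illustration only; the abstract
    # reports P-values but not the underlying raw data.
    from scipy.stats import mannwhitneyu

    fdh_scores = [22, 25, 24, 27, 23, 26, 25, 24]      # FDH parents
    non_fdh_scores = [18, 20, 21, 19, 22, 17, 20, 19]  # non-FDH parents

    # Two-sided test of whether one group tends to score higher.
    u_stat, p_value = mannwhitneyu(fdh_scores, non_fdh_scores,
                                   alternative="two-sided")
    print(f"U = {u_stat}, P = {p_value:.4f}")
    ```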

  8. Function and Regulation of the Formate Dehydrogenase Genes of the Methanogenic Archaeon Methanococcus maripaludis

    PubMed Central

    Wood, Gwendolyn E.; Haydock, Andrew K.; Leigh, John A.

    2003-01-01

    Methanococcus maripaludis is a mesophilic species of Archaea capable of producing methane from two substrates: hydrogen plus carbon dioxide and formate. To study the latter, we identified the formate dehydrogenase genes of M. maripaludis and found that the genome contains two gene clusters important for formate utilization. Phylogenetic analysis suggested that the two formate dehydrogenase gene sets arose from duplication events within the methanococcal lineage. The first gene cluster encodes homologs of formate dehydrogenase α (FdhA) and β (FdhB) subunits and a putative formate transporter (FdhC) as well as a carbonic anhydrase analog. The second gene cluster encodes only FdhA and FdhB homologs. Mutants lacking either fdhA gene exhibited a partial growth defect on formate, whereas a double mutant was completely unable to grow on formate as a sole methanogenic substrate. Investigation of fdh gene expression revealed that transcription of both gene clusters is controlled by the presence of H2 and not by the presence of formate. PMID:12670979

  9. The Ferredoxin-Like Proteins HydN and YsaA Enhance Redox Dye-Linked Activity of the Formate Dehydrogenase H Component of the Formate Hydrogenlyase Complex.

    PubMed

    Pinske, Constanze

    2018-01-01

    Formate dehydrogenase H (FDH-H) and [NiFe]-hydrogenase 3 (Hyd-3) form the catalytic components of the hydrogen-producing formate hydrogenlyase (FHL) complex, which disproportionates formate to H2 and CO2 during mixed acid fermentation in enterobacteria. FHL comprises minimally seven proteins, and little is understood about how this complex is assembled. Early studies identified a ferredoxin-like protein, HydN, as being involved in FDH-H assembly into the FHL complex. In order to understand how FDH-H and its small subunit HycB, which is also a ferredoxin-like protein, attach to the FHL complex, the possible roles of HydN and its paralogue, YsaA, in FHL complex stability and assembly were investigated. Deletion of the hycB gene reduced redox dye-mediated FDH-H activity to approximately 10%, abolished FHL-dependent H2 production, and reduced Hyd-3 activity. These data are consistent with HycB being an essential electron transfer component of the FHL complex. The FDH-H activity of the hydN and ysaA deletion strains was reduced to 59 and 57% of the parental level, respectively, while the double deletion reduced FDH-H activity to 28% and the triple deletion with hycB to 1%. Remarkably, and in contrast to the hycB deletion, the absence of HydN and YsaA was without significant effect on FHL-dependent H2 production or total Hyd-3 activity; FDH-H protein levels were also unaltered. This is the first description of a phenotype for the E. coli ysaA deletion strain and identifies it as a novel factor required for optimal redox dye-linked FDH-H activity. A ysaA deletion strain could be complemented for FDH-H activity by hydN and ysaA, but the hydN deletion strain could not be complemented. Introduction of these plasmids did not affect H2 production. Bacterial two-hybrid interactions showed that YsaA, HydN, and HycB interact with each other and with the FDH-H protein. Further novel anaerobic cross-interactions among 10 ferredoxin-like proteins in E. coli were also discovered and described. Together, these data indicate that the FDH-H activity measured with the redox dye benzyl viologen is the sum of the FDH-H protein interacting with three independent small subunits, and suggest that FDH-H can associate with different redox-protein complexes in the anaerobic cell to supply electrons from formate oxidation.

  10. Neonatal hydrocephalus is a result of a block in folate handling and metabolism involving 10-formyltetrahydrofolate dehydrogenase.

    PubMed

    Naz, Naila; Jimenez, Alicia Requena; Sanjuan-Vilaplana, Anna; Gurney, Megan; Miyan, Jaleel

    2016-08-01

    Folate is vital in a range of biological processes, and folate deficiency is associated with neurodevelopmental disorders such as neural tube defects and hydrocephalus (HC). 10-formyltetrahydrofolate dehydrogenase (FDH) is a key regulator of folate availability and of metabolic interconversion for the supply of 1-carbon groups. In previous studies, we found a deficiency of FDH in CSF associated with the developmental deficit in congenital and neonatal HC. In this study, we therefore aimed to investigate the role of FDH in folate transport and metabolism during brain development in the congenital hydrocephalic Texas (H-Tx) rat and in normal (Sprague-Dawley) rats. We show that at embryonic (E) stages E18 and E20, FDH-positive cells and/or vesicles derived from the cortex can bind methyl-folate similarly to folate receptor alpha, the main folate transporter. Hydrocephalic rats expressed diminished nuclear FDH in both liver and brain at all postnatal (P) ages tested (P5, P15, and P20), together with a parallel increase in hepatic nuclear methyl-folate at P5 and cerebral methyl-folate at P15 and P20. A similar relationship was found between FDH and 5-methylcytosine, the main marker for DNA methylation. The data indicate that FDH binds and transports methyl-folate in the brain and that decreased liver and brain nuclear expression of FDH is linked with decreased DNA methylation, which could be a key factor in the developmental deficits associated with congenital and neonatal HC.

  11. A Formate Dehydrogenase Confers Tolerance to Aluminum and Low pH

    PubMed Central

    Gong, Yu Long; Fan, Wei; Xu, Jia Meng; Liu, Yu; Cao, Meng Jie; Wang, Ming-Hu

    2016-01-01

    Formate dehydrogenase (FDH) is involved in various abiotic stress responses in higher plants. Here, we investigated the role of the rice bean (Vigna umbellata) VuFDH in aluminum (Al) and low-pH (H+) tolerance. Screening of various potential substrates of the VuFDH protein demonstrated that it functions as a formate dehydrogenase. Quantitative reverse transcription-PCR and histochemical analysis showed that expression of VuFDH is induced in rice bean root tips by Al or H+ stress. Fluorescence microscopy of VuFDH-GFP in transgenic Arabidopsis plants indicated that VuFDH is localized in the mitochondria. Accumulation of formate is induced by Al and H+ stress in rice bean root tips, and exogenous application of formate increases internal formate content, resulting in inhibition of root elongation and induction of VuFDH expression, suggesting that formate accumulation is involved in both H+- and Al-induced root growth inhibition. Over-expression of VuFDH in tobacco (Nicotiana tabacum) results in decreased sensitivity to Al and H+ stress, owing to lower formate production in the transgenic tobacco lines under these stresses. Moreover, NtMATE and NtALS3 expression showed no changes versus wild type in these over-expression lines, suggesting that known Al-resistance mechanisms are not involved here. Thus, the increased Al tolerance of the VuFDH over-expression lines is likely attributable to their decreased Al-induced formate production. Taken together, our findings advance understanding of Al toxicity mechanisms in higher plants and suggest a possible new route toward improving plant performance in acidic soils, where Al toxicity and H+ stress coexist. PMID:27021188

  12. Familial Dysalbuminemic Hyperthyroxinemia in a Japanese Man Caused by a Point Albumin Gene Mutation (R218P)

    PubMed Central

    Osaki, Yoshinori; Hayashi, Yoshitaka; Nakagawa, Yoshinori; Yoshida, Katsumi; Ozaki, Hiroshi; Fukazawa, Hiroshi

    2016-01-01

    Familial dysalbuminemic hyperthyroxinemia (FDH) is a familial autosomal dominant disease caused by mutation in the albumin gene that produces a condition of euthyroid hyperthyroxinemia. In patients with FDH, serum-free thyroxine (FT4) and free triiodothyronine (FT3) concentrations as measured by several commercial methods are often falsely increased with normal thyrotropin (TSH). Therefore, several diagnostic steps are needed to differentiate TSH-secreting tumor or generalized resistance to thyroid hormone from FDH. We herein report a case of a Japanese man born in Aomori prefecture, with FDH caused by a mutant albumin gene (R218P). We found that a large number of FDH patients reported in Japan to date might have been born in Aomori prefecture and have shown the R218P mutation. In conclusion, FDH needs to be considered among the differential diagnoses in Japanese patients born in Aomori prefecture and showing normal TSH levels and elevated FT4 levels. PMID:27081329

  13. Activation of p21-Dependent G1/G2 Arrest in the Absence of DNA Damage as an Antiapoptotic Response to Metabolic Stress

    PubMed Central

    Hoeferlin, L. Alexis; Oleinik, Natalia V.; Krupenko, Natalia I.

    2011-01-01

    The folate enzyme, FDH (10-formyltetrahydrofolate dehydrogenase, ALDH1L1), a metabolic regulator of proliferation, activates p53-dependent G1 arrest and apoptosis in A549 cells. In the present study, we have demonstrated that FDH-induced apoptosis is abrogated upon siRNA knockdown of the p53 downstream target PUMA. Conversely, siRNA knockdown of p21 eliminated FDH-dependent G1 arrest and resulted in an early apoptosis onset. The acceleration of FDH-dependent apoptosis was even more profound in another cell line, HCT116, in which the p21 gene was silenced through homologous recombination (p21−/− cells). In contrast to A549 cells, FDH caused G2 instead of G1 arrest in HCT116 p21+/+ cells; such an arrest was not seen in p21-deficient (HCT116 p21−/−) cells. In agreement with the cell cycle regulatory function of p21, its strong accumulation in nuclei was seen upon FDH expression. Interestingly, our study did not reveal DNA damage upon FDH elevation in either cell line, as judged by comet assay and the evaluation of histone H2AX phosphorylation. In both A549 and HCT116 cell lines, FDH induced a strong decrease in the intracellular ATP pool (2-fold and 30-fold, respectively), an indication of a decrease in de novo purine biosynthesis as we previously reported. The underlying mechanism for the drop in ATP was the strong decrease in intracellular 10-formyltetrahydrofolate, a substrate in two reactions of the de novo purine pathway. Overall, we have demonstrated that p21 can activate G1 or G2 arrest in the absence of DNA damage as a response to metabolite deprivation. In the case of FDH-related metabolic alterations, this response delays apoptosis but is not sufficient to prevent cell death. PMID:22593801

  14. Formate production through carbon dioxide hydrogenation with recombinant whole cell biocatalysts.

    PubMed

    Alissandratos, Apostolos; Kim, Hye-Kyung; Easton, Christopher J

    2014-07-01

    The biological conversion of CO2 and H2 into formate offers a sustainable route to a valuable commodity chemical through CO2 fixation, and a chemical form of hydrogen fuel storage. Here we report the first example of CO2 hydrogenation utilising engineered whole-cell biocatalysts. Escherichia coli JM109(DE3) cells transformed for overexpression of either native formate dehydrogenase (FDH), the FDH from Clostridium carboxidivorans, or genes from Pyrococcus furiosus and Methanobacterium thermoformicicum predicted to express FDH based on their similarity to known FDH genes were all able to produce levels of formate well above background when presented with H2 and CO2, the latter in the form of bicarbonate. In the case of the FDH from P. furiosus the yield was highest, reaching more than 1 g L(-1) h(-1) when a hydrogen-sparging reactor design was used.
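    To put the reported mass-based yield in molar terms, a quick conversion (the molar mass below is the standard value for the formate anion, not a figure from the paper):

    ```python
    # Convert the reported ~1 g L^-1 h^-1 formate production rate to mM/h.
    rate_g_per_l_h = 1.0    # g L^-1 h^-1 ("more than 1", per the abstract)
    mw_formate = 45.02      # g/mol, standard molar mass of HCOO-

    rate_mm_per_h = rate_g_per_l_h / mw_formate * 1000.0
    print(f"~{rate_mm_per_h:.1f} mM formate per hour")  # ~22.2 mM/h
    ```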

  15. Chaetomium thermophilum formate dehydrogenase has high activity in the reduction of hydrogen carbonate (HCO3-) to formate.

    PubMed

    Aslan, Aşkın Sevinç; Valjakka, Jarkko; Ruupunen, Jouni; Yildirim, Deniz; Turner, Nicholas J; Turunen, Ossi; Binay, Barış

    2017-01-01

    While formate dehydrogenases (FDHs) have been used for cofactor recycling in chemoenzymatic synthesis, the ability of FDH to reduce CO2 could also be utilized in the conversion of CO2 to useful products via formate (HCOO-). In this study, we investigated the reduction of CO2 in the form of hydrogen carbonate (HCO3-) to formate by FDHs from Candida methylica (CmFDH) and Chaetomium thermophilum (CtFDH) in a NADH-dependent reaction. The catalytic performance with HCO3- as a substrate was evaluated by measuring the kinetic rates and conducting productivity assays. CtFDH showed a higher efficiency in converting HCO3- to formate than CmFDH, whereas CmFDH was better in the oxidation of formate. The pH optimum of the reduction was at pH 7-8; however, high concentrations of HCO3- reduced the reaction rate. CtFDH was modeled in the presence of HCO3-, showing that the substrate fits into the active site, and the active-site setting for hydride transfer in CO2 reduction was modeled. The hydride donated by NADH would form a favorable contact with the carbon atom of HCO3-, resulting in a surplus of electrons within the molecule. This would cause the complex formed by hydrogen carbonate and the hydride to break into formate and hydroxide ions.

  16. Stability and reactivity of liposome-encapsulated formate dehydrogenase and cofactor system in carbon dioxide gas-liquid flow.

    PubMed

    Yoshimoto, Makoto; Yamashita, Takayuki; Yamashiro, Takuya

    2010-01-01

    Formate dehydrogenase from Candida boidinii (CbFDH) is potentially applicable in reduction of CO(2) through oxidation of the cofactor NADH into NAD(+). For this, the CbFDH activity needs to be maintained under practical reaction conditions, such as CO(2) gas-liquid flow. In this work, CbFDH and cofactor were encapsulated in liposomes, and the liposomal enzymes were characterized in an external loop airlift bubble column. The airlift was operated at 45 degrees C with N(2) or CO(2) as the gas phase at a superficial gas velocity U(G) of 2.0 or 3.0 cm/s. The activities of the liposomal CbFDH/cofactor systems were highly stable in the airlift regardless of the type of gas phase, because the liposome membranes prevented interactions of the encapsulated enzyme and cofactor molecules with the gas-liquid interface of bubbles. On the other hand, free CbFDH was deactivated in the airlift, especially at high U(G) with CO(2) bubbles. The liposomal CbFDH/NADH could catalyze reduction of CO(2) in the airlift, giving a fractional oxidation of the liposomal NADH of 23% at a reaction time of 360 min. The cofactor was kept inside the liposomes during the reaction with less than 10% leakage. All of the results demonstrate that liposomal CbFDH/NADH functions as a stable catalyst for reduction of CO(2) in the airlift.

  17. Bioelectrochemical conversion of CO2 to value added product formate using engineered Methylobacterium extorquens.

    PubMed

    Jang, Jungho; Jeon, Byoung Wook; Kim, Yong Hwan

    2018-05-08

    The conversion of carbon dioxide to formate is a fundamental step for building C1 chemical platforms. Methylobacterium extorquens AM1 has been reported to show remarkable activity in converting carbon dioxide into formate. In this study, formate dehydrogenase 1 from M. extorquens AM1 (MeFDH1) was verified as the key enzyme responsible for the conversion of carbon dioxide to formate. Using a 2% methanol concentration for induction, cells harboring the recombinant MeFDH1-expressing plasmid produced the highest concentration of formate (26.6 mM within 21 hours) in an electrochemical reactor. Sodium tungstate at 60 μM in the culture medium was optimal for the expression of recombinant MeFDH1 and production of formate (25.7 mM within 21 hours). The recombinant MeFDH1-expressing cells showed a maximum formate productivity of 2.53 mM/g-wet cell/hr, which was 2.5 times greater than that of the wild type. Thus, M. extorquens AM1 was successfully engineered to elevate the production of formate from CO2 by expressing MeFDH1 as a recombinant enzyme, after elucidation of the key enzyme responsible for the conversion of CO2 to formate.

  18. Solar to Liquid Fuels Production: Light-Driven Reduction of Carbon Dioxide to Formic Acid

    DTIC Science & Technology

    2014-03-29

    molecular wire. The X-ray crystal structure for the E. coli FDH enzyme shows that a [4Fe-4S] cluster is located near the surface of the protein. The...CO2 to formic acid. E. coli FDH, encoded by fdhF, was chosen for this work because it is a single-subunit enzyme that has been studied in detail, and...mutagenesis was employed to change surface-located Cys11 to Gly to open a coordination site. The proteins were overproduced in E. coli and purified

  19. Spectrum of PORCN mutations in Focal Dermal Hypoplasia

    USDA-ARS?s Scientific Manuscript database

    Focal Dermal Hypoplasia (FDH), also known as Goltz syndrome (OMIM 305600), is a genetic disorder that affects multiple organ systems early in development. Features of FDH include skin abnormalities (hypoplasia, atrophy, linear pigmentation, and herniation of fat through dermal defects); papillomas...

  20. Effects of zinc on the production of alcohol by Clostridium carboxidivorans P7 using model syngas.

    PubMed

    Li, Demao; Meng, Chunxiao; Wu, Guanxun; Xie, Bintao; Han, Yifan; Guo, Yaqiong; Song, Chunhui; Gao, Zhengquan; Huang, Zhiyong

    2018-01-01

    Renewable energy, including biofuels such as ethanol and butanol produced from syngas by Clostridium carboxidivorans P7, has been drawing extensive attention due to fossil energy depletion and global eco-environmental issues. The effects of zinc on the growth and metabolites of C. carboxidivorans P7 were investigated with model syngas as the carbon source. In the medium with 280 μM Zn2+, the cell concentration was doubled, and the ethanol, butanol, and hexanol contents increased 3.02-fold, 7.60-fold, and 44.00-fold, respectively, compared with the control medium (7 μM Zn2+). Expression studies of the genes involved in carbon fixation and in acid and alcohol production in the medium with 280 μM Zn2+ indicated that fdhII was up-regulated on the second day; acsA, fdhII, bdh35, and bdh50 were up-regulated on the third day; and bdh35, acsB, fdhI, fdhIII, fdhIV, buk, bdh10, bdh35, bdh40, and bdh50 were up-regulated on the fourth day. The results indicate that the increased Zn2+ content increased alcohol production through increased expression of carbon fixation and alcohol dehydrogenase genes.

  1. Tungsten and Molybdenum Regulation of Formate Dehydrogenase Expression in Desulfovibrio vulgaris Hildenborough

    PubMed Central

    da Silva, Sofia M.; Pimentel, Catarina; Valente, Filipa M. A.; Rodrigues-Pousada, Claudina; Pereira, Inês A. C.

    2011-01-01

    Formate is an important energy substrate for sulfate-reducing bacteria in natural environments, and both molybdenum- and tungsten-containing formate dehydrogenases have been reported in these organisms. In this work, we studied the effect of both metals on the levels of the three formate dehydrogenases encoded in the genome of Desulfovibrio vulgaris Hildenborough, with lactate, formate, or hydrogen as electron donors. Using Western blot analysis, quantitative real-time PCR, activity-stained gels, and protein purification, we show that a metal-dependent regulatory mechanism is present, resulting in the dimeric FdhAB protein being the main enzyme present in cells grown in the presence of tungsten and the trimeric FdhABC3 protein being the main enzyme in cells grown in the presence of molybdenum. The putatively membrane-associated formate dehydrogenase is detected only at low levels after growth with tungsten. Purification of the three enzymes and metal analysis shows that FdhABC3 specifically incorporates Mo, whereas FdhAB can incorporate both metals. The FdhAB enzyme has a much higher catalytic efficiency than the other two. Since sulfate reducers are likely to experience high sulfide concentrations that may result in low Mo bioavailability, the ability to use W is likely to constitute a selective advantage. PMID:21498650

  2. Localizing transcripts to single cells suggests an important role of uncultured deltaproteobacteria in the termite gut hydrogen economy.

    PubMed

    Rosenthal, Adam Z; Zhang, Xinning; Lucey, Kaitlyn S; Ottesen, Elizabeth A; Trivedi, Vikas; Choi, Harry M T; Pierce, Niles A; Leadbetter, Jared R

    2013-10-01

    Identifying microbes responsible for particular environmental functions is challenging, given that most environments contain an uncultivated microbial diversity. Here we combined approaches to identify bacteria expressing genes relevant to catabolite flow and to locate these genes within their environment, in this case the gut of a "lower," wood-feeding termite. First, environmental transcriptomics revealed that 2 of the 23 formate dehydrogenase (FDH) genes known in the system accounted for slightly more than one-half of environmental transcripts. FDH is an essential enzyme of H2 metabolism that is ultimately important for the assimilation of lignocellulose-derived energy by the insect. Second, single-cell PCR analysis revealed that two different bacterial types expressed these two transcripts. The most commonly transcribed FDH in situ is encoded by a previously unappreciated deltaproteobacterium, whereas the other FDH is spirochetal. Third, PCR analysis of fractionated gut contents demonstrated that these bacteria reside in different spatial niches; the spirochete is free-swimming, whereas the deltaproteobacterium associates with particulates. Fourth, the deltaproteobacteria expressing FDH were localized to protozoa via hybridization chain reaction-FISH, an approach for multiplexed, spatial mapping of mRNA and rRNA targets. These results underscore the importance of making direct vs. inference-based gene-species associations, and have implications in higher termites, the most successful termite lineage, in which protozoa have been lost from the gut community. Contrary to expectations, in higher termites, FDH genes related to those from the protozoan symbiont dominate, whereas most others were absent, suggesting that a successful gene variant can persist and flourish after a gut perturbation alters a major environmental niche.

  3. Reduction of Carbon Dioxide by a Molybdenum-Containing Formate Dehydrogenase: A Kinetic and Mechanistic Study.

    PubMed

    Maia, Luisa B; Fonseca, Luis; Moura, Isabel; Moura, José J G

    2016-07-20

    Carbon dioxide accumulation is a major concern for ecosystems, but its abundance and low cost make it an interesting source for the production of chemical feedstocks and fuels. However, the thermodynamic and kinetic stability of the carbon dioxide molecule makes its activation a challenging task. Studying the chemistry used by nature to functionalize carbon dioxide should be helpful for the development of new efficient (bio)catalysts for atmospheric carbon dioxide utilization. In this work, the ability of Desulfovibrio desulfuricans formate dehydrogenase (Dd FDH) to reduce carbon dioxide was kinetically and mechanistically characterized. The Dd FDH is suggested to be purified in an inactive form that has to be activated through a reduction-dependent mechanism. A kinetic model of a hysteretic enzyme is proposed to interpret and predict the progress curves of the Dd FDH-catalyzed reactions (initial lag phase and subsequent faster phase). Once activated, Dd FDH is able to efficiently catalyze not only formate oxidation (kcat of 543 s(-1), Km of 57.1 μM), but also carbon dioxide reduction (kcat of 46.6 s(-1), Km of 15.7 μM), in an overall reaction that is thermodynamically and kinetically reversible. Notably, both Dd FDH-catalyzed formate oxidation and carbon dioxide reduction are completely inactivated by cyanide. Current FDH reaction mechanistic proposals are discussed and a different mechanism is suggested here: formate oxidation and carbon dioxide reduction are proposed to proceed through hydride transfer, with the sulfo groups of the oxidized and reduced molybdenum center, Mo(6+)═S and Mo(4+)-SH, suggested to be the direct hydride acceptor and donor, respectively.
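    The reported constants allow a direct comparison of the two reaction directions. The sketch below recomputes the catalytic efficiencies kcat/Km from the values quoted in the abstract, assuming simple Michaelis-Menten behavior:

    ```python
    # Catalytic efficiencies (kcat / Km) from the constants in the abstract.
    kcat_ox, km_ox = 543.0, 57.1e-6    # formate oxidation: s^-1, M
    kcat_red, km_red = 46.6, 15.7e-6   # CO2 reduction:     s^-1, M

    print(f"oxidation : {kcat_ox / km_ox:.2e} M^-1 s^-1")    # ~9.5e6
    print(f"reduction : {kcat_red / km_red:.2e} M^-1 s^-1")  # ~3.0e6

    def rate(kcat, km, e0, s):
        """Michaelis-Menten rate v = kcat * [E] * [S] / (Km + [S])."""
        return kcat * e0 * s / (km + s)
    ```

    On these numbers the enzyme is only about three times more efficient in the oxidation direction than in CO2 reduction, consistent with the abstract's description of the overall reaction as thermodynamically and kinetically reversible.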

  4. Physiological and biochemical characterization of the soluble formate dehydrogenase, a molybdoenzyme from Alcaligenes eutrophus.

    PubMed Central

    Friedebold, J; Bowien, B

    1993-01-01

    Organoautotrophic growth of Alcaligenes eutrophus on formate was dependent on the presence of molybdate in the medium. Supplementation of the medium with tungstate led to growth cessation. Corresponding effects of these anions were observed for the activity of the soluble, NAD(+)-linked formate dehydrogenase (S-FDH; EC 1.2.1.2) of the organism. Lack of molybdate or presence of tungstate resulted in an almost complete loss of S-FDH activity. S-FDH was purified to near homogeneity in the presence of nitrate as a stabilizing agent. The native enzyme exhibited an M(r) of 197,000 and a heterotetrameric quaternary structure with nonidentical subunits of M(r) 110,000 (alpha), 57,000 (beta), 19,400 (gamma), and 11,600 (delta). It contained 0.64 g-atom of molybdenum, 25 g-atom of nonheme iron, 20 g-atom of acid-labile sulfur, and 0.9 mol of flavin mononucleotide per mol. The fluorescence spectrum of iodine-oxidized S-FDH was nearly identical to the form A spectrum of milk xanthine oxidase, proving the presence of a pterin cofactor. The molybdenum-complexing cofactor was identified as molybdopterin guanine dinucleotide in an amount of 0.71 mol/mol of S-FDH. Apparent Km values of 3.3 mM for formate and 0.09 mM for NAD+ were determined. The enzyme coupled the oxidation of formate to a number of artificial electron acceptors and was strongly inactivated by formate in the absence of NAD+. It was inhibited by cyanide, azide, nitrate, and Hg2+ ions. Thus, the enzyme belongs to a new group of complex molybdo-flavo Fe-S FDHs that so far has been detected in only one other aerobic bacterium. PMID:8335630

  5. A-Type Carrier Protein ErpA Is Essential for Formation of an Active Formate-Nitrate Respiratory Pathway in Escherichia coli K-12

    PubMed Central

    Pinske, Constanze

    2012-01-01

    A-type carrier (ATC) proteins of the Isc (iron-sulfur cluster) and Suf (sulfur mobilization) iron-sulfur ([Fe-S]) cluster biogenesis pathways are proposed to traffic preformed [Fe-S] clusters to apoprotein targets. In this study, we analyzed the roles of the ATC proteins ErpA, IscA, and SufA in the maturation of the nitrate-inducible, multisubunit anaerobic respiratory enzymes formate dehydrogenase N (Fdh-N) and nitrate reductase (Nar). Mutants lacking SufA had enhanced activities of both enzymes. While both Fdh-N and Nar activities were strongly reduced in an iscA mutant, both enzymes were inactive in an erpA mutant and in a mutant unable to synthesize the [Fe-S] cluster scaffold protein IscU. It could be shown for both Fdh-N and Nar that loss of enzyme activity correlated with absence of the [Fe-S] cluster-containing small subunit. Moreover, a slowly migrating form of the catalytic subunit FdnG of Fdh-N was observed, consistent with impeded twin arginine translocation (TAT)-dependent transport. The highly related Fdh-O enzyme was also inactive in the erpA mutant. Although the Nar enzyme has its catalytic subunit NarG localized in the cytoplasm, it also exhibited aberrant migration in an erpA iscA mutant, suggesting that these modular enzymes lack catalytic integrity due to impaired cofactor biosynthesis. Cross-complementation experiments demonstrated that multicopy IscA could partially compensate for lack of ErpA with respect to Fdh-N activity but not Nar activity. These findings suggest that ErpA and IscA have overlapping roles in assembly of these anaerobic respiratory enzymes but demonstrate that ErpA is essential for the production of active enzymes. PMID:22081393

  6. Understanding and Improving the Activity of Flavin Dependent Halogenases via Random and Targeted Mutagenesis

    PubMed Central

    Andorfer, Mary C.

    2018-01-01

    Flavin dependent halogenases (FDHs) catalyze the halogenation of organic substrates by coordinating reactions of reduced flavin, molecular oxygen, and chloride. Targeted and random mutagenesis of these enzymes has been used to both understand and alter their reactivity. These studies have led to insights into residues essential for catalysis and FDH variants with improved stability, expanded substrate scope, and altered site selectivity. Mutations throughout FDH structures have contributed to all of these advances. More recent studies have sought to rationalize the impact of these mutations on FDH function and to identify new FDHs to deepen our understanding of this enzyme class and to expand their utility for biocatalytic applications. PMID:29589959

  7. Discovery of an acidic, thermostable and highly NADP+ dependent formate dehydrogenase from Lactobacillus buchneri NRRL B-30929

    USDA-ARS?s Scientific Manuscript database

    Objectives: To identify a robust NADP+ dependent formate dehydrogenase from Lactobacillus buchneri NRRL B-30929 (LbFDH) with unique biochemical properties. Results: A new NADP+ dependent formate dehydrogenase gene (fdh) was cloned from genomic DNA of L. buchneri NRRL B-30929. The recombinant constru...

  8. Growth- and substrate-dependent transcription of formate dehydrogenase and hydrogenase coding genes in Syntrophobacter fumaroxidans and Methanospirillum hungatei.

    PubMed

    Worm, Petra; Stams, Alfons J M; Cheng, Xu; Plugge, Caroline M

    2011-01-01

    Transcription of genes coding for formate dehydrogenases (fdh genes) and hydrogenases (hyd genes) in Syntrophobacter fumaroxidans and Methanospirillum hungatei was studied following growth under different conditions. Under all conditions tested, all fdh and hyd genes were transcribed. However, transcription levels of the individual genes varied depending on the substrate and growth conditions. Our results strongly suggest that in syntrophically grown S. fumaroxidans cells, the [FeFe]-hydrogenase (encoded by Sfum_844-46), FDH1 (Sfum_2703-06) and Hox (Sfum_2713-16) may confurcate electrons from NADH and ferredoxin to protons and carbon dioxide to produce hydrogen and formate, respectively. Based on bioinformatic analysis, a membrane-integrated energy-converting [NiFe]-hydrogenase (Mhun_1741-46) of M. hungatei might be involved in the energy-dependent reduction of CO(2) to formylmethanofuran. The best candidates for F(420)-dependent N(5),N(10)-methenyl-H(4)MPT and N(5),N(10)-methylene-H(4)MPT reduction are the cytoplasmic [NiFe]-hydrogenase and FDH1. 16S rRNA ratios indicate that in one of the triplicate co-cultures of S. fumaroxidans and M. hungatei, less energy was available for S. fumaroxidans. This led to enhanced transcription of genes coding for the Rnf-complex (Sfum_2694-99) and of several fdh and hyd genes. The Rnf-complex probably reoxidized NADH with ferredoxin reduction, followed by ferredoxin oxidation by the induced formate dehydrogenases and hydrogenases.

  9. Focal dermal hypoplasia (Goltz-Gorlin syndrome): a new case with a novel variant in the PORCN gene (c.1250T>C:p.F417S) and unusual spinal anomaly.

    PubMed

    Garavelli, Livia; Simonte, Graziella; Rosato, Simonetta; Wischmeijer, Anita; Albertini, Enrico; Guareschi, Elisa; Longo, Caterina; Albertini, Giuseppe; Gelmini, Chiara; Greco, Chiara; Errico, Stefania; Savino, Gustavo; Pavanello, Marco; Happle, Rudolf; Unger, Sheila; Superti-Furga, Andrea; Grzeschik, Karl-Heinz

    2013-07-01

    Focal dermal hypoplasia (FDH; Goltz-Gorlin syndrome; OMIM 305600) is a disorder that features involvement of the skin, skeletal system, and eyes. It is caused by loss-of-function mutations in the PORCN gene. We report a young girl with FDH, microphthalmos associated with colobomatous orbital cyst, dural ectasia and cystic malformation of the spinal cord, and a de novo variant in PORCN. This association has not been previously reported, and based on these observations the phenotypic spectrum of FDH might be broader than previously appreciated. It would be prudent to alter the suggested surveillance for this rare disorder.

  10. Improving the direct electron transfer in monolithic bioelectrodes prepared by immobilization of FDH enzyme on carbon-coated anodic aluminum oxide films

    NASA Astrophysics Data System (ADS)

    Castro-Muñiz, Alberto; Hoshikawa, Yasuto; Komiyama, Hiroshi; Nakayama, Wataru; Itoh, Tetsuji; Kyotani, Takashi

    2016-02-01

    The present work reports the preparation of binderless carbon-coated porous films and the study of their performance as monolithic bioanodes. The films were prepared by coating anodic aluminum oxide (AAO) films with a thin layer of nitrogen-doped carbon by chemical vapor deposition; the resulting carbon-coated AAO (CAAO) films have cylindrical straight pores with controllable diameter and length. These monolithic films were used directly as bioelectrodes by loading them with D-fructose dehydrogenase (FDH), an oxidoreductase enzyme that catalyzes the oxidation of D-fructose to 5-keto-D-fructose. Immobilization of the enzyme was carried out by physical adsorption in the liquid phase and by an electrostatic attraction method. The latter method takes advantage of the fact that FDH is negatively charged during the catalytic oxidation of fructose; the immobilization was therefore performed under application of a positive voltage to the CAAO film in an FDH-fructose solution in McIlvaine buffer (pH 5) at 25 °C. As a result, the FDH-modified electrodes prepared with the latter method show a much better electrochemical response than those prepared with the conventional physical adsorption method. Owing to the singular porous structure of the monolithic films, which consists of an array of straight and parallel nanochannels, it is possible to rule out the effect of the diffusion of D-fructose into the pores. Thus, the improvement in performance obtained with the electrostatic attraction method can be ascribed not only to a higher enzyme uptake, but also to a more appropriate molecular orientation of the enzyme units on the surface of the electrodes.

  11. POZ domain transcription factor, FBI-1, represses transcription of ADH5/FDH by interacting with the zinc finger and interfering with DNA binding activity of Sp1.

    PubMed

    Lee, Dong-Kee; Suh, Dongchul; Edenberg, Howard J; Hur, Man-Wook

    2002-07-26

    The POZ domain is a protein-protein interaction motif that is found in many transcription factors, which are important for development, oncogenesis, apoptosis, and transcription repression. We cloned the POZ domain transcription factor, FBI-1, that recognizes the cis-element (bp -38 to -22) located just upstream of the core Sp1 binding sites (bp -22 to +22) of the ADH5/FDH minimal promoter (bp -38 to +61) in vitro and in vivo, as revealed by electrophoretic mobility shift assay and chromatin immunoprecipitation assay. The ADH5/FDH minimal promoter is potently repressed by the FBI-1. Glutathione S-transferase fusion protein pull-down showed that the POZ domains of FBI-1, Plzf, and Bcl-6 directly interact with the zinc finger DNA binding domain of Sp1. DNase I footprinting assays showed that the interaction prevents binding of Sp1 to the GC boxes of the ADH5/FDH promoter. Gal4-POZ domain fusions targeted proximal to the GC boxes repress transcription of the Gal4 upstream activator sequence-Sp1-adenovirus major late promoter. Our data suggest that POZ domain represses transcription by interacting with Sp1 zinc fingers and by interfering with the DNA binding activity of Sp1.

  12. Efficient biosynthesis of L-phenylglycine by an engineered Escherichia coli with a tunable multi-enzyme-coordinate expression system.

    PubMed

    Liu, Qiaoli; Zhou, Junping; Yang, Taowei; Zhang, Xian; Xu, Meijuan; Rao, Zhiming

    2018-03-01

    Whole-cell catalysis with co-expression of two or more enzymes in a single host is a simple, low-cost biosynthesis method that has been widely studied and applied, but rarely with regulation of multi-enzyme expression levels. Here we developed an efficient whole-cell catalyst for the biosynthesis of L-phenylglycine (L-Phg) from benzoylformic acid through co-expression of leucine dehydrogenase from Bacillus cereus (BcLeuDH) and an NAD(+)-dependent mutant formate dehydrogenase from Candida boidinii (CbFDH-A10C) in Escherichia coli, using a tunable multi-enzyme-coordinate expression system. By co-expressing one to four copies of CbFDH-A10C and optimizing the RBS sequence of BcLeuDH in the expression system, the ratio of BcLeuDH to CbFDH in E. coli BL21/pETDuet-rbs4leudh-3fdhA10C was finally tuned to 2:1, the optimum determined by enzyme-catalyzed synthesis. The catalyst activity of E. coli BL21/pETDuet-rbs4leudh-3fdhA10C was 28.4 mg L(-1) min(-1) g(-1) dry cell weight for L-Phg production using whole-cell transformation, 3.7 times higher than that of engineered E. coli without regulation of enzyme expression. Under optimum conditions (pH 8.0 and 35 °C), 60 g L(-1) benzoylformic acid was completely converted to pure chiral L-Phg in 4.5 h with 10 g L(-1) dry cells and 50.4 g L(-1) ammonium formate, with enantiomeric excess > 99.9%. This multi-enzyme-coordinate expression strategy significantly improved L-Phg productivity and demonstrates a novel low-cost method for enantiopure L-Phg production.
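    The scheme couples one NADH-consuming reductive amination (BcLeuDH) to one NADH-regenerating formate oxidation (CbFDH), so ammonium formate must be supplied at least 1:1 with benzoylformic acid. A small check of the molar ratio implied by the reported loadings (the molar masses are standard values, not figures from the paper):

    ```python
    # Molar ratio of NADH-regenerating substrate to amination substrate.
    mw_benzoylformic = 150.13       # g/mol, C8H6O3 (standard value)
    mw_ammonium_formate = 63.06     # g/mol, HCOONH4 (standard value)

    benzoylformate = 60.0 / mw_benzoylformic    # ~0.40 M, from 60 g/L
    formate = 50.4 / mw_ammonium_formate        # ~0.80 M, from 50.4 g/L

    # Each amination consumes one NADH; each formate oxidation restores one.
    print(f"formate : benzoylformate = {formate / benzoylformate:.1f} : 1")
    ```

    The roughly twofold excess of formate presumably helps drive the cofactor cycle to completion.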

  13. Discovery of a new metal and NAD+-dependent formate dehydrogenase from Clostridium ljungdahlii.

    PubMed

    Çakar, M Mervan; Mangas-Sanchez, Juan; Birmingham, William R; Turner, Nicholas J; Binay, Barış

    2018-04-21

    Over the next decades, with growing concern over rising atmospheric carbon dioxide (CO2) levels, the importance of investigating new approaches for its reduction becomes crucial. Reclamation of CO2 for conversion into biofuels represents an alternative and attractive production method that has been studied in recent years, with enzymatic methods now gaining more attention. Formate dehydrogenases (FDHs) are NAD(P)H-dependent oxidoreductases that catalyze the conversion of formate into CO2 and have been extensively used for cofactor recycling in chemoenzymatic processes. A new FDH from Clostridium ljungdahlii (ClFDH) has recently been shown to possess activity in the reverse reaction: the mineralization of CO2 into formate. In this study, we show the successful heterologous expression of ClFDH in Escherichia coli. Biochemical and kinetic characterization of the enzyme revealed that this homologue also demonstrates activity toward CO2 reduction. Structural analysis of the enzyme through homology modeling is also presented.

  14. Light Driven CO2 Fixation by Using Cyanobacterial Photosystem I and NADPH-Dependent Formate Dehydrogenase

    PubMed Central

    Ihara, Masaki; Kawano, Yusuke; Urano, Miho; Okabe, Ayako

    2013-01-01

    The ultimate goal of this research is to construct a new direct CO2 fixation system using photosystems in living algae. Here, we report light-driven formate production from CO2 by using cyanobacterial photosystem I (PS I). Formate, a chemical hydrogen carrier and important industrial material, can be produced from CO2 by using the reducing power and the catalytic function of formate dehydrogenase (FDH). We created a bacterial FDH mutant that experimentally switched the cofactor specificity from NADH to NADPH, and combined it with an in vitro-reconstituted cyanobacterial light-driven NADPH production system consisting of PS I, ferredoxin (Fd), and ferredoxin-NADP+-reductase (FNR). Consequently, light-dependent formate production under a CO2 atmosphere was successfully achieved. In addition, we introduced the NADPH-dependent FDH mutant into heterocysts of the cyanobacterium Anabaena sp. PCC 7120 and demonstrated an increased formate concentration in the cells. These results provide a new possibility for photo-biological CO2 fixation. PMID:23936519

  15. Sulphur shuttling across a chaperone during molybdenum cofactor maturation.

    PubMed

    Arnoux, Pascal; Ruppelt, Christian; Oudouhou, Flore; Lavergne, Jérôme; Siponen, Marina I; Toci, René; Mendel, Ralf R; Bittner, Florian; Pignol, David; Magalon, Axel; Walburger, Anne

    2015-02-04

    Formate dehydrogenases (FDHs) are of interest as they are natural catalysts that sequester atmospheric CO2, generating reduced carbon compounds with possible uses as fuel. FDH activity in Escherichia coli strictly requires the sulphurtransferase EcFdhD, which likely transfers sulphur from IscS to the molybdenum cofactor (Mo-bisPGD) of FDHs. Here we show that EcFdhD binds Mo-bisPGD in vivo and has submicromolar affinity for GDP, used as a surrogate of the molybdenum cofactor's nucleotide moieties. The crystal structure of EcFdhD in complex with GDP shows two symmetrical binding sites located on the same face of the dimer. These binding sites are connected via a tunnel-like cavity to the opposite face of the dimer, where two dynamic loops, each harbouring two functionally important cysteine residues, are present. On the basis of structure-guided mutagenesis, we propose a model for the sulphuration mechanism of Mo-bisPGD in which the sulphur atom shuttles across the chaperone dimer.

  16. Optimizing Immobilized Enzyme Performance in Cell-Free Environments to Produce Liquid Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belfort, Georges; Grimaldi, Joseph J.

    2015-01-27

    Limitations on biofuel production using cell culture (Escherichia coli, Clostridium, Saccharomyces cerevisiae, brown microalgae, blue-green algae, and others) include low product (alcohol) concentrations (≤0.2 vol%) due to feedback inhibition, instability of cells, and lack of economical product recovery processes. To overcome these challenges, an alternate, simplified biofuel production scheme was tested based on a cell-free immobilized enzyme system. Using this cell-free system, we obtained about 2.6 times higher concentrations of isobutanol with our non-optimized system than with live-cell systems. The process involves two steps: (i) conversion of the acid to an aldehyde using keto-acid decarboxylase (KdcA), and (ii) production of the alcohol from the aldehyde using alcohol dehydrogenase (ADH), with cofactor (NADH) regeneration from inexpensive formate by a third enzyme, formate dehydrogenase (FDH). To increase stability and conversion efficiency with easy separations, the first two enzymes were immobilized onto methacrylate resin. Fusion proteins of labile KdcA (fKdcA) were expressed to stabilize the covalently immobilized KdcA. Covalently immobilized ADH exhibited long-term stability and efficient conversion of aldehyde to alcohol over multiple batch cycles without fusions. High conversion rates and low protein leaching were achieved by covalent immobilization of the enzymes on methacrylate resin. The complete reaction scheme was demonstrated by immobilizing both ADH and fKdcA and using FDH free in solution. The new system, without in situ removal of isobutanol, achieved a 55% conversion of ketoisovaleric acid to isobutanol at a concentration of 0.5% (v/v). Further increases in titer will require continuous removal of the isobutanol using our novel brush membrane system, which exhibits a 1.5-fold increase in the separation factor of isobutanol from water versus that obtained for commercial silicone rubber membranes. These bio-inspired brush membranes are based on the glycocalyx filaments coating the luminal surface of the vasculature and represent a new class of synthetic membranes. They thus meet the requirements and scope of the Biomolecular Materials program, Materials Science and Engineering Division, Office of Science, US DOE.
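    For orientation, the reported 0.5% (v/v) isobutanol titer converts to mass and molar units as follows (the density and molar mass are standard literature values, not figures from the report):

    ```python
    # Convert the reported 0.5 % (v/v) isobutanol titer to g/L and mM.
    density = 0.802          # g/mL, isobutanol at ~25 C (standard value)
    mw_isobutanol = 74.12    # g/mol (standard value)

    titer_g_per_l = 0.005 * 1000.0 * density         # ~4.0 g/L
    titer_mm = titer_g_per_l / mw_isobutanol * 1000  # ~54 mM
    print(f"~{titer_g_per_l:.1f} g/L (~{titer_mm:.0f} mM) isobutanol")
    ```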

  19. Reversible interconversion of carbon dioxide and formate by an electroactive enzyme

    PubMed Central

    Reda, Torsten; Plugge, Caroline M.; Abram, Nerilie J.; Hirst, Judy

    2008-01-01

    Carbon dioxide (CO2) is a kinetically and thermodynamically stable molecule. It is easily formed by the oxidation of organic molecules, during combustion or respiration, but is difficult to reduce. The production of reduced carbon compounds from CO2 is an attractive proposition, because carbon-neutral energy sources could be used to generate fuel resources and sequester CO2 from the atmosphere. However, available methods for the electrochemical reduction of CO2 require excessive overpotentials (are energetically wasteful) and produce mixtures of products. Here, we show that a tungsten-containing formate dehydrogenase enzyme (FDH1) adsorbed to an electrode surface catalyzes the efficient electrochemical reduction of CO2 to formate. Electrocatalysis by FDH1 is thermodynamically reversible—only small overpotentials are required, and the point of zero net catalytic current defines the reduction potential. It occurs under thoroughly mild conditions, and formate is the only product. Both as a homogeneous catalyst and on the electrode, FDH1 catalyzes CO2 reduction with a rate more than two orders of magnitude faster than that of any known catalyst for the same reaction. Formate oxidation is more than five times faster than CO2 reduction. Thermodynamically, formate and hydrogen are oxidized at similar potentials, so formate is a viable energy source in its own right as well as an industrially important feedstock and a stable intermediate in the conversion of CO2 to methanol and methane. FDH1 demonstrates the feasibility of interconverting CO2 and formate electrochemically, and it is a template for the development of robust synthetic catalysts suitable for practical applications. PMID:18667702

  20. An integrated bienzyme glucose oxidase-fructose dehydrogenase-tetrathiafulvalene-3-mercaptopropionic acid-gold electrode for the simultaneous determination of glucose and fructose.

    PubMed

    Campuzano, Susana; Loaiza, Oscar A; Pedrero, María; de Villena, F Javier Manuel; Pingarrón, José M

    2004-06-01

    A bienzyme biosensor for the simultaneous determination of glucose and fructose was developed by coimmobilising glucose oxidase (GOD), fructose dehydrogenase (FDH), and the mediator tetrathiafulvalene (TTF) by cross-linking with glutaraldehyde atop a 3-mercaptopropionic acid (MPA) self-assembled monolayer (SAM) on a gold disk electrode (AuE). The performance of this bienzyme electrode under batch and flow injection (FI) conditions, as well as in amperometric detection for high-performance liquid chromatography (HPLC), is reported. The order of enzyme immobilisation atop the MPA-SAM affected the biosensor amperometric response in terms of sensitivity, and the immobilisation order GOD, FDH, TTF was selected. Analytical characteristics similar to those obtained with single GOD or FDH SAM-based biosensors for glucose and fructose were achieved with the bienzyme electrode, indicating that no noticeable changes in the biosensor responses to the analytes occurred as a consequence of the coimmobilisation of both enzymes on the same MPA-AuE. The suitability of the bienzyme biosensor for the analysis of real samples under flow injection conditions was tested by determining glucose in two certified serum samples. The simultaneous determination of glucose and fructose in the same sample cannot be performed without a separation step, because at the detection potential used (+0.10 V) both sugars show an amperometric response. Consequently, HPLC with amperometric detection at the TTF-FDH-GOD-MPA-AuE was employed. Glucose and fructose were simultaneously determined in honey, cola soft drink, and commercial apple juice, and the results were compared with those obtained using other reference methods.

  1. Goltz-Gorlin Syndrome: Revisiting the Clinical Spectrum.

    PubMed

    Yesodharan, Dhanya; Büschenfelde, Uta Meyer Zum; Kutsche, Kerstin; Mohandas Nair, K; Nampoothiri, Sheela

    2018-01-31

    To describe the varying phenotypic spectrum of Focal Dermal Hypoplasia (FDH) and to emphasize the need for identifying the condition in mildly affected females, which is crucial for offering prenatal diagnosis in a subsequent pregnancy owing to the risk of having a severely affected baby. Phenotype-genotype correlation was performed for 4 patients with FDH seen over a period of 11 y at the genetic clinic of a tertiary care centre in Kerala, India. All four mutation-proven patients were females (2 adults and 2 children). One of the adult female subjects was mildly affected, though she had a history of having a severely affected female child who expired on day six. Among the 2 affected children, one had an unaffected mother and the other had an affected mother. FDH has a wide clinical spectrum, from very subtle findings to severe manifestations. The lethality of the condition in males and the disfigurement and multisystem involvement in females highlight the importance of confirmation of the diagnosis by molecular analysis so that the family can be offered prenatal diagnosis in a subsequent pregnancy.

  2. Artificial leaf device for solar fuel production.

    PubMed

    Amao, Yutaka; Shuto, Naho; Furuno, Kana; Obata, Asami; Fuchino, Yoshiko; Uemura, Keiko; Kajino, Tsutomu; Sekito, Takeshi; Iwai, Satoshi; Miyamoto, Yasushi; Matsuda, Masatoshi

    2012-01-01

    Solar fuels, such as hydrogen gas produced from water and methanol produced from carbon dioxide reduction by artificial photosynthesis, have received considerable attention. In natural leaves the photosynthetic proteins are well organized in the thylakoid membrane. To develop an artificial leaf device for solar low-carbon fuel production from CO2, a device was investigated in which a chlorophyll derivative, chlorin-e6 (Chl-e6; the photosensitizer), 1-carboxylundecanoyl-1'-methyl-4,4'-bipyridinium bromide iodide (CH3V(CH2)9COOH; the electron carrier) and formate dehydrogenase (FDH; the catalyst) were immobilised onto a silica-gel-based thin-layer chromatography plate (the Chl-V-FDH device). From luminescence spectroscopy measurements, the photoexcited triplet state of Chl-e6 was quenched by the CH3V(CH2)9COOH moiety on the device, indicating photoinduced electron transfer from the photoexcited triplet state of Chl-e6 to the CH3V(CH2)9COOH moiety. When a CO2-saturated sample solution containing NADPH (the electron donor) was flowed onto the Chl-V-FDH device under visible light irradiation, the formic acid concentration increased with increasing irradiation time.

  3. Nitrate reductase-formate dehydrogenase couple involved in the fungal denitrification by Fusarium oxysporum.

    PubMed

    Uchimura, Hiromasa; Enjoji, Hitoshi; Seki, Takafumi; Taguchi, Ayako; Takaya, Naoki; Shoun, Hirofumi

    2002-04-01

    Dissimilatory nitrate reductase (Nar) was solubilized and partially purified from the large particle (mitochondrial) fraction of the denitrifying fungus Fusarium oxysporum and characterized. Many lines of evidence showed that the membrane-bound Nar is distinct from the soluble, assimilatory nitrate reductase. Further, the spectral and other properties of the fungal Nar were similar to those of the dissimilatory Nars of Escherichia coli and denitrifying bacteria, which are composed of a molybdoprotein, a cytochrome b, and an iron-sulfur protein. Formate-nitrate oxidoreductase activity was also detected in the mitochondrial fraction, which was shown to arise from the coupling of formate dehydrogenase (Fdh), Nar, and a ubiquinone/ubiquinol pool. This is the first report of the occurrence in a eukaryote of an Fdh that is associated with the respiratory chain. The coupling with Fdh showed that the fungal Nar system is more similar to that involved in nitrate respiration by Escherichia coli than to that of the bacterial denitrifying system. Analyses of mutant strains of F. oxysporum that were defective in Nar and/or assimilatory nitrate reductase conclusively showed that Nar is essential for the fungal denitrification.

  4. Isotopic effects in the collinear reactive FHH system

    NASA Technical Reports Server (NTRS)

    Lepetit, B.; Launay, J. M.; Le Dourneuf, M.

    1986-01-01

    Exact quantum reaction probabilities for a collinear model of the F + HH, HD, DD and DH reactions on the MV potential energy surface have been computed using hyperspherical coordinates. The results, obtained up to a total energy of 1.8 eV, show three main features: (1) resonances, whose positions and widths are analyzed simply in the hyperspherical formalism; (2) a slowly varying background increasing for FHD, decreasing for FDH, and oscillating for FHH and FDD, whose variations are interpreted by classical dynamics; and (3) partial reaction probabilities revealing decreasing vibrational adiabaticity in the order FHH-FDD-FHD-FDH.

  5. Genetically Engineered Escherichia coli Nissle 1917 Synbiotics Reduce Metabolic Effects Induced by Chronic Consumption of Dietary Fructose

    PubMed Central

    Somabhai, Chaudhari Archana; Raghuvanshi, Ruma; Nareshkumar, G.

    2016-01-01

    Aims To assess the protective efficacy of genetically modified Escherichia coli Nissle 1917 (EcN) against metabolic effects induced by chronic consumption of dietary fructose. Materials and Methods EcN was genetically modified with the fructose dehydrogenase (fdh) gene for conversion of fructose to 5-keto-D-fructose and the mannitol-2-dehydrogenase (mtlK) gene for conversion to mannitol, a prebiotic. Charles Foster rats weighing 150-200 g were fed 20% fructose in drinking water for two months. Probiotic treatment with EcN (pqq), EcN (pqq-glf-mtlK) or EcN (pqq-fdh) was given once per week (10^9 cells) for two months. Furthermore, blood and liver parameters for oxidative stress, dyslipidemia and hyperglycemia were estimated. Fecal samples were collected to determine the production of short chain fatty acids and pyrroloquinoline quinone (PQQ). Results The EcN (pqq-glf-mtlK) and EcN (pqq-fdh) transformants were confirmed by restriction digestion, and functionality was checked by PQQ estimation and HPLC analysis. There was a significant increase in body weight, serum glucose, liver injury markers, and lipid profile in serum and liver, and a decrease in antioxidant enzyme activity, in high-fructose-fed rats. However, the rats treated with EcN (pqq-glf-mtlK) and EcN (pqq-fdh) showed a significant reduction in lipid peroxidation along with increases in serum and hepatic antioxidant enzyme activities. Restoration of liver injury marker enzymes was also seen. The increase in short chain fatty acids (SCFA) demonstrated the prebiotic effects of mannitol and gluconic acid. Conclusions Our study demonstrated the effectiveness of probiotic EcN producing PQQ and fructose-metabolizing enzymes against fructose-induced hepatic steatosis, suggesting its potential for use in treating fructose-induced metabolic syndrome. PMID:27760187

  6. Genetic polymorphisms in the formaldehyde dehydrogenase gene and their biological significance.

    PubMed

    Just, Walter; Zeller, Jasmin; Riegert, Clarissa; Speit, Günter

    2011-11-30

    The GSH-dependent formaldehyde dehydrogenase (FDH) is the most important enzyme for the metabolic inactivation of formaldehyde. We studied three polymorphisms of this gene with the intention of elucidating their relevance for inter-individual differences in the protection against the (geno-)toxicity of FA. The first polymorphism (rs11568816) was investigated using real-time PCR and restriction fragment analysis in 150 subjects. However, we did not find the polymorphic sequence in any of the subjects. We studied a second polymorphism (rs17028487), representing a base exchange (c.*114A>G) in exon 9 of the FDH gene. We analyzed 70 subjects with the SNaPshot Primer Extension method and subsequent analysis on an ABI PRISM 3100, but no variant allele was identified. A third polymorphism, rs13832 in exon 9 (c.*493G>T), was studied in a group of 105 subjects by the SNaPshot Primer Extension method. Of these subjects, 43 were heterozygous for the polymorphism (G/T), 46 were homozygous for the T allele, and 16 were homozygous for the G allele. Real-time RT-PCR measurements of FDH mRNA did not indicate a significant difference in transcript levels between the heterozygous and the homozygous groups. The in vitro comet assay after FA exposure of blood samples obtained from 5 homozygous GG and 3 homozygous TT subjects did not reveal a significant difference between these two groups. Altogether, our study did not identify biologically relevant polymorphisms in transcribed regions of the FDH gene that could lead to inter-individual differences in the metabolic inactivation of FA.

  7. Improvement of ethanol productivity and energy efficiency by degradation of inhibitors using recombinant Zymomonas mobilis (pHW20a-fdh).

    PubMed

    Dong, Hong-Wei; Fan, Li-Qiang; Luo, Zichen; Zhong, Jian-Jiang; Ryu, Dewey D Y; Bao, Jie

    2013-09-01

    Toxic compounds, such as formic acid, furfural, and hydroxymethylfurfural (HMF), generated during pretreatment of corn stover (CS) at high temperature and low pH inhibit the growth of Zymomonas mobilis and lower the conversion efficiency of CS to biofuel and other products. Inhibition by toxic compounds is considered one of the major technical barriers in lignocellulose bioconversion. In order to detoxify and/or degrade these toxic compounds in situ in the fermentation medium by the model ethanologenic strain Z. mobilis itself, we constructed a recombinant Z. mobilis ZM4 (pHW20a-fdh) strain that is capable of degrading the toxic inhibitor formate. This was accomplished by cloning the heterologous formate dehydrogenase gene (fdh) from Saccharomyces cerevisiae and by coupling the resulting NADH regeneration reaction with furfural and HMF degradation in the recombinant Z. mobilis strain. The NADH regeneration reaction also improved both the energy efficiency and the cell physiological activity of the recombinant organism, as confirmed by the improved cell growth, ethanol yield, and ethanol productivity during fermentation with CS hydrolysate.

  8. Respiratory proteins contribute differentially to Campylobacter jejuni’s survival and in vitro interaction with hosts’ intestinal cells

    PubMed Central

    2012-01-01

    Background The genetic features that facilitate Campylobacter jejuni's adaptation to a wide range of environments are not completely defined. However, whole genome expression studies showed that respiratory proteins (RPs) were differentially expressed under varying conditions and stresses, suggesting further unidentified roles for RPs in C. jejuni's adaptation. Therefore, our objectives were to characterize the contributions of selected RPs to (i) C. jejuni's key survival phenotypes under different temperature (37°C vs. 42°C) and oxygen (microaerobic, ambient, and oxygen-limited/anaerobic) conditions and (ii) its interactions with intestinal epithelial cells from disparate hosts (human vs. chicken). Results C. jejuni mutant strains with individual deletions that targeted five RPs, nitrate reductase (ΔnapA), nitrite reductase (ΔnrfA), formate dehydrogenase (ΔfdhA), hydrogenase (ΔhydB), and methylmenaquinol:fumarate reductase (ΔmfrA), were used in this study. We show that only the ΔfdhA exhibited a decrease in motility; however, incubation at 42°C significantly reduced the deficiency in the ΔfdhA's motility as compared to 37°C. Under all tested conditions, the ΔmfrA showed a decreased susceptibility to hydrogen peroxide (H2O2), while the ΔnapA and the ΔfdhA showed significantly increased susceptibility to the oxidant as compared to the wildtype. Further, the susceptibility of the ΔnapA to H2O2 was significantly more pronounced at 37°C. The biofilm formation capability of individual RP mutants varied as compared to the wildtype. However, the deletion of certain RPs affected biofilm formation in a manner that was dependent on temperature and/or oxygen concentration. For example, the ΔmfrA displayed significantly deficient and increased biofilm formation under microaerobic conditions at 37°C and 42°C, respectively. However, under anaerobic conditions, the ΔmfrA was only significantly impaired in biofilm formation at 42°C. Additionally, the RP mutants showed differential ability for infecting and surviving in human intestinal cell lines (INT-407) and primary chicken intestinal epithelial cells, respectively. Notably, the ΔfdhA and the ΔhydB were deficient in interacting with both cell types, while the ΔmfrA displayed impairments only in adherence to and invasion of INT-407. Scanning electron microscopy showed that the ΔhydB and the ΔfdhA exhibited filamentous and bulging (almost spherical) cell shapes, respectively, which might be indicative of defects in cell division. Conclusions We conclude that the RPs contribute to C. jejuni's motility, H2O2 resistance, biofilm formation, and in vitro interactions with hosts' intestinal cells. Further, the impact of certain RPs varied in response to incubation temperature and/or oxygen concentration. Therefore, RPs may facilitate the prevalence of C. jejuni in a variety of niches, contributing to the pathogen's remarkable potential for adaptation. PMID:23148765

  9. Fluor Daniel Hanford Inc. integrated safety management system phase 1 verification final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PARSONS, J.E.

    1999-10-28

    The purpose of this review is to verify the adequacy of documentation as submitted to the Approval Authority by Fluor Daniel Hanford, Inc. (FDH). This review covers not only the Integrated Safety Management System (ISMS) System Description documentation, but also the procedures, policies, and manuals of practice used to implement safety management in an environment of organizational restructuring. The FDH ISMS should support the Hanford Strategic Plan (DOE-RL 1996) to safely clean up and manage the site's legacy waste; deploy science and technology while incorporating the ISMS theme "Do work safely"; and protect human health and the environment.

  10. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactorily restored color images under different blurring conditions.
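
    Since the abstract does not give the update equations, the following is a minimal single-channel sketch of the alternating-minimization idea, assuming quadratic (Tikhonov-type) penalties on both the image and the blur so that each subproblem has a closed-form Fourier-domain solution; the paper's interchannel coupling, spectral blur operator, and parametric blur learning are not reproduced here, and all names are ours:

        import numpy as np

        def blind_deconv(g, lam_f=1e-2, lam_h=1e-1, iters=20):
            """Alternating minimization for g = h (*) f with quadratic
            penalties on both the image f and the blur h; each quadratic
            subproblem is solved exactly in the Fourier domain."""
            G = np.fft.fft2(g)
            h0 = np.zeros_like(g)
            h0[0, 0] = 1.0                      # start from an identity blur
            H = np.fft.fft2(h0)
            for _ in range(iters):
                # image step: argmin_f ||h*f - g||^2 + lam_f ||f||^2
                F = np.conj(H) * G / (np.abs(H) ** 2 + lam_f)
                # blur step:  argmin_h ||h*f - g||^2 + lam_h ||h||^2
                H = np.conj(F) * G / (np.abs(F) ** 2 + lam_h)
            return np.real(np.fft.ifft2(F)), np.real(np.fft.ifft2(H))

    In the paper's setting the two cost functions would additionally carry the color-correlation and blur-domain regularizers, but the alternation structure is the same.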

  11. Methanol toxicity and formate oxidation in NEUT2 mice.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, R. J.; Champion, K. M.; Giometti, C. S.

    2001-09-15

    NEUT2 mice are deficient in cytosolic 10-formyltetrahydrofolate dehydrogenase (FDH; EC 1.5.1.6), which catalyzes the oxidation of excess folate-linked one-carbon units in the form of 10-formyltetrahydrofolate to CO2 and tetrahydrofolate. The absence of FDH should impair the oxidation of formate via the folate-dependent pathway and as a consequence render homozygous NEUT2 mice more susceptible to methanol toxicity. Normal (CB6-F1) and NEUT2 heterozygous and homozygous mice had essentially identical LD50 values for methanol: 6.08, 6.00, and 6.03 g/kg, respectively. Normal mice oxidized low doses of [14C]sodium formate (ip 5 mg/kg) to 14CO2 at approximately twice the rate of homozygous NEUT2 mice, indicating the presence of another formate-oxidizing system in addition to FDH. Treatment of mice with the catalase inhibitor 3-aminotriazole (1 g/kg ip) had no effect on the rate of formate oxidation, indicating that at low concentrations formate was not oxidized peroxidatively by catalase. High doses of [14C]sodium formate (ip 100 mg/kg) were oxidized to 14CO2 at identical rates in normal and NEUT2 homozygous mice. Pretreatment with 3-aminotriazole (1 g/kg ip) in this instance resulted in 40% and 50% decreases in formate oxidation to CO2 in normal and homozygous NEUT2 mice, respectively. These results indicate that mice are able to oxidize formate to CO2 by at least three different routes: (1) folate-dependent, via FDH, at low levels of formate; (2) peroxidation by catalase at high levels of formate; and (3) by an unknown route(s) that appears to function at both low and high levels of formate. The implications of these observations are discussed in terms of the current hypotheses concerning methanol and formate toxicity in rodents and primates.

  12. Oxalate-Metabolising Genes of the White-Rot Fungus Dichomitus squalens Are Differentially Induced on Wood and at High Proton Concentration

    PubMed Central

    de Vries, Ronald P.; Timonen, Sari; Hildén, Kristiina

    2014-01-01

    Oxalic acid is a prevalent fungal metabolite with versatile roles in growth and nutrition, including degradation of plant biomass. However, the toxicity of oxalic acid makes regulation of its intra- and extracellular concentration crucial. To increase the knowledge of fungal oxalate metabolism, a transcriptional-level study of oxalate-catabolising genes was performed with the effective lignin-degrading white-rot fungus Dichomitus squalens, which has demonstrated particular abilities in the production and degradation of oxalic acid. The expression of the genes encoding oxalic-acid-decomposing oxalate decarboxylase (ODC) and formic-acid-decomposing formate dehydrogenase (FDH) was followed during the growth of D. squalens on its natural spruce wood substrate. The effect of high proton concentration on the regulation of the oxalate-catabolising genes was determined after addition of an organic acid (oxalic acid) and an inorganic acid (hydrochloric acid) to liquid cultures of D. squalens. In order to evaluate the co-expression of oxalate-catabolising and manganese peroxidase (MnP) encoding genes, the expression of one MnP-encoding gene, mnp1, of D. squalens was also surveyed in the solid-state and liquid cultures. Sequential action of the ODC- and FDH-encoding genes was detected in the studied cultivations. The odc1, fdh2 and fdh3 genes of D. squalens showed constitutive expression, whereas ODC2 and FDH1 are most likely the main enzymes responsible for detoxification of high concentrations of oxalic and formic acids. The results also confirmed the central role of ODC1 when D. squalens grows on coniferous wood. Phylogenetic analysis revealed that fungal ODCs have evolved from at least two gene copies, whereas FDHs have a single ancestral gene. In conclusion, the multiplicity of oxalate-catabolising genes and their differential regulation on wood and in acid-amended cultures of D. squalens point to divergent physiological roles for the corresponding enzymes. PMID:24505339

  13. The Alcohol Dehydrogenase Gene Family in Melon (Cucumis melo L.): Bioinformatic Analysis and Expression Patterns

    PubMed Central

    Jin, Yazhong; Zhang, Chong; Liu, Wei; Tang, Yufan; Qi, Hongyan; Chen, Hao; Cao, Songxiao

    2016-01-01

    Alcohol dehydrogenases (ADH), encoded by a multigene family in plants, play a critical role in plant growth, development, adaptation, fruit ripening and aroma production. Thirteen ADH genes were identified in the melon genome, including 12 ADHs and one formaldehyde dehydrogenase (FDH), designated CmADH1-12 and CmFDH1, of which CmADH1 and CmADH2 had previously been isolated in Cantaloupe. The ADH genes shared low identity with each other at the protein level and had different intron-exon structures at the nucleotide level. No typical signal peptides were found in any of the CmADHs, and the CmADH proteins probably localize to the cytoplasm. The phylogenetic tree revealed that the 13 ADH genes fell into three groups, namely the long-, medium-, and short-chain ADH subfamilies; CmADH1 and CmADH3-11, which belong to the medium-chain ADH subfamily, fell into six medium-chain ADH subgroups. CmADH12 may belong to the long-chain ADH subfamily, while CmFDH1 may be a Class III ADH and serve as an ancestral ADH in melon. Expression profiling revealed that CmADH1, CmADH2, CmADH10 and CmFDH1 were moderately or strongly expressed in different vegetative tissues and in fruit at the middle and late developmental stages, while CmADH8 and CmADH12 were highly expressed in fruit after 20 days. CmADH3 showed preferential expression in young tissues. CmADH4 showed only slight expression in root. Promoter analysis revealed several motifs of CmADH genes involved in gene expression modulated by various hormones, and the response patterns of the CmADH genes to ABA, IAA and ethylene differed. These CmADHs were divided into ethylene-sensitive and -insensitive groups, and the functions of the CmADHs are discussed. PMID:27242871

  14. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
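
    As an illustration of the method chain described above, here is a small sketch of zero-order Tikhonov inversion with an L-curve parameter choice. It is an assumption-laden simplification: the paper uses the generalized SVD with a general-form regularization operator, whereas this sketch solves the normal equations directly and picks the corner as the point of maximum discrete curvature of the log-log L-curve:

        import numpy as np

        def tikhonov_l_curve(A, b, lambdas):
            """Solve min ||Ax - b||^2 + lam^2 ||x||^2 over a grid of lam
            and return the solution at the L-curve corner, i.e. the point
            of maximum curvature of (log ||Ax - b||, log ||x||)."""
            xs, r, s = [], [], []
            for lam in lambdas:
                x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(A.shape[1]),
                                    A.T @ b)
                xs.append(x)
                r.append(np.log(np.linalg.norm(A @ x - b)))
                s.append(np.log(np.linalg.norm(x)))
            r, s = np.array(r), np.array(s)
            dr, ds = np.gradient(r), np.gradient(s)
            kappa = np.abs(dr * np.gradient(ds) - ds * np.gradient(dr)) \
                    / (dr ** 2 + ds ** 2) ** 1.5
            i = int(np.argmax(kappa[1:-1])) + 1   # ignore the endpoints
            return xs[i], lambdas[i]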

  15. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
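
    For orientation, any such scheme builds on the standard dimensional continuation of loop integrals; the generic Euclidean one-loop master formula is

        \int \frac{d^d k}{(2\pi)^d}\, \frac{(k^2)^a}{(k^2+\Delta)^b}
            = \frac{1}{(4\pi)^{d/2}}\,
              \frac{\Gamma(a+\tfrac{d}{2})\,\Gamma(b-a-\tfrac{d}{2})}
                   {\Gamma(\tfrac{d}{2})\,\Gamma(b)}\;
              \Delta^{a-b+d/2},

    and the scheme above differs from ordinary dimensional regularization in how the expansion about d = 4 is organized for graphs with heavy propagating particles, not in this master formula.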

  16. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
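
    For context, the customary prescription replaces the near-singular single-particle density matrix before inversion by a smoothed one,

        \tilde{\rho} = \rho + \varepsilon\, e^{-\rho/\varepsilon},

    with ε the regularization parameter. As we read the abstract, the proposed scheme instead applies an analogous replacement at the level of the coefficient tensor (e.g. to the singular values of its unfoldings), which then regularizes the density matrix implicitly; the exact form used in the paper may differ.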

  17. VhuD Facilitates Electron Flow from H2 or Formate to Heterodisulfide Reductase in Methanococcus maripaludis

    PubMed Central

    Costa, Kyle C.; Lie, Thomas J.; Xia, Qin

    2013-01-01

    Flavin-based electron bifurcation has recently been characterized as an essential energy conservation mechanism that is utilized by hydrogenotrophic methanogenic Archaea to generate low-potential electrons in an ATP-independent manner. Electron bifurcation likely takes place at the flavin associated with the α subunit of heterodisulfide reductase (HdrA). In Methanococcus maripaludis the electrons for this reaction come from either formate or H2 via formate dehydrogenase (Fdh) or Hdr-associated hydrogenase (Vhu). However, how these enzymes bind to HdrA to deliver electrons is unknown. Here, we present evidence that the δ subunit of hydrogenase (VhuD) is central to the interaction of both enzymes with HdrA. When M. maripaludis is grown under conditions where both Fdh and Vhu are expressed, these enzymes compete for binding to VhuD, which in turn binds to HdrA. Under these conditions, both enzymes are fully functional and are bound to VhuD in substoichiometric quantities. We also show that Fdh copurifies specifically with VhuD in the absence of other hydrogenase subunits. Surprisingly, in the absence of Vhu, growth on hydrogen still occurs; we show that this involves F420-reducing hydrogenase. The data presented here represent an initial characterization of specific protein interactions centered on Hdr in a hydrogenotrophic methanogen that utilizes multiple electron donors for growth. PMID:24039260

  18. Improved synthesis of chiral alcohols with Escherichia coli cells co-expressing pyridine nucleotide transhydrogenase, NADP+-dependent alcohol dehydrogenase and NAD+-dependent formate dehydrogenase.

    PubMed

    Weckbecker, Andrea; Hummel, Werner

    2004-11-01

    Recombinant pyridine nucleotide transhydrogenase (PNT) from Escherichia coli has been used to regenerate NAD+ and NADPH. The pntA and pntB genes, encoding the alpha and beta subunits, were cloned and co-expressed with NADP+-dependent alcohol dehydrogenase (ADH) from Lactobacillus kefir and NAD+-dependent formate dehydrogenase (FDH) from Candida boidinii. Using this whole-cell biocatalyst, efficient conversion of prochiral ketones to chiral alcohols was achieved: 66% of the acetophenone was reduced to (R)-phenylethanol over 12 h, whereas only 19% (R)-phenylethanol was formed under the same conditions with cells containing the ADH and FDH genes but without the PNT genes. Cells that were permeabilized with toluene showed ketone reduction only if both cofactors were present.
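
    The coupled-reaction design can be written out as follows (our summary, using standard cofactor stoichiometries for the named enzymes, with acetophenone as the example ketone from the abstract):

        formate + NAD+             --FDH-->  CO2 + NADH
        NADH + NADP+               --PNT-->  NAD+ + NADPH
        acetophenone + NADPH + H+  --ADH-->  (R)-phenylethanol + NADP+

    Each turnover of the ketone thus consumes only formate, with both nicotinamide cofactors recycled inside the cell.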

  19. The Software Design for the Wide-Field Infrared Explorer Attitude Control System

    NASA Technical Reports Server (NTRS)

    Anderson, Mark O.; Barnes, Kenneth C.; Melhorn, Charles M.; Phillips, Tom

    1998-01-01

    The Wide-Field Infrared Explorer (WIRE), currently scheduled for launch in September 1998, is the fifth of five spacecraft in the NASA/Goddard Small Explorer (SMEX) series. This paper presents the design of WIRE's Attitude Control System flight software (ACS FSW). WIRE is a momentum-biased, three-axis stabilized stellar pointer which provides high-accuracy pointing and autonomous acquisition for eight to ten stellar targets per orbit. WIRE's short mission life and limited cryogen supply motivate requirements for Sun and Earth avoidance constraints, which are designed to prevent catastrophic instrument damage and to minimize the heat load on the cryostat. The FSW implements autonomous fault detection and handling (FDH) to enforce these instrument constraints and to perform several other checks that ensure the safety of the spacecraft. The ACS FSW implements modules for sensor data processing, attitude determination, attitude control, guide star acquisition, actuator command generation, command/telemetry processing, and FDH. These software components are integrated with a hierarchical control mode managing module that dictates which software components are currently active. The lowest mode in the hierarchy is the 'safest' one, in the sense that it utilizes a minimal complement of sensors and actuators to keep the spacecraft in a stable configuration (power and pointing constraints are maintained). As higher modes in the hierarchy are achieved, the various software functions are activated by the mode manager, and an increasing level of attitude control accuracy is provided. If FDH detects a constraint violation or other anomaly, it triggers a safing transition to a lower control mode. The WIRE ACS FSW satisfies all target acquisition and pointing accuracy requirements, enforces all pointing constraints, provides the ground with a simple means for reconfiguring the system via table load, and meets all the demands of its real-time embedded environment (16 MHz Intel 80386 processor with 80387 coprocessor running under the VRTX operating system). The mode manager organizes and controls all the software modules used to accomplish these goals, and in particular, the FDH module is tightly coupled with the mode manager.
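
    As a toy illustration of the hierarchical mode manager and its coupling to FDH described above (mode names, ordering, and checks are hypothetical and are not WIRE flight code):

        # Toy sketch only; not flight software.
        MODES = ["safe_hold", "sun_point", "slew", "fine_point"]  # low -> high

        class ModeManager:
            """Hierarchical mode manager; FDH demotes toward the safe mode."""

            def __init__(self):
                self.level = 0                 # boot into the safest mode

            def promote(self):
                """Step up one mode, activating more sensors/actuators."""
                self.level = min(self.level + 1, len(MODES) - 1)

            def fdh_check(self, violations):
                """Any constraint violation (e.g. a Sun/Earth avoidance
                breach) triggers a safing transition one level down."""
                if violations:
                    self.level = max(self.level - 1, 0)

            @property
            def mode(self):
                return MODES[self.level]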

  20. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
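
    In schematic form (our notation; m is the transported template image, m_T the target, v the stationary velocity field, and w the mass source), the constrained H1 variant described above reads

        \min_{v,\,w}\ \frac{1}{2}\|m(\cdot,1)-m_T\|_{L^2}^2
            + \frac{\beta_v}{2}\|\nabla v\|_{L^2}^2
            + \frac{\beta_w}{2}\|w\|_{L^2}^2
        \quad \text{s.t.} \quad
        \partial_t m + v\cdot\nabla m = 0,
        \qquad \nabla\cdot v = w,

    so that w = 0 enforces incompressibility exactly, while penalizing w controls the divergence of the velocity and, through it, the determinant of the deformation gradient.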

  1. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
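
    A compact sketch of the L1-norm data term handled by iteratively reweighted least squares, roughly corresponding to the L1L2 variant above (A is the transfer matrix mapping EPs to BSPs and L a constraint operator, both assumed given; names and defaults are ours):

        import numpy as np

        def l1_data_term(A, b, L, lam, iters=30, eps=1e-6):
            """Solve min_x ||Ax - b||_1 + lam * ||Lx||_2^2 by iteratively
            reweighted least squares: each residual is reweighted by
            1 / max(|r_i|, eps), so large (outlier) residuals count less."""
            x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)  # L2 start
            for _ in range(iters):
                w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
                Aw = A * w[:, None]                    # row-weighted system
                x = np.linalg.solve(A.T @ Aw + lam * L.T @ L, Aw.T @ b)
            return x

    The reweighting step is what makes the solution robust to electrodes with unusually large errors, consistent with the behaviour reported above.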

  2. Generic patient-reported outcomes in child health research: a review of conceptual content using World Health Organization definitions.

    PubMed

    Fayed, Nora; de Camargo, Olaf Kraus; Kerr, Elizabeth; Rosenbaum, Peter; Dubey, Ankita; Bostan, Cristina; Faulhaber, Markus; Raina, Parminder; Cieza, Alarcos

    2012-12-01

    Our aims were to (1) describe the conceptual basis of popular generic instruments according to World Health Organization (WHO) definitions of functioning, disability, and health (FDH), and quality of life (QOL) with health-related quality of life (HRQOL) as a subcomponent of QOL; (2) map the instruments to the International Classification of Functioning, Disability and Health (ICF); and (3) provide information on how the analyzed instruments were used in the literature. This should enable users to make valid choices about which instruments have the desired content for a specific context or purpose. Child health-based literature over a 5-year period was reviewed to find research employing health status and QOL/HRQOL instruments. WHO definitions of FDH and QOL were applied to each item of the 15 most used instruments to differentiate measures of FDH and QOL/HRQOL. The ICF was used to describe the health and health-related content (if any) in those instruments. Additional aspects of instrument use were extracted from these articles. Many instruments that were used to measure QOL/HRQOL did not reflect WHO definitions of QOL. The ICF domains within instruments were highly variable with respect to whether body functions, activities and participation, or environment were emphasized. There is inconsistency among researchers about how to measure HRQOL and QOL. Moreover, when an ICF content analysis is applied, there is variability among instruments in the health components included and emphasized. Reviewing content is important for matching instruments to their intended purpose.

  3. Growth and recombinant protein expression with Escherichia coli in different batch cultivation media.

    PubMed

    Hortsch, Ralf; Weuster-Botz, Dirk

    2011-04-01

    Parallel operated milliliter-scale stirred tank bioreactors were applied for recombinant protein expression studies in simple batch experiments without pH titration. An enzymatic glucose release system (EnBase), a complex medium, and the frequently used LB and TB media were compared with regard to growth of Escherichia coli and recombinant protein expression (alcohol dehydrogenase (ADH) from Lactobacillus brevis and formate dehydrogenase (FDH) from Candida boidinii). Dissolved oxygen and pH were recorded online, optical densities were measured at-line, and the activities of ADH and FDH were analyzed offline. Best growth was observed in the complex medium, with maximum dry cell weight concentrations of 14 g/L. EnBase cultivations enabled final dry cell weight concentrations between 6 and 8 g/L. The pH remained nearly constant in EnBase cultivations due to the continuous glucose release, showing the usefulness of this glucose release system especially for pH-sensitive bioprocesses. Cell-specific enzyme activities varied considerably depending on the medium used. Maximum specific ADH activities were measured with the complex medium 6 h after induction with IPTG, whereas the highest specific FDH activities were achieved with the EnBase medium at low glucose release profiles 24 h after induction. Hence, depending on the recombinant protein, different medium compositions, times for induction, and times for cell harvest have to be evaluated to achieve efficient expression of recombinant proteins in E. coli. A rapid experimental evaluation can easily be performed with parallel batch-operated small-scale stirred tank bioreactors.

  4. Towards cell-free isobutanol production: Development of a novel immobilized enzyme system.

    PubMed

    Grimaldi, Joseph; Collins, Cynthia H; Belfort, Georges

    2016-01-01

    Producing fuels and chemical intermediates with cell cultures is severely limited by low product concentrations (≤0.2% (v/v)) due to feedback inhibition, cell instability, and lack of economical product recovery processes. We have developed an alternate simplified production scheme based on a cell-free immobilized enzyme system. Two immobilized enzymes (keto-acid decarboxylase (KdcA) and alcohol dehydrogenase (ADH)) and one enzyme in solution (formate dehydrogenase (FDH), for NADH recycling) produced isobutanol titers 8 to 20 times higher than the highest reported titers with S. cerevisiae on a mol/mol basis. These high conversion rates and low protein leaching were achieved by covalent immobilization of enzymes (ADH) and enzyme fusions (fKdcA) on methacrylate resin. The new enzyme system, without in situ removal of isobutanol, achieved a 55% conversion of ketoisovaleric acid to isobutanol, with 0.135 mol isobutanol produced per mol ketoisovaleric acid consumed. Further increasing the titer will require continuous removal of the isobutanol using an in situ recovery system.

  5. Proper time regularization and the QCD chiral phase transition

    PubMed Central

    Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi

    2017-01-01

    We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two-flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way for the effective coupling to vary with temperature and quark chemical potential), we find that a critical end point (CEP) may appear. These findings differ substantially from other NJL results that use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model. PMID:28401889
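
    For reference, standard proper-time regularization rewrites propagator denominators with a Schwinger integral and imposes the ultraviolet cutoff at the lower integration limit,

        \frac{1}{A^n} = \frac{1}{\Gamma(n)} \int_0^\infty d\tau\, \tau^{n-1} e^{-\tau A}
            \;\longrightarrow\;
            \frac{1}{\Gamma(n)} \int_{1/\Lambda_{\rm UV}^2}^\infty d\tau\, \tau^{n-1} e^{-\tau A},

    and the generalization referred to above modifies this prescription (see the paper for the specific form adopted).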

  6. Illumination with 630-nm red light reduces oxidative stress and restores memory by photo-activating catalase and formaldehyde dehydrogenase in SAMP8 mice.

    PubMed

    Zhang, Jingnan; Yue, Xiangpei; Luo, Hongjun; Jiang, Wenjing; Mei, Yufei; Ai, Li; Gao, Ge; Wu, Yan; Yang, Hui; An, Jieran; Ding, Shumao; Yang, Xu; Sun, Bingui; Luo, Wenhong; He, Rongqiao; Jia, Jianping; Lyu, Jihui; Tong, Zhiqian

    2018-06-05

    Pharmacological treatments for Alzheimer's disease (AD) have not achieved desirable clinical efficacy in over 100 years. Hydrogen peroxide (H2O2), a reactive and the most stable compound of the reactive oxygen species (ROS), contributes to oxidative stress in AD patients. Here, we designed a medical device to emit red light at 630±15 nm from a light-emitting diode (LED-RL) and investigated whether the LED-RL reduces brain H2O2 levels and improves memory in the senescence-accelerated mouse prone 8 (SAMP8) model of age-related dementia. We found that age-associated H2O2 directly inhibited formaldehyde dehydrogenase (FDH). FDH inactivity and semicarbazide-sensitive amine oxidase (SSAO) disorder resulted in endogenous formaldehyde (FA) accumulation. Unexpectedly, excess FA in turn caused acetylcholine (Ach) deficiency by inhibiting choline acetyltransferase (ChAT) activity in vitro and in vivo. Interestingly, 630-nm red light can penetrate the skull and abdomen, with light penetration rates of ~49% and ~43%, respectively. Illumination with LED-RL markedly activated both catalase and FDH in brains, cultured cells and purified protein solutions, reduced brain H2O2 and FA levels, and restored brain Ach contents. Consequently, LED-RL not only prevented early-stage memory decline but also rescued late-stage memory deficits in SAMP8 mice. We developed a phototherapeutic device with 630-nm red light, and this LED-RL reduced brain H2O2 levels and reversed age-related memory disorders. The phototherapy of LED-RL has low phototoxicity and a high rate of tissue penetration, and non-invasively reverses aging-associated cognitive decline. This finding opens a promising opportunity to translate LED-RL into clinical treatment for patients with dementia.

  7. Fresh from the Ornamental Garden: Hips of Selected Rose Cultivars Rich in Phytonutrients.

    PubMed

    Cunja, Vlasta; Mikulic-Petkovsek, Maja; Weber, Nika; Jakopic, Jerneja; Zupan, Anka; Veberic, Robert; Stampar, Franci; Schmitzer, Valentina

    2016-02-01

    Morphological parameters (size, weight, color) and the contents of sugars, organic acids, lycopene, β-carotene, and phenolics were determined in hips of Rosa canina (RCA), Rosa sweginzowii (RSW), Rosa rugosa (RUG), and the selected ornamental Rosa cultivars Fru Dagmar Hastrup (FDH), Repandia (REP), Veilchenblau (RVB), Aloha (RAL), Bonica (BON), and Golden Gate (RGG). Although the traditionally used RCA hips contained the highest amount of cyanidin-3-glucoside (83 μg/g DW) and were the reddest (h° = 17.5), they did not stand out in the other analyzed parameters. The RGG climber had the biggest hips (8.86 g), which also contained the highest sugar levels (50.9 g/100 g DW). RAL stood out as the cultivar rich in organic acids (33.9 g/100 g DW), mainly because of its high quinic acid content (17.6 g/100 g DW). FDH and RSW hips were characterized by particularly high ascorbic acid levels (4325 mg/100 g DW and 4711 mg/100 g DW). The other ornamental cultivars contained low amounts of ascorbic acid compared to the analyzed species. The phenolic profile was species/cultivar-specific. The greatest diversity of phenolic compounds was detected in RUG and FDH hips (55 and 54 different compounds tentatively identified with HPLC/MS). Flavanols represented the main phenolic class in most of the investigated species/cultivars, and RGG hips contained the highest amount of catechin and proanthocyanidin derivatives (15855 μg/g DW). Altogether, RAL hips contained the highest quantity of phenolics (44746 μg/g DW), mainly due to high levels of hydrolysable tannins compared to the other species/cultivars. Although small, hips of BON and REP were the most abundant in β-carotene and lycopene content, respectively.

  8. Identification of formaldehyde as the metabolite responsible for the mutagenicity of methyl tertiary-butyl ether in the activated mouse lymphoma assay.

    PubMed

    Mackerer, C R; Angelosanto, F A; Blackburn, G R; Schreiner, C A

    1996-09-01

    Methyl tertiary-butyl ether (MTBE), which is added to gasoline as an octane enhancer and to reduce automotive emissions, has been evaluated in numerous toxicological tests, including those for genotoxicity. MTBE did not show any mutagenic potential in the Ames bacterial assay or any clastogenicity in cytogenetic tests. However, it has been shown to be mutagenic in an in vitro gene mutation assay using mouse lymphoma cells when tested in the presence, but not in the absence, of a rat liver-derived metabolic activation system (S-9). In the present study, MTBE was tested to determine whether formaldehyde, in the presence of the S-9, was responsible for the observed mutagenicity. A modification of the mouse lymphoma assay was employed which permits determination of whether a suspect material is mutagenic because it contains or is metabolized to formaldehyde. In the modified assay, the enzyme formaldehyde dehydrogenase (FDH) and its cofactor NAD+ are added in large excess during the exposure period so that any formaldehyde produced in the system is rapidly converted to formic acid, which is not genotoxic. An MTBE dose-responsive increase in the frequency of mutants and in cytotoxicity occurred without FDH present, and this effect was greatly reduced in the presence of FDH and NAD+. The findings clearly demonstrate that formaldehyde derived from MTBE is responsible for the mutagenicity of MTBE in the activated mouse lymphoma assay. Furthermore, the results suggest that the lack of mutagenicity/clastogenicity seen with MTBE in other in vitro assays might have resulted from inadequacies in the test systems employed for those assays.

  9. A rare human syndrome provides genetic evidence that WNT signaling is required for reprogramming of fibroblasts to induced pluripotent stem cells

    PubMed Central

    Ross, Jason; Busch, Julia; Mintz, Ellen; Ng, Damian; Stanley, Alexandra; Brafman, David; Sutton, V. Reid; Van den Veyver, Ignatia; Willert, Karl

    2015-01-01

    WNT signaling promotes the reprogramming of somatic cells to an induced pluripotent state. We provide genetic evidence that WNT signaling is a requisite step during the induction of pluripotency. Fibroblasts from individuals with Focal Dermal Hypoplasia (FDH), a rare genetic syndrome caused by mutations in the essential WNT processing enzyme PORCN, fail to reprogram using standard methods. This blockade in reprogramming is overcome by ectopic WNT signaling and by PORCN overexpression, thus demonstrating that WNT signaling is essential for reprogramming. The rescue of reprogramming is critically dependent on the level of WNT signaling: steady baseline activation of the WNT pathway yields karyotypically normal iPS cells, whereas daily stimulation with Wnt3a produces FDH-iPS cells with severely abnormal karyotypes. Therefore, although WNT signaling is required for cellular reprogramming, inappropriate activation of WNT signaling induces chromosomal instability, highlighting the precarious nature of ectopic WNT activation, and its tight relationship with oncogenic transformation. PMID:25464842

  10. Efficient energy stable schemes for isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven

    2018-07-01

    We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
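
    In its simplest isotropic form (our schematic notation, without the Willmore term), a convex-splitting scheme writes the free energy as a difference of convex functionals, E = E_c - E_e, and treats E_c implicitly and E_e explicitly; for the quartic well F(φ) = (φ² - 1)²/4 this gives

        \frac{\phi^{n+1}-\phi^{n}}{\Delta t} = \Delta \mu^{n+1},
        \qquad
        \mu^{n+1} = (\phi^{n+1})^{3} - \phi^{n} - \epsilon^{2}\,\Delta \phi^{n+1},

    for which the discrete energy is non-increasing, E(φ^{n+1}) ≤ E(φ^n), for any time step Δt.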

  11. The Adler D-function for N = 1 SQCD regularized by higher covariant derivatives in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.

    2018-01-01

    We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to order O(αs^2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
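
    Schematically (with the overall normalization, which depends on the number of colors and the electric charges q_f of the matter superfields, omitted), the all-order relation referred to above ties the bare-coupling D-function to the matter anomalous dimension γ:

        D(\alpha_{0s}) \;\propto\; \sum_{f} q_f^{2}\,\bigl[\,1 - \gamma(\alpha_{0s})\,\bigr].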

  12. Reputation-Based Secure Sensor Localization in Wireless Sensor Networks

    PubMed Central

    He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing

    2014-01-01

    Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. And our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments. PMID:24982940
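
    The abstract does not specify the reputation evaluation model itself, so the following sketch only illustrates the downstream step: a regular node weighting a standard linearized multilateration by the beacons' reputation scores (the function name, weighting rule, and defaults are our assumptions):

        import numpy as np

        def reputation_multilateration(beacons, dists, rep):
            """Weighted linearized multilateration: beacon i's equation
            ||p - b_i||^2 = d_i^2 is differenced against the last beacon,
            and each row is weighted by the beacons' reputation scores."""
            b = np.asarray(beacons, float)
            d = np.asarray(dists, float)
            w = np.asarray(rep, float)          # reputation scores in [0, 1]
            ref, dref = b[-1], d[-1]            # reference beacon
            A = 2.0 * (b[:-1] - ref)
            y = dref ** 2 - d[:-1] ** 2 \
                + np.sum(b[:-1] ** 2, axis=1) - np.sum(ref ** 2)
            s = np.sqrt(w[:-1] * w[-1])         # combined row weights
            p, *_ = np.linalg.lstsq(A * s[:, None], y * s, rcond=None)
            return p

    Rows contributed by low-reputation beacons are down-weighted, so compromised or unreliable location references pull the estimate less, which is the effect the scheme above aims for.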

  13. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓp-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order α (0 < α < 2, α ≠ 1) in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
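
    As one concrete instance, the convolution quadrature generated by the backward Euler method approximates the time-fractional derivative of order α with weights b_j taken from a generating function (the standard Lubich construction):

        \bar{\partial}_{\tau}^{\,\alpha} u^{n} = \tau^{-\alpha} \sum_{j=0}^{n} b_{j}\, u^{n-j},
        \qquad
        \sum_{j=0}^{\infty} b_{j}\, \zeta^{j} = (1-\zeta)^{\alpha}.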

  14. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results, and a sparsity constraint step based on L1 regularization is then applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.
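
    A minimal sketch of the described loop (a Tikhonov-type update, 3-D DCT filtering of the intermediate image, then an L1 shrinkage) might look as follows in Python. The forward matrix A, the thresholds, and in particular the assumption that the smooth background dominates the lowest-frequency DCT coefficients are illustrative choices, not the paper's settings:

        import numpy as np
        from scipy.fft import dctn, idctn

        def reconstruct(A, b, shape, lam=1e-2, tau=1e-3, cut=0.05, iters=100):
            """A: weight matrix with prod(shape) columns, b: measurements."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2            # gradient step size
            for _ in range(iters):
                x = x - step * (A.T @ (A @ x - b) + lam * x)  # Tikhonov step
                c = dctn(x.reshape(shape), norm="ortho")      # 3-D DCT filtering
                k = tuple(max(1, int(cut * s)) for s in shape)
                c[:k[0], :k[1], :k[2]] = 0.0     # suppress the smooth background
                x = idctn(c, norm="ortho").ravel()
                x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # L1 shrinkage
                x = np.maximum(x, 0.0)           # fluorescence is nonnegative
            return x.reshape(shape)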

  15. Dimension-5 CP-odd operators: QCD mixing and renormalization

    DOE PAGES

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; ...

    2015-12-23

    Here, we study the off-shell mixing and renormalization of flavor-diagonal dimension-five T- and P-odd operators involving quarks, gluons, and photons, including quark electric dipole and chromoelectric dipole operators. Furthermore, we present the renormalization matrix to one loop in the $\overline{\mathrm{MS}}$ scheme. We also provide a definition of the quark chromoelectric dipole operator in a regularization-independent momentum-subtraction scheme suitable for nonperturbative lattice calculations and present the matching coefficients with the $\overline{\mathrm{MS}}$ scheme to one loop in perturbation theory, using both the naïve dimensional regularization and 't Hooft-Veltman prescriptions for γ5.

  16. Clostridium acidurici electron-bifurcating formate dehydrogenase.

    PubMed

    Wang, Shuning; Huang, Haiyan; Kahnt, Jörg; Thauer, Rudolf K

    2013-10-01

    Cell extracts of uric acid-grown Clostridium acidurici catalyzed the coupled reduction of NAD(+) and ferredoxin with formate at a specific activity of 1.3 U/mg. The enzyme complex catalyzing the electron-bifurcating reaction was purified 130-fold and found to be composed of four subunits encoded by the gene cluster hylCBA-fdhF2.

  17. Growth, nutritional, and gastrointestinal aspects of focal dermal hypoplasia (Goltz-Gorlin syndrome)

    USDA-ARS?s Scientific Manuscript database

    Focal dermal hypoplasia (FDH) is a rare genetic disorder caused by mutations in the PORCN gene located on the X-chromosome. In the present study, we characterized the pattern of growth, body composition, and the nutritional and gastrointestinal aspects of children and adults (n = 19) affected with t...

  18. Frequency-Domain Tomography for Single-shot, Ultrafast Imaging of Evolving Laser-Plasma Accelerators

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Downer, Michael

    2011-10-01

    Intense laser pulses propagating through plasma create plasma wakefields that often evolve significantly, e.g. by expanding and contracting. However, such dynamics are known in detail only through intensive simulations. Laboratory visualization of evolving plasma wakes in the “bubble” regime is important for optimizing and scaling laser-plasma accelerators. Recently, snapshots of quasi-static wakes were recorded using frequency-domain holography (FDH). To visualize the wake's evolution, we have generalized FDH to frequency-domain tomography (FDT), which uses multiple probes propagating at different angles with respect to the pump pulse. Each probe records a phase streak, imprinting a partial record of the evolution of pump-created structures. We then tomographically reconstruct the full evolution from all phase streaks. To prove the concept, a prototype experiment visualizing nonlinear index evolution in glass is demonstrated. Four probes propagating at 0, 0.6, 2, 14 degrees to the index “bubble” are angularly and temporally multiplexed to a single spectrometer to achieve cost-effective FDT. From these four phase streaks, an FDT algorithm analogous to conventional CT yields a single-shot movie of the pump's self-focusing dynamics.

  19. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
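
    The Monte-Carlo estimate at the heart of this approach needs only black-box evaluations of the reconstruction algorithm. A minimal sketch for the real-valued denoising case is given below (the paper itself targets a weighted squared-error measure in k-space and complex-valued images, so this is the textbook form, not the authors' estimator):

        import numpy as np

        def mc_divergence(recon, y, eps=1e-4, rng=np.random.default_rng(0)):
            """Randomized estimate of div recon(y) with a single probe."""
            b = rng.choice([-1.0, 1.0], size=y.shape)
            return np.vdot(b, recon(y + eps * b) - recon(y)).real / eps

        def sure_estimate(recon, y, sigma2):
            """SURE for i.i.d. Gaussian noise of variance sigma2."""
            r = recon(y) - y
            return (np.linalg.norm(r) ** 2 - y.size * sigma2
                    + 2.0 * sigma2 * mc_divergence(recon, y))

    Sweeping the regularization parameter inside recon and keeping the minimizer of sure_estimate then approximates the MSE-optimal choice without access to the ground truth.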

  20. A first-passage scheme for determination of overall rate constants for non-diffusion-limited suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Shih-Yuan; Yen, Yi-Ming

    2002-02-01

    A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme developed for diffusion-limited processes is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained from solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect turns stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation too induces the inclusion screening effect, and leads to lower overall rate constants.
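
    The modification can be illustrated on the simplest geometry, a single spherical sink of radius a centred in an outer sphere of radius b, for which the first-passage splitting probability is analytic. In the sketch below the nonzero survival probability at an encounter is modelled by a fixed incorporation probability p_inc and an ad hoc relaunch radius; in the paper this probability comes from solving a boundary value problem, so the code only conveys the mechanism:

        import numpy as np

        rng = np.random.default_rng(1)

        def capture_fraction(a, b, r0, p_inc, relaunch=1.05, n_walkers=20000):
            """Fraction of walkers started at radius r0 (a < r0 < b) that are
            incorporated at the sink before escaping through radius b."""
            captured = 0
            for _ in range(n_walkers):
                r = r0
                while True:
                    p_hit = (a / r) * (b - r) / (b - a)  # reach sink first
                    if rng.random() < p_hit:
                        if rng.random() < p_inc:  # finite surface incorporation
                            captured += 1
                            break
                        r = relaunch * a          # survived the encounter
                    else:
                        break                     # escaped: walker lost
            return captured / n_walkers

    As p_inc approaches 1 the diffusion-limited result is recovered, while small p_inc mimics the reaction-limited surface incorporation regime.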

  1. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have proved to be efficient for inverse problems. In these methods, driving the gradient close to zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement depends on the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1 (a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and its effect is not weakened as depth increases. Besides, the fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally continuous inverse model with sharp boundaries (Sun and Li, 2015). We have tested the new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1 (b)).
    Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
    References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4): 730-741.
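
    For context, the conventional depth-weighted objective and its gradient can be written generically as

    \[
    \phi(\mathbf m)=\lVert \mathbf G\mathbf m-\mathbf d\rVert_2^2+\lambda\,\lVert \mathbf W_z\mathbf m\rVert_2^2,
    \qquad
    \nabla\phi = 2\,\mathbf G^{\mathsf T}(\mathbf G\mathbf m-\mathbf d)+2\lambda\,\mathbf W_z^{\mathsf T}\mathbf W_z\,\mathbf m,
    \]

    with a depth weighting such as W_z = diag((z_j + z_0)^(-β/2)) (a common convention, assumed here rather than taken from the abstract). Because the weighting enters only through the second term, its influence scales with the regularization parameter λ and fades as the data term dominates, which is the shortcoming the proposed weighted-model-vector gradient is designed to remove.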

  2. Adiabatic regularization for gauge fields and the conformal anomaly

    NASA Astrophysics Data System (ADS)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way to the scalar fields, using Wentzel-Kramers-Brillouin-type (WKB-type) solutions. As an application of the adiabatic method, we compute the trace of the energy-momentum tensor and reproduce the known result for the conformal anomaly obtained by other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang-Mills theory, inflation, and cosmology.

  3. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.

  4. Clostridium acidurici Electron-Bifurcating Formate Dehydrogenase

    PubMed Central

    Wang, Shuning; Huang, Haiyan; Kahnt, Jörg

    2013-01-01

    Cell extracts of uric acid-grown Clostridium acidurici catalyzed the coupled reduction of NAD+ and ferredoxin with formate at a specific activity of 1.3 U/mg. The enzyme complex catalyzing the electron-bifurcating reaction was purified 130-fold and found to be composed of four subunits encoded by the gene cluster hylCBA-fdhF2. PMID:23872566

  5. Finance and supply management project execution plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BENNION, S.I.

    As a subproject of the HANDI 2000 project, the Finance and Supply Management system is intended to serve FDH and the Project Hanford major subcontractors with financial processes, including general ledger, project costing, budgeting, and accounts payable, and supply management processes, including purchasing, inventory, and contracts management. Currently these functions are performed with numerous legacy information systems and suboptimized processes.

  6. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular meshes are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh the object to reduce its complexity, and the mesh quality (connectivity regularity) must be improved. In this paper, we review the main semi-regular remeshing methods of the state of the art; given that semi-regular remeshing is mainly relevant for wavelet-based compression, we then present our remeshing method based on trust-region spherical geometry images, which provides a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and that it yields efficient compression capabilities with a minimal set of features, giving a good 3D deformation scheme.

  7. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
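
    In schematic form the framework minimizes a cost of the type

    \[
    \mathcal F(\mathbf x)=\frac{1}{q}\,\bigl\lVert \mathbf y-\mathcal A(\mathbf x)\bigr\rVert_q^q+\frac{\lambda}{p}\,\lVert \mathbf x\rVert_p^p,
    \]

    where the 1/q and 1/p normalizations are a common convention assumed here rather than the authors' exact definition; the reported best performer then corresponds to q = 1.5 for the discrepancy and p = 1 for the regularizer, with the lm-BFGS iterations stopped early.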

  8. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.

  9. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement because it circumvents the work of solving a boundary value Poisson equation inside the molecular interface and of computing the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, on which analytical solutions are available, and on a series of proteins of various sizes.
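
    The decomposition underlying such regularization methods takes the schematic form (units and Coulomb prefactors are a simplifying assumption here)

    \[
    \phi=\bar\phi+\phi^{*},\qquad
    \phi^{*}(\mathbf r)=\frac{1}{\varepsilon_m}\sum_{i}\frac{q_i}{\lvert \mathbf r-\mathbf r_i\rvert}\quad\text{inside the molecule},
    \]

    where the singular component φ* carries the point charges analytically through the Green's function and only the smoother component φ̄ is computed numerically, subject to interface jump conditions; the two-component variant sketched here is what lets the new scheme skip the extra interior Poisson solve of three-component formulations.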

  10. A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays

    PubMed Central

    Hughes, Alec; Hynynen, Kullervo

    2016-01-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323

  11. A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.

    PubMed

    Hughes, Alec; Hynynen, Kullervo

    2016-12-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
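
    Stripped to its linear-algebra core, the drive vector for a prescribed (possibly rotated) focal field can be obtained from a standard Tikhonov-regularized least-squares solve. The sketch below is generic Python; A, p_target, and lam are illustrative names rather than the papers' notation:

        import numpy as np

        def tikhonov_drives(A, p_target, lam):
            """Solve min_u ||A u - p_target||^2 + lam ||u||^2, where A maps
            complex element drives u to the pressure at control points that
            define the oriented focus."""
            n = A.shape[1]
            lhs = A.conj().T @ A + lam * np.eye(n)
            return np.linalg.solve(lhs, A.conj().T @ p_target)

    Increasing lam trades focal fidelity for lower drive amplitudes, which is the focusing-quality versus array-efficiency balance both abstracts describe.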

  12. The Role of Foreign Domestic Helpers in Hong Kong Chinese Children's English and Chinese Skills: A Longitudinal Study

    ERIC Educational Resources Information Center

    Dulay, Katrina May; Tong, Xiuhong; McBride, Catherine

    2017-01-01

    We investigated the influence of nonparental caregivers, such as foreign domestic helpers (FDH), on the home language spoken to the child and its implications for vocabulary and word reading development in Cantonese- and English-speaking bilingual children. Using data collected from ages 5 to 9, we analyzed Chinese vocabulary, Chinese character…

  13. Visualization of evolving laser-generated structures by frequency domain tomography

    NASA Astrophysics Data System (ADS)

    Chang, Yenyu; Li, Zhengyan; Wang, Xiaoming; Zgadzaj, Rafal; Downer, Michael

    2011-10-01

    We introduce frequency domain tomography (FDT) for single-shot visualization of time-evolving refractive index structures (e.g. laser wakefields, nonlinear index structures) moving at light-speed. Previous researchers demonstrated single-shot frequency domain holography (FDH), in which a probe-reference pulse pair co-propagates with the laser-generated structure, to obtain snapshot-like images. However, in FDH, information about the structure's evolution is averaged. To visualize an evolving structure, we use several frequency domain streak cameras (FDSCs), in each of which a probe-reference pulse pair propagates at an angle to the propagation direction of the laser-generated structure. The combination of several FDSCs constitutes the FDT system. We will present experimental results for a 4-probe FDT system that has imaged the whole-beam self-focusing of a pump pulse propagating through glass in a single laser shot. Combining temporal and angle multiplexing methods, we successfully processed data from four probe pulses in one spectrometer in a single-shot. The output of data processing is a multi-frame movie of the self-focusing pulse. Our results promise the possibility of visualizing evolving laser wakefield structures that underlie laser-plasma accelerators used for multi-GeV electron acceleration.

  14. Ready to use bioinformatics analysis as a tool to predict immobilisation strategies for protein direct electron transfer (DET).

    PubMed

    Cazelles, R; Lalaoui, N; Hartmann, T; Leimkühler, S; Wollenberger, U; Antonietti, M; Cosnier, S

    2016-11-15

    Direct electron transfer (DET) to proteins is of considerable interest for the development of biosensors and bioelectrocatalysts. While protein structure is mainly used as a means of attaching the protein to the electrode surface, we employed bioinformatics analysis to predict the suitable orientation of the enzymes to promote DET. Structure similarity and secondary structure prediction were combined to identify localized amino acids able to direct one of the enzyme's electron relays toward the electrode surface by creating a suitable bioelectrocatalytic nanostructure. The electro-polymerization of pyrene pyrrole onto a fluorine-doped tin oxide (FTO) electrode allowed the targeted orientation of the formate dehydrogenase enzyme from Rhodobacter capsulatus (RcFDH) by means of hydrophobic interactions. Its electron relays were directed to the FTO surface, thus promoting DET. The reduction of nicotinamide adenine dinucleotide (NAD(+)), generating a maximum current density of 1 μA cm(-2) with 10 mM NAD(+), leads to a turnover number of 0.09 electrons/s per mol of RcFDH. This work represents a practical approach to evaluate electrode surface modification strategies in order to create valuable bioelectrocatalysts. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Amine dehydrogenases: efficient biocatalysts for the reductive amination of carbonyl compounds.

    PubMed

    Knaus, Tanja; Böhmer, Wesley; Mutti, Francesco G

    2017-01-21

    Amines constitute the major targets for the production of a plethora of chemical compounds that have applications in the pharmaceutical, agrochemical and bulk chemical industries. However, the asymmetric synthesis of α-chiral amines with elevated catalytic efficiency and atom economy is still a very challenging synthetic problem. Here, we investigated the biocatalytic reductive amination of carbonyl compounds employing a rising class of enzymes for amine synthesis: amine dehydrogenases (AmDHs). The three AmDHs from this study - operating in tandem with a formate dehydrogenase from Candida boidinii (Cb-FDH) for the recycling of the nicotinamide coenzyme - performed the efficient amination of a range of diverse aromatic and aliphatic ketones and aldehydes with up to quantitative conversion and elevated turnover numbers (TONs). Moreover, the reductive amination of prochiral ketones proceeded with perfect stereoselectivity, always affording the (R)-configured amines with more than 99% enantiomeric excess. The most suitable amine dehydrogenase, the optimised catalyst loading and the required reaction time were determined for each substrate. The biocatalytic reductive amination with this dual-enzyme system (AmDH-Cb-FDH) possesses elevated atom efficiency as it utilizes the ammonium formate buffer as the source of both nitrogen and reducing equivalents. Inorganic carbonate is the sole by-product.

  16. Holographic Imaging of Evolving Laser-Plasma Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downer, Michael; Shvets, G.

    In the 1870s, English photographer Eadweard Muybridge captured motion pictures within one cycle of a horse’s gallop, which settled a hotly debated question of his time by showing that the horse became temporarily airborne. In the 1940s, Manhattan project photographer Berlin Brixner captured a nuclear blast at a million frames per second, and resolved a dispute about the explosion’s shape and speed. In this project, we developed methods to capture detailed motion pictures of evolving, light-velocity objects created by a laser pulse propagating through matter. These objects include electron density waves used to accelerate charged particles, laser-induced refractive index changes used for micromachining, and ionization tracks used for atmospheric chemical analysis, guide star creation and ranging. Our “movies”, like Muybridge’s and Brixner’s, are obtained in one shot, since the laser-created objects of interest are insufficiently repeatable for accurate stroboscopic imaging. Our high-speed photographs have begun to resolve controversies about how laser-created objects form and evolve, questions that previously could be addressed only by intensive computer simulations based on estimated initial conditions. Resolving such questions helps develop better tabletop particle accelerators, atmospheric ranging devices and many other applications of laser-matter interactions. Our photographic methods all begin by splitting one or more “probe” pulses from the laser pulse that creates the light-speed object. A probe illuminates the object and obtains information about its structure without altering it. We developed three single-shot visualization methods that differ in how the probes interact with the object of interest or are recorded. (1) Frequency-Domain Holography (FDH). In FDH, there are 2 probes, like “object” and “reference” beams in conventional holography. Our “object” probe surrounds the light-speed object, like fleas swarming around a sprinting animal. The object modifies the probe, imprinting information about its structure. Meanwhile, our “reference” probe co-propagates ahead of the object, free of its influence. After the interaction, object and reference combine to record a hologram. For technical reasons, our recording device is a spectrometer (a frequency-measuring device), hence the name “frequency-domain” holography. We read the hologram electronically to obtain a “snapshot” of the object’s average structure as it transits the medium. Our published work shows numerous snapshots of electron density waves (“laser wakes”) in ionized gas (“plasma”), analogous to a water wake behind a boat. Such waves are the basis of tabletop particle accelerators, in which charged particles surf on the light-speed wave, gaining energy. Comparing our snapshots to computer simulations deepens understanding of laser wakes. FDH takes snapshots of objects that are quasi-static --- i.e. like Muybridge’s horse standing still on a treadmill. If the object changes shape, FDH images blur, as when a subject moves while a camera shutter is open. Many laser-generated objects of interest do evolve as they propagate. To overcome this limit of FDH, we developed ... (2) Frequency-Domain Tomography (FDT). In FDT, 5 to 10 probe pulses are fired simultaneously across the object’s path at different angles, like a crossfire of bullets. 
The object imprints a “streaked” record of its evolution on each probe, which we record as in FDH, then recover a multi-frame “movie” of the object’s evolving structure using algorithms of computerized tomography. When propagation distance exceeds a few millimeters, reconstructed FDT images distort. This is because the lenses that image probes to detector have limited depth of field, like cameras that cannot focus simultaneously on both nearby and distant objects. But some laser-generated objects of interest propagate over meters. For these applications we developed … (3) Multi-Object-Plane Phase-Contrast Imaging (MOP-PCI). In MOP-PCI, we image FDT-like probes to the detector from multiple “object planes” --- like recording an event simultaneously with several cameras, some focused on nearby, others on distant, objects. To increase sensitivity, we exploit a phase-contrast imaging technique developed by Dutch Nobel laureate Frits Zernike in the 1930s. Using MOP-PCI we recorded single-shot movies of laser pulse tracks through more than 10 cm of air. We plan to record images of meter-long tracks of electron bunches propagating through plasma in an experiment at the Stanford Linear Accelerator Center (SLAC). This will help SLAC scientists understand, optimize and scale small plasma-based particle accelerators that have applications in medicine, industry, materials science and high-energy physics.

  17. Two-loop matching factors for light quark masses and three-loop mass anomalous dimensions in the regularization invariant symmetric momentum-subtraction schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almeida, Leandro G.; Sturm, Christian

    2010-09-01

    Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from a RI/MOM scheme to the $\overline{\mathrm{MS}}$ scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_γμ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.

  18. Stimulated Deep Neural Network for Speech Recognition

    DTIC Science & Technology

    2016-09-08

    ...making network regularization and robust adaptation challenging. Stimulated training has recently been proposed to address this problem by encouraging... ...potential to improve regularization and adaptation. This paper investigates stimulated training of DNNs for both of these options. These schemes take...

  19. Image segmentation with a novel regularized composite shape prior based on surrogate study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting; Ruan, Dan

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared with typical benchmark schemes.

  20. Condition Number Regularized Covariance Estimation

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  1. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
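
    The estimator can be emulated with a short sketch: keep the sample eigenvectors, clip the eigenvalues to an interval [τ, κτ] so the condition number is at most κ, and choose τ by maximum likelihood. The paper derives a closed-form solution path; the grid search and all names below are simplifying assumptions:

        import numpy as np

        def cond_reg_cov(X, kappa=10.0, n_grid=400):
            """Condition-number-constrained Gaussian ML covariance estimate."""
            S = np.cov(X, rowvar=False, bias=True)   # sample covariance
            l, V = np.linalg.eigh(S)                 # sample spectrum
            l = np.maximum(l, 0.0)
            # negative log-likelihood (up to constants), eigenvectors fixed
            nll = lambda lam: np.sum(np.log(lam) + l / lam)
            taus = np.linspace(l.max() / (10.0 * kappa) + 1e-12,
                               l.max(), n_grid)
            tau = min(taus, key=lambda t: nll(np.clip(l, t, kappa * t)))
            lam = np.clip(l, tau, kappa * tau)
            return (V * lam) @ V.T                   # V diag(lam) V^T

    For n < p the clipped spectrum keeps the estimate invertible and well-conditioned by construction, which is the stated goal of the method.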

  2. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
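
    In the notation of the abstract, the second-order regularizers take the generic form

    \[
    \mathcal R(f)=\int_{\Omega}\bigl\lVert \mathcal H f(\mathbf x)\bigr\rVert\,\mathrm d\mathbf x,
    \qquad
    \mathcal H f=\begin{pmatrix} f_{xx} & f_{xy}\\ f_{xy} & f_{yy}\end{pmatrix},
    \]

    with a spectral or Frobenius matrix norm on the Hessian. Like TV this is convex, homogeneous, and rotation and translation invariant, but because it penalizes curvature rather than slope it does not drive the solution toward piecewise-constant (staircased) images.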

  3. Postirradiation Testing Laboratory (327 Building)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kammenzind, D.E.

    A Standards/Requirements Identification Document (S/RID) is the total list of the Environment, Safety and Health (ES and H) requirements to be implemented by a site, facility, or activity. These requirements are appropriate to the life cycle phase to achieve an adequate level of protection for worker and public health and safety, and the environment during design, construction, operation, decontamination and decommissioning, and environmental restoration. S/RIDs are living documents, to be revised appropriately based on a change in the site's or facility's mission or configuration, a change in the facility's life cycle phase, or a change to the applicable standards/requirements. S/RIDs encompass health and safety, environmental, and safety-related safeguards and security (S and S) standards/requirements related to the functional areas listed in the US Department of Energy (DOE) Environment, Safety and Health Configuration Guide. The Fluor Daniel Hanford (FDH) Contract S/RID contains standards/requirements, applicable to FDH and FDH subcontractors, necessary for safe operation of Project Hanford Management Contract (PHMC) facilities, that are not the direct responsibility of the facility manager (e.g., a site-wide fire department). Facility S/RIDs contain standards/requirements applicable to a specific facility that are the direct responsibility of the facility manager. S/RIDs are prepared by those responsible for managing the operation of facilities or the conduct of activities that present a potential threat to the health and safety of workers, the public, or the environment, including: Hazard Category 1 and 2 nuclear facilities and activities, as defined in DOE 5480.23, and selected Hazard Category 3 nuclear and Low Hazard non-nuclear facilities and activities, as agreed upon by RL. The Postirradiation Testing Laboratory (PTL) S/RID contains standards/requirements that are necessary for safe operation of the PTL facility, and other buildings/areas that are the direct responsibility of the specific facility manager. The specific DOE Orders, regulations, industry codes/standards, guidance documents, and good industry practices that serve as the basis for each element/subelement are identified and aligned with each subelement.

  4. Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation

    PubMed Central

    Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Yongcheng; Zhu, Yunrong

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  5. A Regularized Neural Net Approach for Retrieval of Atmospheric and Surface Temperatures with the IASI Instrument

    NASA Technical Reports Server (NTRS)

    Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by the introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.

  6. FeynArts model file for MSSM transition counterterms from DREG to DRED

    NASA Astrophysics Data System (ADS)

    Stöckinger, Dominik; Varšo, Philipp

    2012-02-01

    The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining $\overline{\mathrm{MS}}$ parton distribution functions with supersymmetric renormalization, or avoiding the renormalization of ε-scalars in dimensional reduction.
    Program summary:
    Program title: MSSMdreg2dred.mod
    Catalogue identifier: AEKR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL-License [1]
    No. of lines in distributed program, including test data, etc.: 7600
    No. of bytes in distributed program, including test data, etc.: 197 629
    Distribution format: tar.gz
    Programming language: Mathematica, FeynArts
    Computer: Any, capable of running Mathematica and FeynArts
    Operating system: Any, with a running Mathematica and FeynArts installation
    Classification: 4.4, 5, 11.1
    Subprograms used: ADOW_v1_0 (FeynArts, CPC 140 (2001) 418)
    Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
    Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
    Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
    Running time: A few seconds to generate typical Feynman graphs with FeynArts.

  7. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
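
    A minimal sketch of the alternation for a linearized problem is given below, with scikit-learn's Gaussian mixture standing in for the mixture-model estimation step. The Jacobian J, the single regularization weight, and all names are simplifying assumptions, not the authors' implementation:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def recon_classify(J, y, x0, lam=1e-2, n_classes=3, outer_iters=10):
            """Alternate a variable-mean Tikhonov step with mixture updates."""
            x = x0.copy()
            for _ in range(outer_iters):
                gmm = GaussianMixture(n_components=n_classes, random_state=0)
                labels = gmm.fit_predict(x.reshape(-1, 1))  # classify pixels
                mu = gmm.means_[labels].ravel()             # class-wise means
                # Tikhonov step pulled toward the current class means
                lhs = J.T @ J + lam * np.eye(J.shape[1])
                x = np.linalg.solve(lhs, J.T @ y + lam * mu)
            return x, labels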

  8. Coupled reactions on bioparticles: Stereoselective reduction with cofactor regeneration on PhaC inclusion bodies.

    PubMed

    Spieler, Valerie; Valldorf, Bernhard; Maaß, Franziska; Kleinschek, Alexander; Hüttenhain, Stefan H; Kolmar, Harald

    2016-07-01

    Chiral alcohols are important building blocks for specialty chemicals and pharmaceuticals. The production of chiral alcohols from ketones can be carried out stereoselectively with alcohol dehydrogenases (ADHs). To establish a process for cost-effective enzyme immobilization on solid phase for application in ketone reduction, we used an established enzyme pair consisting of ADH from Rhodococcus erythropolis and formate dehydrogenase (FDH) from Candida boidinii for NADH cofactor regeneration and co-immobilized them on modified poly-p-hydroxybutyrate synthase (PhaC)-inclusion bodies that were recombinantly produced in Escherichia coli cells. After separate production of genetically engineered and recombinantly produced enzymes and particles, cell lysates were combined and enzymes endowed with a Kcoil were captured on the surface of the Ecoil-presenting particles due to coiled-coil interaction. Enzyme-loaded particles could be easily purified by centrifugation. Total conversion of 4'-chloroacetophenone to (S)-4-chloro-α-methylbenzyl alcohol could be accomplished using enzyme-loaded particles, catalytic amounts of NAD(+) and formate as substrates for FDH. Chiral GC-MS analysis revealed that immobilized ADH retained enantioselectivity with 99% enantiomeric excess. In conclusion, this strategy may become a cost-effective alternative to coupled reactions using purified enzymes. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Amine dehydrogenases: efficient biocatalysts for the reductive amination of carbonyl compounds

    PubMed Central

    Mutti, Francesco G.

    2017-01-01

    Amines constitute the major targets for the production of a plethora of chemical compounds that have applications in the pharmaceutical, agrochemical and bulk chemical industries. However, the asymmetric synthesis of α-chiral amines with elevated catalytic efficiency and atom economy is still a very challenging synthetic problem. Here, we investigated the biocatalytic reductive amination of carbonyl compounds employing a rising class of enzymes for amine synthesis: amine dehydrogenases (AmDHs). The three AmDHs from this study – operating in tandem with a formate dehydrogenase from Candida boidinii (Cb-FDH) for the recycling of the nicotinamide coenzyme – performed the efficient amination of a range of diverse aromatic and aliphatic ketones and aldehydes with up to quantitative conversion and elevated turnover numbers (TONs). Moreover, the reductive amination of prochiral ketones proceeded with perfect stereoselectivity, always affording the (R)-configured amines with more than 99% enantiomeric excess. The most suitable amine dehydrogenase, the optimised catalyst loading and the required reaction time were determined for each substrate. The biocatalytic reductive amination with this dual-enzyme system (AmDH–Cb-FDH) possesses elevated atom efficiency as it utilizes the ammonium formate buffer as the source of both nitrogen and reducing equivalents. Inorganic carbonate is the sole by-product. PMID:28663713

  10. Mathematical model of blasting schemes management in mining operations in presence of random disturbances

    NASA Astrophysics Data System (ADS)

    Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.

    2017-11-01

    The paper presents a mathematical model for the management of blasting schemes in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed, with the following main classification attributes: the orientation in plan of the rows of charging wells relative to the rock block; the presence of cuts in the blasting scheme; the separation of the well series into elements; and the sequence of the blasting. The periodic regularity of the transition from one short-delay blasting scheme to another is proved.

  11. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually better reconstructions than the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring as compared with the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good correlation with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole lung coverage (16 slices), were achieved using the BCS scheme.

  12. Study of X(5568) in a unitary coupled-channel approximation of BK̄ and Bs π

    NASA Astrophysics Data System (ADS)

    Sun, Bao-Xi; Dong, Fang-Yong; Pang, Jing-Long

    2017-07-01

    The potential of the B meson and the pseudoscalar meson is constructed up to the next-to-leading-order Lagrangian, and the BK̄ and Bs π interaction is then studied in the unitary coupled-channel approximation. A resonant state with a mass of about 5568 MeV and J^P = 0^+ is generated dynamically, which can be associated with the X(5568) state recently announced by the D0 Collaboration. The mass and the decay width of this resonant state depend on the regularization scale in the dimensional regularization scheme, or on the maximum momentum in the momentum cutoff regularization scheme. The scattering amplitude of the vector B meson and the pseudoscalar meson is also calculated, and an axial-vector state with a mass near 5620 MeV and J^P = 1^+ is produced. Their partners in the charm sector are also discussed.
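
    For orientation, the unitary coupled-channel amplitude referred to here is typically obtained from an on-shell Bethe-Salpeter factorization (a standard form in chiral unitary approaches; the notation below is assumed, not quoted from the paper):

    ```latex
    T(s) \;=\; \big[\,1 - V(s)\,G(s)\,\big]^{-1} V(s),
    ```

    where V is the interaction kernel derived from the Lagrangian and G is the diagonal two-meson loop function, whose finite part carries the dependence on the regularization scale μ (dimensional regularization) or on the cutoff q_max (momentum cutoff) mentioned in the abstract. Poles of T on the second Riemann sheet are identified with dynamically generated resonances such as the state discussed here.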

  13. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
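
    The analysis/synthesis distinction drawn in the abstract can be summarized by two generic l1-regularized problems (a schematic form; the paper's operators and weights may differ):

    ```latex
    \text{synthesis:}\quad
    \hat{\alpha} = \operatorname*{arg\,min}_{\alpha}\ \tfrac{1}{2}\|y - \Phi\Psi\alpha\|_2^2 + \lambda\|\mathsf{W}\alpha\|_1,
    \qquad \hat{x} = \Psi\hat{\alpha};
    \qquad
    \text{analysis:}\quad
    \hat{x} = \operatorname*{arg\,min}_{x}\ \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda\|\mathsf{W}\Psi^{\dagger}x\|_1,
    ```

    where Φ is the measurement operator (masking for inpainting, convolution with a beam for deconvolution, identity for denoising), Ψ the wavelet synthesis operator on the sphere, and W an optional diagonal weighting of the l1 norm.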

  14. Study of Bird Ingestions into Small Inlet Area, Aircraft Turbine Engines

    DTIC Science & Technology

    1989-12-01

    [Excerpt of tabulated bird-ingestion incident data; recoverable fields include engine models (TFE731, TPE331), incident dates in December 1987, and locations including Richmond, VA (Byrd Field) and Friedrichshafen, Germany (FDH).]

  15. Zika Virus: Obstetric and Pediatric Anesthesia Considerations.

    PubMed

    Tutiven, Jacqueline L; Pruden, Benjamin T; Banks, James S; Stevenson, Mario; Birnbach, David J

    2017-06-01

    As of November 2016, the Florida Department of Health (FDH) and the Centers for Disease Control and Prevention have confirmed more than 4000 travel-related Zika virus (ZIKV) infections in the United States with >700 of those in Florida. There have been 139 cases of locally acquired infection, all occurring in Miami, Florida. Within the US territories (eg, Puerto Rico, US Virgin Islands), >30,000 cases of ZIKV infection have been reported. The projected number of individuals at risk for ZIKV infection in the Caribbean and Latin America approximates 5 million. Similar to Dengue and Chikungunya viruses, ZIKV is spread to humans by infected Aedes aegypti mosquitoes, through travel-associated local transmission, via sexual contact, and through blood transfusions. South Florida is an epicenter for ZIKV infection in the United States and the year-round warm climate along with an abundance of mosquito vectors that can harbor the flavivirus raise health care concerns. ZIKV infection is generally mild with clinical manifestations of fever, rash, conjunctivitis, and arthralgia. Of greatest concern, however, is growing evidence for the relationship between ZIKV infection of pregnant women and increased incidence of abnormal pregnancies and congenital abnormalities in the newborn, now medically termed ZIKA Congenital Syndrome. Federal health officials are observing 899 confirmed Zika-positive pregnancies and the FDH is currently monitoring 110 pregnant women with evidence of Zika infection. The University of Miami/Jackson Memorial Hospital is uniquely positioned just north of downtown Miami and within the vicinity of Liberty City, Little Haiti, and Miami Beach, which are currently "hot spots" for Zika virus exposure and transmissions. As the FDH works fervently to prevent a Zika epidemic in the region, health care providers at the University of Miami and Jackson Memorial Hospital prepare for the clinical spectrum of ZIKV effects as well as the safe perioperative care of the parturients and their affected newborns. In an effort to meet anesthetic preparedness for the care of potential Zika-positive patients and perinatal management of babies born with ZIKA Congenital Syndrome, this review highlights the interim guidelines from the Centers for Disease Control and Prevention and also suggest anesthetic implications and recommendations. In addition, this article reviews guidance for the evaluation and anesthetic management of infants with congenital ZIKV infection. To better manage the perioperative care of affected newborns, this article also reviews the comparative anesthetic implications of babies born with related congenital malformations.

  16. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. II - Five-point schemes

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

    This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.

  17. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.

  18. Designing a Syntax-Based Retrieval System for Supporting Language Learning

    ERIC Educational Resources Information Center

    Tsao, Nai-Lung; Kuo, Chin-Hwa; Wible, David; Hung, Tsung-Fu

    2009-01-01

    In this paper, we propose a syntax-based text retrieval system for on-line language learning and use a fast regular expression search engine as its main component. Regular expression searches provide more scalable querying and search results than keyword-based searches. However, without a well-designed index scheme, the execution time of regular…

  19. Growth, nutritional, and gastrointestinal aspects of focal dermal hypoplasia (Goltz-Gorlin syndrome).

    PubMed

    Motil, Kathleen J; Fete, Mary; Fete, Timothy J

    2016-03-01

    Focal dermal hypoplasia (FDH) is a rare genetic disorder caused by mutations in the PORCN gene located on the X-chromosome. In the present study, we characterized the pattern of growth, body composition, and the nutritional and gastrointestinal aspects of children and adults (n = 19) affected with this disorder using clinical anthropometry and a survey questionnaire. The mean birth length (P < 0.06) and weight (P < 0.001) z-scores of the participants were lower than the reference population. The mean head circumference (P < 0.001), height (length) (P < 0.001), weight (P < 0.01), and BMI (P < 0.05) for age z-scores of the participants were lower than the reference population. The height-for-age and weight-for-age z-scores of the participants did not differ significantly between birth and current measurements. Three-fourths of the group reported having one or more nutritional or gastrointestinal problems including short stature (65%), underweight (77%), oral motor dysfunction (41%), gastroesophageal reflux (24%), gastroparesis (35%), and constipation (35%). These observations provide novel clinical information about growth, body composition, and nutritional and gastrointestinal aspects of children and adults with FDH and underscore the importance of careful observation and early clinical intervention in the care of individuals affected with this disorder. © 2016 Wiley Periodicals, Inc.

  20. Controlled intramyocardial release of engineered chemokines by biodegradable hydrogels as a treatment approach of myocardial infarction

    PubMed Central

    Projahn, Delia; Simsekyilmaz, Sakine; Singh, Smriti; Kanzler, Isabella; Kramp, Birgit K; Langer, Marcella; Burlacu, Alexandrina; Bernhagen, Jürgen; Klee, Doris; Zernecke, Alma; Hackeng, Tilman M; Groll, Jürgen; Weber, Christian; Liehn, Elisa A; Koenen, Rory R

    2014-01-01

    Myocardial infarction (MI) induces a complex inflammatory immune response, followed by remodelling of the heart muscle and scar formation. The rapid regeneration of the blood vessel network system by the attraction of hematopoietic stem cells is beneficial for heart function. Despite the important role of chemokines in these processes, their use in clinical practice has so far been hampered by their limited availability over a long time-span in vivo. Here, a method is presented to increase the physiological availability of chemokines at the site of injury over a defined time-span and simultaneously control their release using biodegradable hydrogels. Two different biodegradable hydrogels were implemented, a fast degradable hydrogel (FDH) for delivering Met-CCL5 over 24 hrs and a slow degradable hydrogel (SDH) for a gradual release of protease-resistant CXCL12 (S4V) over 4 weeks. We demonstrate that the time-controlled release using Met-CCL5-FDH and CXCL12 (S4V)-SDH suppressed initial neutrophil infiltration, promoted neovascularization and reduced apoptosis in the infarcted myocardium. Thus, we were able to significantly preserve cardiac function after MI. This study demonstrates that time-controlled, biopolymer-mediated delivery of chemokines represents a novel and feasible strategy to support the endogenous reparatory mechanisms after MI and may complement cell-based therapies. PMID:24512349

  1. A second order derivative scheme based on Bregman algorithm class

    NASA Astrophysics Data System (ADS)

    Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia

    2016-10-01

    The algorithms based on Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second-order derivative scheme for the class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in Magnetic Resonance Image (MRI) denoising. Experimental results confirm that our algorithm performs well in terms of denoising quality, effectiveness and robustness.
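
    For context, a minimal first-order Bregman-type iteration (the linearized Bregman method for the l1 problem) is sketched below; the second-order derivative scheme proposed in the paper builds on iterations of this flavor, but the sketch makes no attempt to reproduce it:

    ```python
    import numpy as np

    def shrink(v, mu):
        # Soft-thresholding: the proximal operator of mu * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu=5.0, delta=None, n_iter=2000):
        """Linearized Bregman iteration for the sparse recovery problem
        min mu*||x||_1 + ||x||_2^2 / (2*delta)  subject to  A x = b."""
        if delta is None:
            delta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from ||A||_2
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v = v + A.T @ (b - A @ x)        # gradient step on the residual
            x = delta * shrink(v, mu)        # shrinkage (soft-thresholding) step
        return x

    # Small demo: recover a sparse vector from random Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
    x_rec = linearized_bregman(A, A @ x_true)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```

    The step size bound 0 < delta < 2/||A||_2^2 guarantees convergence of the iteration; TV-regularized variants replace the l1 norm with the (isotropic) total variation of the image.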

  2. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes, with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best combination of low complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.

  3. Sparse spikes super-resolution on thin grids II: the continuous basis pursuit

    NASA Astrophysics Data System (ADS)

    Duval, Vincent; Peyré, Gabriel

    2017-09-01

    This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problems regularization. One of the most well-known discretization schemes, the basis pursuit (BP, also known as the LASSO), …

  4. Parallel discrete-event simulation schemes with heterogeneous processing elements.

    PubMed

    Kim, Yup; Kwon, Ikhyun; Chae, Huiseung; Yook, Soon-Hyung

    2014-07-01

    To understand the effects of nonidentical processing elements (PEs) on parallel discrete-event simulation (PDES) schemes, two stochastic growth models, the restricted solid-on-solid (RSOS) model and the Family model, are investigated by simulations. The RSOS model is the model for the PDES scheme governed by the Kardar-Parisi-Zhang equation (KPZ scheme). The Family model is the model for the scheme governed by the Edwards-Wilkinson equation (EW scheme). Two kinds of distributions for nonidentical PEs are considered. In the first kind, the computing capacities of the PEs are not very different, whereas in the second kind the capacities are extremely widespread. The KPZ scheme on complex networks shows synchronizability and scalability regardless of the kind of PEs. The EW scheme never shows synchronizability for a random configuration of PEs of the first kind. However, by regularizing the arrangement of PEs of the first kind, the EW scheme can be made to show synchronizability. In contrast, the EW scheme never shows synchronizability for any configuration of PEs of the second kind.
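
    A toy version of the conservative PDES update rule studied in this line of work can be simulated as follows (identical PEs on a 1D ring, rather than the heterogeneous PEs and networks of the paper); the evolving spread of the virtual-time surface is the quantity the KPZ/EW analysis describes:

    ```python
    import numpy as np

    def conservative_pdes(n_pe=100, n_steps=2000, seed=1):
        """Toy conservative PDES on a 1D ring: PE i may advance its local
        virtual time only if it does not lead its nearest neighbours."""
        rng = np.random.default_rng(seed)
        tau = np.zeros(n_pe)                      # local virtual times
        width = np.empty(n_steps)
        for t in range(n_steps):
            left, right = np.roll(tau, 1), np.roll(tau, -1)
            can_update = (tau <= left) & (tau <= right)
            # eligible PEs advance by a random (exponential) time increment
            tau = np.where(can_update, tau + rng.exponential(1.0, n_pe), tau)
            width[t] = tau.std()                  # spread of the time surface
        return width

    print("final virtual-time surface width:", conservative_pdes()[-1])
    ```

    Heterogeneous PEs would be modeled by giving each site its own mean time increment; synchronizability then corresponds to the surface width saturating rather than growing without bound.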

  5. Representation of viruses in the remediated PDB archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawson, Catherine L., E-mail: cathy.lawson@rutgers.edu; Dutta, Shuchismita; Westbrook, John D.

    2008-08-01

    A new data model for PDB entries of viruses and other biological assemblies with regular noncrystallographic symmetry is described. A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive.
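
    In practice, generating a full assembly from deposited coordinates amounts to applying the stored rotation-translation operators (in the spirit of PDB BIOMT records). A minimal sketch, with a hypothetical two-fold symmetry:

    ```python
    import numpy as np

    def expand_assembly(coords, operations):
        """Apply a list of (R, t) symmetry operations (rotation matrix and
        translation vector) to an N x 3 array of deposited coordinates."""
        return np.vstack([coords @ R.T + t for R, t in operations])

    # Hypothetical example: a two-fold (C2) axis along z.
    ops = [(np.eye(3), np.zeros(3)),
           (np.diag([-1.0, -1.0, 1.0]), np.zeros(3))]
    deposited = np.array([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])
    print(expand_assembly(deposited, ops))
    ```

    An icosahedral virus capsid would use 60 such operators; the remediated data model standardizes how these operators and their coordinate frames are recorded.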

  6. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
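
    The core idea of prior-conditioning, solving a right-preconditioned least-squares problem with a Krylov method so that a prior image shapes the solution, can be sketched as follows (a simplified stand-in using SciPy's LSQR; the actual PRIFIRA pipeline, stopping criteria, and iterative reweighting are more involved):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def prior_conditioned_solve(A, b, prior, n_iter=50):
        """Right prior-conditioned least squares: solve min_z ||b - A D z||_2
        with D = diag(prior), then map back x = D z, so that pixels with
        large prior values (e.g. a beamformed 'dirty' image) are favoured."""
        m, n = A.shape
        d = np.asarray(prior)
        AD = LinearOperator((m, n),
                            matvec=lambda z: A @ (d * z),
                            rmatvec=lambda y: d * (A.T @ y))
        z = lsqr(AD, b, iter_lim=n_iter)[0]   # Krylov (LSQR) iterations
        return d * z

    # Toy demo with a random "measurement" matrix and a crude prior image.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((80, 120))
    x_true = np.maximum(0.0, rng.standard_normal(120))
    b = A @ x_true
    prior = np.abs(A.T @ b)                   # stand-in for a beamformed image
    x = prior_conditioned_solve(A, b, prior / prior.max() + 1e-3)
    ```

    Early stopping of the Krylov iterations acts as the regularizer here, which is why a noise-based stopping criterion matters for efficiency.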

  7. Application of the Organic Synthetic Designs to Astrobiology

    NASA Astrophysics Data System (ADS)

    Kolb, V. M.

    2009-12-01

    In this paper we propose a synthetic route to the heterocyclic compounds and the insoluble materials found on meteorites. Our synthetic scheme involves the reaction of sugars and amino acids, the so-called Maillard reaction. We developed this scheme based on a combined analysis of regular and retrosynthetic organic synthetic principles. The merits of these synthetic methods for prebiotic design are addressed.

  8. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter function is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to keep under control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides very satisfactory results. In the case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.

  9. Sample Based Unit Liter Dose Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JENSEN, L.

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting µCi/g or µCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000).

  10. Scalar self-force on eccentric geodesics in Schwarzschild spacetime: A time-domain computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Roland

    2007-06-15

    We calculate the self-force acting on a particle with scalar charge moving on a generic geodesic around a Schwarzschild black hole. This calculation requires an accurate computation of the retarded scalar field produced by the moving charge; this is done numerically with the help of a fourth-order convergent finite-difference scheme formulated in the time domain. The calculation also requires a regularization procedure, because the retarded field is singular on the particle's world line; this is handled mode-by-mode via the mode-sum regularization scheme first introduced by Barack and Ori. This paper presents the numerical method, various numerical tests, and a sample of results for mildly eccentric orbits as well as "zoom-whirl" orbits.

  11. Two-level schemes for the advection equation

    NASA Astrophysics Data System (ADS)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms; the advection operator is then skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of the second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
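
    Schematically, the symmetric advection operator described above is (notation assumed, not quoted from the paper):

    ```latex
    \mathcal{A}u \;=\; \tfrac{1}{2}\Big(\nabla\!\cdot\!(\mathbf{v}\,u) \;+\; \mathbf{v}\!\cdot\!\nabla u\Big),
    \qquad (\mathcal{A}u,\,u) = 0,
    ```

    so that the half-sum of the divergent and characteristic forms is skew-symmetric and the L2 norm of the solution is conserved, which is precisely the property the regularized schemes are designed to inherit at the discrete level.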

  12. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem directly. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
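
    The projected Landweber step underlying the regularization stage is a standard building block; a minimal sketch (without the guided-filter denoising stage the authors add between iterations) looks like this:

    ```python
    import numpy as np

    def projected_landweber(A, b, n_iter=200, step=None):
        """Projected Landweber iteration: a gradient step on ||A x - b||^2
        followed by projection onto a convex constraint set (here,
        nonnegativity, which is natural for intensity images)."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 0 < step < 2/||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * (A.T @ (b - A @ x))      # Landweber (gradient) step
            x = np.maximum(x, 0.0)                  # projection step
        return x
    ```

    The step-size bound 0 < step < 2/||A||_2^2 guarantees convergence of the unprojected iteration, and projection onto a convex set preserves it; alternating this step with a denoiser yields the decomposition described in the abstract.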

  13. Regularization in Orbital Mechanics; Theory and Practice

    NASA Astrophysics Data System (ADS)

    Roa, Javier

    2017-09-01

    Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.

  14. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    PubMed

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data, where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.

  15. Iterative image reconstruction for multienergy computed tomography via structure tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, the MECT images reconstructed by analytical approaches often suffer from poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
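
    The PWLS part of the objective can be written generically as (symbols assumed, not quoted from the paper):

    ```latex
    \hat{x} \;=\; \operatorname*{arg\,min}_{x \ge 0}\ (y - Hx)^{\mathrm{T}}\,\Sigma^{-1}\,(y - Hx) \;+\; \beta\,\mathrm{STV}(x),
    ```

    where H is the system matrix, Σ a diagonal matrix of measurement variances (the "weighted" part, which downweights noisy low-count rays), β the regularization strength, and STV(x) a penalty on the eigenvalues of the local structure tensor rather than on the plain gradient magnitude, which is what suppresses the patchy artifacts of ordinary TV.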

  16. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
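
    For a Tikhonov subproblem, the GCV function can be evaluated cheaply from an SVD; a small sketch of selecting the parameter on a grid (a generic recipe, not the paper's TV-specific update):

    ```python
    import numpy as np

    def gcv_tikhonov(A, b, lambdas):
        """Evaluate the GCV function for Tikhonov regularization
        min ||A x - b||^2 + lam^2 ||x||^2, using the SVD of A."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        b_out = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2  # part of b outside range(A)
        scores = []
        for lam in lambdas:
            f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
            resid = np.linalg.norm((1.0 - f) * beta) ** 2 + b_out
            scores.append(resid / (len(b) - f.sum()) ** 2)
        return np.array(scores)

    # Usage: pick the lambda minimizing GCV on a logarithmic grid, e.g.
    # lams = np.logspace(-6, 1, 50)
    # lam_opt = lams[gcv_tikhonov(A, b, lams).argmin()]
    ```

    The denominator is the squared effective number of residual degrees of freedom; minimizing this ratio balances data fit against the amount of smoothing, without requiring knowledge of the noise level.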

  17. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  18. Numerical simulation of a shear-thinning fluid through packed spheres

    NASA Astrophysics Data System (ADS)

    Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol

    2012-12-01

    Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regularly and randomly packed spheres has been numerically investigated in a representative unit cell with tri-periodic boundary conditions, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against the literature for classical sphere-packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear-thinning, the pressure gradient, and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.

  19. Range-Separated Brueckner Coupled Cluster Doubles Theory

    NASA Astrophysics Data System (ADS)

    Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2014-04-01

    We introduce a range-separation approximation to coupled cluster doubles (CCD) theory that successfully overcomes limitations of regular CCD when applied to the uniform electron gas. We combine the short-range ladder channel with the long-range ring channel in the presence of a Brueckner-renormalized one-body interaction and obtain ground-state energies with an accuracy of 0.001 a.u./electron across a wide range of density regimes. Our scheme is particularly useful in the low-density and strongly correlated regimes, where regular CCD has serious drawbacks. Moreover, we cure the infamous overcorrelation of approaches based on ring diagrams (i.e., the particle-hole random phase approximation). Our energies are further shown to have appropriate basis-set and thermodynamic-limit convergence, and overall this scheme promises energetic properties for realistic periodic and extended systems which existing methods do not possess.

  20. One-loop corrections to light cone wave functions: The dipole picture DIS cross section

    NASA Astrophysics Data System (ADS)

    Hänninen, H.; Lappi, T.; Paatelainen, R.

    2018-06-01

    We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.

  1. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.

  2. Generalization Analysis of Fredholm Kernel Regularized Classifiers.

    PubMed

    Gong, Tieliang; Xu, Zongben; Chen, Hong

    2017-07-01

    Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.

  3. Expected impacts of the Cannabis Infringement Notice scheme in Western Australia on regular users and their involvement in the cannabis market.

    PubMed

    Chanteloup, Francoise; Lenton, Simon; Fetherston, James; Barratt, Monica J

    2005-07-01

    The effect on the cannabis market is one area of interest in the evaluation of the new 'prohibition with civil penalties' scheme for minor cannabis offences in WA. One goal of the scheme is to reduce the proportion of cannabis consumed that is supplied by large-scale suppliers who may also supply other drugs. As part of the pre-change phase of the evaluation, 100 regular (at least weekly) cannabis users were given a qualitative and quantitative interview covering knowledge of and attitudes towards cannabis law, personal cannabis use, market factors, experience with the justice system, and the impact of legislative change. Some 85% of those who commented identified the changes as likely to have little impact on their cannabis use. Some 89% of the 70 who intended to cultivate cannabis once the CIN scheme was introduced suggested they would grow cannabis within the two-plant non-hydroponic limit eligible for an infringement notice under the new law. Only 15% believed an increase in self-supply would undermine the large-scale suppliers of cannabis in the market and allow some cannabis users to distance themselves from its unsavoury aspects. Only 11% said they would enter, or re-enter, the cannabis market as sellers as a result of the scheme's introduction. Most respondents who commented believed that the impact of the legislative changes on the cannabis market would be negligible. The extent to which this happens will be addressed in the post-change phase of this research. Part of the challenge in assessing the impact of the CIN scheme on the cannabis market is that the market is distinctly heterogeneous.

  4. Multi-enzymatic one-pot reduction of dehydrocholic acid to 12-keto-ursodeoxycholic acid with whole-cell biocatalysts.

    PubMed

    Sun, Boqiao; Kantzow, Christina; Bresch, Sven; Castiglione, Kathrin; Weuster-Botz, Dirk

    2013-01-01

    Ursodeoxycholic acid (UDCA) is a bile acid of industrial interest, as it is used as an agent for the treatment of primary sclerosing cholangitis and the medicamentous, non-surgical dissolution of gallstones. Currently, it is prepared industrially from cholic acid following a seven-step chemical procedure with an overall yield of <30%. In this study, we investigated the key enzymatic steps in the chemo-enzymatic preparation of UDCA: the two-step reduction of dehydrocholic acid (DHCA) to 12-keto-ursodeoxycholic acid using a mutant of 7β-hydroxysteroid dehydrogenase (7β-HSDH) from Collinsella aerofaciens and 3α-hydroxysteroid dehydrogenase (3α-HSDH) from Comamonas testosteroni. Three different one-pot reaction approaches were investigated using whole-cell biocatalysts in simple batch processes. We applied one-biocatalyst systems, where 3α-HSDH, 7β-HSDH, and either a mutant of formate dehydrogenase (FDH) from Mycobacterium vaccae N10 or a glucose dehydrogenase (GDH) from Bacillus subtilis were expressed in an Escherichia coli BL21(DE3)-based host strain. We also investigated two-biocatalyst systems, where 3α-HSDH and 7β-HSDH were expressed separately, together with FDH enzymes for cofactor regeneration, in two distinct E. coli hosts that were simultaneously applied in the one-pot reaction. The best result was achieved by the one-biocatalyst system with GDH for cofactor regeneration, which was able to completely convert 100 mM DHCA to >99.5 mM 12-keto-UDCA within 4.5 h in a simple batch process on a liter scale. Copyright © 2012 Wiley Periodicals, Inc.

  5. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    DTIC Science & Technology

    2015-03-31

    The FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for … regularity due to the boundary conditions. … In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients on …

  6. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls the high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, as it provides robust and efficient convergence for a wide range of α.

  7. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.

  8. On the regularization for nonlinear tomographic absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Dai, Jinghang; Yu, Tao; Xu, Lijun; Cai, Weiwei

    2018-02-01

    Tomographic absorption spectroscopy (TAS) has attracted increased research effort recently due to developments in both hardware and new imaging concepts such as nonlinear tomography and compressed sensing. Nonlinear TAS is one of the emerging modalities that is based on the concept of nonlinear tomography and has been successfully demonstrated both numerically and experimentally. However, all the previous demonstrations were realized using only two orthogonal projections, simply for ease of implementation. In this work, we examine the performance of nonlinear TAS using other beam arrangements and test the effectiveness of the beam optimization technique that has been developed for linear TAS. In addition, so far only the smoothness prior has been adopted and applied in nonlinear TAS. Nevertheless, there are also other useful priors, such as sparseness and model-based priors, which have not been investigated yet. This work aims to show how these priors can be implemented and included in the reconstruction process. Regularization through a Bayesian formulation will be introduced specifically for this purpose, and a method for the determination of a proper regularization factor will be proposed. The comparative studies performed with different beam arrangements and regularization schemes on a few representative phantoms suggest that the beam optimization method developed for linear TAS also works for the nonlinear counterpart, and that the regularization scheme should be selected properly according to the available a priori information under specific application scenarios so as to achieve the best reconstruction fidelity. Though this work is conducted in the context of nonlinear TAS, it can also provide useful insights for other tomographic modalities.

  9. Medical image enhancement using resolution synthesis

    NASA Astrophysics Data System (ADS)

    Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.

    2011-03-01

    We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm [1]. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image by classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed by filtered back projection without significant loss of image detail. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
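
    The classification-based linear regression described above reduces, per class, to a ridge-regression predictor. A minimal sketch with hypothetical patch arrays (the classifier that assigns labels, e.g. clustering on patch features, is omitted):

    ```python
    import numpy as np

    def train_class_predictors(patches_lo, targets_hi, labels, n_classes, alpha=1e-2):
        """One ridge-regression predictor per class: each maps a low-quality
        patch (length-p vector) to the corresponding high-quality target pixel.
        Assumes every class label in range(n_classes) has training samples."""
        predictors = []
        for c in range(n_classes):
            X = patches_lo[labels == c]       # (n_c, p) low-quality patches
            y = targets_hi[labels == c]       # (n_c,) high-quality pixels
            p = X.shape[1]
            # Ridge regression: w = (X^T X + alpha I)^{-1} X^T y
            w = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)
            predictors.append(w)
        return predictors

    def enhance_pixel(patch, label, predictors):
        return patch @ predictors[label]      # linear prediction for this class
    ```

    The ridge term alpha plays the regularizing role mentioned in the abstract: it keeps the per-class predictors well conditioned when a class has few training patches.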

  10. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
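
    The contrast drawn in the abstract is between first-order and second-order gradient-like flows; schematically (notation assumed):

    ```latex
    \text{first order:}\quad \dot{x}(t) = -\nabla J\big(x(t)\big),
    \qquad
    \text{second order:}\quad \ddot{x}(t) + \eta\,\dot{x}(t) = -\nabla J\big(x(t)\big),
    ```

    where J is the regularized data-misfit functional and η > 0 a damping parameter. The damped symplectic scheme mentioned above is a time discretization of the second-order flow, and the dynamically selected regularization parameter replaces the fixed parameter of steepest-descent-type approaches.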

  11. New form of the exact NSVZ β-function: the three-loop verification for terms containing Yukawa couplings

    NASA Astrophysics Data System (ADS)

    Kazantsev, A. E.; Shakhmanov, V. Yu.; Stepanyantz, K. V.

    2018-04-01

    We investigate a recently proposed new form of the exact NSVZ β-function, which relates the β-function to the anomalous dimensions of the quantum gauge superfield, of the Faddeev-Popov ghosts, and of the chiral matter superfields. Namely, for the general renormalizable N = 1 supersymmetric gauge theory, regularized by higher covariant derivatives, the sum of all three-loop contributions to the β-function containing the Yukawa couplings is compared with the corresponding two-loop contributions to the anomalous dimensions of the quantum superfields. It is demonstrated that for the considered terms both the new and the original forms of the NSVZ relation are valid independently of the subtraction scheme if the renormalization group functions are defined in terms of the bare couplings. This result is obtained from an equality relating the loop integrals, which, in turn, follows from the factorization of the integrals for the β-function into integrals of double total derivatives. For the renormalization group functions defined in terms of the renormalized couplings we verify that the NSVZ scheme is obtained with the higher covariant derivative regularization supplemented by a subtraction scheme in which only powers of ln(Λ/μ) are included in the renormalization constants.

  12. Numbers and functions in quantum field theory

    NASA Astrophysics Data System (ADS)

    Schnetz, Oliver

    2018-04-01

    We review recent results in the theory of numbers and single-valued functions on the complex plane which arise in quantum field theory. These results are the basis for a new approach to high-loop-order calculations. As concrete examples, we provide scheme-independent counterterms of primitive log-divergent graphs in ϕ^4 theory up to eight loops and the renormalization functions β, γ, γ_m of dimensionally regularized ϕ^4 theory in the minimal subtraction scheme up to seven loops.

  13. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the subtraction terms I

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor; Trócsányi, Zoltán

    2008-08-01

    In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.

  14. CO2 Photoreduction by Formate Dehydrogenase and a Ru-Complex in a Nanoporous Glass Reactor.

    PubMed

    Noji, Tomoyasu; Jin, Tetsuro; Nango, Mamoru; Kamiya, Nobuo; Amao, Yutaka

    2017-02-01

    In this study, we demonstrated the conversion of CO2 to formic acid under ambient conditions in a photoreduction nanoporous reactor using a photosensitizer, methyl viologen (MV2+), and formate dehydrogenase (FDH). The overall efficiency of this reactor was 14 times higher than that of the equivalent solution. The accumulation rate of formic acid in the 50 nm nanopores is 83 times faster than that in the equivalent solution. Thus, this CO2 photoreduction nanoporous glass reactor will be useful as an artificial photosynthesis system that converts CO2 to fuel.

  15. Training survey -- educational profile for Hanford HANDI 2000 project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    Fluor Daniel Hanford, Inc. (FDH) is currently adopting streamlined business processes through integrated software solutions. Replacing the legacy software (current/replacement systems, attached) also avoids the significant maintenance required to resolve Year 2000 issues. This initiative, which encompasses all the system replacements that will occur, has been named 'HANDI 2000'. The software being implemented in the first phase of this project includes Indus International's PASSPORT software, PeopleSoft, and Primavera P3 software. The PASSPORT applications being implemented are Inventory Management, Purchasing, Contract Management, Accounts Payable, and MSDS (Material Safety Data Sheets).

  16. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  17. Renormalized stress-energy tensor for stationary black holes

    NASA Astrophysics Data System (ADS)

    Levi, Adam

    2017-01-01

    We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for ⟨ϕ^2⟩_ren, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example of regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on a Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.

  18. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
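
    To see why an elliptic solve arises at each time step, consider an implicit time discretization of the wave equation with constant speed c (a schematic version; the paper treats variable speed and a fourth-order-accurate scheme):

    ```latex
    \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\Delta t^{2}}
    \;=\; c^{2}\Delta\big(\theta\,u^{n+1} + (1 - 2\theta)\,u^{n} + \theta\,u^{n-1}\big)
    \;\;\Longrightarrow\;\;
    \Big(\Delta - \frac{1}{c^{2}\theta\,\Delta t^{2}}\Big)u^{n+1} \;=\; f\big(u^{n}, u^{n-1}\big),
    ```

    a modified Helmholtz equation for the new time level u^{n+1}, which is exactly the kind of constant-coefficient elliptic problem the method of difference potentials then solves with high order on the nonconforming domain.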

  19. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment, and it characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed via the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
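
    As a hedged sketch of the regularized branch of this comparison, the snippet below inverts a made-up source-receptor matrix with plain Tikhonov regularization and a crude L-curve corner pick. The paper's second-order maximum entropy regularizer, quasi-Newton/PSO minimizers, and Lagrangian transition matrix are not reproduced; all names and sizes here are illustrative.

```python
# Hedged sketch: estimate area-source strengths s from receptor data
# y = K s + noise with Tikhonov regularization and an L-curve scan.
# K is a synthetic source-receptor matrix (NOT a Lagrangian dispersion model),
# and the paper's second-order maximum entropy term is replaced by ||s||^2.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_rec = 25, 6
K = rng.random((n_rec, n_src)) * np.exp(-3.0 * rng.random((n_rec, n_src)))
s_true = np.zeros(n_src)
s_true[10:15] = 2.0                                    # unknown emission rates
y = K @ s_true + 0.01 * rng.standard_normal(n_rec)     # synthetic observations

def tikhonov(alpha):
    # Minimize ||K s - y||^2 + alpha ||s||^2 via the normal equations.
    return np.linalg.solve(K.T @ K + alpha * np.eye(n_src), K.T @ y)

alphas = np.logspace(-8, 1, 40)
res = np.log([np.linalg.norm(K @ tikhonov(a) - y) for a in alphas])
sol = np.log([np.linalg.norm(tikhonov(a)) for a in alphas])
# Crude L-curve "corner": point of the normalized curve closest to the origin.
r = (res - res.min()) / (res.max() - res.min())
t = (sol - sol.min()) / (sol.max() - sol.min())
alpha_star = alphas[np.argmin(r**2 + t**2)]
print("chosen alpha:", alpha_star)
print("estimated strengths:", np.round(tikhonov(alpha_star), 2))
```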

  20. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows us to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
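
    For orientation, here is a minimal sum-product BP baseline on a small pairwise binary model, i.e., the "bare BP" whose error the proposed MCMC schemes correct. The loop-series truncation and the Worm-algorithm samplers themselves are not shown, and the couplings and fields are randomly generated for illustration.

```python
# Baseline "bare BP" only: sum-product belief propagation on a random
# pairwise binary (Ising-like) model with states in {-1,+1}. The paper's
# loop-calculus correction and Worm-algorithm MCMC are not implemented here.
import numpy as np

n = 5
rng = np.random.default_rng(1)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.6]
J = {e: rng.normal(0.0, 0.5) for e in edges}        # pairwise couplings
h = rng.normal(0.0, 0.3, n)                         # local fields
states = np.array([-1.0, 1.0])

# msgs[(i, j)] is the message from node i to node j (one entry per state of j).
msgs = {}
for i, j in edges:
    msgs[(i, j)] = np.ones(2)
    msgs[(j, i)] = np.ones(2)

for _ in range(200):                                # synchronous fixed-point updates
    new = {}
    for (i, j) in msgs:
        Jij = J.get((i, j), J.get((j, i)))
        incoming = np.ones(2)                       # messages into i, excluding j
        for (k, l) in msgs:
            if l == i and k != j:
                incoming *= msgs[(k, i)]
        out = np.array([
            sum(np.exp(h[i] * si + Jij * si * sj) * incoming[a]
                for a, si in enumerate(states))
            for sj in states])                      # sum over the state of i
        new[(i, j)] = out / out.sum()               # normalize for stability
    msgs = new

for i in range(n):                                  # node beliefs (approx. marginals)
    b = np.exp(h[i] * states)
    for (k, l) in msgs:
        if l == i:
            b *= msgs[(k, i)]
    print(f"BP marginal P(s_{i}=+1) =", round(b[1] / b.sum(), 3))
```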

  1. CO2-fixing one-carbon metabolism in a cellulose-degrading bacterium Clostridium thermocellum

    DOE PAGES

    Xiong, Wei; Lin, Paul P.; Magnusson, Lauren; ...

    2016-10-28

    Clostridium thermocellum can ferment cellulosic biomass to formate and other end products, including CO2. This organism lacks formate dehydrogenase (Fdh), which catalyzes the reduction of CO2 to formate. However, feeding the bacterium 13C-bicarbonate and cellobiose followed by NMR analysis showed the production of 13C-formate in C. thermocellum culture, indicating the presence of an uncharacterized pathway capable of converting CO2 to formate. Combining genomic and experimental data, we demonstrated that the conversion of CO2 to formate serves as a CO2 entry point into the reductive one-carbon (C1) metabolism, and internalizes CO2 via two biochemical reactions: the reversed pyruvate:ferredoxin oxidoreductase (rPFOR), which incorporates CO2 using acetyl-CoA as a substrate and generates pyruvate, and pyruvate-formate lyase (PFL), which converts pyruvate to formate and acetyl-CoA. We analyzed the labeling patterns of proteinogenic amino acids in individual deletions of all five putative PFOR mutants and in a PFL deletion mutant. We identified two enzymes acting as rPFOR, confirmed the dual activities of rPFOR and PFL crucial for CO2 uptake, and provided physical evidence of a distinct in vivo 'rPFOR-PFL shunt' that reduces CO2 to formate while circumventing the lack of Fdh. Such a pathway precedes CO2 fixation via the reductive C1 metabolic pathway in C. thermocellum. Lastly, these findings demonstrated the metabolic versatility of C. thermocellum, which is thought of as primarily a cellulosic heterotroph but is shown here to be endowed with the ability to fix CO2 as well.

  2. Levels of control exerted by the Isc iron-sulfur cluster system on biosynthesis of the formate hydrogenlyase complex.

    PubMed

    Pinske, Constanze; Jaroschinsky, Monique; Sawers, R Gary

    2013-06-01

    The membrane-associated formate hydrogenlyase (FHL) complex of bacteria like Escherichia coli is responsible for the disproportionation of formic acid into the gaseous products carbon dioxide and dihydrogen. It comprises minimally seven proteins including FdhF and HycE, the catalytic subunits of formate dehydrogenase H and hydrogenase 3, respectively. Four proteins of the FHL complex have iron-sulphur cluster ([Fe-S]) cofactors. Biosynthesis of [Fe-S] is principally catalysed by the Isc or Suf systems and each comprises proteins for assembly and for delivery of [Fe-S]. This study demonstrates that the Isc system is essential for biosynthesis of an active FHL complex. In the absence of the IscU assembly protein no hydrogen production or activity of FHL subcomponents was detected. A deletion of the iscU gene also resulted in reduced intracellular formate levels partially due to impaired synthesis of pyruvate formate-lyase, which is dependent on the [Fe-S]-containing regulator FNR. This caused reduced expression of the formate-inducible fdhF gene. The A-type carrier (ATC) proteins IscA and ErpA probably deliver [Fe-S] to specific apoprotein components of the FHL complex because mutants lacking either protein exhibited strongly reduced hydrogen production. Neither ATC protein could compensate for the lack of the other, suggesting that they had independent roles in [Fe-S] delivery to complex components. Together, the data indicate that the Isc system modulates FHL complex biosynthesis directly by provision of [Fe-S] as well as indirectly by influencing gene expression through the delivery of [Fe-S] to key regulators and enzymes that ultimately control the generation and oxidation of formate.

  3. The multi-faceted outcomes of conjunct diabetes and cardiovascular familial history in type 2 diabetes.

    PubMed

    Hermans, Michel P; Ahn, Sylvie A; Rousseau, Michel F

    2012-01-01

    Familial history of early-onset CHD (EOCHD) is a major risk factor for CHD, and familial diabetes history (FDH) impacts β-cell function. Some transmissible, accretional gradient of CHD risk may exist when diabetes and EOCHD familial histories combine. We investigated whether the impact of such a combination is neutral, additive, or potentiating in T2DM descendants, as regards cardiometabolic phenotype, glucose homeostasis and micro-/macroangiopathies. We performed a cross-sectional retrospective cohort study of 796 T2DM patients divided according to the presence (Diab[+]) or absence (Diab[-]) of a 1st-degree diabetes familial history and/or EOCHD (CVD(+) and (-)), giving four subgroups: (i) [Diab(-)CVD(-)] (n=355); (ii) [Diab(+)CVD(-)] (n=338); (iii) [Diab(-)CVD(+)] (n=47); and (iv) [Diab(+)CVD(+)] (n=56). There was no interaction between the two familial histories on subgroup distribution; their combination translated into additive detrimental outcomes and higher rates of fat mass, sarcopenia, (hs)CRP and retinopathy. FDH(+) patients had lower insulinemia, insulin secretion and hyperbolic product, and accelerated hyperbolic product loss. An EOCHD family history affected neither insulin secretion nor sensitivity. There were significant differences regarding macroangiopathy/CAD, more prevalent in [Diab(-)CVD(+)] and [Diab(+)CVD(+)]. Among CVD(+), the highest macroangiopathy prevalence was observed in [Diab(-)CVD(+)], who had 66% macroangiopathy and 57% CAD, rates higher (absolute-relative) by 23%-53% (overall) and 21%-58% (CAD) than [Diab(+)CVD(+)], who inherited the direst cardiometabolic familial history (p = 0.0288 and 0.0310). A parental history of diabetes markedly affects residual insulin secretion and the secretory loss rate in T2DM offspring without worsening insulin resistance. It paradoxically translated into less macroangiopathy with concurrent familial EOCHD. Conjunct diabetes and CV familial histories generate multi-faceted vascular outcomes in offspring, including lesser macroangiopathy/CAD.

  4. Association Between Ultra-Processed Food Consumption and Functional Gastrointestinal Disorders: Results From the French NutriNet-Santé Cohort.

    PubMed

    Schnabel, Laure; Buscail, Camille; Sabate, Jean-Marc; Bouchoucha, Michel; Kesse-Guyot, Emmanuelle; Allès, Benjamin; Touvier, Mathilde; Monteiro, Carlos A; Hercberg, Serge; Benamouzig, Robert; Julia, Chantal

    2018-06-15

    Ultra-processed food (UPF) consumption has increased over the last decades and is raising concerns about potential adverse health effects. Our objective was to assess the association between UPF consumption and four functional gastrointestinal disorders (FGIDs): irritable bowel syndrome (IBS), functional constipation (FC), functional diarrhea (FDh), and functional dyspepsia (FDy), in a large sample of French adults. We analyzed dietary data from 33,343 participants of the web-based NutriNet-Santé cohort who completed at least three 24-h food records prior to a Rome III self-administered questionnaire. The proportion (by weight) of UPF in the diet (UPFp) was computed for each subject. The association between UPFp quartiles and FGIDs was estimated by multivariable logistic regression. Participants included in the analysis were mainly women (76.4%), and the mean age was 50.4 (SD = 14.0) years. UPF accounted for 16.0% of food consumed by weight, corresponding to 33.0% of total energy intake. UPF consumption was associated with younger age, living alone, lower incomes, higher BMI, and lower physical activity level (all p < 0.0001). A total of 3516 participants reported IBS (10.5%), 1785 FC (5.4%), 1303 FDy (3.9%), and 396 FDh (1.1%). After adjusting for confounding factors, an increase in UPFp was associated with a higher risk of IBS (aOR Q4 vs. Q1 [95% CI]: 1.25 [1.12-1.39], p-trend < 0.0001). This study suggests an association between UPF consumption and IBS. Further longitudinal studies are needed to confirm these results and to understand the relative impact of the nutritional composition and specific characteristics of UPF in this relationship.

  5. CO2-fixing one-carbon metabolism in a cellulose-degrading bacterium Clostridium thermocellum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Wei; Lin, Paul P.; Magnusson, Lauren

    Clostridium thermocellum can ferment cellulosic biomass to formate and other end products, including CO2. This organism lacks formate dehydrogenase (Fdh), which catalyzes the reduction of CO2 to formate. However, feeding the bacterium 13C-bicarbonate and cellobiose followed by NMR analysis showed the production of 13C-formate in C. thermocellum culture, indicating the presence of an uncharacterized pathway capable of converting CO2 to formate. Combining genomic and experimental data, we demonstrated that the conversion of CO2 to formate serves as a CO2 entry point into the reductive one-carbon (C1) metabolism, and internalizes CO2 via two biochemical reactions: the reversed pyruvate:ferredoxin oxidoreductase (rPFOR), which incorporates CO2 using acetyl-CoA as a substrate and generates pyruvate, and pyruvate-formate lyase (PFL), which converts pyruvate to formate and acetyl-CoA. We analyzed the labeling patterns of proteinogenic amino acids in individual deletions of all five putative PFOR mutants and in a PFL deletion mutant. We identified two enzymes acting as rPFOR, confirmed the dual activities of rPFOR and PFL crucial for CO2 uptake, and provided physical evidence of a distinct in vivo 'rPFOR-PFL shunt' that reduces CO2 to formate while circumventing the lack of Fdh. Such a pathway precedes CO2 fixation via the reductive C1 metabolic pathway in C. thermocellum. Lastly, these findings demonstrated the metabolic versatility of C. thermocellum, which is thought of as primarily a cellulosic heterotroph but is shown here to be endowed with the ability to fix CO2 as well.

  6. Toward Homosuccinate Fermentation: Metabolic Engineering of Corynebacterium glutamicum for Anaerobic Production of Succinate from Glucose and Formate

    PubMed Central

    Litsanov, Boris; Brocker, Melanie

    2012-01-01

    Previous studies have demonstrated the capability of Corynebacterium glutamicum for anaerobic succinate production from glucose under nongrowing conditions. In this work, we have addressed two shortfalls of this process, the formation of significant amounts of by-products and the limitation of the yield by the redox balance. To eliminate acetate formation, a derivative of the type strain ATCC 13032 (strain BOL-1), which lacked all known pathways for acetate and lactate synthesis (Δcat Δpqo Δpta-ackA ΔldhA), was constructed. Chromosomal integration of the pyruvate carboxylase gene pycP458S into BOL-1 resulted in strain BOL-2, which catalyzed fast succinate production from glucose with a yield of 1 mol/mol and showed only little acetate formation. In order to provide additional reducing equivalents derived from the cosubstrate formate, the fdh gene from Mycobacterium vaccae, coding for an NAD+-coupled formate dehydrogenase (FDH), was chromosomally integrated into BOL-2, leading to strain BOL-3. In an anaerobic batch process with strain BOL-3, a 20% higher succinate yield from glucose was obtained in the presence of formate. A temporary metabolic blockage of strain BOL-3 was prevented by plasmid-borne overexpression of the glyceraldehyde 3-phosphate dehydrogenase gene gapA. In an anaerobic fed-batch process with glucose and formate, strain BOL-3/pAN6-gap accumulated 1,134 mM succinate in 53 h with an average succinate production rate of 1.59 mmol per g cells (dry weight) (cdw) per h. The succinate yield of 1.67 mol/mol glucose is one of the highest currently described for anaerobic succinate producers and was accompanied by a very low level of by-products (0.10 mol/mol glucose). PMID:22389371

  7. Matching the quasiparton distribution in a momentum subtraction scheme

    NASA Astrophysics Data System (ADS)

    Stewart, Iain W.; Zhao, Yong

    2018-03-01

    The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the MS-bar scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the MS-bar PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum Pz → ∞, unlike other schemes. This direct approach therefore has the potential to improve the accuracy for converting quasidistribution lattice calculations to collinear distributions.
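
    Schematically, the matching described here takes the generic LaMET-style convolution form below. This display is a hedged sketch (conventions and arguments vary between papers; it is not the authors' explicit coefficient):

```latex
\tilde q\,(x, P_z) \;=\; \int_{-1}^{1} \frac{dy}{|y|}\,
  C\!\left(\frac{x}{y},\, \frac{\mu}{P_z},\, \frac{p_z^R}{P_z}\right)
  q(y, \mu)
  \;+\; \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^2}{x^2 P_z^2},\,
  \frac{M^2}{P_z^2}\right)
```

    Here $\tilde q$ stands for the RI/MOM-renormalized quasi-PDF, $q$ for the MS-bar PDF, $C$ for the perturbative matching coefficient, and the power corrections are suppressed by the proton momentum $P_z$.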

  8. Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturm, C.; Soni, A.; Aoki, Y.

    2009-07-01

    We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS-bar scheme and can be used to convert results obtained in lattice calculations into the MS-bar scheme. Such a symmetric subtraction point involves nonexceptional momenta, implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operators, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.

  9. An irregular lattice method for elastic wave propagation

    NASA Astrophysics Data System (ADS)

    O'Brien, Gareth S.; Bean, Christopher J.

    2011-12-01

    Lattice methods are a class of numerical scheme which represent a medium as a collection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation where we can remove this lattice restriction. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is validated against an analytical solution for wave propagation in an infinite homogeneous body and against a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, which shows that the scheme is more dispersive than a regular lattice method; the computational costs of using an irregular lattice are therefore higher. However, by removing the regular lattice structure, the anisotropic bias in fracture propagation in such methods can be removed.

  10. Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation

    NASA Astrophysics Data System (ADS)

    Marçais, J.; de Dreuzy, J.-R.; Erhel, J.

    2017-11-01

    Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas and deduce the generated saturation excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally, and converges when the mesh or time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases taken from the western side of France (Brittany), the developed method appears to be robust and efficient.
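
    The essence of the regularization step can be shown in a toy bucket model, a hedged sketch rather than the paper's hillslope-storage Boussinesq solver: the discontinuous saturation switch is replaced by a smooth ramp of width eps, which approaches the sharp partition as eps → 0. All parameter values below are made up.

```python
# Toy sketch of the regularized partition idea (NOT the hillslope-storage
# Boussinesq solver): the discontinuous switch between "rain infiltrates"
# and "saturation excess runs off" is smoothed so that standard time
# integrators can cross the transition without special event handling.
import numpy as np

def heaviside_reg(s, eps):
    # Smooth stand-in for the Heaviside step H(s); sharp limit as eps -> 0.
    return 0.5 * (1.0 + np.tanh(s / eps))

S_max, rain, k, eps = 1.0, 1.2e-3, 1e-3, 1e-3   # illustrative parameters
S, dt = 0.2, 1.0
for _ in range(5000):
    sat = heaviside_reg(S - S_max, eps)          # ~1 once storage is full
    infiltration = (1.0 - sat) * rain            # rain enters unsaturated storage
    runoff = sat * rain                          # saturation excess overland flow
    S += dt * (infiltration - k * S)             # linear subsurface drainage
print(f"storage S = {S:.4f}, saturated fraction = {sat:.3f}, runoff = {runoff:.2e}")
```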

  11. A geometric level set model for ultrasounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarti, A.; Malladi, R.

    We propose a partial differential equation (PDE) for the filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.

  12. Boundary-element modelling of dynamics in external poroviscoelastic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.

    2018-04-01

    A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied with the help of the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau time-stepping scheme.

  13. Apparently noninvariant terms of nonlinear sigma models in lattice perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harada, Koji; Hattori, Nozomu; Kubo, Hirofumi

    2009-03-15

    Apparently noninvariant terms (ANTs) that appear in loop diagrams for nonlinear sigma models are revisited in lattice perturbation theory. The calculations have so far been done mostly with dimensional regularization. In order to establish that the existence of ANTs is independent of the regularization scheme, and of the potential ambiguities in the definition of the Jacobian of the change of integration variables from group elements to 'pion' fields, we employ lattice regularization, in which everything (including the Jacobian) is well defined. We show explicitly that lattice perturbation theory produces ANTs in the four-point functions of the pion fields at one loop, and that the Jacobian does not play an important role in generating ANTs.

  14. Efficient algorithms for solution of interference cancellation and channel estimation for mobile OFDM system

    NASA Astrophysics Data System (ADS)

    Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou

    Orthogonal frequency-division multiplexing (OFDM) is robust against frequency selective fading because of the increase of the symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, ICI mitigation is needed within one OFDM symbol. We propose an iterative ICI estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN); instead, the effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.

  15. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  16. Tachyon field in loop quantum cosmology: An example of traversable singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Lifang; Zhu Jianyang

    2009-06-15

    Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region, but LQC has an ambiguity in the choice of quantization scheme. Recently, the authors of [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Like the others, this new quantization scheme replaces the big bang singularity with a quantum bounce. More interestingly, it introduces a quantum singularity which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that the universe can evolve through the quantum singularity regularly, in contrast to the classical big bang singularity, so this singularity is only a weak singularity.

  17. [PICS: pharmaceutical inspection cooperation scheme].

    PubMed

    Morénas, J

    2009-01-01

    The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC), established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP); to train inspectors (by way of an annual seminar and expert circles devoted notably to active pharmaceutical ingredients [API], quality risk management and computerized systems, useful for writing inspection aide-memoires); to maintain high standards among GMP inspectorates (through regular crossed audits); and to provide a forum for exchanges on technical matters between inspectors, and between inspectors and the pharmaceutical industry.

  18. Asynchronous discrete event schemes for PDEs

    NASA Astrophysics Data System (ADS)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through the faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection-diffusion equation and advection-diffusion-reaction systems. The results are compared to highly accurate reference solutions whose temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate first-order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
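
    A hedged software sketch of the event-driven idea follows, for plain 1D diffusion: mass moves between cells in fixed quanta, each face fires at a time set by its current flux, and affected faces are lazily rescheduled through version counters. The rates and the rescheduling policy are illustrative only, not the authors' scheme or its convergence control.

```python
# Event-driven 1D diffusion sketch (illustrative rates, not the paper's
# scheme): each face fires when roughly one quantum of mass would have
# crossed it at the current discrete Fickian flux.
import heapq
import numpy as np

n, D, h, quantum, t_end = 21, 1.0, 1.0, 0.01, 200.0
m = np.zeros(n)
m[n // 2] = 1.0                                   # all mass in the middle cell
version = [0] * (n - 1)                           # lazy invalidation of stale events

def face_time(now, i):
    flux = D * abs(m[i] - m[i + 1]) / h           # discrete Fickian flux on face i
    return now + quantum / flux if flux > 0 else np.inf

heap = [(face_time(0.0, i), 0, i) for i in range(n - 1)]
heapq.heapify(heap)
t = 0.0
while heap:
    t_ev, ver, i = heapq.heappop(heap)
    if ver != version[i]:
        continue                                  # stale event: face was rescheduled
    if t_ev > t_end or t_ev == np.inf:
        break
    t = t_ev
    diff = m[i] - m[i + 1]
    dm = min(quantum, 0.5 * abs(diff))            # move a quantum, never overshooting
    m[i] -= np.sign(diff) * dm
    m[i + 1] += np.sign(diff) * dm
    for j in (i - 1, i, i + 1):                   # reschedule the affected faces
        if 0 <= j < n - 1:
            version[j] += 1
            heapq.heappush(heap, (face_time(t, j), version[j], j))
print("t =", t, " total mass =", round(float(m.sum()), 6))
```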

  19. Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

  20. Regeneration of Nicotinamide Coenzymes: Principles and Applications for the Synthesis of Chiral Compounds

    NASA Astrophysics Data System (ADS)

    Weckbecker, Andrea; Gröger, Harald; Hummel, Werner

    Dehydrogenases which depend on nicotinamide coenzymes are of increasing interest for the preparation of chiral compounds, either by reduction of a prochiral precursor or by oxidative resolution of a racemate. The regeneration of oxidized and reduced nicotinamide cofactors is a crucial step because the use of these cofactors in stoichiometric amounts is too expensive for application. There are several possibilities for regenerating nicotinamide cofactors: established methods such as formate/formate dehydrogenase (FDH) for the regeneration of NADH, recently developed electrochemical methods based on new mediator structures, and the application of gene cloning methods for the construction of "designed" cells by heterologous expression of appropriate genes.

  1. An explicit asymptotic preserving low Froude scheme for the multilayer shallow water model with density stratification

    NASA Astrophysics Data System (ADS)

    Couderc, F.; Duran, A.; Vila, J.-P.

    2017-08-01

    We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions on the regularizing terms are exhibited to ensure the scheme's linear stability at first and second order in time and space. The other main result is the consistency with respect to the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balancedness for motionless steady states are also ensured. These stability properties yield a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment with fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump over a non-trivial topography, and a final experiment with slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.

  2. Two-loop matching factors for light quark masses and three-loop mass anomalous dimensions in the RI/SMOM schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturm, C.; Almeida, L.

    2010-04-26

    Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from a RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_γμ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.

  3. Educational Supervision Appropriate for Psychiatry Trainee's Needs

    ERIC Educational Resources Information Center

    Rele, Kiran; Tarrant, C. Jane

    2010-01-01

    Objective: The authors studied the regularity and content of supervision sessions in one of the U.K. postgraduate psychiatric training schemes (Mid-Trent). Methods: A questionnaire sent to psychiatry trainees assessed the timing and duration of supervision, content and protection of supervision time, and overall quality of supervision. The authors…

  4. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in flat regions, is often degraded by noise. To optimize the regularization term and the regularization factor according to local spatial features and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. An effective spatial feature indicator, the difference curvature, is used to identify whether a region is flat or an edge. According to the spatial feature, the SATV method automatically adjusts both the regularization term and the regularization factor: at edges, the regularization term approximates the TV functional to preserve the edges; in flat regions, it approximates the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, an adaptive regularization factor determined by the spatial feature constrains the regularization strength in different regions. In addition, a numerical scheme is adopted for the implementation of the second derivatives of the difference curvature to improve numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV regularization method (mean relative error 0.259, mean correlation coefficient 0.738) can endure a relatively high level of noise and improves the resolution of reconstructed images.
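
    To make the adaptivity concrete, here is a hedged denoising sketch in the same spirit: an edge indicator (a simple gradient-based proxy standing in for the difference curvature) blends a TV-like diffusivity at edges with a Tikhonov-like one in flat regions. It operates on a toy image, not on the ERT inverse problem, and all weights are illustrative.

```python
# Hedged sketch of spatially adaptive regularization on a toy denoising task
# (not the ERT solver): blend TV-like smoothing at edges with Tikhonov-like
# smoothing in flat regions, steered by a simple edge indicator.
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
y = img + 0.1 * rng.standard_normal(img.shape)       # noisy observation

u = y.copy()
tau, lam, eps = 0.1, 0.1, 1e-3
for _ in range(200):
    ux = np.roll(u, -1, 1) - u                       # forward differences
    uy = np.roll(u, -1, 0) - u
    gmag = np.sqrt(ux**2 + uy**2)
    w = gmag / (gmag + np.median(gmag) + 1e-12)      # ~1 at edges, ~0 when flat
    d = w / np.sqrt(gmag**2 + eps**2) + (1.0 - w)    # TV-like vs Tikhonov-like
    d = np.clip(d, 0.0, 5.0)                         # cap for explicit-step stability
    px, py = d * ux, d * uy
    div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
    u = u + tau * (lam * div - (u - y))              # gradient step on the energy
print("RMSE noisy   :", float(np.sqrt(((y - img) ** 2).mean())))
print("RMSE denoised:", float(np.sqrt(((u - img) ** 2).mean())))
```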

  5. Step to improve neural cryptography against flipping attacks.

    PubMed

    Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold

    2004-12-01

    Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not fully secure under the regular flipping attack (RFA) and are completely insecure under the majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), and the success probability of MFA still decays exponentially with the weight level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
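
    For background, the snippet below sketches the underlying neural key exchange that such attacks target: two tree parity machines synchronizing by mutual Hebbian learning, in the style of Kanter and Kinzel. The splitting defence proposed in the paper is not reproduced, and K, N and L below are illustrative choices.

```python
# Background sketch (Kanter-Kinzel tree parity machines, illustrative sizes):
# two networks update weights only on mutually agreed outputs and converge
# to identical weight matrices, which then serve as the shared key. The
# paper's information-splitting defence is NOT implemented here.
import numpy as np

K, N, L = 3, 100, 5                       # hidden units, inputs each, weight level
rng = np.random.default_rng(0)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))
    def out(self, x):
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1  # break ties deterministically
        return int(np.prod(self.sigma))
    def update(self, x, tau):
        # Hebbian rule: only hidden units agreeing with the total output move.
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + self.sigma[k] * x[k], -L, L)

a, b = TPM(), TPM()
for step in range(1, 100001):
    x = rng.choice([-1, 1], size=(K, N))  # public random input
    ta, tb = a.out(x), b.out(x)
    if ta == tb:                          # exchanged output bits agree
        a.update(x, ta)
        b.update(x, tb)
    if np.array_equal(a.w, b.w):
        print("synchronized after", step, "exchanged inputs")
        break
```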

  6. Weak Galerkin method for the Biot’s consolidation model

    DOE PAGES

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    2017-08-23

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot's consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  7. Damageable contact between an elastic body and a rigid foundation

    NASA Astrophysics Data System (ADS)

    Campo, M.; Fernández, J. R.; Silva, A.

    2009-02-01

    In this work, the contact problem between an elastic body and a rigid obstacle is studied, including the development of material damage resulting from internal compression or tension. The variational problem is formulated as a first-kind variational inequality for the displacements coupled with a parabolic partial differential equation for the damage field. The existence of a unique local weak solution is stated. Then, a fully discrete scheme is introduced, using the finite element method to approximate the spatial variable and an Euler scheme to discretize the time derivatives. Error estimates are derived for the approximate solutions, from which the linear convergence of the algorithm is deduced under suitable regularity conditions. Finally, three two-dimensional numerical simulations are performed to demonstrate the accuracy and behaviour of the scheme.

  8. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
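
    As an illustration of why a systolic priority queue suits the stack algorithm, here is a hedged software model of the ripple idea: a linear array of cells, each holding at most two keys, where inserts and delete-mins trigger waves that in hardware advance one cell per clock. It is a behavioural sketch only, not one of the three hardware designs discussed above.

```python
# Behavioural sketch of a systolic priority queue (illustrative, not a
# hardware design): cells hold at most two keys, ordered so that every key
# in cell i is <= every key in cell i+1; cell 0 therefore always exposes
# the global minimum, as the stack algorithm requires.
import random

class SystolicPQ:
    def __init__(self, size):
        self.cells = [[] for _ in range(size)]

    def insert(self, key):
        carry = key
        for cell in self.cells:          # insert wave ripples right
            cell.append(carry)
            cell.sort()
            if len(cell) <= 2:
                return
            carry = cell.pop()           # largest key is pushed onward

    def extract_min(self):
        best = self.cells[0].pop(0)      # head cell holds the global minimum
        for left, right in zip(self.cells, self.cells[1:]):
            if left or not right:        # refill wave stops at a non-empty cell
                break
            left.append(right.pop(0))    # hole ripples one cell to the right
        return best

random.seed(3)
pq = SystolicPQ(8)
keys = [round(random.random(), 3) for _ in range(10)]
for k in keys:
    pq.insert(k)
print([pq.extract_min() for _ in keys] == sorted(keys))   # True: sorted order
```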

  9. Weak Galerkin method for the Biot’s consolidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    In this study, we develop a weak Galerkin (WG) finite element method for the Biot’s consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. Backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such WG scheme is designed on general shape regular polytopal meshes and provides stable and oscillation-free approximation for the pressure withoutmore » special treatment. Lastlyl, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.« less

  10. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
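
    The flavor of the regularized formulation can be sketched in a few lines, hedged as follows: a 1D toy with a made-up Gaussian proximity kernel, where a quadratic penalty on the negative part of the dose is added to the data-fit cost so the descent itself avoids unphysical negative doses. This stands in for, and is simpler than, the regularizers compared in the paper.

```python
# Toy 1D sketch of regularized dose correction (made-up Gaussian proximity
# kernel, simplified regularizer): minimize ||K d - target||^2 plus a
# quadratic penalty on negative doses, so descent yields printable doses.
import numpy as np

n = 120
x = np.arange(n)
target = (((x > 30) & (x < 50)) | ((x > 70) & (x < 90))).astype(float)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 6.0) ** 2)
K /= K.sum(axis=1, keepdims=True)                # row-normalized blur matrix

lam, step = 5.0, 0.8
d = target.copy()                                # initial guess: the pattern itself
for _ in range(2000):
    resid = K @ d - target
    grad = K.T @ resid + lam * np.minimum(d, 0.0)   # gradient incl. negativity penalty
    d -= step * grad
print("min dose:", float(d.min()),
      "  max pattern error:", float(np.abs(K @ d - target).max()))
```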

  11. Representation of viruses in the remediated PDB archive

    PubMed Central

    Lawson, Catherine L.; Dutta, Shuchismita; Westbrook, John D.; Henrick, Kim; Berman, Helen M.

    2008-01-01

    A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive. PMID:18645236

  12. Generalized Sheet Transition Condition FDTD Simulation of Metasurface

    NASA Astrophysics Data System (ADS)

    Vahabzadeh, Yousef; Chamanara, Nima; Caloz, Christophe

    2018-01-01

    We propose an FDTD scheme based on Generalized Sheet Transition Conditions (GSTCs) for the simulation of polychromatic, nonlinear and space-time varying metasurfaces. This scheme consists in placing the metasurface at a virtual nodal plane introduced between regular nodes of the staggered Yee grid and inserting the fields determined by the GSTCs in this plane into the standard FDTD algorithm. The resulting update equations are an elegant generalization of the standard FDTD equations; indeed, in the limiting case of a null surface susceptibility ($\chi_\text{surf}=0$), they reduce to the latter.

  13. An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart

    2010-01-01

    This paper presents a three dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is entirely based on Ampere's law. The proposed algorithm has been detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.

  14. Precise MS-bar light-quark masses from lattice QCD in the regularization invariant symmetric momentum-subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbahn, Martin; Jaeger, Sebastian

    2010-12-01

    We compute the conversion factors needed to obtain the MS-bar and renormalization-group-invariant (RGI) up, down, and strange quark masses at next-to-next-to-leading order from the corresponding parameters renormalized in the recently proposed RI/SMOM and RI/SMOM_γμ renormalization schemes. This is important for obtaining the MS-bar masses with the best possible precision from numerical lattice QCD simulations, because the customary RI(')/MOM scheme is afflicted with large irreducible uncertainties both on the lattice and in perturbation theory. We find that the smallness of the known one-loop matching coefficients is accompanied by even smaller two-loop contributions. From a study of residual scale dependences, we estimate the resulting perturbative uncertainty on the light-quark masses to be about 2% in the RI/SMOM scheme and about 3% in the RI/SMOM_γμ scheme. Our conversion factors are given in fully analytic form, for general covariant gauge and renormalization point. We provide expressions for the associated anomalous dimensions.

  15. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence it is important to employ appropriate image priors for regularization. One recently popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this, in this paper we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
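
    A hedged one-patch sketch of the prior in action: with a similarity graph over pixels and Laplacian L, the denoising energy ||x - y||^2 + mu * x'Lx has the closed-form minimizer x = (I + mu*L)^(-1) y. The 1D patch, Gaussian edge weights and parameter values below are illustrative, not the optimal metric derived in the paper.

```python
# Hedged sketch of graph Laplacian regularized denoising on one patch:
# minimizing ||x - y||^2 + mu * x^T L x gives the linear system
# (I + mu L) x = y, with L built from a similarity graph over pixels.
import numpy as np

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise smooth patch
y = clean + 0.15 * rng.standard_normal(clean.size)

n, sigma, mu = y.size, 0.3, 2.0
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):                          # 1D neighbourhood graph
        if 0 <= j < n:
            W[i, j] = np.exp(-((y[i] - y[j]) ** 2) / (2 * sigma ** 2))
L = np.diag(W.sum(axis=1)) - W                        # combinatorial Laplacian

x = np.linalg.solve(np.eye(n) + mu * L, y)            # closed-form minimizer
print("RMSE noisy   :", float(np.sqrt(((y - clean) ** 2).mean())))
print("RMSE denoised:", float(np.sqrt(((x - clean) ** 2).mean())))
```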

  16. Method for solving the problem of nonlinear heating a cylindrical body with unknown initial temperature

    NASA Astrophysics Data System (ADS)

    Yaparova, N.

    2017-10-01

    We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and material density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface into the cylinder, but it is impossible to measure the temperature on the axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and in technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder, from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and the error level of the measurement results. To obtain experimental estimates of the temperature error, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.

  17. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long-baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called the "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches in optical interferometric imaging.

  18. Damping efficiency of the Tchamwa-Wielgosz explicit dissipative scheme under instantaneous loading conditions

    NASA Astrophysics Data System (ADS)

    Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard

    2009-11-01

    To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever the element size used for regular meshing. A study of the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used.

  19. Data traffic reduction schemes for sparse Cholesky factorizations

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1988-01-01

    Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^α (α ≤ 1) processors is shown to be O(n^(1+α/2)). It is O(n^(3/2)) when n^α (α ≥ 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the schemes allows a better utilization of the data accessed from shared memory than that of previously published methods.

  20. Design and evaluation of nonverbal sound-based input for those with motor handicaps.

    PubMed

    Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong

    2013-03-01

    Most personal computing interfaces rely on the users' ability to use hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate the Windows graphical user interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and to respond promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes reduces the average time needed to complete tasks in the test scenarios by almost half relative to performing the tasks solely through cursor movements. Still, improvements that help users select the most appropriate modes for desired tasks should improve the overall usability of the proposed scheme.

  1. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
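
    The least-squares gradient reconstruction mentioned above is easy to illustrate. Below is a minimal sketch (my own, not the authors' code, and a linear fit only, whereas the paper uses a quadratic fit) of node-centered reconstruction from edge-connected neighbors:

        # Unweighted least-squares gradient at a node of an unstructured mesh.
        import numpy as np

        def ls_gradient(x0, u0, xn, un):
            """Estimate grad u at node x0 from neighbor positions xn (k x 2)
            and values un (k,): solve min ||A g - b||, where each row of A
            is (x_i - x0) and b_i = u_i - u0."""
            A = xn - x0            # k x 2 matrix of edge vectors
            b = un - u0            # k-vector of value differences
            g, *_ = np.linalg.lstsq(A, b, rcond=None)
            return g               # approximate (du/dx, du/dy)

        # Example: u(x, y) = 2x + 3y is recovered exactly (linear exactness).
        x0 = np.array([0.0, 0.0]); u0 = 0.0
        xn = np.array([[1.0, 0.2], [-0.3, 1.1], [0.5, -0.8]])
        un = 2*xn[:, 0] + 3*xn[:, 1]
        print(ls_gradient(x0, u0, xn, un))   # approx [2. 3.]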

  2. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
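
    For illustration, a minimal sketch (my own toy encoder, not the authors' construction) of the accumulate-repeat-accumulate chain over GF(2); the interleavers and the puncturing of the inner accumulator are omitted for brevity:

        import numpy as np

        def accumulate(bits):
            """Rate-1 accumulator: y[i] = x[0] ^ x[1] ^ ... ^ x[i]."""
            return np.bitwise_xor.accumulate(bits)

        def ara_encode(info_bits, repeat=3):
            """Toy chain: outer accumulator -> regular repetition ->
            inner accumulator."""
            outer = accumulate(info_bits)         # precoder / outer code
            repeated = np.repeat(outer, repeat)   # regular repetition
            return accumulate(repeated)           # inner accumulator

        msg = np.array([1, 0, 1, 1], dtype=np.uint8)
        print(ara_encode(msg))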

  3. Global Static Indexing for Real-Time Exploration of Very Large Regular Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V; Frank, R

    2001-07-23

    In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real time interaction with a 2048^3 grid (8 Giga-nodes) using only 20MB of memory. On an SGI Onyx we slice interactively an 8192^3 grid (1/2 tera-nodes) using only 60MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
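
    The paper's reordering is hierarchical; as an assumption-laden stand-in rather than the authors' exact scheme, here is a minimal sketch of the closely related plain Z-order (Morton) indexing of a 3-D regular grid (the function name morton3 is illustrative only):

        def morton3(x, y, z, bits=10):
            """Interleave the bits of (x, y, z) into one Z-order index."""
            idx = 0
            for i in range(bits):
                idx |= ((x >> i) & 1) << (3*i)
                idx |= ((y >> i) & 1) << (3*i + 1)
                idx |= ((z >> i) & 1) << (3*i + 2)
            return idx

        # Nearby grid nodes receive nearby indices, so a slice query touches
        # long contiguous runs of the reordered file, reducing out-of-core I/O.
        print(morton3(3, 1, 0))   # -> 11 (x and y bits interleaved)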

  4. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

    A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image ±45° before sampling to perform checking for line segments at angular intervals of 15°.
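
    A minimal sketch (my own construction, not the authors' algorithm) of the nearest-center resampling step, under the stated rule that the hexagonal-cell width w be at most one third of the (unit) rectangular-pixel width:

        import numpy as np

        def hex_centers(nx, ny, w):
            """Centers of a pointy-top hex lattice; odd rows shift by w/2."""
            dy = np.sqrt(3.0) / 2.0 * w
            cx = np.array([[i * w + (j % 2) * w / 2 for i in range(nx)]
                           for j in range(ny)])
            cy = np.array([[j * dy] * nx for j in range(ny)])
            return cx, cy

        def resample_to_hex(img, w):
            """Assign each hex cell the binary value of the square pixel
            containing its center (pixel width taken as 1)."""
            ny = int(img.shape[0] / (np.sqrt(3.0) / 2.0 * w)) + 1
            nx = int(img.shape[1] / w) + 1
            cx, cy = hex_centers(nx, ny, w)
            ix = np.clip(cx.round().astype(int), 0, img.shape[1] - 1)
            iy = np.clip(cy.round().astype(int), 0, img.shape[0] - 1)
            return img[iy, ix]      # hex-grid array of 0/1 samples

        img = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
        hex_img = resample_to_hex(img, w=1/3)   # w <= 1/3 pixel width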

  5. Finite-element lattice Boltzmann simulations of contact line dynamics

    NASA Astrophysics Data System (ADS)

    Matin, Rastin; Misztal, Marek Krzysztof; Hernández-García, Anier; Mathiesen, Joachim

    2018-01-01

    The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.

  6. The Capra Research Program for Modelling Extreme Mass Ratio Inspirals

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-02-01

    Suppose a small compact object (black hole or neutron star) of mass m orbits a large black hole of mass M ≫ m. This system emits gravitational waves (GWs) that have a radiation-reaction effect on the particle's motion. EMRIs (extreme-mass-ratio inspirals) of this type will be important GW sources for LISA. To fully analyze these GWs, and to detect weaker sources also present in the LISA data stream, will require highly accurate EMRI GW templates. In this article I outline the "Capra" research program to try to model EMRIs and calculate their GWs ab initio, assuming only that m ≪ M and that the Einstein equations hold. Because m ≪ M the timescale for the particle's orbit to shrink is too long for a practical direct numerical integration of the Einstein equations, and because this orbit may be deep in the large black hole's strong-field region, a post-Newtonian approximation would be inaccurate. Instead, we treat the EMRI spacetime as a perturbation of the large black hole's "background" (Schwarzschild or Kerr) spacetime and use the methods of black-hole perturbation theory, expanding in the small parameter m/M. The particle's motion can be described either as the result of a radiation-reaction "self-force" acting in the background spacetime or as geodesic motion in a perturbed spacetime. Several different lines of reasoning lead to the (same) basic O(m/M) "MiSaTaQuWa" equations of motion for the particle. In particular, the MiSaTaQuWa equations can be derived by modelling the particle as either a point particle or a small Schwarzschild black hole. The latter is conceptually elegant, but the former is technically much simpler and (surprisingly for a nonlinear field theory such as general relativity) still yields correct results. Modelling the small body as a point particle, its own field is singular along the particle worldline, so it's difficult to formulate a meaningful "perturbation" theory or equations of motion there. Detweiler and Whiting found an elegant decomposition of the particle's metric perturbation into a singular part which is spherically symmetric at the particle and a regular part which is smooth (and non-symmetric) at the particle. If we assume that the singular part (being spherically symmetric at the particle) exerts no force on the particle, then the MiSaTaQuWa equations follow immediately. The MiSaTaQuWa equations involve gradients of a (curved-spacetime) Green function, integrated over the particle's entire past worldline. These expressions aren't amenable to direct use in practical computations. By carefully analysing the singularity structure of each term in a spherical-harmonic expansion of the particle's field, Barack and Ori found that the self-force can be written as an infinite sum of modes, each of which can be calculated by (numerically) solving a set of wave equations in 1+1 dimensions, summing the gradients of the resulting fields at the particle position, and then subtracting certain analytically-calculable "regularization parameters". This "mode-sum" regularization scheme has been the basis for much further research including explicit numerical calculations of the self-force in a variety of situations, initially for Schwarzschild spacetime and more recently extending to Kerr spacetime. Recently Barack and Golbourn developed an alternative "m-mode" regularization scheme. This regularizes the physical metric perturbation by subtracting from it a suitable "puncture function" approximation to the Detweiler-Whiting singular field. The residual is then decomposed into a Fourier sum over azimuthal (e^(imφ)) modes, and the resulting equations solved numerically in 2+1 dimensions. Vega and Detweiler have developed a related scheme that uses the same puncture-function regularization but then solves the regularized perturbation equation numerically in 3+1 dimensions, avoiding a mode-sum decomposition entirely. A number of research projects are now using these puncture-function regularization schemes, particularly for calculations in Kerr spacetime. Most Capra research to date has used 1st order perturbation theory, with the particle moving on a fixed (usually geodesic) worldline. Much current research is devoted to generalizing this to allow the particle worldline to be perturbed by the self-force, and to obtain approximation schemes which remain valid over long (EMRI-inspiral) timescales. To obtain the very high accuracies needed to fully exploit LISA's observations of the strongest EMRIs, 2nd order perturbation theory will probably also be needed; both this and long-time approximations remain frontiers for future Capra research.
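
    In schematic form, the Barack-Ori mode-sum prescription sketched above reads (a standard textbook rendering, not a formula quoted from this article):

        \[
          F_\alpha^{\rm self} \;=\; \sum_{l=0}^{\infty}
            \left[ F_\alpha^{l,{\rm ret}}
                   - A_\alpha\left(l+\tfrac{1}{2}\right) - B_\alpha
                   - \frac{C_\alpha}{l+\tfrac{1}{2}} \right] \;-\; D_\alpha ,
        \]

    where the F_α^(l,ret) are the spherical-harmonic modes of the retarded force and A_α, B_α, C_α, D_α are the analytically calculable regularization parameters.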

  7. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  8. A closed expression for the UV-divergent parts of one-loop tensor integrals in dimensional regularization

    NASA Astrophysics Data System (ADS)

    Sulyok, G.

    2017-07-01

    Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independent from UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
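
    As a concrete point of reference (standard one-loop results, not taken from this paper), the UV-divergent parts of the lowest scalar coefficients in d = 4 - 2ε dimensions are

        \[
          A_0(m)\big|_{\rm UV} \;=\; \frac{m^2}{\varepsilon}, \qquad
          B_0(p^2, m_0, m_1)\big|_{\rm UV} \;=\; \frac{1}{\varepsilon}, \qquad
          C_0, D_0, \ldots \big|_{\rm UV} \;=\; 0,
        \]

    in a normalization where an overall factor i/(16π²), together with the universal -γ_E + ln 4π terms, has been absorbed; higher-rank tensor coefficients develop their own 1/ε poles, which is what the paper's closed formula captures.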

  9. Strange quark condensate in the nucleon in 2 + 1 flavor QCD.

    PubMed

    Toussaint, D; Freeman, W

    2009-09-18

    We calculate the "strange quark content of the nucleon," ⟨N|s̄s|N⟩, which is important for interpreting the results of some dark matter detection experiments. The method is to evaluate quark-line disconnected correlations on the MILC lattice ensembles, which include the effects of dynamical light and strange quarks. After continuum and chiral extrapolations, the result is ⟨N|s̄s|N⟩ = 0.69(7)_stat(9)_syst in the modified minimal subtraction scheme at 2 GeV, or, for the renormalization-scheme-invariant form, m_s ∂M_N/∂m_s = 59(6)(8) MeV.

  10. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  11. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    NASA Astrophysics Data System (ADS)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
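
    A minimal sketch (my construction, not the authors' code) of a geostatistical operator of this kind: build a correlation matrix from cell-center distances, then use its eigendecomposition to form an inverse square root C^(-1/2) that can serve as the regularization matrix in a least-squares inversion:

        import numpy as np

        def geostat_operator(centers, corr_len):
            d = np.linalg.norm(centers[:, None, :] - centers[None, :, :],
                               axis=-1)
            C = np.exp(-d / corr_len)          # exponential correlation model
            w, V = np.linalg.eigh(C)           # C is symmetric positive definite
            return V @ np.diag(w**-0.5) @ V.T  # C^(-1/2)

        centers = np.random.rand(50, 2)        # irregular 2-D cell centers
        W = geostat_operator(centers, corr_len=0.2)
        # Penalty term ||W m||^2 = m^T C^(-1) m in the regularized objective.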

  12. Regulation of pokemon 1 activity by sumoylation.

    PubMed

    Roh, Hee-Eun; Lee, Min-Nyung; Jeon, Bu-Nam; Choi, Won-Il; Kim, Yoo-Jin; Yu, Mi-Young; Hur, Man-Wook

    2007-01-01

    Pokemon 1 is a proto-oncogenic transcriptional regulator that contains a POZ domain at the N-terminus and four Kruppel-like zinc fingers at the C-terminus. Pokemon 1 plays an important role in adipogenesis, osteogenesis, oncogenesis, and transcription of NF-kB responsive genes. Recent reports have shown that biological activities of transcription factors are regulated by sumoylation. We investigated whether Pokemon 1 is post-translationally modified by sumoylation and whether the modification affects Pokemon 1's transcriptional properties. We found that Pokemon 1 is sumoylated in vitro and in vivo. Upon careful analysis of the amino acid sequence of Pokemon 1, we found ten potential sumoylation sites located at lysines 61, 354, 371, 379, 383, 396, 486, 487, 536 and 539. We mutated each of these amino acids into arginine and tested whether the mutation could affect the transcriptional properties of Pokemon 1 on the Pokemon 1 responsive genes, such as ADH5/FDH and pG5-FRE-Luc. Wild-type Pokemon 1 potently represses transcription of ADH5/FDH. Most of the mutants, however, were weaker transcription repressors, repressing transcription 1.3- to 3.3-fold less effectively. Although potential sumoylation sites were located close to the DNA binding domain or the nuclear localization sequence, the mutations did not alter nuclear localization or DNA binding activity. In addition, on the pG5-FRE-Luc test promoter construct, ectopic SUMO-1 repressed transcription in the presence of Pokemon 1. The sumoylation target lysine residue at amino acid 61, which is located in the middle of the POZ-domain, is important because the K61R mutation resulted in a much weaker molecular interaction with corepressors. Our data suggest that Pokemon 1's activity as a transcription factor may involve sumoylation, and that sumoylation might be important in the regulation of transcription by Pokemon 1.

  13. NADP-Specific Electron-Bifurcating [FeFe]-Hydrogenase in a Functional Complex with Formate Dehydrogenase in Clostridium autoethanogenum Grown on CO

    PubMed Central

    Wang, Shuning; Huang, Haiyan; Kahnt, Jörg; Mueller, Alexander P.; Köpke, Michael

    2013-01-01

    Flavin-based electron bifurcation is a recently discovered mechanism of coupling endergonic to exergonic redox reactions in the cytoplasm of anaerobic bacteria and archaea. Among the five electron-bifurcating enzyme complexes characterized to date, one is a heteromeric ferredoxin- and NAD-dependent [FeFe]-hydrogenase. We report here a novel electron-bifurcating [FeFe]-hydrogenase that is NADP rather than NAD specific and forms a complex with a formate dehydrogenase. The complex was found in high concentrations (6% of the cytoplasmic proteins) in the acetogenic Clostridium autoethanogenum autotrophically grown on CO, which was fermented to acetate, ethanol, and 2,3-butanediol. The purified complex was composed of seven different subunits. As predicted from the sequence of the encoding clustered genes (fdhA/hytA-E) and from chemical analyses, the 78.8-kDa subunit (FdhA) is a selenocysteine- and tungsten-containing formate dehydrogenase, the 65.5-kDa subunit (HytB) is an iron-sulfur flavin mononucleotide protein harboring the NADP binding site, the 51.4-kDa subunit (HytA) is the [FeFe]-hydrogenase proper, and the 18.1-kDa (HytC), 28.6-kDa (HytD), 19.9-kDa (HytE1), and 20.1-kDa (HytE2) subunits are iron-sulfur proteins. The complex catalyzed both the reversible coupled reduction of ferredoxin and NADP+ with H2 or formate and the reversible formation of H2 and CO2 from formate. We propose the complex to have two functions in vivo, namely, to normally catalyze CO2 reduction to formate with NADPH and reduced ferredoxin in the Wood-Ljungdahl pathway and to catalyze H2 formation from NADPH and reduced ferredoxin when these redox mediators get too reduced during unbalanced growth of C. autoethanogenum on CO (E0′ = −520 mV). PMID:23893107

  14. NADP-specific electron-bifurcating [FeFe]-hydrogenase in a functional complex with formate dehydrogenase in Clostridium autoethanogenum grown on CO.

    PubMed

    Wang, Shuning; Huang, Haiyan; Kahnt, Jörg; Mueller, Alexander P; Köpke, Michael; Thauer, Rudolf K

    2013-10-01

    Flavin-based electron bifurcation is a recently discovered mechanism of coupling endergonic to exergonic redox reactions in the cytoplasm of anaerobic bacteria and archaea. Among the five electron-bifurcating enzyme complexes characterized to date, one is a heteromeric ferredoxin- and NAD-dependent [FeFe]-hydrogenase. We report here a novel electron-bifurcating [FeFe]-hydrogenase that is NADP rather than NAD specific and forms a complex with a formate dehydrogenase. The complex was found in high concentrations (6% of the cytoplasmic proteins) in the acetogenic Clostridium autoethanogenum autotrophically grown on CO, which was fermented to acetate, ethanol, and 2,3-butanediol. The purified complex was composed of seven different subunits. As predicted from the sequence of the encoding clustered genes (fdhA/hytA-E) and from chemical analyses, the 78.8-kDa subunit (FdhA) is a selenocysteine- and tungsten-containing formate dehydrogenase, the 65.5-kDa subunit (HytB) is an iron-sulfur flavin mononucleotide protein harboring the NADP binding site, the 51.4-kDa subunit (HytA) is the [FeFe]-hydrogenase proper, and the 18.1-kDa (HytC), 28.6-kDa (HytD), 19.9-kDa (HytE1), and 20.1-kDa (HytE2) subunits are iron-sulfur proteins. The complex catalyzed both the reversible coupled reduction of ferredoxin and NADP(+) with H2 or formate and the reversible formation of H2 and CO2 from formate. We propose the complex to have two functions in vivo, namely, to normally catalyze CO2 reduction to formate with NADPH and reduced ferredoxin in the Wood-Ljungdahl pathway and to catalyze H2 formation from NADPH and reduced ferredoxin when these redox mediators get too reduced during unbalanced growth of C. autoethanogenum on CO (E0' = -520 mV).

  15. Chloramphenicol Biosynthesis: The Structure of CmlS, a Flavin-Dependent Halogenase Showing a Covalent Flavin-Aspartate Bond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podzelinska, K.; Latimer, R; Bhattacharya, A

    2010-01-01

    Chloramphenicol is a halogenated natural product bearing an unusual dichloroacetyl moiety that is critical for its antibiotic activity. The operon for chloramphenicol biosynthesis in Streptomyces venezuelae encodes the chloramphenicol halogenase CmlS, which belongs to the large and diverse family of flavin-dependent halogenases (FDHs). CmlS was previously shown to be essential for the formation of the dichloroacetyl group. Here we report the X-ray crystal structure of CmlS determined at 2.2 Å resolution, revealing a flavin monooxygenase domain shared by all FDHs, but also a unique 'winged-helix' C-terminal domain that creates a T-shaped tunnel leading to the halogenation active site. Intriguingly, the C-terminal tail of this domain blocks access to the halogenation active site, suggesting a structurally dynamic role during catalysis. The halogenation active site is notably nonpolar and shares nearly identical residues with Chondromyces crocatus tyrosyl halogenase (CndH), including the conserved Lys (K71) that forms the reactive chloramine intermediate. The exception is Y350, which could be used to stabilize enolate formation during substrate halogenation. The strictly conserved residue E44, located near the isoalloxazine ring of the bound flavin adenine dinucleotide (FAD) cofactor, is optimally positioned to function as a remote general acid, through a water-mediated proton relay, which could accelerate the reaction of the chloramine intermediate during substrate halogenation, or the oxidation of chloride by the FAD(C4α)-OOH intermediate. Strikingly, the 8α carbon of the FAD cofactor is observed to be covalently attached to D277 of CmlS, a residue that is highly conserved in the FDH family. In addition to representing a new type of flavin modification, this has intriguing implications for the mechanism of FDHs. Based on the crystal structure and in analogy to known halogenases, we propose a reaction mechanism for CmlS.

  16. Meta-Analyses of Dehalococcoides mccartyi Strain 195 Transcriptomic Profiles Identify a Respiration Rate-Related Gene Expression Transition Point and Interoperon Recruitment of a Key Oxidoreductase Subunit

    PubMed Central

    Mansfeldt, Cresten B.; Rowe, Annette R.; Heavner, Gretchen L. W.; Zinder, Stephen H.

    2014-01-01

    A cDNA-microarray was designed and used to monitor the transcriptomic profile of Dehalococcoides mccartyi strain 195 (in a mixed community) respiring various chlorinated organics, including chloroethenes and 2,3-dichlorophenol. The cultures were continuously fed in order to establish steady-state respiration rates and substrate levels. The organization of array data into a clustered heat map revealed two major experimental partitions. This partitioning in the data set was further explored through principal component analysis. The first two principal components separated the experiments into those with slow (1.6 ± 0.6 μM Cl−/h)- and fast (22.9 ± 9.6 μM Cl−/h)-respiring cultures. Additionally, the transcripts with the highest loadings in these principal components were identified, suggesting that those transcripts were responsible for the partitioning of the experiments. By analyzing the transcriptomes (n = 53) across experiments, relationships among transcripts were identified, and hypotheses about the relationships between electron transport chain members were proposed. One hypothesis, that the hydrogenases Hup and Hym and the formate dehydrogenase-like oxidoreductase (DET0186-DET0187) form a complex (as displayed by their tight clustering in the heat map analysis), was explored using a nondenaturing protein separation technique combined with proteomic sequencing. Although these proteins did not migrate as a single complex, DET0112 (an FdhB-like protein encoded in the Hup operon) was found to comigrate with DET0187 rather than with the catalytic Hup subunit DET0110. On closer inspection of the genome annotations of all Dehalococcoides strains, the DET0185-to-DET0187 operon was found to lack a key subunit, an FdhB-like protein. Therefore, on the basis of the transcriptomic, genomic, and proteomic evidence, the place of the missing subunit in the DET0185-to-DET0187 operon is likely filled by recruiting a subunit expressed from the Hup operon (DET0112). PMID:25063656

  17. Using the Staff Sharing Scheme to Support School Staff in Managing Challenging Behaviour More Effectively

    ERIC Educational Resources Information Center

    Jones, Daniel; Monsen, Jeremy; Franey, John

    2013-01-01

    This paper explores how educational psychologists working in a training/consultative way can enable teachers to manage challenging pupil behaviour more effectively. It sets out a rationale which encourages schools to embrace a group based teacher peer-support system as part of regular school development. It then explores the usefulness of the…

  18. The Tanda: A Practice at the Intersection of Mathematics, Culture, and Financial Goals

    ERIC Educational Resources Information Center

    Martin, Lee; Goldman, Shelley; Jimenez, Osvaldo

    2009-01-01

    We present an analysis and discussion of the "tanda," a multiperson pooled credit and savings scheme (a rotating credit association or RCA), as described by two informants from Mexican immigrant communities in California. In the tanda, participants contribute regularly to a common fund which is distributed to participants on a rotating…

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, A. A.; Savelova, E. P., E-mail: ka98@mail.ru

    The problem of free-particle scattering on virtual wormholes is considered. It is shown that, for all types of relativistic fields, this scattering leads to the appearance of additional very heavy particles, which play the role of auxiliary fields in the invariant scheme of Pauli–Villars regularization. A nonlinear correction that describes the back reaction of particles to the vacuum distribution of virtual wormholes is also obtained.
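
    For context, the Pauli-Villars scheme mentioned here regularizes a propagator by subtracting the propagator of a very heavy auxiliary field of mass M (a standard textbook form, not a formula taken from this abstract):

        \[
          \frac{1}{k^2 - m^2 + i\epsilon} \;\longrightarrow\;
          \frac{1}{k^2 - m^2 + i\epsilon} \;-\; \frac{1}{k^2 - M^2 + i\epsilon},
        \]

    which improves the large-k falloff from k^(-2) to k^(-4); the abstract's point is that scattering on virtual wormholes generates such heavy auxiliary-field contributions dynamically.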

  20. Improvement of geomagnetic core field modeling with a priori information about Gauss coefficient correlations

    NASA Astrophysics Data System (ADS)

    Schachtschneider, R.; Rother, M.; Lesur, V.

    2013-12-01

    We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations is obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation, and the acceleration up to degree 18 and use these in the regularization scheme of the core field inversion. For testing our method we followed two different approaches by applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months. Here we test how the information about correlations helps to obtain an accurate model when only very little information is available. The second data set is a large one covering several years. In this case, besides reducing the residuals in general, we focus on the improvement of the model near the boundaries of the data set, where the acceleration is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way, information from scales that could otherwise not be resolved by the data can be extracted. We show that by using this technique we are able to improve the models of the field and the secular variation for both the short- and the long-term data sets, compared to approaches using more conventional regularization techniques.
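
    Schematically (my notation, not the authors'), including the covariance in the damping amounts to replacing a conventional diagonal penalty with

        \[
          \Phi(\mathbf{m}) \;=\; \left\| \mathbf{d} - G\,\mathbf{m} \right\|^2
            \;+\; \lambda\, \mathbf{m}^{\mathsf{T}} C^{-1} \mathbf{m},
        \]

    where m collects the Gauss coefficients, G is the forward operator, and C is the prior covariance estimated from the reference model; its off-diagonal entries encode the coefficient correlations that conventional diagonal damping ignores.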

  1. Maintaining quality in the UK breast screening program

    NASA Astrophysics Data System (ADS)

    Gale, Alastair

    2010-02-01

    Breast screening in the UK has been implemented for over 20 years, and annually nearly two million women are now screened, with an estimated 1,400 lives saved. Nationally, some 700 individuals interpret screening mammograms in almost 110 screening centres. Currently, women aged 50 to 70 are invited for screening every three years, and by 2012 this age range will increase to 47-73 years. There is a rapid ongoing transition from film mammograms to full-field digital mammography, such that in 2010 every screening centre will be partly digital. An early, and long-running, concern has been how to ensure the highest quality of imaging interpretation across the UK, an issue heightened by the use of a three-year screening interval. To partly address this question, a self-assessment scheme was developed in 1988 and subsequently implemented nationally in the UK as a virtually mandatory activity. The scheme is detailed from its beginnings, through its various developments, to its current incarnation and future plans. This encompasses both radiological changes (single-view screening, two-view screening, mammographic film, and full-field digital mammography) and design changes (cases reported by means of form filling, PDA, tablet PC, iPhone, and the internet). The scheme provides a rich data source which is regularly studied to examine different aspects of radiological performance. Overall, it aids screening radiologists by giving them regular access to a range of difficult exemplar cases together with feedback on their performance as compared to their peers.

  2. Robust watermarking scheme for binary images using a slice-based large-cluster algorithm with a Hamming Code

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Yuan; Liu, Chen-Chung

    2006-01-01

    The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.

  3. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in literature and we empirically show that it significantly improves on the performances of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to the existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748

  4. Quantum properties of supersymmetric theories regularized by higher covariant derivatives

    NASA Astrophysics Data System (ADS)

    Stepanyantz, Konstantin

    2018-02-01

    We investigate quantum corrections in N = 1 non-Abelian supersymmetric gauge theories regularized by higher covariant derivatives. In particular, with the help of the Slavnov-Taylor identities we prove that the vertices with two ghost legs and one leg of the quantum gauge superfield are finite in all orders. This non-renormalization theorem is confirmed by an explicit one-loop calculation. With the help of this theorem we rewrite the exact NSVZ β-function in the form of a relation between the β-function and the anomalous dimensions of the matter superfields, of the quantum gauge superfield, and of the Faddeev-Popov ghosts. Such a relation has a simple qualitative interpretation and allows us to suggest a prescription producing the NSVZ scheme in all loops for theories regularized by higher derivatives. This prescription is verified by an explicit three-loop calculation for the terms quartic in the Yukawa couplings.

  5. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
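
    The centroid step is simple to illustrate. A minimal sketch (mine, not the authors' pipeline) of target-centroid detection with the regular (raw) image moments m00, m10, m01:

        import numpy as np

        def centroid(img):
            """Centroid (xc, yc) = (m10/m00, m01/m00) of a nonnegative image."""
            y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            m00 = img.sum()
            return (x * img).sum() / m00, (y * img).sum() / m00

        img = np.zeros((64, 64)); img[20:30, 40:50] = 1.0   # bright patch
        print(centroid(img))   # approx (44.5, 24.5)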

  6. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both of these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  7. Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems

    NASA Astrophysics Data System (ADS)

    Leuschner, Matthias; Fritzen, Felix

    2017-11-01

    Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
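
    A minimal sketch (not the FANS code) of the core operation these schemes share: a discrete periodic convolution on a regular grid, evaluated via the fast Fourier transform:

        import numpy as np

        def periodic_convolve(field, kernel):
            """Circular convolution on a regular 2-D grid via FFT."""
            return np.real(np.fft.ifft2(np.fft.fft2(field) *
                                        np.fft.fft2(kernel)))

        n = 64
        field = np.random.rand(n, n)                   # e.g. a flux/strain field
        kernel = np.zeros((n, n)); kernel[0, 0] = 1.0  # identity kernel as a check
        out = periodic_convolve(field, kernel)
        print(np.allclose(out, field))                 # True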

  8. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  9. Dissection and engineering of the Escherichia coli formate hydrogenlyase complex.

    PubMed

    McDowall, Jennifer S; Hjersing, M Charlotte; Palmer, Tracy; Sargent, Frank

    2015-10-07

    The Escherichia coli formate hydrogenlyase (FHL) complex is produced under fermentative conditions and couples formate oxidation to hydrogen production. In this work, the architecture of FHL has been probed by analysing affinity-tagged complexes from various genetic backgrounds. In a successful attempt to stabilize the complex, a strain encoding a fusion between FdhF and HycB has been engineered and characterised. Finally, site-directed mutagenesis of the hycG gene was performed, which is predicted to encode a hydrogenase subunit important for regulating sensitivity to oxygen. This work helps to define the core components of FHL and provides solutions to improving the stability of the enzyme. Copyright © 2015 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  10. Signal enhancement for the sensitivity-limited solid state NMR experiments using a continuous, non-uniform acquisition scheme

    NASA Astrophysics Data System (ADS)

    Qiang, Wei

    2011-12-01

    We describe a sampling scheme for two-dimensional (2D) solid state NMR experiments which can be readily applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e. the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement as measured by the cross peak volume, while the cross peak linewidths remained comparable to the linewidths obtained by regular sampling and processing strategies. Both the linear and Gaussian decay functions for the acquisition numbers result in a similar percentage of signal increment. In addition, we demonstrated that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples which require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
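
    An illustration (my own, not the authors' acquisition code) of the two decay profiles mentioned above, generating the number of scans per t1 increment:

        import numpy as np

        def scan_schedule(n_t1, n_max, n_min=8, shape="gaussian"):
            """Continuous, non-uniform schedule: scans decay with t1,
            either linearly or as a Gaussian, floored at n_min."""
            t = np.linspace(0.0, 1.0, n_t1)
            if shape == "linear":
                prof = 1.0 - t
            else:                                   # Gaussian decay
                prof = np.exp(-(t / 0.5) ** 2)
            return np.maximum(np.round(n_max * prof), n_min).astype(int)

        print(scan_schedule(16, n_max=128))   # scans per t1 increment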

  11. Modified Dispersion Relations: from Black-Hole Entropy to the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Garattini, Remo

    2012-07-01

    Quantum Field Theory is plagued by divergences in the attempt to calculate physical quantities. Standard techniques of regularization and renormalization are used to keep such a problem under control. In this paper we use a different scheme, based on Modified Dispersion Relations (MDR), to remove infinities appearing in the one-loop approximation, in contrast to what happens in conventional approaches. In particular, we apply the MDR regularization to the computation of the entropy of a Schwarzschild black hole on one side and the Zero Point Energy (ZPE) of the graviton on the other side. The graviton ZPE is connected to the cosmological constant by means of the Wheeler-DeWitt equation.
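
    For orientation, a commonly used rainbow-type MDR (my illustration; not necessarily the specific form adopted in this paper) deforms the usual relation at energies approaching the Planck scale E_P:

        \[
          E^2\,g_1^2\!\left(E/E_P\right) - p^2\,g_2^2\!\left(E/E_P\right) = m^2,
        \]

    where g_1, g_2 → 1 as E/E_P → 0, so ordinary special relativity is recovered at low energies while the modified ultraviolet behavior can tame one-loop divergences.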

  12. Werner Heisenberg (1901-1976)

    NASA Astrophysics Data System (ADS)

    Yang, Chen Ning

    2013-05-01

    Werner Heisenberg was one of the greatest physicists of all time. When he started out as a young research worker, the world of physics was in a very confused and frustrating state, which Abraham Pais has described [1] as "It was the spring of hope, it was the winter of despair," using Charles Dickens' words in A Tale of Two Cities. People were playing a guessing game: there were from time to time great triumphs in proposing, through sheer intuition, makeshift schemes that amazingly explained some regularities in spectral physics, leading to joy. But invariably such successes would be followed by further work which revealed the inconsistency or inadequacy of the new scheme, leading to despair...

  13. A far-field non-reflecting boundary condition for two-dimensional wake flows

    NASA Technical Reports Server (NTRS)

    Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli

    1995-01-01

    Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when extrapolation conditions are used. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.

  14. Finite entanglement entropy of black holes

    NASA Astrophysics Data System (ADS)

    Giaccari, Stefano; Modesto, Leonardo; Rachwał, Lesław; Zhu, Yiwei

    2018-06-01

    We compute the area term contribution to black holes' entanglement entropy (using the conical technique) for a class of local or weakly non-local super-renormalizable gravitational theories coupled to matter. For the first time, we explicitly prove that all the beta functions in the proposed theory, except for the cosmological constant, are identically zero in the cut-off regularization scheme and not only in the dimensional regularization scheme. In particular, we show that there is no divergence quadratic in the cut-off and hence there is no contribution to the beta function of the Newton constant. As a consequence of this result, we argue that in these theories of gravity conical entropy is a sensible definition of physical entropy; in particular, it is positive-definite and gauge independent. On top of this, the conical entropy, being expressed only in terms of the classical Newton constant, turns out to be finite and naturally coincides with the Bekenstein-Hawking entropy. Finally, we propose a theory in which the renormalization of the Newton constant is entirely due to the Standard Model matter, arguing that such a contribution does not give rise to the usual interpretational problems of conical entropy discussed in the literature.

  15. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  16. Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that when the load is uniformly distributed, the data traffic associated with factoring an n x n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n x n sparse matrices representing a √n x √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n^2) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n^3) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow a better utilization of the data accessed from shared memory, and thus generate less data traffic, than schemes based on column-wise wrap-around assignment.

  17. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.

  18. Continuum limit of Bk from 2+1 flavor domain wall QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soni, A.; Izubuchi, T.; et al.

    2011-07-01

    We determine the neutral kaon mixing matrix element B_K in the continuum limit with 2+1 flavors of domain wall fermions, using the Iwasaki gauge action at two different lattice spacings. These lattice fermions have near-exact chiral symmetry and therefore avoid artificial lattice operator mixing. We introduce a significant improvement to the conventional nonperturbative renormalization (NPR) method in which the bare matrix elements are renormalized nonperturbatively in the regularization-invariant momentum scheme (RI-MOM) and are then converted into the MS-bar scheme using continuum perturbation theory. In addition to RI-MOM, we introduce and implement four nonexceptional intermediate momentum schemes that suppress infrared nonperturbative uncertainties in the renormalization procedure. We compute the conversion factors relating the matrix elements in this family of regularization-invariant symmetric momentum schemes (RI-SMOM) and MS-bar at one-loop order. Comparison of the results obtained using these different intermediate schemes allows for a more reliable estimate of the unknown higher-order contributions and hence for a correspondingly more robust estimate of the systematic error. We also apply a recently proposed approach in which twisted boundary conditions are used to control the Symanzik expansion for off-shell vertex functions, leading to a better control of the renormalization in the continuum limit. We control chiral extrapolation errors by considering both the next-to-leading-order SU(2) chiral effective theory and an analytic mass expansion. We obtain B_K^MS-bar(3 GeV) = 0.529(5)_stat(15)_χ(2)_FV(11)_NPR. This corresponds to B_K^RGI = 0.749(7)_stat(21)_χ(3)_FV(15)_NPR. Adding all sources of error in quadrature, we obtain B_K^RGI = 0.749(27)_combined, with an overall combined error of 3.6%.

  19. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
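
    A minimal sketch (not the paper's algorithm) of the FFT step that the variable-splitting/AL approach exploits: with a circulant blur H, the linear system (H^T H + μI)x = b diagonalizes in Fourier space and can be solved non-iteratively:

        import numpy as np

        def circulant_solve(b, psf, mu):
            """Solve (H^T H + mu I) x = b where H is periodic convolution
            with psf (zero-padded to the image size, peak at [0, 0])."""
            Hf = np.fft.fft2(psf, s=b.shape)
            return np.real(np.fft.ifft2(np.fft.fft2(b) /
                                        (np.abs(Hf)**2 + mu)))

        b = np.random.rand(64, 64)
        psf = np.zeros((64, 64)); psf[0, 0] = 1.0     # identity blur as a check
        x = circulant_solve(b, psf, mu=0.0)
        print(np.allclose(x, b))                      # True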

  20. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further advanced by the development of an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or provide quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography; the scheme can also be applied to endoscopic radio-luminescence tomography. We first constructed a physical model for image collection, and then a mathematical model characterizing the luminescent light propagation from the tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combining the third-order simplified spherical harmonics approximation, diffusion, and radiosity equations to ensure both accuracy and speed. It integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, demonstrating that it provides a satisfactory balance between imaging accuracy and computational burden.

  1. Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin

    2005-02-01

    This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correcting bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a coding process module. SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels, all of which are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the coding process module. (3) The coding process module regularly conveys the accumulated parity and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmit process. By applying this scheme to a 10 Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, noticeably lowering the system redundancy.
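
    A quick arithmetic check of the claimed frame-size reduction (numbers taken from the abstract; the byte layout itself is not reproduced here):

      # Frame sizes quoted in the abstract (bytes), 12 channels per frame.
      oif_frame = 15552 * 12   # OIF-VSR4-01.0 layout
      new_frame = 14040 * 12   # proposed scheme
      saving = 1 - new_frame / oif_frame
      print(f"frame size reduced by {saving:.1%}")  # ~9.7% less redundancy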

  2. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-squares face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Results from the first class indicate that the face least-squares methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order, with an accuracy and a complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-squares face gradient reconstruction have more compact stencils, with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-squares methods must be amended either by introducing a local mapping of the surface anisotropy or by modifying the scheme stencil to reflect the direction of strong coupling.

  3. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-squares face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Tests from the first class indicate that the face least-squares methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order, with an accuracy and a complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-squares face gradient reconstruction have more compact stencils, with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-squares methods must be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or by modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.

  4. Subtypes of Reading Disability in a Shallow Orthography: A Double Dissociation between Accuracy-Disabled and Rate-Disabled Readers of Hebrew

    ERIC Educational Resources Information Center

    Shany, Michal; Share, David L.

    2011-01-01

    Whereas most English-language sub-typing schemes for dyslexia (e.g., Castles & Coltheart, 1993) have focused on reading accuracy for words varying in regularity, such an approach may have limited utility for reading disability sub-typing beyond English, in which fluency rather than accuracy is the key discriminator of developmental and individual…

  5. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  6. Electronic transport coefficients in plasmas using an effective energy-dependent electron-ion collision-frequency

    NASA Astrophysics Data System (ADS)

    Faussurier, G.; Blancard, C.; Combis, P.; Decoster, A.; Videau, L.

    2017-10-01

    We present a model to calculate the electrical and thermal electronic conductivities in plasmas using the Chester-Thellung-Kubo-Greenwood approach coupled with the Kramers approximation. The divergence in photon energy at low values is eliminated using a regularization scheme with an effective energy-dependent electron-ion collision frequency. In doing so, we interpolate smoothly between the Drude-like and the Spitzer-like regularizations. The model still satisfies the well-known sum rule over the electrical conductivity. This kind of approximation also extends naturally to the average-atom model. Particular attention is paid to the Lorenz number: its nondegenerate and degenerate limits are given, and the transition towards the Drude-like limit is proved in the Kramers approximation.

  7. Hadron physics through asymptotic SU(3) and the chiral SU(3) x SU(3) algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oneda, S.; Matsuda, S.; Perlmutter, A.

    From Coral Gables conference on fundamental interactions for theoretical studies; Coral Gables, Florida, USA (22 Jan 1973). See CONF-730124-. The inter-SU(3)-multiplet regularities and clues to a possible level scheme of hadrons are studied in a systematic way. The hypothesis of asymptotic SU(3) is made in the presence of GMO mass splittings with mixing, which allows information to be extracted from the chiral SU(3) x SU(3) charge algebras and from the exotic commutation relations. For the ground states the schemes obtained are compatible with those of the SU(6) x O(3) classification. Sum rules are obtained which recover most of the good results of SU(6). (LBS)

  8. Comparison of Electrochemical Immunosensors and Aptasensors for Detection of Small Organic Molecules in Environment, Food Safety, Clinical and Public Security.

    PubMed

    Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume

    2016-02-29

    We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A and estradiol. In each case, the immobilization procedures are described, as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, owing to the higher affinities of antibodies. No significant progress has been made in improving these affinities; instead, transduction schemes have been improved, leading to a steady improvement of the limits of detection of ca. five orders of magnitude over the last 10 years. This progress depends on the target, however.

  9. Noiseless Vlasov-Poisson simulations with linearly transformed particles

    DOE PAGES

    Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...

    2014-06-25

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first-order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance is compared with that of the standard deposition method. Numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, the benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.

  10. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    PubMed

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
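
    As an illustration of how a location-based trigger might be wired up, the sketch below fires a report only at assumed "rare" land-use categories and only when the participant is sufficiently far from earlier trigger positions; the category list, distance threshold, and function names are hypothetical, not taken from the study.

      import math

      RARE_LAND_USE = {"park", "forest", "water"}  # assumed rare categories

      def should_trigger(lat, lon, land_use_at, past_triggers, min_gap_m=200.0):
          # Fire only at rare land-use types (lookup supplied by the caller).
          if land_use_at(lat, lon) not in RARE_LAND_USE:
              return False
          # Suppress triggers too close to any earlier trigger position.
          for plat, plon in past_triggers:
              dx = (lon - plon) * 111320.0 * math.cos(math.radians(lat))
              dy = (lat - plat) * 110540.0  # rough metres per degree latitude
              if math.hypot(dx, dy) < min_gap_m:
                  return False
          return True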

  11. Steerable sound transport in a 3D acoustic network

    NASA Astrophysics Data System (ADS)

    Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie

    2017-10-01

    Quasi-lossless and asymmetric sound transport, which is exceedingly desirable in various modern physical systems, is almost always based on nonlinear or angular-momentum-biasing effects with extremely high power levels and complex modulation schemes. A practical route to steerable sound transport along an arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, could revolutionize sound power propagation and sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically placed to break the spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can transport unimpeded between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically placed regular-tetrahedral solids.

  12. Hybrid fiber links for accurate optical frequency comparison

    NASA Astrophysics Data System (ADS)

    Lee, Won-Kyu; Stefani, Fabio; Bercy, Anthony; Lopez, Olivier; Amy-Klein, Anne; Pottie, Paul-Eric

    2017-05-01

    We present the experimental demonstration of a local two-way optical frequency comparison over a 43-km-long urban fiber network without any requirement for measurement synchronization. We combined the local two-way scheme with a regular active noise compensation scheme implemented on another, parallel fiber, leading to a highly reliable and robust frequency transfer. This hybrid scheme allowed us to investigate the major limiting factors of the local two-way comparison. We analyzed the contributions of the interferometers at both the local and remote locations to the phase noise of the local two-way signal. Using the ability of this setup to be fed by either a single laser or two independent lasers, we measured the contributions of the demodulated laser instabilities to the long-term instability. We show that a fractional frequency instability level of 10^-20 at 10,000 s can be obtained using this simple setup after propagation over a distance of 43 km in an urban area.

  13. Finite-Difference Lattice Boltzmann Scheme for High-Speed Compressible Flow: Two-Dimensional Case

    NASA Astrophysics Data System (ADS)

    Gan, Yan-Biao; Xu, Ai-Guo; Zhang, Guang-Cai; Zhang, Ping; Zhang, Lei; Li, Ying-Jun

    2008-07-01

    Lattice Boltzmann (LB) modeling of high-speed compressible flows has long been attempted by various authors. One common weakness of most previous models is instability when the Mach number of the flow is large. In this paper we present a finite-difference LB model that works for flows with flexible ratios of specific heats and a wide range of Mach numbers, from 0 to 30 or higher. Besides the discrete velocity model by Watari [Physica A 382 (2007) 502], a modified Lax-Wendroff finite-difference scheme and an artificial viscosity are introduced. The combination of the finite-difference scheme and the added artificial viscosity must balance numerical stability against accuracy. The proposed model is validated by recovering results of some well-known benchmark tests: shock tubes and shock reflections. The new model may be used to track shock waves and/or to study the non-equilibrium processes in the transition between the regular and Mach reflections of shock waves.
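
    The stabilizing role of the added artificial viscosity can be seen already in a scalar setting; below is a minimal sketch of one Lax-Wendroff step for linear advection with a Laplacian damping term (the coefficient nu is an illustrative choice, not the paper's tuning).

      import numpy as np

      def lax_wendroff_step(u, c, dx, dt, nu=0.05):
          # One Lax-Wendroff step for u_t + c u_x = 0 on a periodic grid.
          a = c * dt / dx                  # Courant number; |a| <= 1 for stability
          up, um = np.roll(u, -1), np.roll(u, 1)
          unew = u - 0.5 * a * (up - um) + 0.5 * a * a * (up - 2.0 * u + um)
          # Artificial viscosity: damps dispersive oscillations near steep fronts.
          return unew + nu * (up - 2.0 * u + um)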

  14. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimal allocation of limited resources in wireless sensor networks, such as node energy, communication bandwidth, and computing power, and they largely determine the achievable quality of service. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed; the relationship between coverage and the sensing radius of the nodes is then deduced, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is maintained using our coverage pattern and deterministic deployment scheme. PMID:25045747
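
    One standard way to obtain such a node count is sketched below, under an assumption of ours that may differ from the paper's cuboid construction: tile the volume with cubes inscribed in the sensing spheres. A cube of side s fits inside a ball of radius r when s <= 2r/sqrt(3), so one node per cube guarantees full coverage.

      import math

      def min_nodes_full_coverage(length, width, height, r):
          # Tile the cuboid with cubes inscribed in sensing spheres of radius r;
          # a node at each cube centre then covers its whole cube.
          s = 2.0 * r / math.sqrt(3.0)   # largest cube side covered by one node
          return (math.ceil(length / s) * math.ceil(width / s)
                  * math.ceil(height / s))

      print(min_nodes_full_coverage(100, 100, 50, 10))  # 9 * 9 * 5 = 405 nodes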

  15. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
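
    For reference, the heart of the limited-memory BFGS method that the comparison favors is the two-loop recursion below, a minimal sketch applying the inverse-Hessian approximation built from recent step/gradient-difference pairs (a textbook formulation, not the authors' code).

      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          # Two-loop recursion: apply the L-BFGS inverse-Hessian approximation
          # to the gradient. s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k are the
          # stored pairs, oldest first in both lists.
          q = grad.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):
              rho = 1.0 / np.dot(y, s)
              a = rho * np.dot(s, q)
              alphas.append((a, rho))
              q -= a * y
          if s_list:
              s, y = s_list[-1], y_list[-1]
              q *= np.dot(s, y) / np.dot(y, y)   # standard initial scaling H0
          for (a, rho), s, y in zip(reversed(alphas), s_list, y_list):
              b = rho * np.dot(y, q)
              q += (a - b) * s
          return -q                              # descent search direction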

  16. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and the image prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. Using l1 norms on the data and regularization terms in EIT image reconstruction addresses both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim at a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.

  17. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing-platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  18. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  19. Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr

    2013-02-15

    The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to carry out numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.

  20. Higher-order quantum-chromodynamic corrections to the longitudinal coefficient function in deep-inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, G.A.

    1982-01-01

    A calculation of the nonsinglet longitudinal coefficient function of deep-inelastic scattering through order g^4 is presented, using the operator-product expansion and the renormalization group. Both ultraviolet and infrared divergences are regulated with dimensional regularization. The renormalization-scheme dependence of the result is discussed along with its phenomenological application in the determination of R = sigma_L/sigma_T.

  1. Covariance in self-dual inhomogeneous models of effective quantum geometry: Spherical symmetry and Gowdy systems

    NASA Astrophysics Data System (ADS)

    Ben Achour, Jibril; Brahma, Suddhasattwa

    2018-06-01

    When applying the techniques of loop quantum gravity (LQG) to symmetry-reduced gravitational systems, one first regularizes the scalar constraint using holonomy corrections, prior to quantization. In inhomogeneous systems, where a residual spatial diffeomorphism symmetry survives, such a modification of the gauge generator generating time reparametrization can potentially lead to deformations or anomalies in the modified algebra of first-class constraints. When working with self-dual variables, it has already been shown that, for a spherically symmetric geometry coupled to a scalar field, the holonomy-modified constraints do not generate any modifications to general covariance, as one faces in the real-variables formulation, and can thus accommodate local degrees of freedom in such inhomogeneous models. In this paper, we extend this result to Gowdy cosmologies in the self-dual Ashtekar formulation. Furthermore, we show that the introduction of a μ̄-scheme in midisuperspace models, as required in the "improved dynamics" of LQG, is possible in the self-dual formalism while being out of reach in the current effective models using real-valued Ashtekar-Barbero variables. Our results indicate the advantages of using the self-dual variables to obtain a covariant loop regularization prior to quantization in inhomogeneous symmetry-reduced polymer models, additionally implementing the crucial μ̄-scheme, and thus a consistent semiclassical limit.

  2. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Britt, Darrell Steven, Jr.

    Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy, owing to their efficiency and simplicity, as well as their capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid, without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means of restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
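
    For orientation, here is a minimal sketch of the standard second-order 5-point scheme for the constant-coefficient Helmholtz equation on a square with homogeneous Dirichlet data; the thesis instead derives a fourth-order compact stencil and couples it with difference potentials for grid-nonconforming boundaries.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      def solve_helmholtz_dirichlet(f, h, k):
          # Second-order 5-point discretization of u_xx + u_yy + k^2 u = f on
          # an n-by-n interior grid with spacing h and zero boundary data.
          n = f.shape[0]
          one_d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
          eye = sp.identity(n)
          A = (sp.kron(eye, one_d) + sp.kron(one_d, eye)
               + k**2 * sp.identity(n * n))
          return spla.spsolve(A.tocsc(), f.ravel()).reshape(n, n)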

  3. Self-force via m-mode regularization and 2+1D evolution: Foundations and a scalar-field implementation on Schwarzschild spacetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, Sam R.; Barack, Leor

    2011-01-15

    To model the radiative evolution of extreme mass-ratio binary inspirals (a key target of the LISA mission), the community needs efficient methods for computation of the gravitational self-force (SF) on the Kerr spacetime. Here we further develop a practical 'm-mode regularization' scheme for SF calculations, and give the details of a first implementation. The key steps in the method are (i) removal of a singular part of the perturbation field with a suitable 'puncture' to leave a sufficiently regular residual within a finite worldtube surrounding the particle's worldline, (ii) decomposition in azimuthal (m) modes, (iii) numerical evolution of the m modes in 2+1D with a finite-difference scheme, and (iv) reconstruction of the SF from the mode sum. The method relies on a judicious choice of puncture, based on the Detweiler-Whiting decomposition. We give a working definition for the 'order' of the puncture, and show how it determines the convergence rate of the m-mode sum. The dissipative piece of the SF displays an exponentially convergent mode sum, while the m-mode sum for the conservative piece converges with a power law. In the latter case, the individual modal contributions fall off at large m as m^(-n) for even n and as m^(-n+1) for odd n, where n is the puncture order. We describe an m-mode implementation with a 4th-order puncture to compute the scalar-field SF along circular geodesics on Schwarzschild. In a forthcoming companion paper we extend the calculation to the Kerr spacetime.

  4. A multichannel amplitude and relative-phase controller for active sound quality control

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    The enhancement of the sound quality of periodic disturbances for a number of listeners within an enclosure often confronts difficulties caused by cross-channel interferences, which arise from simultaneously profiling the primary sound at each error sensor. These interferences may deteriorate the original sound at each listener, which is an unacceptable result from the point of view of sound quality control. In this paper we provide experimental evidence of controlling both the amplitude and relative-phase functions of stationary complex primary sounds for a number of listeners within a cavity, attaining amplifications of twice the original value, reductions on the order of 70 dB, and relative-phase shifts between ± π rad, all in an interference-free control scenario. To accomplish such demanding control targets, we have designed a multichannel active sound profiling scheme that operates by exchanging time-domain control signals among the control units during uptime. Provided the real parts of the eigenvalues of persistently excited control matrices are positive, the proposed multichannel array is able to counterbalance cross-channel interferences while attaining demanding control targets. Moreover, regularization of unstable control matrices does not prevent the proposed array from providing interference-free amplitude and relative-phase control, but the system performance is degraded as a function of the amount of regularization needed. The assessment of loudness and roughness metrics on the controlled primary sound shows that the proposed distributed control scheme noticeably outperforms current techniques, since active amplitude- and/or relative-phase-based enhancement of the auditory qualities of a primary sound no longer implies causing interference among different positions. In this regard, experimental results also confirm the effectiveness of the proposed scheme in stably enhancing the sound qualities of periodic sounds for multiple listeners within a cavity.

  5. High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles proportional to (d - 3)^(-1) that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.

  6. The use of financial incentives in Australian general practice.

    PubMed

    Kecmanovic, Milica; Hall, Jane P

    2015-05-18

    To examine the uptake of financial incentive payments in general practice, and identify what types of practitioners are more likely to participate in these schemes. Analysis of data on general practitioners and GP registrars from the Medicine in Australia - Balancing Employment and Life (MABEL) longitudinal panel survey of medical practitioners in Australia, from 2008 to 2011. Income received by GPs from government incentive schemes and grants and factors associated with the likelihood of claiming such incentives. Around half of GPs reported receiving income from financial incentives in 2008, and there was a small fall in this proportion by 2011. There was considerable movement into and out of the incentives schemes, with more GPs exiting than taking up grants and payments. GPs working in larger practices with greater administrative support, GPs practising in rural areas and those who were principals or partners in practices were more likely to use grants and incentive payments. Administrative support available to GPs appears to be an increasingly important predictor of incentive use, suggesting that the administrative burden of claiming incentives is large and not always worth the effort. It is, therefore, crucial to consider such costs (especially relative to the size of the payment) when designing incentive payments. As market conditions are also likely to influence participation in incentive schemes, the impact of incentives can change over time and these schemes should be reviewed regularly.

  7. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data, owing to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios.

  8. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak-lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high-resolution convergence maps from either gravitational shear alone or from a combination of shear and flexion. Including flexion supplements the shear on small scales, increasing the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small-scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multiscale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
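
    The flavor of sparsity-regularized inversion can be conveyed by a minimal proximal-gradient (ISTA) sketch; Glimpse's actual solver, its undecimated wavelet dictionary, and its flexion operators are considerably more elaborate, and every name below (W, Wt, lam, L) is an assumption of this illustration.

      import numpy as np

      def soft(z, t):
          # Soft-thresholding: the proximal operator of t * ||.||_1.
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def ista(A, y, W, Wt, lam, L, iters=200):
          # Minimize 0.5*||A x - y||^2 + lam*||W x||_1 for an orthogonal
          # transform W with inverse Wt; L is a Lipschitz bound on ||A^T A||.
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              g = A.T @ (A @ x - y)                # gradient of the data-fit term
              x = Wt(soft(W(x - g / L), lam / L))  # gradient step, then prox
          return x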

  9. The dynamics of innovation through the expansion in the adjacent possible

    NASA Astrophysics Data System (ADS)

    Tria, F.

    2016-03-01

    The experience of something new is part of our daily life. At different scales, innovation is also a crucial feature of many biological, technological and social systems. Recently, large databases recording human activities have allowed the observation that novelties, such as an individual listening to a song for the first time, and innovation processes, such as the fixation of new genes in a population of bacteria, share striking statistical regularities. Here we identify the expansion into the adjacent possible as a very general and powerful mechanism able to explain such regularities. Further, we identify statistical signatures of the presence of the expansion into the adjacent possible in the analyzed datasets, and we show that our modeling scheme is able to predict these observations remarkably well.
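
    One concrete realization of this mechanism is a Polya urn with innovation triggering; the sketch below (with illustrative parameters rho and nu) reinforces each drawn element and, on every novelty, injects nu+1 brand-new elements into the urn, producing Heaps-like sublinear growth of the number of distinct elements seen.

      import random

      def urn_with_triggering(steps, rho=2, nu=1, seed=0):
          # Returns the number of distinct elements seen after each draw.
          rng = random.Random(seed)
          urn = list(range(nu + 1))      # the initial adjacent possible
          seen, history = set(), []
          next_new = len(urn)
          for _ in range(steps):
              ball = rng.choice(urn)
              urn += [ball] * rho        # reinforcement of the drawn element
              if ball not in seen:       # novelty: expand the adjacent possible
                  seen.add(ball)
                  urn += list(range(next_new, next_new + nu + 1))
                  next_new += nu + 1
              history.append(len(seen))
          return history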

  10. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains, thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping characteristic of many forward schemes used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
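
    A minimal sketch of the regularization step, under our own assumptions of a dense real system: Tikhonov solves min ||Ax - b||^2 + lam^2 ||x||^2, and the L-curve samples the trade-off between residual norm and solution norm, whose corner suggests the regularization parameter (corner detection by maximum curvature is omitted for brevity).

      import numpy as np

      def tikhonov(A, b, lam):
          # Normal-equations solve of min ||A x - b||^2 + lam^2 ||x||^2.
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

      def l_curve(A, b, lams):
          # Sample (residual norm, solution norm) pairs; plotted log-log,
          # the corner balances data fit against regularization.
          pts = []
          for lam in lams:
              x = tikhonov(A, b, lam)
              pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
          return pts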

  11. Comparison of Electrochemical Immunosensors and Aptasensors for Detection of Small Organic Molecules in Environment, Food Safety, Clinical and Public Security

    PubMed Central

    Piro, Benoit; Shi, Shihui; Reisberg, Steeve; Noël, Vincent; Anquetin, Guillaume

    2016-01-01

    We review here the most frequently reported targets among electrochemical immunosensors and aptasensors: antibiotics, bisphenol A, cocaine, ochratoxin A and estradiol. In each case, the immobilization procedures are described, as well as the transduction schemes and the limits of detection. It is shown that limits of detection are generally two to three orders of magnitude lower for immunosensors than for aptasensors, owing to the higher affinities of antibodies. No significant progress has been made in improving these affinities; instead, transduction schemes have been improved, leading to a steady improvement of the limits of detection of ca. five orders of magnitude over the last 10 years. This progress depends on the target, however. PMID:26938570

  12. The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs

    NASA Astrophysics Data System (ADS)

    Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.

    2017-12-01

    The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.

  13. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD-affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine; and one nonrigid: third-order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD-affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies, in terms of average distance errors of 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in the case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985-2.156 mm and 1.966-2.234 mm for NLP and ILD-affected regions, respectively, excluding schemes with statistically significantly lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: The selected registration schemes for ILD CT follow-up analysis indicate the significance of the adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes, providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.
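
    The stage-one singularity screen can be illustrated with a short numpy sketch that evaluates the Jacobian determinant of a candidate deformation x + u(x) on the voxel grid; nonpositive values anywhere indicate folding. The array layout and function name are assumptions of this illustration, not the study's code.

      import numpy as np

      def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
          # disp: displacement field u with shape (3, Z, Y, X); the mapping is
          # x + u(x), so the Jacobian is I + du/dx, evaluated per voxel.
          grads = [np.gradient(disp[c], *spacing) for c in range(3)]
          J = np.empty(disp.shape[1:] + (3, 3))
          for c in range(3):
              for k in range(3):
                  J[..., c, k] = grads[c][k] + (1.0 if c == k else 0.0)
          return np.linalg.det(J)

      # A scheme is rejected when (jacobian_determinant(u) <= 0).any() holds.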

  14. Multichannel feedforward control schemes with coupling compensation for active sound profiling

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    Active sound profiling comprises a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against the centralized and purely decentralized control schemes through computer simulations on a simplified numerical model, which has also been subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes have been shown to be the only ones that properly deliver targeted active sound profiling tasks at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes are able to attain reductions of more than 60 dB on periodic disturbances at a number of positions while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is found to be defective at a given frequency, the inclusion of a regularization parameter in the cost function does not hinder the proper operation of the proposed compensation schemes while assuring their stability, at the expense of some control performance.
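
    The role of the regularization parameter can be seen in a single filter update; the sketch below shows a generic leaky (regularized) LMS step of the kind used in feedforward control, where the leak factor trades a little steady-state performance for stability under poor sensor/actuator placement. This is a textbook update, not the authors' algorithm.

      import numpy as np

      def leaky_lms_update(w, x_filt, err, mu=0.01, beta=1e-4):
          # Gradient step on |e|^2 + beta*||w||^2: the control filter w is
          # driven by the error e and the reference filtered through the
          # secondary-path model (x_filt); the (1 - mu*beta) leak bounds
          # the coefficient growth when the plant is poorly conditioned.
          return (1.0 - mu * beta) * w - mu * err * x_filt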

  15. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  16. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  17. Technical Basis and Implementation Guidelines for a Technique for Human Event Analysis (ATHEANA)

    DTIC Science & Technology

    2000-05-01

    posted at NRC’s Web site address www.nrc.gov/NRC/NUREGS/indexnum.html are updated regularly and may differ from the last printed version. Non-NRC...distinctly different in that it provides structured search schemes for finding such EFCs, by using and integrating knowledge and experience in...Learned from Serious Accidents The record of significant incidents in nuclear power plant NPP operations shows a substantially different picture of

  18. Compression in visual working memory: using statistical regularities to form more efficient memory representations.

    PubMed

    Brady, Timothy F; Konkle, Talia; Alvarez, George A

    2009-11-01

    The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities introduce redundancies that make the input more compressible. The current study shows that observers can take advantage of these redundancies, enabling them to remember more items in working memory. In 2 experiments, covariance was introduced between colors in a display so that over trials some color pairs were more likely to appear than other color pairs. Observers remembered more items from these displays than from displays where the colors were paired randomly. The improved memory performance cannot be explained by simply guessing the high-probability color pair, suggesting that observers formed more efficient representations to remember more items. Further, as observers learned the regularities, their working memory performance improved in a way that is quantitatively predicted by a Bayesian learning model and optimal encoding scheme. These results suggest that the underlying capacity of the individuals' working memory is unchanged, but the information they have to remember can be encoded in a more compressed fashion. Copyright 2009 APA
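
    As a purely illustrative sketch of the information-theoretic claim above (a hypothetical Dirichlet-style pair learner, not the authors' model; all names and parameters here are our assumptions), the following Python snippet shows the Shannon code length of a color pair shrinking as its statistics are learned, so that more pairs fit into a fixed-bit store:

      import itertools, math, random

      # Hypothetical learner: track color-pair frequencies under a flat
      # Dirichlet prior; as regularities accumulate, the code length of a
      # high-probability pair drops, freeing capacity for more items.
      colors = list(range(8))
      biased_pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]   # frequent pairs
      counts = {p: 1.0 for p in itertools.product(colors, repeat=2)}

      def code_length(pair):
          """Shannon code length (bits) of one pair under current beliefs."""
          total = sum(counts.values())
          return -math.log2(counts[pair] / total)

      random.seed(0)
      for trial in range(2001):
          # 80% of presented pairs come from the biased set
          pair = random.choice(biased_pairs) if random.random() < 0.8 \
              else (random.choice(colors), random.choice(colors))
          if trial % 500 == 0:
              print(f"trial {trial}: {code_length(pair):5.2f} bits for {pair}")
          counts[pair] += 1.0   # update pair statistics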

  19. Penalized weighted least-squares approach for multienergy computed tomography image reconstruction via structure tensor total variation regularization.

    PubMed

    Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua

    2016-10-01

    Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. It thus provides more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
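
    As a hedged sketch of the kind of objective described above (the notation and weighting choice are ours, not necessarily the authors' exact formulation), a PWLS reconstruction with an STV penalty can be written as

      \hat{\mu} = \arg\min_{\mu \ge 0} \, (\hat{y} - A\mu)^{\mathrm{T}} \Sigma^{-1} (\hat{y} - A\mu) + \beta\,\mathrm{STV}(\mu),
      \qquad
      \mathrm{STV}(\mu) = \sum_{k} \bigl\| \bigl( \sqrt{\lambda_1^{(k)}},\, \sqrt{\lambda_2^{(k)}} \bigr) \bigr\|_p ,

    where A is the system matrix, \Sigma the diagonal covariance of the sinogram data \hat{y}, \beta the regularization strength, and \lambda_i^{(k)} the eigenvalues of the local structure tensor at voxel k; penalizing these eigenvalues is what brings higher-order derivative information into the prior.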

  20. Assembling mesoscopic particles by various optical schemes

    NASA Astrophysics Data System (ADS)

    Fournier, Jean-Marc; Rohner, Johann; Jacquot, Pierre; Johann, Robert; Mias, Solon; Salathé, René-P.

    2005-08-01

    Shaping optical fields is the key issue in the control of optical forces that pilot the manipulation of mesoscopic polarizable dielectric particles. The latter can be positioned according to endless configurations. The scope of this paper is to review and discuss several unusual designs which produce what we consider to be among the most interesting arrangements. The simplest schemes result from interference between two or several coherent light beams, leading to periodic as well as pseudo-periodic arrays of optical traps. Complex assemblages of traps can be created with holographic-type set-ups; this case is widely used by the trapping community. Clusters of traps can also be configured through interferometric-type set-ups or by generating external standing waves by diffractive elements. The particularly remarkable possibilities of the Talbot effect for generating three-dimensional optical lattices, together with several schemes of self-organization, represent further very interesting means for trapping; they are also described and discussed in this paper. The mechanisms involved in those trapping schemes do not require the use of high numerical aperture optics; by avoiding the need for bulky microscope objectives, they allow for more physical space around the trapping area to perform experiments. Moreover, very large regular arrays of traps can be manufactured, opening numerous possibilities for new applications.

  1. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter reaching the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one half of the projection and scatter data on the other half. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information from the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With the scatter-corrected projections, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
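
    The per-view correction step lends itself to a compact illustration. The Python sketch below (function names, the strip geometry, and the spline choice are our assumptions; the paper's implementation details may differ) interpolates and extrapolates the scatter sampled under the blocker strips and subtracts it from the projection at the opposed view:

      import numpy as np
      from scipy.interpolate import CubicSpline

      def estimate_scatter_row(blocked_row, strip_centers):
          """Fit a 1-D cubic spline to detector samples under the blocker
          strips (pure scatter) and evaluate it across the whole row.
          strip_centers: sorted integer pixel indices of the strip shadows."""
          u = np.arange(blocked_row.size)
          spline = CubicSpline(strip_centers, blocked_row[strip_centers],
                               extrapolate=True)   # reach into the open half
          return spline(u)

      def correct_projection(open_view, opposed_blocked_view, strip_centers):
          """Subtract scatter estimated at one view from the image data
          acquired at the opposed (180-degree) view, row by row."""
          corrected = np.empty_like(open_view)
          for i in range(open_view.shape[0]):
              scatter = estimate_scatter_row(opposed_blocked_view[i],
                                             strip_centers)
              corrected[i] = np.clip(open_view[i] - scatter, 0.0, None)
          return corrected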

  2. Well-balanced compressible cut-cell simulation of atmospheric flow.

    PubMed

    Klein, R; Bates, K R; Nikiforakis, N

    2009-11-28

    Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.

  3. Optimizing phonon space in the phonon-coupling model

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2017-08-01

    We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei looking also at the dependence of the results when varying the Skyrme parametrization.

  4. RUASN: a robust user authentication framework for wireless sensor networks.

    PubMed

    Kumar, Pardeep; Choudhury, Amlan Jyoti; Sain, Mangal; Lee, Sang-Gon; Lee, Hoon-Jae

    2011-01-01

    In recent years, wireless sensor networks (WSNs) have been considered a potential solution for real-time monitoring applications, and they have potential practical impact on next-generation technology as well. However, a WSN can become a liability if suitable security is not considered before deployment: any loophole in its security might open the door for an attacker and hence endanger the application. User authentication is one of the most important security services to protect WSN data access from unauthorized users; it should provide both mutual authentication and session key establishment services. This paper proposes a robust user authentication framework for wireless sensor networks, based on a two-factor (password and smart card) concept. This scheme facilitates many services to the users such as user anonymity, mutual authentication, and secure session key establishment, and it allows users to choose/update their password regularly, whenever needed. Furthermore, we provide formal verification using Rubin logic and compare RUASN with many existing schemes. As a result, we found that the proposed scheme possesses many advantages against popular attacks, and achieves better efficiency at low computation cost.

  5. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
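
    As a toy version of this design principle (a 6-level quantizer rather than the paper's 3- and 4-bit designs, and a plain grid search over an AWGN channel; all of this is our assumption, not the authors' optimization), one can pick symmetric thresholds that maximize the mutual information between a BPSK input and the quantized channel output:

      import itertools
      import numpy as np
      from scipy.stats import norm

      def mutual_info(edges, sigma):
          """I(X;Q) for equiprobable BPSK X = +/-1 in AWGN, with Q the
          channel output quantized into the cells delimited by `edges`."""
          b = np.concatenate(([-np.inf], edges, [np.inf]))
          p_pos = np.diff(norm.cdf(b, loc=+1.0, scale=sigma))  # p(q|x=+1)
          p_neg = np.diff(norm.cdf(b, loc=-1.0, scale=sigma))  # p(q|x=-1)
          p_q = 0.5 * (p_pos + p_neg)
          return float(np.sum(0.5 * p_pos * np.log2(p_pos / p_q)
                              + 0.5 * p_neg * np.log2(p_neg / p_q)))

      sigma = 0.8
      best = max(
          (mutual_info(np.array([-t2, -t1, 0.0, t1, t2]), sigma), t1, t2)
          for t1, t2 in itertools.product(np.linspace(0.1, 3.0, 30), repeat=2)
          if t1 < t2
      )
      print("best I(X;Q) = %.4f bits at thresholds (%.2f, %.2f)" % best)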

  6. Mathematical simulation of the drying of suspensions and colloidal solutions by their depressurization

    NASA Astrophysics Data System (ADS)

    Lashkov, V. A.; Levashko, E. I.; Safin, R. G.

    2006-05-01

    The heat and mass transfer in the process of drying of high-humidity materials by their depressurization has been investigated. The results of experimental investigation and mathematical simulation of the indicated process are presented. They allow one to determine the regularities of this process and predict the quality of the finished product. A technological scheme and an engineering procedure for calculating the drying of the liquid base of a soap are presented.

  7. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    DTIC Science & Technology

    2016-01-01

    reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction...hardware. Fig. 1 depicts the multistatic array topology. As seen, the topology is a tiled arrangement of Boundary Arrays (BAs). The BA is a well-known...sparse array layout comprised of two linear transmit arrays, and two linear receive arrays [6]. A slightly different tiled arrangement of BAs was used

  8. Accelerating NBODY6 with graphics processing units

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Aarseth, Sverre J.

    2012-07-01

    We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6 which is widely used for direct N-body simulations. Over the years, the N² nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10⁴-2 × 10⁵ for a dual-GPU system attached to a standard PC.
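
    The regular/irregular force split can be illustrated in a few lines. This Python sketch is only a schematic of the Ahmad-Cohen-type neighbour scheme (NBODY6's actual implementation adds force polynomials, individual time steps and regularization on top of this); it separates the contributions of neighbours within a radius from the slowly varying remainder that NBODY6 offloads to the GPU:

      import numpy as np

      def split_forces(pos, mass, i, r_nb):
          """For particle i: neighbours inside r_nb give the frequently
          updated 'irregular' force; the rest give the slowly varying
          'regular' force."""
          dx = pos - pos[i]                      # (N, 3) separations
          r2 = np.einsum('ij,ij->i', dx, dx)
          r2[i] = np.inf                         # exclude self-interaction
          f = (mass * r2 ** -1.5)[:, None] * dx  # pairwise contributions
          nb = r2 < r_nb ** 2                    # neighbour mask
          return f[nb].sum(axis=0), f[~nb].sum(axis=0)  # irregular, regular

      rng = np.random.default_rng(1)
      pos = rng.normal(size=(1000, 3))
      mass = np.full(1000, 1.0 / 1000)
      f_irr, f_reg = split_forces(pos, mass, 0, r_nb=0.5)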

  9. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhances the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
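
    A minimal sketch of the ROMP selection rule referenced above may help (our simplification of Needell and Vershynin's algorithm; the K-SVD training loop, patch extraction, and stopping rules are omitted, and the factor-of-two window is the standard 'regularization' step):

      import numpy as np

      def romp(Phi, y, sparsity):
          """Per iteration: take the `sparsity` largest correlations, keep
          the maximal-energy subset whose magnitudes lie within a factor of
          two of each other, then re-solve least squares on the support."""
          support = np.array([], dtype=int)
          x = np.zeros(Phi.shape[1])
          residual = y.astype(float).copy()
          for _ in range(sparsity):
              u = np.abs(Phi.T @ residual)
              J = np.argsort(u)[::-1][:sparsity]   # candidates, descending
              J = J[u[J] > 1e-12]
              if J.size == 0:
                  break
              best_J0, best_e = J[:1], float(u[J[0]] ** 2)
              for a in range(J.size):              # comparable-size windows
                  J0 = J[a:][2.0 * u[J[a:]] >= u[J[a]]]   # within factor 2
                  e = float(np.sum(u[J0] ** 2))
                  if e > best_e:
                      best_J0, best_e = J0, e
              support = np.union1d(support, best_J0)
              coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
              x = np.zeros(Phi.shape[1])
              x[support] = coef
              residual = y - Phi @ x
          return x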

  10. On the regularity of the covariance matrix of a discretized scalar field on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.

    2017-02-01

    We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
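
    The rank constraint has a simple numerical analogue in one dimension. The toy Python demo below (a ring of pixels with Fourier modes standing in for spherical harmonics; the sizes and power spectrum are our choices) shows the covariance rank being capped by the number of retained harmonics and restored to full rank by an added noise term:

      import numpy as np

      n_pix, n_harm = 64, 17                  # 1 + 2*8 Fourier modes, k <= 8
      theta = 2 * np.pi * np.arange(n_pix) / n_pix

      # toy 'harmonic' matrix: Fourier modes sampled on a ring of pixels
      Y = np.column_stack([np.ones(n_pix)] +
                          [f(k * theta) for k in range(1, 9)
                           for f in (np.cos, np.sin)])
      C_l = np.diag(1.0 / np.arange(1, n_harm + 1) ** 2)  # toy power spectrum
      C = Y @ C_l @ Y.T                                   # pixel covariance

      print(np.linalg.matrix_rank(C))         # 17: capped by n_harm < n_pix
      print(np.linalg.matrix_rank(C + 1e-6 * np.eye(n_pix)))  # 64: regularized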

  11. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  12. Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage

    PubMed Central

    Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng

    2016-01-01

    Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy that results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of the ratio r_c/r_s (communication range to sensing range) were shown recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, some other studies adapt detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide results on optimal regular deployment patterns to achieve information coverage and connectivity for a range of r_c/r_s values, all based on data fusion by sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between r_c/r_s and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which can utilize collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern to achieve information coverage and connectivity, and its characteristics are deduced in closed form. Discussions and simulations are given to show the efficiency of all deployment patterns, including the previous and the proposed ones, to help developers make more informed WSN deployment decisions. PMID:27529246

  13. Risk management assessment of Health Maintenance Organisations participating in the National Health Insurance Scheme

    PubMed Central

    Campbell, Princess Christina; Korie, Patrick Chukwuemeka; Nnaji, Feziechukwu Collins

    2014-01-01

    Background: The National Health Insurance Scheme (NHIS), operated in Nigeria mainly by health maintenance organisations (HMOs), took off formally in June 2005. In view of the inherent risks in the operation of any social health insurance, it is necessary to efficiently manage these risks for the sustainability of the scheme. Consequently, the risk-management strategies deployed by HMOs need regular assessment. This study assessed risk management in the Nigerian social health insurance scheme among HMOs. Materials and Methods: Cross-sectional survey of 33 HMOs participating in the NHIS. Results: Utilisation of standard risk-management strategies by the HMOs was 11 (52.6%). The risk-management strategies not utilised in the NHIS, 10 (47.4%), were risk equalisation and reinsurance. As many as 11 (52.4%) of the participating HMOs had a weak enrollee base (less than 30,000) and poor monthly premiums, and these impacted negatively on the HMOs such that a large percentage, 12 (54.1%), were unable to meet their financial obligations. Most of the HMOs, 15 (71.4%), participated in the Millennium Development Goal (MDG) maternal and child health insurance programme. Conclusions: A weak enrollee base and poor monthly premiums predisposed the HMOs to financial risk, which impacted negatively on overall performance in service delivery in the NHIS, further worsened by the non-utilisation of risk equalisation and reinsurance as risk-management strategies. There is a need to make the scheme compulsory and to introduce risk equalisation and reinsurance. PMID:25298605

  14. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimation of sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first approach is a finite difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, together with the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
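
    For readers unfamiliar with the base scheme, a generic Lax-Friedrichs update for a 1-D conservation law u_t + f(u)_x = 0 looks as follows (a textbook sketch on a periodic grid, in Python; the paper's scheme adds a wide spatial stencil and problem-specific terms on top of this):

      import numpy as np

      def lax_friedrichs_step(u, flux, dt, dx):
          """One Lax-Friedrichs update: the cell value is replaced by the
          average of its neighbours (the dissipative averaging that damps
          high harmonics) plus a centered flux difference."""
          f = flux(u)
          u_avg = 0.5 * (np.roll(u, 1) + np.roll(u, -1))   # LF averaging
          return u_avg - dt / (2 * dx) * (np.roll(f, -1) - np.roll(f, 1))

      # example: Burgers flux f(u) = u^2/2 on a periodic domain
      x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      u = np.sin(x)
      for _ in range(100):
          u = lax_friedrichs_step(u, lambda v: 0.5 * v * v,
                                  dt=0.01, dx=x[1] - x[0])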

  15. Symmetry-preserving contact interaction model for heavy-light mesons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serna, F. E.; Brito, M. A.; Krein, G.

    2016-01-22

    We use a symmetry-preserving regularization method of ultraviolet divergences in a vector-vector contact interaction model for low-energy QCD. The contact interaction is a representation of the nonperturbative kernels used in Dyson-Schwinger and Bethe-Salpeter equations. The regularization method is based on a subtraction scheme that avoids standard steps in the evaluation of divergent integrals that invariably lead to symmetry violation. Aiming at the study of heavy-light mesons, we have applied the method to the pseudoscalar π and K mesons. We have solved the Dyson-Schwinger equation for the u, d and s quark propagators, and obtained the bound-state Bethe-Salpeter amplitudes in a way that the Ward-Green-Takahashi identities reflecting global symmetries of the model are satisfied for arbitrary routing of the momenta running in loop integrals.

  16. Zeroth order regular approximation approach to electric dipole moment interactions of the electron.

    PubMed

    Gaul, Konstantin; Berger, Robert

    2017-07-07

    A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  17. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    NASA Astrophysics Data System (ADS)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

    A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  18. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

  19. Green's function enriched Poisson solver for electrostatics in many-particle systems

    NASA Astrophysics Data System (ADS)

    Sutmann, Godehard

    2016-06-01

    A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, thereby compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher-order methods is found to be superior due to a faster convergence to the exact result as a function of the charge support.

  20. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.

  1. Geometric integration in Born-Oppenheimer molecular dynamics.

    PubMed

    Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N

    2011-12-14

    Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetics in a perfect reversible dynamics. © 2011 American Institute of Physics

  2. Generalized teleportation by quantum walks

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shang, Yun; Xue, Peng

    2017-09-01

    We develop a generalized teleportation scheme based on quantum walks with two coins. For an unknown qubit state, we use two-step quantum walks on the line and quantum walks on the cycle with four vertices for teleportation. For any d-dimensional states, quantum walks on complete graphs and quantum walks on d-regular graphs can be used for implementing teleportation. Compared with existing d-dimensional state teleportation, no prior entangled state is required; the necessary maximal entanglement resource is generated by the first step of the quantum walk. Moreover, two projective measurements with d elements are needed by quantum walks on the complete graph, rather than one joint measurement with d^2 basis states. Quantum walks have many applications in quantum computation and quantum simulations. This is the first scheme realizing a communication protocol with quantum walks, thus opening wider applications.

  3. Increase in furfural tolerance by combinatorial overexpression of NAD salvage pathway enzymes in engineered isobutanol-producing E. coli.

    PubMed

    Song, Hun-Suk; Jeon, Jong-Min; Kim, Hyun-Joong; Bhatia, Shashi Kant; Sathiyanarayanan, Ganesan; Kim, Junyoung; Won Hong, Ju; Gi Hong, Yoon; Young Choi, Kwon; Kim, Yun-Gon; Kim, Wooseong; Yang, Yung-Hun

    2017-12-01

    To reduce furfural toxicity for biochemical production in E. coli, a new strategy was successfully applied by supplying NAD(P)H through the nicotinamide salvage pathway. To alleviate the toxicity, nicotinamide salvage pathway genes were overexpressed in recombinant, isobutanol-producing E. coli. Expression of pncB and nadE each increased tolerance to furfural. The combined expression of pncB and nadE was the most effective in increasing the tolerance of the cells to toxic aldehydes. By comparing noxE- and fdh-harbouring strains, the form of NADH, rather than NAD+, was identified as the major effector of furfural tolerance. Overall, this study applies the salvage pathway to isobutanol production in the presence of furfural, and this system seems applicable to alleviating furfural toxicity in the production of other biochemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Passport-PeopleSoft integration for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The integration between the PeopleSoft applications and the Passport modules is accomplished with an off-the-shelf package developed by INDUS. The product was updated to PeopleSoft Release 7.0. The integration product interacts with data from multiple products within Passport and PeopleSoft. As of 10/1/98 the integration will interface between the following: (1) PassPort Accounts Payable, Contract Management, Inventory Management, and Purchasing; and (2) PeopleSoft General Ledger, Project Costing, Human Resources, and Payroll. The current supply systems and financial systems interact with each other via multiple custom interfaces. Data integrity and Y2K issues were some of the driving factors in the replacement of these systems. The new systems allow FDH the opportunity to change its current business processes to the best business practices around which the commercial off-the-shelf software was built.

  5. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
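
    Stripped of PEST's machinery, the Tikhonov trade-off at the core of regularized inversion fits in a few lines of Python (a self-contained toy, not PEST itself; the weight mu plays the role that PEST's regularization weighting performs adaptively):

      import numpy as np

      def tikhonov_solve(J, d, mu, prior):
          """Minimize ||J p - d||^2 + mu * ||p - prior||^2: the basic
          trade-off between fitting observations and honouring preferred
          parameter values that Tikhonov-mode inversion manages."""
          n = J.shape[1]
          A = J.T @ J + mu * np.eye(n)
          return np.linalg.solve(A, J.T @ d + mu * prior)

      rng = np.random.default_rng(0)
      J = rng.normal(size=(30, 100))       # many parameters, few observations
      p_true = rng.normal(size=100)
      d = J @ p_true + 0.01 * rng.normal(size=30)
      for mu in (1e-4, 1e-1, 1e2):         # sweep the regularization weight
          p = tikhonov_solve(J, d, mu, prior=np.zeros(100))
          print(mu, np.linalg.norm(J @ p - d), np.linalg.norm(p))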

  6. The hydrogen atom in D = 3 - 2ɛ dimensions

    NASA Astrophysics Data System (ADS)

    Adkins, Gregory S.

    2018-06-01

    The nonrelativistic hydrogen atom in D = 3 - 2ɛ dimensions is the reference system for perturbative schemes used in dimensionally regularized nonrelativistic effective field theories to describe hydrogen-like atoms. Solutions to the D-dimensional Schrödinger-Coulomb equation are given in the form of a double power series. Energies and normalization integrals are obtained numerically and also perturbatively in terms of ɛ. The utility of the series expansion is demonstrated by the calculation of the divergent expectation value ⟨(V′)²⟩.

  7. Post-Newtonian and numerical calculations of the gravitational self-force for circular orbits in the Schwarzschild geometry

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    2010-03-01

    The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c ≪ 1, and is most appropriate for small orbital velocities v. The perturbative self-force analysis requires an extreme mass ratio m1/m2 ≪ 1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles ∝ (d-3)⁻¹ appearing in dimensional regularization at the 3PN order cancel out from the final gauge-invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical self-force calculation are found to agree well. The consistency of this cross-cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.

  8. An Exponential Regulator for Rapidity Divergences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ye; Neill, Duff; Zhu, Hua Xing

    2016-04-01

    Finding an efficient and compelling regularization of soft and collinear degrees of freedom at the same invariant mass scale, but separated in rapidity, is a persistent problem in high-energy factorization. In the course of a calculation, one encounters divergences unregulated by dimensional regularization, often called rapidity divergences. Once regulated, a general framework exists for their renormalization, the rapidity renormalization group (RRG), leading to fully resummed calculations of quantities sensitive to the transverse momentum relative to the jet axis. We examine how this regularization can be implemented via a multi-differential factorization of the soft-collinear phase space, leading to an (in principle) alternative non-perturbative regularization of rapidity divergences. As an example, we examine the fully-differential factorization of a color singlet's momentum spectrum in a hadron-hadron collision at threshold. We show how this factorization acts as a mother theory to both traditional threshold and transverse momentum resummation, recovering the classical results for both resummations. Examining the refactorization of the transverse momentum beam functions in the threshold region, we show that one can directly calculate the rapidity renormalized function, while shedding light on the structure of joint resummation. Finally, we show how, using modern bootstrap techniques, the transverse momentum spectrum is determined by an expansion about the threshold factorization, leading to a viable higher-loop scheme for calculating the relevant anomalous dimensions for the transverse momentum spectrum.

  9. MIB Galerkin method for elliptic interface problems.

    PubMed

    Xia, Kelin; Zhan, Meng; Wei, Guo-Wei

    2014-12-15

    Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities, such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian-grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest-order interface jump conditions is enforced on the interface, which, in turn, determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low-regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L² errors. Some of the best results are obtained in the present work when the interface is C¹ or Lipschitz continuous and the solution is C² continuous.

  10. Lower Tropospheric Ozone Retrievals from Infrared Satellite Observations Using a Self-Adapting Regularization Method

    NASA Astrophysics Data System (ADS)

    Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.

    2017-12-01

    Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the lack of vertical sensitivity of the measurements towards the lowest layers. Although improvements have been made during the last decade, it remains important to explore possibilities to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms have relied on a fixed constraint, determined beforehand on the basis of sensitivity tests. This does not allow one to take advantage of the full capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting and altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the use of a priori constraints to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and self-adapting ones). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. Biases are reduced and the spatial correlation is improved. A tentative application to real observations from IASI, currently onboard the Metop satellite, will also be presented.
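
    In the standard notation of constrained linear retrievals (the notation is ours; the scheme described above makes the strength of R altitude-dependent and tunes it during the iterations), a single inversion step reads

      \hat{x} = x_a + \left(K^{\mathrm{T}} S_\varepsilon^{-1} K + R\right)^{-1} K^{\mathrm{T}} S_\varepsilon^{-1} (y - K x_a),
      \qquad
      A = \left(K^{\mathrm{T}} S_\varepsilon^{-1} K + R\right)^{-1} K^{\mathrm{T}} S_\varepsilon^{-1} K,

    where K is the Jacobian, S_\varepsilon the measurement-error covariance, x_a the a priori profile, and R the regularization matrix; the degrees of freedom for the solution quoted above are trace(A).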

  11. GA-based fuzzy reinforcement learning for control of a magnetic bearing system.

    PubMed

    Lin, C T; Jou, C P

    2000-01-01

    This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.

  12. Progress on Implementing Additional Physics Schemes into ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and options available in the Weather Research and Forecasting (WRF) model are regularly used by the USEPA with the Community Multiscale Air Quality (CMAQ) model to conduct retrospective air quality simulations. These include the Pleim surface layer, the Pleim-Xiu (PX) land surface model with fractional land use for a 40-class National Land Cover Database (NLCD40), the Asymmetric Convective Model 2 (ACM2) planetary boundary layer scheme, the Kain-Fritsch (KF) convective parameterization with subgrid-scale cloud feedback to the radiation schemes and a scale-aware convective time scale, and analysis nudging four-dimensional data assimilation (FDDA). All of these physics modules and options have already been implemented by the USEPA into MPAS-A v4.0, tested, and evaluated (please see the presentations of R. Gilliam and R. Bullock at this workshop). Since the release of MPAS v5.1 in May 2017, work has been under way to implement these preferred physics options into the MPAS-A v5.1 code. Test simulations of a summer month are being conducted on a global variable resolution mesh with the higher resolution cells centered over the contiguous United States. Driving fields for the FDDA and soil nudging are

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Department of Applied Mathematics and Mechanics, University of Science and Technology Beijing, Beijing 100083; Lin, P.

    In this paper, we investigate numerically a diffuse interface model for the Navier–Stokes equation with a fluid–fluid interface when the fluids have different densities [48]. Under a minor reformulation of the system, we show that there is a continuous energy law underlying the system, assuming that all variables have reasonable regularities. It is shown in the literature that an energy law preserving method will perform better for multiphase problems. Thus, for the reformulated system, we design a C⁰ finite element method and a special temporal scheme where the energy law is preserved at the discrete level. Such a discrete energy law (almost the same as the continuous energy law) for this variable density two-phase flow model has never been established before with C⁰ finite elements. A Newton method is introduced to linearise the highly non-linear system of our discretization scheme. Some numerical experiments are carried out using the adaptive mesh to investigate the scenario of coalescing and rising drops with differing density ratio. The snapshots for the evolution of the interface together with the adaptive mesh at different times are presented to show that the evolution, including the break-up/pinch-off of the drop, can be handled smoothly by our numerical scheme. The discrete energy functional for the system is examined to show that the energy law at the discrete level is preserved by our scheme.

  14. Ionospheric current source modeling and global geomagnetic induction using ground geomagnetic observatory data

    USGS Publications Warehouse

    Sun, Jin; Kelbert, Anna; Egbert, G.D.

    2015-01-01

    Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.

  15. Zymographic differentiation of [NiFe]-Hydrogenases 1, 2 and 3 of Escherichia coli K-12

    PubMed Central

    2012-01-01

    Background When grown under anaerobic conditions, Escherichia coli K-12 is able to synthesize three active [NiFe]-hydrogenases (Hyd1-3). Two of these hydrogenases are respiratory enzymes catalysing hydrogen oxidation, whereby Hyd-1 is oxygen-tolerant and Hyd-2 is considered a standard oxygen-sensitive hydrogenase. Hyd-3, together with formate dehydrogenase H (Fdh-H), forms the formate hydrogenlyase (FHL) complex, which is responsible for H2 evolution by intact cells. Hydrogen oxidation activity can be assayed for all three hydrogenases using benzyl viologen (BV; E°′ = −360 mV) as an artificial electron acceptor; however, ascribing activities to specific isoenzymes is not trivial. Previously, an in-gel assay could differentiate Hyd-1 and Hyd-2, while Hyd-3 had long been considered too unstable to be visualized on such native gels. This study identifies conditions allowing differentiation of all three enzymes using simple in-gel zymographic assays. Results Using a modified in-gel assay, hydrogen-dependent BV reduction catalyzed by Hyd-3 has been described for the first time. High hydrogen concentrations facilitated visualization of Hyd-3 activity. The activity was membrane-associated and, although not essential for visualization of Hyd-3, the activity was maximal in the presence of a functional Fdh-H enzyme. Furthermore, through the use of nitroblue tetrazolium (NBT; E°′ = −80 mV) it was demonstrated that Hyd-1 reduces this redox dye in a hydrogen-dependent manner, while neither Hyd-2 nor Hyd-3 could couple hydrogen oxidation to NBT reduction. Hydrogen-dependent reduction of NBT was also catalysed by an oxygen-sensitive variant of Hyd-1 that had a supernumerary cysteine residue at position 19 of the small subunit substituted for glycine. This finding suggests that tolerance toward oxygen is not the main determinant that governs electron donation to more redox-positive electron acceptors such as NBT. Conclusions The utilization of particular electron acceptors at different hydrogen concentrations and redox potentials correlates with the known physiological functions of the respective hydrogenase. The ability to rapidly distinguish between oxygen-tolerant and standard [NiFe]-hydrogenases provides a facile new screen for the discovery of novel enzymes. A reliable assay for Hyd-3 will reinvigorate studies on the characterisation of the hydrogen-evolving FHL complex. PMID:22769583

  16. Negative muon chemistry: the quantum muon effect and the finite nuclear mass effect.

    PubMed

    Posada, Edwin; Moncada, Félix; Reyes, Andrés

    2014-10-09

    The any-particle molecular orbital method at the full configuration interaction level has been employed to study atoms in which one electron has been replaced by a negative muon. In this approach electrons and muons are described as quantum waves. A scheme has been proposed to discriminate nuclear mass and quantum muon effects on chemical properties of muonic and regular atoms. This study reveals that the differences in the ionization potentials of isoelectronic muonic atoms and regular atoms are of the order of millielectronvolts. For the valence ionizations of muonic helium and muonic lithium the nuclear mass effects are more important. On the other hand, for 1s ionizations of muonic atoms heavier than beryllium, the quantum muon effects are more important. In addition, this study presents an assessment of the nuclear mass and quantum muon effects on the barrier of the Heμ + H2 reaction.

  17. National Institutes of Health phase I, Small Business Innovation Research applications: fiscal year 1983 results.

    PubMed

    Vener, K J

    1985-08-01

    A review of the 356 disapproved Small Business Innovation Research (SBIR) proposals submitted to the National Institutes of Health (NIH) for fiscal year 1983 funding was undertaken to identify the most common shortcomings of those disapproved applications. The shortcomings were divided into four general classes by using the scheme developed by other authors when describing the reasons for the disapproval of regular NIH research applications. Comparison of the reasons for disapproval of SBIR applications with regular applications suggests comparable difficulties in the areas of the problem and the approach. There is some indication, however, that the SBIR proposals may have been weaker in the category of the principal investigator (PI). In general, it is the responsibility of the PI to demonstrate that the work is timely and can be performed with available technology and expertise, and that the guidelines for the NIH SBIR program have been satisfied.

  18. Visual tracking based on the sparse representation of the PCA subspace

    NASA Astrophysics Data System (ADS)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization term to promote sparsity of the residual term, an L2 regularization term to constrain the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
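
    A minimal sketch of this collaborative objective, assuming numpy and hypothetical names (y: observation, U: PCA basis, lam1/lam2: regularization weights); the alternating ridge/soft-thresholding loop below is one simple way to realize the stated iterative minimization, not necessarily the authors' exact scheme:

    ```python
    # Represent a target y in a PCA subspace U with L2-regularized
    # coefficients c and an L1-sparse residual e:
    #   min_{c,e}  ||y - U c - e||^2 + lam2 ||c||^2 + lam1 ||e||_1
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def sparse_pca_represent(y, U, lam1=0.1, lam2=0.01, n_iter=50):
        d, r = U.shape
        c, e = np.zeros(r), np.zeros(d)
        G = U.T @ U + lam2 * np.eye(r)              # constant ridge system
        for _ in range(n_iter):
            c = np.linalg.solve(G, U.T @ (y - e))   # ridge step for coefficients
            e = soft_threshold(y - U @ c, lam1 / 2) # prox step for residual
        return c, e
    ```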

  19. The Social Relations of a Health Walk Group: An Ethnographic Study.

    PubMed

    Grant, Gordon; Pollard, Nick; Allmark, Peter; Machaczek, Kasia; Ramcharan, Paul

    2017-09-01

    It is already well established that regular walks are conducive to health and well-being. This article considers the production of social relations of regular, organized weekly group walks for older people. It is based on an ethnographic study of a Walking for Health group in a rural area of the United Kingdom. Different types of social relations are identified arising from the walk experience. The social relations generated are seen to be shaped by organizational factors that are constitutive of the walks; the resulting culture having implications for the sustainability of the experience. As there appears to be no single uniting theory linking group walk experiences to the production of social relations at this time, the findings are considered against therapeutic landscape, therapeutic mobility, and social capital theorizing. Finally, implications for the continuance of walking schemes for older people and for further research are considered.

  20. Source localization in electromyography using the inverse potential problem

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
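
    For concreteness, one member of the family of regularization functionals considered for such inverse potential problems is classical Tikhonov smoothing; the sketch below (numpy; A, b, L, lam are illustrative placeholders for the lead-field matrix, skin-surface data, a discrete-gradient operator, and the regularization weight) solves the penalized least-squares problem via its normal equations:

    ```python
    # Tikhonov-regularized least squares:  min_x ||A x - b||^2 + lam ||L x||^2
    import numpy as np

    def tikhonov_solve(A, b, L, lam):
        lhs = A.T @ A + lam * (L.T @ L)
        rhs = A.T @ b
        return np.linalg.solve(lhs, rhs)

    # Example regularizer: first differences of an n-dimensional source vector.
    def first_difference(n):
        L = np.zeros((n - 1, n))
        idx = np.arange(n - 1)
        L[idx, idx], L[idx, idx + 1] = -1.0, 1.0
        return L
    ```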

  1. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is deemed a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
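
    To show the flavor of the split Bregman iteration, here is a minimal sketch for the simpler L1-regularized surrogate min_x ½||Ax − b||² + λ||x||₁ (numpy; the paper's actual loss also carries robustness and low-rank terms, so this illustrates the SB splitting rather than the proposed method):

    ```python
    # Split Bregman for  min_x 0.5||A x - b||^2 + lam ||x||_1
    # via the split d = x and alternating closed-form sub-steps.
    import numpy as np

    def shrink(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def split_bregman_l1(A, b, lam=0.1, mu=1.0, n_iter=100):
        n = A.shape[1]
        x, d, bb = np.zeros(n), np.zeros(n), np.zeros(n)
        lhs = A.T @ A + mu * np.eye(n)        # constant quadratic sub-problem
        for _ in range(n_iter):
            x = np.linalg.solve(lhs, A.T @ b + mu * (d - bb))
            d = shrink(x + bb, lam / mu)      # prox step on the split variable
            bb = bb + x - d                   # Bregman variable update
        return x
    ```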

  2. Regularization destriping of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We demonstrate the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
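
    A deliberately simplified quadratic analogue of this variational approach (numpy; the stripe direction, α, and dt are illustrative, and the paper's functional and weights are richer) performs explicit gradient descent on the Euler-Lagrange equation of E(u) = ½||u − f||² + (α/2)||∂u/∂y||², smoothing across horizontal stripes while keeping data fidelity:

    ```python
    # Explicit Euler-Lagrange descent for a quadratic destriping energy.
    import numpy as np

    def destripe_quadratic(f, alpha=2.0, dt=0.1, n_iter=500):
        u = f.copy()
        for _ in range(n_iter):
            u_yy = np.zeros_like(u)
            # second difference across the stripe direction (rows)
            u_yy[1:-1, :] = u[2:, :] - 2.0 * u[1:-1, :] + u[:-2, :]
            u += dt * (-(u - f) + alpha * u_yy)   # descent step
        return u
    ```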

  3. Diffraction of a shock wave by a compression corner; regular and single Mach reflection

    NASA Technical Reports Server (NTRS)

    Vijayashankar, V. S.; Kutler, P.; Anderson, D.

    1976-01-01

    The two-dimensional, time-dependent Euler equations which govern the flow field resulting from the interaction of a planar shock with a compression corner are solved with initial conditions that result in either regular reflection or single Mach reflection of the incident planar shock. The Euler equations, which are hyperbolic, are transformed to include the self-similarity of the problem. A normalization procedure is employed to align the reflected shock and the Mach stem as computational boundaries to implement the shock-fitting procedure. A special floating fitting scheme is developed in conjunction with the method of characteristics to fit the slip surface. The reflected shock, the Mach stem, and the slip surface are all treated as sharp discontinuities, thus resulting in a more accurate description of the inviscid flow field. The resulting numerical solutions are compared with available experimental data and existing first-order, shock-capturing numerical solutions.

  4. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field-theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
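
    The exponential time differencing (ETD) idea can be sketched on a toy complex Langevin equation (the quartic action, couplings, and step size below are hypothetical stand-ins, not the regularized polymer model): the stiff linear part of the drift is integrated exactly while the nonlinear part is treated explicitly.

    ```python
    # First-order ETD (ETD1) for a toy complex Langevin equation
    #   dz = -S'(z) dt + dW,   S(z) = (sigma/2) z^2 + (g/4) z^4, sigma complex.
    import numpy as np

    rng = np.random.default_rng(0)

    def etd1_complex_langevin(sigma, g, h, n_steps, z0=0.1 + 0.0j):
        z = complex(z0)
        decay = np.exp(-sigma * h)        # exact factor for the linear drift
        phi1 = (1.0 - decay) / sigma      # ETD weight for the nonlinear drift
        traj = np.empty(n_steps, dtype=complex)
        for n in range(n_steps):
            nonlin = -g * z**3                                 # nonlinear drift
            noise = np.sqrt(2.0 * h) * rng.standard_normal()   # real noise
            z = decay * z + phi1 * nonlin + noise
            traj[n] = z
        return traj
    ```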

  5. Access to diabetes care and medicines in the Philippines.

    PubMed

    Higuchi, Michiyo

    2010-07-01

    In the Philippines, diabetes is rapidly becoming a major public health issue, as in other low- and middle-income countries. Availability and affordability of care and medicines are crucial to controlling diabetes. This study describes the situations of diabetes patients and identifies possible barriers to diabetes care and medicines in the Philippines. Quantitative and qualitative data were collected from multilevel respondents using different semistructured questionnaires/checklists. The study revealed that many patients took intermittent medication based on their own judgment, and/or selected certain pieces of medical advice, subjectively weighing symptoms against the household budget. The current public health insurance scheme and decentralized health systems did not promote access to diabetes care. Investing in regular care is expected to be less expensive both for individuals and for society in the long term. Insurance outpatient coverage and the application of standard treatment/management guidelines would help encourage the provision and uptake of regular care.

  6. Backup agreements with penalty scheme under supply disruptions

    NASA Astrophysics Data System (ADS)

    Hou, Jing; Zhao, Lindu

    2012-05-01

    This article considers a supply chain for a single product involving one retailer and two independent suppliers, in which the main supplier might fail to supply the products while the backup supplier can always supply them at a higher price. The retailer can use the backup supplier as a regular provider or as a stand-by source by reserving some products at the supplier. A backup agreement with a penalty scheme is constructed between the retailer and the backup supplier to mitigate the supply disruptions and the demand uncertainty. The expected profit functions and the optimal decisions of the two players are derived through a sequential optimisation process. Then, the sensitivity of the two players' expected profits to various input factors is examined through numerical examples. The impacts of the disruption probability and the demand uncertainty on the backup agreement are also investigated, which could provide guidelines on how to use each sourcing method.

  7. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle efficiently the diffusive regime. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  8. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were filled with the fluorochrome Cy5.5, and optical data at 60 projections over 360 degrees were acquired for each. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those from classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
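
    A hedged sketch of a regularized Richardson-Lucy update of this general type (one common MAP-RL form with an entropic prior toward a Gaussian-smoothed "floating default"; A, b, beta, and sigma below are placeholders, and the authors' exact update may differ):

    ```python
    # MAP-RL iteration with an entropy prior toward a floating default m.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def regularized_rl(A, b, shape, beta=0.01, sigma=2.0, n_iter=100):
        """A: system matrix (n_meas, n_vox); b: data; shape: image shape."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
        for _ in range(n_iter):
            # floating default: Gaussian-smoothed copy of the current estimate
            m = gaussian_filter(x.reshape(shape), sigma).ravel()
            ratio = b / np.maximum(A @ x, 1e-12)
            denom = sens + beta * np.log(np.maximum(x, 1e-12) /
                                         np.maximum(m, 1e-12))
            x = x * (A.T @ ratio) / np.maximum(denom, 1e-12)
        return x.reshape(shape)
    ```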

  9. Past tense in the brain's time: neurophysiological evidence for dual-route processing of past-tense verbs.

    PubMed

    Bakker, Iske; Macgregor, Lucy J; Pulvermüller, Friedemann; Shtyrov, Yury

    2013-05-01

    A controversial issue in neuro- and psycholinguistics is whether regular past-tense forms of verbs are stored lexically or generated productively by the application of abstract combinatorial schemas, for example affixation rules. The success or failure of models in accounting for this particular issue can be used to draw more general conclusions about cognition and the degree to which abstract, symbolic representations and rules are psychologically and neurobiologically real. This debate can potentially be resolved using a neurophysiological paradigm, in which alternative predictions of the brain response patterns for lexical and syntactic processing are put to the test. We used magnetoencephalography (MEG) to record neural responses to spoken monomorphemic words ('hide'), pseudowords ('smide'), regular past-tense forms ('cried') and ungrammatical (overregularised) past-tense forms ('flied') in a passive listening oddball paradigm, in which lexically and syntactically modulated stimuli are known to elicit distinct patterns of the mismatch negativity (MMN) brain response. We observed an enhanced ('lexical') MMN to monomorphemic words relative to pseudowords, but a reversed ('syntactic') MMN to ungrammatically inflected past tenses relative to grammatical forms. This dissociation between responses to monomorphemic and bimorphemic stimuli indicates that regular past tenses are processed more similarly to syntactic sequences than to lexically stored monomorphemic words, suggesting that regular past tenses are generated productively by the application of a combinatorial scheme to their separately represented stems and affixes. We suggest discrete combinatorial neuronal assemblies, which bind classes of sequentially occurring lexical elements into morphologically complex units, as the neurobiological basis of regular past tense inflection. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. The use of salinity contrast for density difference compensation to improve the thermal recovery efficiency in high-temperature aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    van Lopik, Jan H.; Hartog, Niels; Zaadnoordijk, Willem Jan

    2016-08-01

    The efficiency of heat recovery in high-temperature (>60 °C) aquifer thermal energy storage (HT-ATES) systems is limited due to the buoyancy of the injected hot water. This study investigates the potential to improve the efficiency through compensation of the density difference by increased salinity of the injected hot water for a single injection-recovery well scheme. The proposed method was tested through numerical modeling with SEAWATv4, considering seasonal HT-ATES with four consecutive injection-storage-recovery cycles. Recovery efficiencies for the consecutive cycles were investigated for six cases with three simulated scenarios: (a) regular HT-ATES, (b) HT-ATES with density difference compensation using saline water, and (c) theoretical regular HT-ATES without free thermal convection. For the reference case, in which 80 °C water was injected into a high-permeability aquifer, regular HT-ATES had an efficiency of 0.40 after four consecutive recovery cycles. The density difference compensation method resulted in an efficiency of 0.69, approximating the theoretical case (0.76). Sensitivity analysis showed that the net efficiency increase by using the density difference compensation method instead of regular HT-ATES is greater for higher aquifer hydraulic conductivity, larger temperature difference between injection water and ambient groundwater, smaller injection volume, and larger aquifer thickness. This means that density difference compensation allows the application of HT-ATES in thicker, more permeable aquifers and with larger temperatures than would be considered for regular HT-ATES systems.

  11. An evaluation of supervised classifiers for indirectly detecting salt-affected areas at irrigation scheme level

    NASA Astrophysics Data System (ADS)

    Muller, Sybrand Jacobus; van Niekerk, Adriaan

    2016-07-01

    Soil salinity often leads to reduced crop yield and quality and can render soils barren. Irrigated areas are particularly at risk due to intensive cultivation and secondary salinization caused by waterlogging. Regular monitoring of salt accumulation in irrigation schemes is needed to keep its negative effects under control. The dynamic spatial and temporal characteristics of remote sensing can provide a cost-effective solution for monitoring salt accumulation at irrigation scheme level. This study evaluated a range of pan-fused SPOT-5 derived features (spectral bands, vegetation indices, image textures and image transformations) for classifying salt-affected areas in two distinctly different irrigation schemes in South Africa, namely Vaalharts and Breede River. The relationship between the input features and electrical conductivity measurements was investigated using regression modelling (stepwise linear regression, partial least squares regression, curve fit regression modelling) and supervised classification (maximum likelihood, nearest neighbour, decision tree analysis, support vector machine and random forests). Classification and regression trees and random forests were used to select the most important features for differentiating salt-affected and unaffected areas. The results showed that the regression analyses produced weak models (R² < 0.4). Better results were achieved using the supervised classifiers, but the algorithms tended to over-estimate salt-affected areas. A key finding was that none of the feature sets or classification algorithms stood out as being superior for monitoring salt accumulation at irrigation scheme level. This was attributed to the large variations in the spectral responses of different crop types at different growing stages, coupled with their individual tolerances to saline conditions.
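
    To illustrate the supervised-classification route, a short sketch using scikit-learn's random forest on a per-pixel feature stack (feature names, split ratio, and forest size are placeholders, not the study's configuration):

    ```python
    # Random forest classification of salt-affected vs unaffected pixels,
    # with impurity-based feature ranking.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def classify_salinity(features, labels, feature_names):
        """features: (n_pixels, n_features); labels: 1 = salt-affected."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, test_size=0.3, random_state=0, stratify=labels)
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X_tr, y_tr)
        print("accuracy:", rf.score(X_te, y_te))
        rank = np.argsort(rf.feature_importances_)[::-1]
        for i in rank[:5]:                       # top-ranked input features
            print(feature_names[i], rf.feature_importances_[i])
        return rf
    ```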

  12. Customer satisfaction survey to improve the European cystic fibrosis external quality assessment scheme.

    PubMed

    Berwouts, Sarah; Dequeker, Elisabeth

    2011-08-01

    The Cystic Fibrosis European Network, coordinated from within the Katholieke Universiteit Leuven, is the provider of the European cystic fibrosis external quality assessment (EQA) scheme. The network aimed to seek feedback from laboratories that participated in the cystic fibrosis scheme in order to improve the services offered. In this study we analysed responses to an on-line customer satisfaction survey conducted between September and November 2009. The survey was sent to 213 laboratories that participated in the cystic fibrosis EQA scheme of 2008; 69 laboratories (32%) responded. Scores for importance and satisfaction were obtained from a five-point Likert scale for 24 attributes. A score of one corresponded to very dissatisfied/very unimportant and five corresponded to very satisfied/very important. Means were calculated and placed in a two-dimensional grid (importance-satisfaction analysis), and the importance and satisfaction means were subtracted from each other to obtain gap values (gap analysis). No attribute had a mean score below 3.63. The overall mean satisfaction was 4.35. Opportunities for improvement included the clarity, usefulness and completeness of the general report and individual comments, and the user-friendliness of the electronic datasheet. This type of customer satisfaction survey was a valuable instrument for identifying opportunities to improve the cystic fibrosis EQA scheme. It should be conducted on a regular basis to reveal new opportunities in the future and to assess the effectiveness of actions taken. Moreover, it could be a model for other EQA providers seeking feedback from participants. Overall, the customer satisfaction survey provided a powerful quality-of-care improvement tool.

  13. A Variational Approach to Video Registration with Subspace Constraints.

    PubMed

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector-valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images, which results in significant improvements in registration accuracy. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground-truth optical flow for the evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
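
    The subspace constraint can be sketched in a few lines (numpy; the rank r and trajectory-matrix layout are illustrative): stack the 2D trajectories over F frames into a 2F×P matrix, take a low-rank basis from reference tracks, and penalize the energy of candidate flow trajectories outside that subspace.

    ```python
    # Low-rank trajectory basis and the soft subspace penalty.
    import numpy as np

    def trajectory_basis(W, r):
        """Low-rank motion basis from reference trajectories (2F x P)."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U[:, :r]                       # (2F x r) orthonormal basis

    def subspace_penalty(W_flow, B):
        """Energy of the component of flow trajectories outside span(B)."""
        residual = W_flow - B @ (B.T @ W_flow)
        return 0.5 * np.sum(residual ** 2)
    ```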

  14. Preconditioned steepest descent methods for some nonlinear elliptic equations involving p-Laplacian terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Wenqiang, E-mail: wfeng1@vols.utk.edu; Salgado, Abner J., E-mail: asalgad1@utk.edu; Wang, Cheng, E-mail: cwang1@umassd.edu

    We describe and analyze preconditioned steepest descent (PSD) solvers for fourth and sixth-order nonlinear elliptic equations that include p-Laplacian terms on periodic domains in 2 and 3 dimensions. The highest and lowest order terms of the equations are constant-coefficient, positive linear operators, which suggests a natural preconditioning strategy. Such nonlinear elliptic equations often arise from time discretization of parabolic equations that model various biological and physical phenomena, in particular, liquid crystals, thin film epitaxial growth and phase transformations. The analyses of the schemes involve the characterization of the strictly convex energies associated with the equations. We first give a general framework for PSD in Hilbert spaces. Based on certain reasonable assumptions on the linear preconditioner, a geometric convergence rate is shown for the nonlinear PSD iteration. We then apply the general theory to the fourth and sixth-order problems of interest, making use of Sobolev embedding and regularity results to confirm the appropriateness of our preconditioners for the regularized p-Laplacian problems. Our results include a sharper theoretical convergence result for p-Laplacian systems compared to what may be found in existing works. We demonstrate rigorously how to apply the theory in the finite dimensional setting using finite difference discretization methods. Numerical simulations for some important physical application problems – including thin film epitaxy with slope selection and the square phase field crystal model – are carried out to verify the efficiency of the scheme.

  15. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems containing flexural parts within the actuating scheme, especially cable-based driving systems. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide feasible results regarding the system dynamics. The transitory and steady regimes during a regular exploitation cycle were taken into account. The authors present a particular case of a lift system, taken as representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The analysis of the results and the final discussion reveal the correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which supply potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant outcome of this theoretical and computational research consists in the development of a unitary and feasible model, useful for characterizing the nonlinear dynamic effects in systems with cable-based driving schemes and thereby helping to optimize the exploitation regime, including dynamics control measures.

  16. Preconditioned steepest descent methods for some nonlinear elliptic equations involving p-Laplacian terms

    NASA Astrophysics Data System (ADS)

    Feng, Wenqiang; Salgado, Abner J.; Wang, Cheng; Wise, Steven M.

    2017-04-01

    We describe and analyze preconditioned steepest descent (PSD) solvers for fourth and sixth-order nonlinear elliptic equations that include p-Laplacian terms on periodic domains in 2 and 3 dimensions. The highest and lowest order terms of the equations are constant-coefficient, positive linear operators, which suggests a natural preconditioning strategy. Such nonlinear elliptic equations often arise from time discretization of parabolic equations that model various biological and physical phenomena, in particular, liquid crystals, thin film epitaxial growth and phase transformations. The analyses of the schemes involve the characterization of the strictly convex energies associated with the equations. We first give a general framework for PSD in Hilbert spaces. Based on certain reasonable assumptions on the linear preconditioner, a geometric convergence rate is shown for the nonlinear PSD iteration. We then apply the general theory to the fourth and sixth-order problems of interest, making use of Sobolev embedding and regularity results to confirm the appropriateness of our preconditioners for the regularized p-Laplacian problems. Our results include a sharper theoretical convergence result for p-Laplacian systems compared to what may be found in existing works. We demonstrate rigorously how to apply the theory in the finite dimensional setting using finite difference discretization methods. Numerical simulations for some important physical application problems - including thin film epitaxy with slope selection and the square phase field crystal model - are carried out to verify the efficiency of the scheme.
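
    A minimal sketch of one PSD step on a periodic (square) domain (numpy FFTs; the quadratic-plus-quartic energy below is a simple convex stand-in, not the papers' fourth/sixth-order p-Laplacian functionals): the search direction is the negative gradient preconditioned by the constant-coefficient operator P = I − Δ, inverted exactly in Fourier space, followed by a backtracking line search.

    ```python
    # One preconditioned steepest descent step for
    #   E(u) = sum( eps/2 |grad u|^2 + 1/4 u^4 )  on a periodic n x n grid.
    import numpy as np

    def psd_step(u, h=1.0, eps=0.1):
        n = u.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        lap_sym = -(kx**2 + ky**2)            # Fourier symbol of the Laplacian

        def energy(v):
            gv = -np.real(np.fft.ifft2(lap_sym * np.fft.fft2(v)))   # -Delta v
            return float(np.sum(0.5 * eps * v * gv + 0.25 * v**4))

        # gradient dE/du = -eps * Delta u + u^3
        grad = -eps * np.real(np.fft.ifft2(lap_sym * np.fft.fft2(u))) + u**3
        # preconditioned direction d = -(I - Delta)^{-1} grad, exact via FFT
        d = -np.real(np.fft.ifft2(np.fft.fft2(grad) / (1.0 - lap_sym)))
        alpha, E0 = 1.0, energy(u)            # backtracking line search
        while energy(u + alpha * d) > E0 and alpha > 1e-8:
            alpha *= 0.5
        return u + alpha * d
    ```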

  17. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level set approach is efficient, robust, and easy to implement.
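
    The signed-distance-preserving idea can be illustrated with a generic distance-regularization term of the Li type (the paper's functional may differ; grid spacing and μ below are illustrative): the penalty ½∫(|∇φ| − 1)² contributes a diffusive flux that vanishes exactly when φ is a signed distance function.

    ```python
    # One explicit step of the distance-regularization flow
    #   d(phi)/dt = mu * div( (1 - 1/|grad phi|) grad phi ).
    import numpy as np

    def distance_regularization_step(phi, mu=0.1, dt=0.1, h=1.0, eps=1e-8):
        gx, gy = np.gradient(phi, h)
        norm = np.sqrt(gx**2 + gy**2) + eps
        coef = 1.0 - 1.0 / norm               # vanishes when |grad phi| = 1
        fx, fy = coef * gx, coef * gy
        div = np.gradient(fx, h, axis=0) + np.gradient(fy, h, axis=1)
        return phi + dt * mu * div            # explicit regularization step
    ```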

  18. A Potential Proxy of the Second Integral of Motion (I2) in a Rotating Barred Potential

    NASA Astrophysics Data System (ADS)

    Shen, Juntai; Qin, Yujing

    2017-06-01

    The only analytically known integral of motion in a 2-D rotating barred potential is the Jacobi constant (EJ). In addition to EJ, regular orbits also obey a second integral of motion (I2) whose analytical form is unknown. We show that the time-averaged characteristics of angular momentum in a rotating bar potential resemble the behavior of the analytically-unknown I2. For a given EJ, regular orbits of various families follow a continuous sequence in the space of net angular momentum and its dispersion ("angular momentum space"). In the limiting case where regular orbits of the well-known x1/x4 orbital families dominate the phase space, the orbital sequence can be monotonically traced by a single parameter, namely the ratio of mean angular momentum to its dispersion. This ratio behaves well even in the 3-D case, and thus may be used as a proxy of I2. The potential proxy of I2 may be used as an efficient way to probe the phase space structure, and a convenient new scheme of orbit classification in addition to the frequency mapping technique.

  19. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregular-gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
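
    A back-of-envelope sketch of the baseline dependence (numpy; the phase-drift budget and field radius are illustrative, not the MeerKAT/EVN setup): choosing each baseline's averaging interval so that the worst-case fringe-phase drift over the field of interest stays below a fixed budget keeps decorrelation roughly constant, with long baselines averaged over correspondingly short intervals.

    ```python
    # Baseline-dependent averaging time from a fixed fringe-phase budget.
    import numpy as np

    OMEGA_E = 7.292e-5                       # Earth rotation rate [rad/s]

    def averaging_time(baseline_m, wavelength_m, fov_rad, max_phase_rad=0.1):
        """Largest interval keeping the worst-case fringe-phase drift at the
        edge of the field of interest below max_phase_rad."""
        fringe_rate = (2.0 * np.pi * OMEGA_E *
                       (baseline_m / wavelength_m) * fov_rad)   # [rad/s]
        return max_phase_rad / fringe_rate

    for b in (100.0, 1e3, 10e3, 100e3):      # baseline lengths [m]
        print(f"{b:8.0f} m -> {averaging_time(b, 0.21, np.radians(0.5)):8.2f} s")
    ```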

  20. Simple picture for neutrino flavor transformation in supernovae

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong

    2007-10-01

    We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.

  1. Educational and intervention strategies for improving a shift system: an experience in a disabled persons' facility.

    PubMed

    Sakai, K; Watanabe, A; Kogi, K

    1993-01-01

    The improvement of an irregular three-shift system with anti-clockwise rotation of workers of a disabled persons' facility covering 42 h a week was a subject for management-labour debate. Workers were complaining of physical fatigue, a high prevalence of low back pain, sleep shortages associated with short inter-shift intervals, and irregular holidays. With the co-operation of trade union members, an educational and intervention programme was designed to analyse, plan, and implement improved shift rotation schemes. The programme consisted of (a) a group study on the existing system and its effects on health and working life; (b) joint planning of potential schemes; (c) communication and feedback; (d) testing and evaluation; and (e) agreement on an improved system. The group study was undertaken by means of time study, questionnaire and physiological methods, and the results were jointly discussed. This led to the planning of alternative shift schemes incorporating more regular, clockwise rotation. It was agreed to stage a trial period with a view to shorter working hours. This experience indicated the importance of a stepwise intervention strategy with frequent dialogues and a participatory process focusing on the broad range of working life and health issues.

  2. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
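
    A small sketch of the RBF-FD weight computation for the Laplacian (numpy; Gaussian RBF with shape parameter eps, both illustrative choices): solve the local interpolation system A w = Lφ on the stencil, where Lφ is the Laplacian of the RBF evaluated at the stencil center.

    ```python
    # RBF-FD weights for the 2D Laplacian on a small stencil.
    import numpy as np

    def gaussian_rbf(r2, eps):
        return np.exp(-(eps**2) * r2)

    def laplacian_gaussian(r2, eps):
        # 2D Laplacian of exp(-eps^2 r^2): (4 eps^4 r^2 - 4 eps^2) * rbf
        return (4 * eps**4 * r2 - 4 * eps**2) * np.exp(-(eps**2) * r2)

    def rbf_fd_weights(x_center, x_stencil, eps=0.5):
        d = x_stencil[:, None, :] - x_stencil[None, :, :]
        A = gaussian_rbf(np.sum(d**2, axis=-1), eps)   # interpolation matrix
        r2c = np.sum((x_stencil - x_center)**2, axis=-1)
        rhs = laplacian_gaussian(r2c, eps)             # operator at the center
        return np.linalg.solve(A, rhs)                 # stencil weights
    ```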

  3. RUASN: A Robust User Authentication Framework for Wireless Sensor Networks

    PubMed Central

    Kumar, Pardeep; Choudhury, Amlan Jyoti; Sain, Mangal; Lee, Sang-Gon; Lee, Hoon-Jae

    2011-01-01

    In recent years, wireless sensor networks (WSNs) have been considered as a potential solution for real-time monitoring applications and these WSNs have potential practical impact on next generation technology too. However, WSNs could become a threat if suitable security is not considered before the deployment and if there are any loopholes in their security, which might open the door for an attacker and hence, endanger the application. User authentication is one of the most important security services to protect WSN data access from unauthorized users; it should provide both mutual authentication and session key establishment services. This paper proposes a robust user authentication framework for wireless sensor networks, based on a two-factor (password and smart card) concept. This scheme facilitates many services to the users such as user anonymity, mutual authentication, secure session key establishment and it allows users to choose/update their password regularly, whenever needed. Furthermore, we have provided the formal verification using Rubin logic and compare RUASN with many existing schemes. As a result, we found that the proposed scheme possesses many advantages against popular attacks, and achieves better efficiency at low computation cost. PMID:22163888

  4. End-User Tools Towards AN Efficient Electricity Consumption: the Dynamic Smart Grid

    NASA Astrophysics Data System (ADS)

    Kamel, Fouad; Kist, Alexander A.

    2010-06-01

    Growing uncontrolled electrical demands have caused increased supply requirements. This causes volatile electrical markets and has detrimental, unsustainable environmental impacts. The market is presently characterized by regular daily peak demand conditions associated with high electricity prices. A demand-side response system can limit peak demands to an acceptable level. The proposed scheme is based on energy demand and price information which is available online. An online server is used to communicate the information of electricity suppliers to users, who are able to use the information to manage and control their own demand. A configurable, intelligent switching system is used to control local loads during peak events and manage the loads at other times as necessary. The aim is to shift end-user loads towards periods where energy demand, and therefore also prices, are at their lowest. As a result, this will flatten the load profile and avoid load peaks, which are costly for suppliers. The scheme is an endeavour towards achieving a dynamic smart grid demand-side-response environment using information-based communication and computer-controlled switching. Deploying the scheme should lead to improved electrical supply services and controlled energy consumption and prices.

  5. Impact of integrated child development scheme on child malnutrition in West Bengal, India.

    PubMed

    Dutta, Arijita; Ghosh, Smritikana

    2017-10-01

    With child malnutrition detected as a persistent problem in most developing countries, public policy has been directed towards offering community-based supplementary feeding provision and nutritional information to caregivers. India, being no exception, initiated these programs as early as the 1970s under the integrated child development scheme. Using the propensity score matching technique on primary data of 390 households in two districts of West Bengal, an Eastern state in India, the study finds that the impact of being included in the program and receiving supplementary feeding is insignificant on child stunting measures. The program can break the intractable barriers of child stunting only when the child not only receives the supplementary feeding but his caregiver also collects crucial information on nutritional awareness and the growth trajectory of the child. Availability of regular eggs in the feeding diet too can reduce protein-related undernutrition. Focusing on just feeding means low depth of the other services offered under the integrated child development scheme, including pre-school education, nutritional awareness, and hygiene behavior, thus alienating a part of the apparently food-secure population who place far more importance on the latter services. © 2016 John Wiley & Sons Ltd.

  6. Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube: IceCube Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Ackermann, M.; Adams, J.

    Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.

  7. Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube: IceCube Collaboration

    DOE PAGES

    Aartsen, M. G.; Ackermann, M.; Adams, J.; ...

    2015-03-11

    Here we present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999% of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.
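
    Regularized unfolding can be sketched with a truncated-SVD variant (numpy; TRUEE's actual Tikhonov-type regularization differs, so this only shows the generic idea of damping the poorly constrained modes of the detector response R):

    ```python
    # Truncated-SVD regularized unfolding of detector-level counts y = R x.
    import numpy as np

    def tsvd_unfold(R, y, k):
        """Recover a spectrum x from y = R x, keeping the k best-determined
        singular modes of the response matrix R."""
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        coeffs = (U.T @ y)[:k] / s[:k]     # damp noise-dominated small modes
        return Vt[:k].T @ coeffs
    ```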

  8. Renormalization of quark propagators from twisted-mass lattice QCD at $N_f = 2$

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blossier, B.; Boucaud, Ph.; Pene, O.

    2011-04-01

    We present results concerning the nonperturbative evaluation of the renormalization constant for the quark field, $Z_q$, from lattice simulations with twisted-mass quarks and three values of the lattice spacing. We use the regularization-invariant momentum-subtraction (RI'-MOM) scheme. $Z_q$ has very large lattice-spacing artefacts; it is considered here as a test bed to elaborate accurate methods which will be used for other renormalization constants. We recall and develop the nonperturbative correction methods and propose tools to test the quality of the correction. These tests are also applied to the perturbative correction method. We check that the lattice-spacing artefacts indeed scale as $a^2 p^2$. We then study the running of $Z_q$ with particular attention to the nonperturbative effects, presumably dominated by the dimension-two gluon condensate $\langle A^2 \rangle$ in Landau gauge. We show indeed that this effect is present, and not small. We check its scaling in physical units, confirming that it is a continuum effect. It gives a $\approx 4\%$ contribution at 2 GeV. Different variants are used in order to test the reliability of our result and estimate the systematic uncertainties. Finally, combining all our results and using the known Wilson coefficient of $\langle A^2 \rangle$, we find $g^2(\mu^2)\,\langle A^2 \rangle_{\mu^2\,\mathrm{CM}} = 2.01(11)\,(^{+0.61}_{-0.73})~\mathrm{GeV}^2$ at $\mu = 10$ GeV, the local operator $A^2$ being renormalized in the $\overline{\mathrm{MS}}$ scheme. This last result is in fair agreement within uncertainties with the value independently extracted from the strong coupling constant. We convert the nonperturbative part of $Z_q$ from the RI'-MOM scheme to $\overline{\mathrm{MS}}$. Our result for the quark field renormalization constant in the $\overline{\mathrm{MS}}$ scheme is $Z_q^{\overline{\mathrm{MS}},\,\mathrm{pert}}((2~\mathrm{GeV})^2, g_{\mathrm{bare}}^2) = 0.750(3)(7) - 0.313(20)\,(g_{\mathrm{bare}}^2 - 1.5)$ for the perturbative contribution and $Z_q^{\overline{\mathrm{MS}},\,\mathrm{nonpert}}((2~\mathrm{GeV})^2, g_{\mathrm{bare}}^2) = 0.781(6)(21) - 0.313(20)\,(g_{\mathrm{bare}}^2 - 1.5)$ when the nonperturbative contribution is included.

  9. Algebraic classification of Weyl anomalies in arbitrary dimensions.

    PubMed

    Boulanger, Nicolas

    2007-06-29

    Conformally invariant systems involving only dimensionless parameters are known to describe particle physics at very high energy. In the presence of an external gravitational field, the conformal symmetry may generalize to the Weyl invariance of classical massless field systems in interaction with gravity. In the quantum theory, the latter symmetry no longer survives: A Weyl anomaly appears. Anomalies are a cornerstone of quantum field theory, and, for the first time, a general, purely algebraic understanding of the universal structure of the Weyl anomalies is obtained, in arbitrary dimensions and independently of any regularization scheme.

  10. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  11. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

    A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, or the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimation of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
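
    A compact sketch of such a statistically optimal (Gauss-Markov / objective analysis) estimate, assuming an isotropic Gaussian covariance with scale L and observation-noise variance r2 (all illustrative choices):

    ```python
    # Objective analysis: map irregular observations y at points xo onto a
    # regular grid xg using an assumed covariance model.
    import numpy as np

    def gauss_cov(a, b, L):
        d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
        return np.exp(-0.5 * d2 / L**2)

    def objective_analysis(xo, y, xg, L=1.0, r2=0.05):
        C_oo = gauss_cov(xo, xo, L) + r2 * np.eye(len(xo))  # obs-obs + noise
        C_go = gauss_cov(xg, xo, L)                         # grid-obs
        w = np.linalg.solve(C_oo, y - y.mean())             # Gauss-Markov weights
        return y.mean() + C_go @ w                          # gridded estimate
    ```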

  12. Working with families of children with special needs: the parent adviser scheme.

    PubMed

    Buchan, L; Clemerson, J; Davis, H

    1988-01-01

    This paper describes a project in which an attempt is made to provide regular, ongoing support and counselling for families of children with severe developmental delays and intellectual or physical impairments. This service is available to both English speaking and Bangladeshi families, and is concerned with the needs of the whole family, not just the child. Professionals already working in this field are trained in counselling skills and then work in partnership with the families, attempting to develop a respectful, open relationship based upon active listening.

  13. On-Orbit Performance of the Helioseismic and Magnetic Imager Instrument onboard the Solar Dynamics Observatory

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.

    2018-03-01

    The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.

  14. A long-term risk-benefit analysis of low-dose aspirin in primary prevention.

    PubMed

    Wu, I-Chen; Hsieh, Hui-Min; Yu, Fang-Jung; Wu, Meng-Chieh; Wu, Tzung-Shiun; Wu, Ming-Tsang

    2016-02-01

    The long-term risk-benefit effect of occasional and regular use of low-dose aspirin (≤ 100 mg per day) in primary prevention of vascular diseases and cancers was calculated. One representative database of 1 000 000 participants from Taiwan's National Health Insurance scheme in 1997-2000 was used. The potential study subjects were those aged 30-95 years who had not been prescribed aspirin before 1 January 2000 but were first prescribed low-dose aspirin (≤ 100 mg per day) after that date, and they were followed up to 31 December 2009. Participants prescribed low-dose aspirin < 20% of the study period were considered occasional users and those prescribed ≥ 80% regular users. After propensity score matching, rate differences of haemorrhage, ischaemia and cancer between these users were calculated to obtain their net clinical risk. A total of 1720 pairs were analysed. During the study period, haemorrhage and ischaemia occurred in 25 (1·45%) and 67 (3·90%) occasional users and in 69 (4·01%) and 100 (5·81%) regular users, whereas cancer occurred in 32 (1·86%) occasional users and 26 (1·51%) regular users. The crude and adjusted net clinical risks of low-dose aspirin use between the two user groups (≥ 80% vs. < 20%) were 4·12% (95% CI = 2·19%, 6·07%; P < 0·001) and 3·93% (95% CI = 2·01%, 5·84%; P < 0·001). Long-term regular use of low-dose aspirin might not be better than occasional use in primary prevention against major vascular diseases and cancer. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.

  15. Multi-objective optimization of radiotherapy: distributed Q-learning and agent-based simulation

    NASA Astrophysics Data System (ADS)

    Jalalimanesh, Ammar; Haghighi, Hamidreza Shahabi; Ahmadi, Abbas; Hejazian, Hossein; Soltani, Madjid

    2017-09-01

    Radiotherapy (RT) is among the standard techniques for treating cancerous tumours, and many cancer patients are treated in this manner. Treatment planning is the most important phase in RT and plays a key role in achieving therapy quality. As the goal of RT is to irradiate the tumour with adequately high levels of radiation while sparing neighbouring healthy tissues as much as possible, it is naturally a multi-objective problem. In this study, we propose an agent-based model of vascular tumour growth and of the effects of RT. We then use a multi-objective distributed Q-learning (MDQ-learning) algorithm to find Pareto-optimal solutions for calculating a dynamic RT dose. We consider multiple objectives, and each group of optimizer agents attempts to optimize one of them iteratively. At the end of each iteration, the agents compromise among their solutions to shape the Pareto front of the multi-objective problem. We propose a new approach by defining three treatment-planning schemes based on different combinations of our objectives, namely invasive, conservative and moderate. In the invasive scheme, we emphasize killing cancer cells and pay less attention to irradiation effects on normal cells. In the conservative scheme, we take more care of normal cells and try to destroy cancer cells in a less aggressive manner. The moderate scheme stands in between. For implementation, each of these schemes is handled by one agent in the MDQ-learning algorithm, and the Pareto-optimal solutions are discovered through the collaboration of agents. By applying this methodology, we can reach Pareto treatment plans for different scenarios of tumour growth and RT. The proposed multi-objective optimization algorithm generates robust solutions and finds the best treatment plan for different conditions.
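
    The following is a minimal, illustrative sketch (not the authors' implementation) of the idea described above: one tabular Q-learning agent per treatment scheme, each optimizing a different scalarization of the tumour-kill and tissue-damage objectives, with the non-dominated outcomes kept as an approximate Pareto front. The toy dose-response dynamics and all parameter values are assumptions made for this example.

      import random

      ACTIONS = [0.0, 1.0, 2.0, 3.0]   # candidate dose fractions (hypothetical units)
      STEPS, EPISODES = 5, 3000

      def simulate(policy):
          """Roll out a fixed dose schedule; return (tumour kill, tissue damage)."""
          tumour, damage = 100.0, 0.0
          for dose in policy:
              tumour -= 8.0 * dose          # toy linear tumour-kill model
              damage += 0.5 * dose * dose   # toy quadratic normal-tissue cost
          return 100.0 - max(tumour, 0.0), damage

      def train(weight, eps=0.1, alpha=0.2):
          """Tabular Q-learning on the scalarized reward weight*kill - (1-weight)*damage."""
          Q = {(t, a): 0.0 for t in range(STEPS) for a in ACTIONS}
          for _ in range(EPISODES):
              policy = [random.choice(ACTIONS) if random.random() < eps
                        else max(ACTIONS, key=lambda a, t=t: Q[(t, a)])
                        for t in range(STEPS)]
              kill, damage = simulate(policy)
              reward = weight * kill - (1.0 - weight) * damage
              for t, a in enumerate(policy):        # Monte-Carlo-style value update
                  Q[(t, a)] += alpha * (reward - Q[(t, a)])
          return [max(ACTIONS, key=lambda a, t=t: Q[(t, a)]) for t in range(STEPS)]

      # One agent per scheme: invasive, moderate, conservative objective weightings.
      outcomes = {w: simulate(train(w)) for w in (0.9, 0.5, 0.2)}
      # Keep only non-dominated outcomes: the approximate Pareto front.
      pareto = {w: o for w, o in outcomes.items()
                if not any(p[0] >= o[0] and p[1] <= o[1] and p != o
                           for p in outcomes.values())}
      print(pareto)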

  16. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    NASA Astrophysics Data System (ADS)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

    Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with convergence rate O(k^{3-α}), 0 < α < 1, by directly approximating the integer-order derivative with finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time-fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by the energy method that the corresponding numerical method for solving the time-fractional partial differential equation has convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time-fractional partial differential equation has low regularity, and in this case the numerical method fails to have convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain an approximation scheme for the Riemann-Liouville fractional derivative with convergence rate O(k^{3-α}), 0 < α < 1, similar to that in Gao et al. [11] (2014), by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time-fractional partial differential equation and show, using Laplace transform methods, that the time discretization scheme has convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data, in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
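
    For reference, the two fractional derivatives discussed above have the standard definitions (quoted from the general literature, not reproduced from the paper) for 0 < α < 1:

      {}^{C}D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\, \mathrm{d}s \qquad \text{(Caputo)},

      {}^{RL}D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d}t} \int_0^t (t-s)^{-\alpha}\, u(s)\, \mathrm{d}s \qquad \text{(Riemann-Liouville)};

    the difference is whether the integer-order derivative acts inside or outside the memory integral, which is why a quadrature of the Hadamard finite-part integral can yield the same O(k^{3-α}) rate for the Riemann-Liouville form.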

  17. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Tsui, B; Noo, F

    Purpose: To develop a feature-preserving model based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed a MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The Senior Author receives financial support from Siemens GmbH Healthcare.
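
    As a rough illustration of the iterative-reweighting idea described above (a generic sketch, not the paper's curvature-based penalty): the non-convex regularizer is handled by freezing its weights at the previous iterate, so each outer iteration solves only a convex, linear subproblem. The 1-D signal, the weight rule and all parameters below are assumptions made for the example.

      import numpy as np

      def reweighted_denoise(y, lam=2.0, outer=10):
          """Iteratively reweighted quadratic denoising of a 1-D signal."""
          n = len(y)
          D = np.diff(np.eye(n), axis=0)          # first-difference operator
          x = y.copy()
          for _ in range(outer):
              g = D @ x
              w = 1.0 / (1.0 + np.abs(g))         # weights frozen at previous iterate
              # Convex subproblem: min ||x - y||^2 + lam * sum_i w_i * (Dx)_i^2
              A = np.eye(n) + lam * D.T @ (w[:, None] * D)
              x = np.linalg.solve(A, y)
          return x

      rng = np.random.default_rng(0)
      clean = np.concatenate([np.zeros(40), np.ones(40)])   # a sharp edge to preserve
      noisy = clean + 0.2 * rng.standard_normal(80)
      print(np.round(reweighted_denoise(noisy)[35:45], 2))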

  18. Disaster recovery plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, D.E.

    The BMS production implementation will be complete by October 1, 1998, and the server environment will comprise two types of platforms. The PassPort Supply and PeopleSoft Financials applications will reside on UNIX servers, and the PeopleSoft Human Resources and Payroll applications will reside on Microsoft NT servers. Because of the wide scope and the requirement that the COTS products run in various environments, backup and recovery responsibilities are divided between two groups in Technical Operations. The Central Computer Systems Management group provides support for the UNIX/NT backup in the Data Center, and the Network Infrastructure Systems group provides support for the NT application-server backup outside the Data Center. The disaster recovery process is dependent on a good backup and recovery process. Information and integrated-system data for determining the disaster recovery process are identified from the Fluor Daniel Hanford (FDH) Risk Assessment Plan, Contingency Plan, Backup and Recovery Plan, and Backup Form for HANDI 2000 BMS.

  19. When Foreign Domestic Helpers Care for and About Older People in Their Homes: I Am a Maid or a Friend

    PubMed Central

    Chiang, Vico C. L.; Leung, Doris; Ku, Ben H. B.

    2018-01-01

    We examine the lived experiences of foreign domestic helpers (FDHs) working with community-dwelling older people in Hong Kong. Unstructured interviews were conducted with 11 female FDHs and thematically analyzed. The theme "inescapable functioning commodity" represented the embodied commodification of FDHs as functional for older people in home care. Another theme, "destined reciprocity of companionship", highlighted the FDHs' capacity to commit to home care and to be concerned about older people. The waxing and waning of the possibilities of commodified companionship indicated the intermittent capacity of FDHs to find meaning in their care, in which performative, task-focused care and emotional engagement took turns as the focus of migrant home care. This study addresses the transition of FDHs from task-oriented workers to companions of older people through care work. The discussion draws on the development of a kin-like relationship between FDHs and older people, with emotional reciprocity grounded in moral values. PMID:29404382

  20. Establishing Mathematical Equations and Improving the Production of L-tert-Leucine by Uniform Design and Regression Analysis.

    PubMed

    Jiang, Wei; Xu, Chao-Zhen; Jiang, Si-Zhi; Zhang, Tang-Duo; Wang, Shi-Zhen; Fang, Bai-Shan

    2017-04-01

    L-tert-Leucine (L-Tle) and its derivatives are extensively used as crucial building blocks for chiral auxiliaries, pharmaceutically active ingredients, and ligands. Combined with formate dehydrogenase (FDH) to regenerate the expensive coenzyme NADH, leucine dehydrogenase (LeuDH) is widely used for synthesizing L-Tle from the corresponding α-keto acid. A multilevel factorial experimental design was executed to study this system. In this work, an efficient optimization method for improving the productivity of L-Tle was developed, and the mathematical model relating the different fermentation conditions to L-Tle yield was determined in equation form by using uniform design and regression analysis. The multivariate regression equation was conveniently implemented in water, with a space-time yield of 505.9 g L^-1 day^-1 and an enantiomeric excess value of >99%. These results demonstrate that this method might become an ideal protocol for the industrial production of chiral compounds and unnatural amino acids such as chiral drug intermediates.

  1. A continuous system for biocatalytic hydrogenation of CO2 to formate.

    PubMed

    Mourato, Cláudia; Martins, Mónica; da Silva, Sofia M; Pereira, Inês A C

    2017-07-01

    In this work a novel bioprocess for the hydrogenation of CO2 to formate was developed, using whole-cell catalysis by a sulfate-reducing bacterium. Three Desulfovibrio species were tested (D. vulgaris Hildenborough, D. alaskensis G20, and D. desulfuricans ATCC 27774), of which D. desulfuricans showed the highest activity, producing 12 mM of formate in batch, with a production rate of 0.09 mM h^-1. Gene expression analysis indicated that among the three formate dehydrogenases and five hydrogenases, the cytoplasmic FdhAB and the periplasmic [FeFe] HydAB are the main enzymes expressed in D. desulfuricans in these conditions. The new bioprocess for continuous formate production by D. desulfuricans had a maximum specific formate production rate of 14 mM g_dcw^-1 h^-1, and more than 45 mM of formate were obtained with a production rate of 0.40 mM h^-1. This is the first report of a continuous process for biocatalytic formate production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Cloning of the Arabidopsis and Rice Formaldehyde Dehydrogenase Genes: Implications for the Origin of Plant Adh Enzymes

    PubMed Central

    Dolferus, R.; Osterman, J. C.; Peacock, W. J.; Dennis, E. S.

    1997-01-01

    This article reports the cloning of the genes encoding the Arabidopsis and rice class III ADH enzymes, members of the alcohol dehydrogenase or medium chain reductase/dehydrogenase superfamily of proteins with glutathione-dependent formaldehyde dehydrogenase activity (GSH-FDH). Both genes contain eight introns in exactly the same positions, and these positions are conserved in plant ethanol-active Adh genes (class P). These data provide further evidence that plant class P genes have evolved from class III genes by gene duplication and acquisition of new substrate specificities. The position of introns and similarities in the nucleic acid and amino acid sequences of the different classes of ADH enzymes in plants and humans suggest that plant and animal class III enzymes diverged before they duplicated to give rise to plant and animal ethanol-active ADH enzymes. Plant class P ADH enzymes have gained substrate specificities and evolved promoters with different expression properties, in keeping with their metabolic function as part of the alcohol fermentation pathway. PMID:9215914

  3. Differential Activation of Fast-Spiking and Regular-Firing Neuron Populations During Movement and Reward in the Dorsal Medial Frontal Cortex

    PubMed Central

    Insel, Nathan; Barnes, Carol A.

    2015-01-01

    The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context and firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state. PMID:24700585

  4. Pseudospectral method for gravitational wave collapse

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Weyhausen, Andreas; Brügmann, Bernd

    2016-03-01

    We present a new pseudospectral code, bamps, for numerical relativity written with the evolution of collapsing gravitational waves in mind. We employ the first-order generalized harmonic gauge formulation. The relevant theory is reviewed, and the numerical method is critically examined and specialized for the task at hand. In particular, we investigate formulation parameters—gauge- and constraint-preserving boundary conditions well suited to nonvanishing gauge source functions. Different types of axisymmetric twist-free moment-of-time-symmetry gravitational wave initial data are discussed. A treatment of the axisymmetric apparent horizon condition is presented with careful attention to regularity on axis. Our apparent horizon finder is then evaluated in a number of test cases. Moving on to evolutions, we investigate modifications to the generalized harmonic gauge constraint damping scheme to improve conservation in the strong-field regime. We demonstrate strong-scaling of our pseudospectral penalty code. We employ the Cartoon method to efficiently evolve axisymmetric data in our 3 +1 -dimensional code. We perform test evolutions of the Schwarzschild spacetime perturbed by gravitational waves and by gauge pulses, both to demonstrate the use of our black-hole excision scheme and for comparison with earlier results. Finally, numerical evolutions of supercritical Brill waves are presented to demonstrate durability of the excision scheme for the dynamical formation of a black hole.

  5. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method, introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods that are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project, using constraints from the envelope attribute of the COSC reflection seismic profile (CSP), helped to reduce the uncertainty in the interpretation of the main décollement. Thus, the new model supports the proposed location of the future borehole COSC-2, which is planned to penetrate the main décollement and the underlying Precambrian basement.
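
    Schematically, such a seismically weighted inversion minimizes an objective of the generic form (our notation, not the authors' exact formulation):

      \Phi(\mathbf{m}) = \|\mathbf{W}_d\,(\mathbf{d} - F(\mathbf{m}))\|^2 + \lambda \left( \|\mathbf{w}_x \odot \partial_x \mathbf{m}\|^2 + \|\mathbf{w}_z \odot \partial_z \mathbf{m}\|^2 \right),

    where F is the MT forward operator, W_d weights the data misfit, and the local weight vectors w_x and w_z are reduced where the directional gradients of the seismic envelope are strong, so that smoothing is relaxed across imaged reflectors.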

  6. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.

  7. Promoting older peoples' participation in activity, whose responsibility? A case study of the response of health, local government and voluntary organizations.

    PubMed

    Goodman, C; Davies, S; Tai, S See; Dinan, S; Iliffe, S

    2007-10-01

    The benefits for older people of participating in regular activity are well documented. This paper focuses on how publicly funded community-based organizations enable older people to engage in physical activity. The research questions were: (i) What activity-promotion schemes/initiatives exist for older people? (ii) Who has responsibility for them, how are they funded and organized, and what evidence exists of interagency working? (iii) Who are the older people that participate? (iv) What are the perceived and measurable outcomes of the initiatives identified? To establish the type and range of provision for older people in a sector of London, the strategies and information about existing activity-promoting schemes of inner-city health, local government and voluntary organizations were reviewed. Key informants were then interviewed to establish the rationale for and achievements of the different schemes. One hundred and nine activity-promoting initiatives for older people were identified. Most were provided within an environment of short-term funding and organizational upheaval and reflected eclectic theoretical and ideological approaches. The findings demonstrate: (i) the need for organizations to apply evidence about what attracts and sustains older people's participation in physical activity, and (ii) the need to develop funded programmes that build on past achievements, have explicit outcomes and exploit opportunities for cross-agency working.

  8. A divergence-cleaning scheme for cosmological SPMHD simulations

    NASA Astrophysics Data System (ADS)

    Stasyszyn, F. A.; Dolag, K.; Beck, A. M.

    2013-01-01

    In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from ∇·B-related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical ∇·B errors at small scales.
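
    For orientation, the Dedner et al. cleaning couples a scalar field ψ to the induction equation; in its standard continuum form (quoted from the general literature, not the paper's SPH discretization) it reads:

      \frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}) - \nabla\psi, \qquad \frac{\partial \psi}{\partial t} = -c_h^2\,\nabla\cdot\mathbf{B} - \frac{c_h^2}{c_p^2}\,\psi,

    so that divergence errors are carried away as waves at the speed c_h (hyperbolic part) and damped on a timescale c_p^2/c_h^2 (parabolic part).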

  9. Sparse Learning with Stochastic Composite Optimization.

    PubMed

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions in the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to limitations in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme, adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove its effectiveness. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in sparse learning ability while improving the high-probability bound to approximately O(log(log(T)/δ)/λT).

  10. E2 transition probabilities for decays of isomers observed in neutron-rich odd Sn isotopes

    DOE PAGES

    Iskra, Ł. W.; Broda, R.; Janssens, R. V.F.; ...

    2015-01-01

    High-spin states were investigated with gamma coincidence techniques in neutron-rich Sn isotopes produced in fission processes following ⁴⁸Ca + ²⁰⁸Pb, ⁴⁸Ca + ²³⁸U, and ⁶⁴Ni + ²³⁸U reactions. By exploiting delayed and cross-coincidence techniques, level schemes have been delineated in odd ¹¹⁹⁻¹²⁵Sn isotopes. Particular attention was paid to the occurrence of 19/2⁺ and 23/2⁺ isomeric states, for which the available information has now been significantly extended. Reduced transition probabilities, B(E2), extracted from the measured half-lives and the established details of the isomeric decays exhibit a striking regularity. This behavior was compared with the previously observed regularity of the B(E2) amplitudes for the seniority ν = 2 and 3, 10⁺ and 27/2⁻ isomers in even- and odd-Sn isotopes, respectively.

  11. Structural Health Monitoring for a Z-Type Special Vehicle

    PubMed Central

    Yuan, Chaolin; Ren, Liang; Li, Hongnan

    2017-01-01

    Nowadays there exist various kinds of special vehicles designed for particular purposes, which differ from regular vehicles in overall dimensions and design. For such vehicles, accidents such as overturning can lead to large economic losses and casualties. There are still no technical specifications to follow to ensure the safe operation and driving of these special vehicles. Owing to the poor efficiency of regular maintenance, it is more feasible and effective to apply real-time monitoring during operation and driving. In this paper, fiber Bragg grating (FBG) sensors are used to monitor the safety of a z-type special vehicle. Based on the structural features and force distribution, a reasonable structural health monitoring (SHM) scheme is presented. Comparing the monitoring results with finite element simulation results guarantees the accuracy and reliability of the monitoring results. Large amounts of data were collected during the operation and driving process to evaluate the structural safety condition and to provide a reference for SHM systems developed for other special vehicles. PMID:28587161

  12. Measuring and Modeling the Growth Dynamics of Self-Catalyzed GaP Nanowire Arrays.

    PubMed

    Oehler, Fabrice; Cattoni, Andrea; Scaccabarozzi, Andrea; Patriarche, Gilles; Glas, Frank; Harmand, Jean-Christophe

    2018-02-14

    The bottom-up fabrication of regular nanowire (NW) arrays on a masked substrate is technologically relevant, but the growth dynamics are rather complex due to the superposition of severe shadowing effects that vary with array pitch, NW diameter, NW height, and growth duration. By inserting GaAsP marker layers at regular time intervals during the growth of a self-catalyzed GaP NW array, we are able to retrieve precisely the time evolution of the diameter and height of a single NW. We then propose a simple numerical scheme which fully computes the shadowing effects at play in infinite arrays of NWs. By confronting the simulated and experimental results, we infer that re-emission of Ga from the mask is necessary to sustain the NW growth, while Ga migration on the mask must be negligible. When compared to random cosine or random uniform re-emission from the mask, the simple case of specular reflection on the mask gives the most accurate account of the Ga balance during growth.

  13. Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.

    PubMed

    Wang, Xiudong; Gu, Yuantao

    2017-05-10

    This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (column in the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. With the cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.

  14. Regularized Stokeslet representations for the flow around a human sperm

    NASA Astrophysics Data System (ADS)

    Ishimoto, Kenta; Gadelha, Hermes; Gaffney, Eamonn; Smith, David; Kirkman-Brown, Jackson

    2017-11-01

    The sperm flagellum does not simply push the sperm. We have established a new theoretical scheme for the dimensional reduction of swimming-sperm dynamics, via high-frame-rate digital microscopy of a swimming human sperm cell. This has allowed the reconstruction of the flagellar waveform as a limit cycle in a phase space of PCA modes. With this waveform, boundary element numerical simulation has successfully captured fine-scale sperm swimming trajectories. Further analyses of the flow field around the cell have also demonstrated a pusher-type time-averaged flow, though the instantaneous flow field can temporarily vary in a more complicated manner, even pulling the sperm. Applying PCA to the flow field, we have further found that a small number of PCA modes explain the temporal patterns of the flow, whose core features are well approximated by a few regularized Stokeslets. Such representations provide a methodology for coarse-graining the time-dependent flow around a human sperm and other flagellated microorganisms, for use in developing population-level models that retain individual cell dynamics.

  15. Huygens' optical vector wave field synthesis via in-plane electric dipole metasurface.

    PubMed

    Park, Hyeonsoo; Yun, Hansik; Choi, Chulsoo; Hong, Jongwoo; Kim, Hwi; Lee, Byoungho

    2018-04-16

    We investigate Huygens' optical vector wave field synthesis scheme for electric dipole metasurfaces with the capability of modulating in-plane polarization and complex amplitude and discuss the practical issues involved in realizing multi-modulation metasurfaces. The proposed Huygens' vector wave field synthesis scheme identifies the vector Airy disk as a synthetic unit element and creates a designed vector optical field by integrating polarization-controlled and complex-modulated Airy disks. The metasurface structure for the proposed vector field synthesis is analyzed in terms of the signal-to-noise ratio of the synthesized field distribution. The design of practical metasurface structures with true vector modulation capability is possible through the analysis of the light field modulation characteristics of various complex modulated geometric phase metasurfaces. It is shown that the regularization of meta-atoms is a key factor that needs to be considered in field synthesis, given that it is essential for a wide range of optical field synthetic applications, including holographic displays, microscopy, and optical lithography.

  16. Variational discretization of the nonequilibrium thermodynamics of simple systems

    NASA Astrophysics Data System (ADS)

    Gay-Balmaz, François; Yoshimura, Hiroaki

    2018-04-01

    In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by a discretization of the Lagrangian variational formulation of nonequilibrium thermodynamics developed in Gay-Balmaz and Yoshimura (2017a J. Geom. Phys. 111 169-93; 2017b J. Geom. Phys. 111 194-212) and thus extend the variational integrators of Lagrangian mechanics to include irreversible processes. In the continuous setting, we derive the structure-preserving property of the flow of such systems. This property is an extension of the symplectic property of the flow of the Euler-Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. We finally illustrate our discrete variational schemes with the implementation of an example of a simple closed system.

  17. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists of reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  18. Compile-time estimation of communication costs in multicomputers

    NASA Technical Reports Server (NTRS)

    Gupta, Manish; Banerjee, Prithviraj

    1991-01-01

    An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile-time as functions of the numbers of processors over which various arrays are distributed. A strategy is described along with its theoretical basis, for making program transformations that expose opportunities for combining of messages, leading to considerable savings in the communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.

  19. Combined neural network/Phillips-Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    NASA Astrophysics Data System (ADS)

    Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
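
    Schematically, each iteration of a Phillips-Tikhonov retrieval minimizes a cost of the generic form (our notation; the exact operator choices in the paper may differ):

      \chi^2(\mathbf{x}) = \left(\mathbf{y} - F(\mathbf{x})\right)^{\mathsf T} \mathbf{S}_y^{-1} \left(\mathbf{y} - F(\mathbf{x})\right) + \gamma\, \|\boldsymbol{\Gamma}\,\mathbf{x}\|^2,

    where F is the forward radiative-transfer model, S_y the measurement error covariance and Γ a discrete smoothing operator; in the combined scheme, the neural-network retrieval supplies the starting point x_0 of the iteration, which is what reduces processing time and increases the number of converging retrievals.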

  20. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for the multichannel framework, which determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization, together with a cell-centered finite difference discretization scheme, is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.

  1. A splitting algorithm for a novel regularization of Perona-Malik and application to image restoration

    NASA Astrophysics Data System (ADS)

    Karami, Fahd; Ziad, Lamia; Sadik, Khadija

    2017-12-01

    In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator with p → ∞. In Atlas et al. (Nonlinear Anal. Real World Appl. 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, in that work, they used an explicit numerical scheme for the approximated problem which is strongly dependent on the parameter p. To overcome this, we use an efficient algorithm which combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results on image filtering which demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the previous scheme presented in Atlas et al. (2014).
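
    For context, the classical additive operator splitting (AOS) step that the hybrid algorithm builds on advances the image u with a semi-implicit update (standard form from the diffusion-filtering literature; the nonlinear relaxation part of the authors' algorithm is not shown):

      u^{n+1} = \frac{1}{m} \sum_{l=1}^{m} \left( I - m\,\tau\, A_l(u^{n}) \right)^{-1} u^{n},

    where m is the number of coordinate axes, τ the time step and A_l(u^n) the one-dimensional diffusion matrix along axis l evaluated at the previous iterate; each factor is tridiagonal, so the update costs only linear time per axis and remains stable for large τ.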

  2. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur point spread function (PSF) in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur point spread function effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur-PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges through adequately thresholding the cumulative distribution of their corresponding gradient magnitudes is introduced. Besides, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the ℓ1-norm) of the blur-PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples of classified areas of a real-life hyperspectral image, in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each of a different support size. Conclusions, practical recommendations and perspectives are drawn from the results obtained experimentally.

  3. Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.

    PubMed

    Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata

    2008-09-01

    A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
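
    For reference, the coupled system being solved has the standard Poisson-Nernst-Planck form (general-literature notation, not the paper's):

      \nabla\cdot\left(\varepsilon\,\nabla\phi\right) = -\sum_i z_i e\, c_i - \rho_{\mathrm{fixed}}, \qquad \frac{\partial c_i}{\partial t} = \nabla\cdot D_i \left( \nabla c_i + \frac{z_i e\, c_i}{k_B T}\, \nabla\phi \right),

    where φ is the electrostatic potential, c_i, z_i and D_i are the concentration, valence and diffusivity of ion species i, and ρ_fixed is the fixed charge of the channel; the steady-state ionic flux through the channel follows from the converged solution.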

  4. Communication requirements of sparse Cholesky factorization with nested dissection ordering

    NASA Technical Reports Server (NTRS)

    Naik, Vijay K.; Patrick, Merrell L.

    1989-01-01

    Load distribution schemes for minimizing the communication requirements of the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems are presented. The total data traffic in factoring an n × n sparse symmetric positive definite matrix representing an n-vertex regular two-dimensional grid graph using n^α (α ≤ 1) processors is shown to be O(n^{1+α/2}). It is O(n) when n^α (α ≥ 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal.

  5. An ultra-wideband microwave tomography system: preliminary results.

    PubMed

    Gilmore, Colin; Mojabi, Puyan; Zakaria, Amer; Ostadrahimi, Majid; Kaye, Cam; Noghanian, Sima; Shafai, Lotfollah; Pistorius, Stephen; LoVetri, Joe

    2009-01-01

    We describe a 2D wide-band multi-frequency microwave imaging system intended for biomedical imaging. The system is capable of collecting data from 2-10 GHz, with 24 antenna elements connected to a vector network analyzer via a 2 x 24 port matrix switch. Through the use of two different nonlinear reconstruction schemes, the Multiplicative-Regularized Contrast Source Inversion method and an enhanced version of the Distorted Born Iterative Method, we show preliminary imaging results from dielectric phantoms where data were collected from 3-6 GHz. The early inversion results show that the system is capable of quantitatively reconstructing dielectric objects.

  6. Fate of superconductivity in three-dimensional disordered Luttinger semimetals

    NASA Astrophysics Data System (ADS)

    Mandal, Ipsita

    2018-05-01

    Superconducting instability can occur in three-dimensional quadratic band crossing semimetals only at a finite coupling strength, due to the vanishing of the density of states at the quadratic band touching point. Since realistic materials are always disordered to some extent, we study the effect of short-range-correlated disorder on this superconducting quantum critical point using a controlled loop expansion with dimensional regularization. The renormalization group (RG) scheme allows us to determine the RG flows of the various interaction strengths and shows that disorder destroys the superconducting quantum critical point. In fact, the system exhibits a runaway flow to strong disorder.

  7. Characterizing the functional MRI response using Tikhonov regularization.

    PubMed

    Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E

    2007-09-20

    The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block-design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization-parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd.
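
    As a simplified illustration of choosing a Tikhonov regularization parameter by generalized cross-validation: the sketch below uses a second-difference roughness penalty standing in for the paper's B-spline expansion, and the synthetic response shape and all parameters are assumptions made for the example.

      import numpy as np

      def gcv_smooth(y, lams):
          """Tikhonov smoothing; pick the penalty weight by the GCV score."""
          n = len(y)
          D = np.diff(np.eye(n), n=2, axis=0)            # second-difference penalty
          best = None
          for lam in lams:
              H = np.linalg.inv(np.eye(n) + lam * D.T @ D)   # hat matrix: fit = H @ y
              r = y - H @ y
              gcv = n * (r @ r) / (n - np.trace(H)) ** 2     # GCV(lam)
              if best is None or gcv < best[0]:
                  best = (gcv, lam, H @ y)
          return best

      t = np.linspace(0.0, 30.0, 120)
      hrf = (t / 6.0) ** 2 * np.exp(-t / 3.0)            # toy hemodynamic-response shape
      rng = np.random.default_rng(1)
      y = hrf + 0.05 * rng.standard_normal(t.size)
      score, lam, fit = gcv_smooth(y, [0.1, 1.0, 10.0, 100.0])
      print(f"GCV-selected lambda = {lam}, score = {score:.4f}")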

  8. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce the dimensionality of the optimization, and the basis-function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization in which β was exhaustively optimized locally and interpolated to form a spatially varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  9. Break-even cost of cloning in genetic improvement of dairy cattle.

    PubMed

    Dematawewa, C M; Berger, P J

    1998-04-01

    Twelve different models for alternative progeny-testing schemes based on genetic and economic gains were compared. The first 10 alternatives were considered to be optimally operating progeny-testing schemes. Alternatives 1 to 5 considered the following combinations of technologies: 1) artificial insemination, 2) artificial insemination with sexed semen, 3) artificial insemination with embryo transfer, 4) artificial insemination and embryo transfer with few bulls as sires, and 5) artificial insemination, embryo transfer, and sexed semen with few bulls, respectively. Alternatives 6 to 12 considered cloning from dams. Alternatives 11 and 12 considered a regular progeny-testing scheme that had selection gains (intensity x accuracy x genetic standard deviation) of 890, 300, 600, and 89 kg, respectively, for the four paths. The sums of the generation intervals of the four paths were 19 yr for the first 8 alternatives and 19.5, 22, 29, and 29.5 yr for alternatives 9 to 12, respectively. Rates of genetic gain in milk yield for alternatives 1 to 5 were 257, 281, 316, 327, and 340 kg/yr, respectively. The rate of gain for other alternatives increased as number of clones increased. The use of three records per clone increased both accuracy and generation interval of a path. Cloning was highly beneficial for progeny-testing schemes with lower intensity and accuracy of selection. The discounted economic gain (break-even cost) per clone was the highest ($84) at current selection levels using sexed semen and three records on clones of the dam. The total cost associated with cloning has to be below $84 for cloning to be an economically viable option.

  10. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

    Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database, consisting of audio and/or visual data for a number of suspects, wanted persons or missing persons who are expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.

  11. Kaon BSM B -parameters using improved staggered fermions from N f = 2 + 1 unquenched QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Benjamin J.

    2016-01-28

    In this paper, we present results for the matrix elements of the additional ΔS = 2 operators that appear in models of physics beyond the Standard Model (BSM), expressed in terms of four BSM B-parameters. Combined with experimental results for ΔM_K and ε_K, these constrain the parameters of BSM models. We use improved staggered fermions, with valence hypercubic blocking transformation (HYP)-smeared quarks and N_f = 2 + 1 flavors of "asqtad" sea quarks. The configurations have been generated by the MILC Collaboration. The matching between lattice and continuum four-fermion operators and bilinears is done perturbatively at one-loop order. We use three lattice spacings for the continuum extrapolation: a ≈ 0.09, 0.06 and 0.045 fm. Valence light-quark masses range down to ≈ m_s^phys/13, while the light sea-quark masses range down to ≈ m_s^phys/20. Compared to our previous published work, we have added four additional lattice ensembles, leading to better-controlled extrapolations in the lattice spacing and sea-quark masses. We report final results for two renormalization scales, μ = 2 and 3 GeV, and compare them to those obtained by other collaborations. Agreement is found for two of the four BSM B-parameters (B_2 and B_3^SUSY). The other two (B_4 and B_5) differ significantly from those obtained using regularization independent momentum subtraction (RI-MOM) renormalization as an intermediate scheme, but are in agreement with recent preliminary results obtained by the RBC-UKQCD Collaboration using regularization independent symmetric momentum subtraction (RI-SMOM) intermediate schemes.

  12. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point by point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion are often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining good spatial resolution. First, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed-sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays.
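
    The TV-with-Bregman-iterations strategy mentioned above follows a standard pattern (schematic form from the general literature; here A denotes the photoacoustic forward operator and f the sub-sampled data):

      x^{k+1} = \arg\min_x \tfrac{1}{2}\,\|A x - f^{k}\|^2 + \lambda\,\mathrm{TV}(x), \qquad f^{k+1} = f^{k} + \left(f - A x^{k+1}\right), \qquad f^{0} = f,

    in which the residual is successively added back to the data, a device known to restore contrast that a strong TV penalty would otherwise remove.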

  13. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods for removing speckle usually operate on real-valued SAR images; they blur the edges of the images while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing the speckle while simultaneously enhancing the targets and edges remains an open problem. To suppress the speckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method for complex SAR images is presented, based on prior knowledge of the targets and edges. Because the cost function is non-quadratic, non-convex, and complicated, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternating optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
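
    The alternating structure of half-quadratic minimization can be illustrated on a 1D toy denoising problem: fixing the auxiliary weights makes the problem quadratic in the signal, and the weights are then refreshed from the current gradients. This numpy sketch uses a smoothed-L1 edge-preserving potential and multiplicative noise as stand-ins; the complex-valued SAR model and parameter choices of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant 1D signal (a stand-in for an image row) with
# speckle-like multiplicative noise.
n = 200
f_clean = np.concatenate([np.ones(70), 3 * np.ones(60), 1.5 * np.ones(70)])
f = f_clean * (1 + 0.15 * rng.standard_normal(n))

# Edge-preserving potential phi(t) = sqrt(t^2 + eps^2); the half-quadratic
# (multiplicative-form) auxiliary weights are w = phi'(t)/(2t).
D = np.diff(np.eye(n), axis=0)         # forward-difference operator, (n-1) x n
lam, eps = 2.0, 1e-3
u = f.copy()
for _ in range(30):
    w = 0.5 / np.sqrt((D @ u) ** 2 + eps ** 2)        # fix u, update weights
    A = np.eye(n) + 2 * lam * D.T @ (w[:, None] * D)  # fix w, solve for u
    u = np.linalg.solve(A, f)

print("residual noise std:", np.std(u - f_clean))
```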

  14. Global regularizing flows with topology preservation for active contours and polygons.

    PubMed

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner, acting long before the curve or polygon comes close to a change in topology. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.
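
    The global, all-pairs character of such an electrostatic flow is easy to sketch: each contour point feels a repulsion from every other point, which discourages distant parts of the curve from approaching each other. The inverse-square-style kernel below is an illustrative assumption; the paper derives its precise force law from an electrostatic energy.

```python
import numpy as np

# Electrostatic-style repulsion among contour points: every point is pushed
# away from all others, so the velocity of one point depends on the whole curve.
def repulsion_velocity(pts, eps=1e-6):
    diff = pts[:, None, :] - pts[None, :, :]      # pairwise difference vectors
    d2 = np.sum(diff ** 2, axis=-1) + eps
    np.fill_diagonal(d2, np.inf)                  # no self-interaction
    return np.sum(diff / d2[..., None], axis=1) / len(pts)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.column_stack([np.cos(theta), 0.3 * np.sin(theta)])  # thin ellipse
v = repulsion_velocity(contour)
print("max repulsion speed:", np.max(np.linalg.norm(v, axis=1)))
```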

  15. Recursive regularization step for high-order lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step considerably enhances the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
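
    As a point of reference for the projection the recursive procedure improves on, the sketch below implements the standard (non-recursive) regularized collision for a single D2Q9 cell: the non-equilibrium part of the populations is replaced by its second-order Hermite contribution before BGK relaxation. The flow values, relaxation time, and perturbation are arbitrary test inputs, and the recursive reconstruction of higher-order coefficients is deliberately omitted.

```python
import numpy as np

# D2Q9 lattice: velocities c_i and weights w_i; cs^2 = 1/3.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    cu = c @ u
    return rho * w * (1 + cu/cs2 + 0.5*(cu/cs2)**2 - 0.5*(u @ u)/cs2)

def regularize(f, rho, u):
    """Project f_neq onto its second-order Hermite contribution (standard
    regularization; the recursive variant also rebuilds higher orders)."""
    f_neq = f - equilibrium(rho, u)
    Pi = np.einsum('i,ia,ib->ab', f_neq, c, c)   # 2nd-order neq moment
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)  # Q_i = c_i c_i - cs^2 I
    return w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, Pi)

# One regularized BGK collision for a single perturbed cell.
tau = 0.6
f = equilibrium(1.0, np.array([0.05, 0.02]))
f += 1e-3 * np.random.default_rng(2).standard_normal(9)
rho_f = f.sum()
u_f = (f @ c) / rho_f                            # velocity from the moments
f_post = equilibrium(rho_f, u_f) + (1 - 1/tau) * regularize(f, rho_f, u_f)
print(f_post)
```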

  16. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  17. Approximate matching of regular expressions.

    PubMed

    Myers, E W; Miller, W

    1989-01-01

    Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N^2 log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
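
    The "simpler problem of aligning two fixed sequences" that sets the O(MN) baseline can be written down in a few lines. This sketch computes unit-cost edit distance in O(MN) time and O(N) space, mirroring the space-efficient scoring the abstract mentions; the regular-expression generalization additionally runs the DP over the states of an automaton for R.

```python
# Unit-cost edit distance between fixed sequences a and b:
# O(len(a) * len(b)) time, O(len(b)) space for the score alone.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        cur = [i]                           # cost of deleting i chars of a
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute / match
        prev = cur
    return prev[-1]

print(edit_distance("regular", "regimen"))  # -> 4
```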

  18. A distributed air index based on maximum boundary rectangle over grid-cells for wireless non-flat spatial data broadcast.

    PubMed

    Im, Seokjin; Choi, JinTak

    2014-06-17

    In the pervasive computing environment using smart devices equipped with various sensors, a wireless data broadcasting system for spatial data items is a natural way to efficiently provide a location-dependent information service, regardless of the number of clients. A non-flat wireless broadcast system can support the clients in accessing quickly their preferred data items by disseminating the preferred data items more frequently than regular data on the wireless channel. To efficiently support the processing of spatial window queries in a non-flat wireless data broadcasting system, we propose a distributed air index based on a maximum boundary rectangle (MaxBR) over grid-cells (abbreviated DAIM), which uses MaxBRs for filtering out hot data items on the wireless channel. Unlike the existing index that repeats regular data items in close proximity to hot items at the same frequency as hot data items in a broadcast cycle, DAIM makes it possible to repeat only hot data items in a cycle and reduces the length of the broadcast cycle. Consequently, DAIM helps the clients access the desired items quickly, improves the access time, and reduces energy consumption. In addition, a MaxBR helps the clients decide whether they have to access regular data items or not. Simulation studies show the proposed DAIM outperforms existing schemes with respect to the access time and energy consumption.

  19. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    NASA Astrophysics Data System (ADS)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; Stocks, G. Malcolm

    2018-03-01

    The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.

  2. Local Control Models of Cardiac Excitation–Contraction Coupling

    PubMed Central

    Stern, Michael D.; Song, Long-Sheng; Cheng, Heping; Sham, James S.K.; Yang, Huang Tian; Boheler, Kenneth R.; Ríos, Eduardo

    1999-01-01

    In cardiac muscle, release of activator calcium from the sarcoplasmic reticulum occurs by calcium-induced calcium release through ryanodine receptors (RyRs), which are clustered in a dense, regular, two-dimensional lattice array at the diad junction. We simulated numerically the stochastic dynamics of RyRs and L-type sarcolemmal calcium channels interacting via calcium nano-domains in the junctional cleft. Four putative RyR gating schemes based on single-channel measurements in lipid bilayers all failed to give stable excitation–contraction coupling, due either to insufficiently strong inactivation to terminate locally regenerative calcium-induced calcium release or insufficient cooperativity to discriminate against RyR activation by background calcium. If the ryanodine receptor was represented, instead, by a phenomenological four-state gating scheme, with channel opening resulting from simultaneous binding of two Ca2+ ions, and either calcium-dependent or activation-linked inactivation, the simulations gave a good semiquantitative accounting for the macroscopic features of excitation–contraction coupling. It was possible to restore stability to a model based on a bilayer-derived gating scheme, by introducing allosteric interactions between nearest-neighbor RyRs so as to stabilize the inactivated state and produce cooperativity among calcium binding sites on different RyRs. Such allosteric coupling between RyRs may be a function of the foot process and lattice array, explaining their conservation during evolution. PMID:10051521

  3. Anisotropic smoothing regularization (AnSR) in Thirion's Demons registration evaluates brain MRI tissue changes post-laser ablation.

    PubMed

    Hwuang, Eileen; Danish, Shabbar; Rusu, Mirabela; Sparks, Rachel; Toth, Robert; Madabhushi, Anant

    2013-01-01

    MRI-guided laser-induced interstitial thermal therapy (LITT) is a form of laser ablation and a potential alternative to craniotomy in treating glioblastoma multiforme (GBM) and epilepsy patients, but its effectiveness has yet to be fully evaluated. One way of assessing short-term treatment of LITT is by evaluating changes in post-treatment MRI as a measure of response. Alignment of pre- and post-LITT MRI in GBM and epilepsy patients via nonrigid registration is necessary to detect subtle localized treatment changes on imaging, which can then be correlated with patient outcome. A popular deformable registration scheme in the context of brain imaging is Thirion's Demons algorithm, but its flexibility often introduces artifacts without physical significance, which has conventionally been corrected by Gaussian smoothing of the deformation field. In order to prevent such artifacts, we instead present the Anisotropic smoothing regularizer (AnSR) which utilizes edge-detection and denoising within the Demons framework to regularize the deformation field at each iteration of the registration more aggressively in regions of homogeneously oriented displacements while simultaneously regularizing less aggressively in areas containing heterogeneous local deformation and tissue interfaces. In contrast, the conventional Gaussian smoothing regularizer (GaSR) uniformly averages over the entire deformation field, without carefully accounting for transitions across tissue boundaries and local displacements in the deformation field. In this work we employ AnSR within the Demons algorithm and perform pairwise registration on 2D synthetic brain MRI with and without noise after inducing a deformation that models shrinkage of the target region expected from LITT. We also applied Demons with AnSR for registering clinical T1-weighted MRI for one epilepsy and one GBM patient pre- and post-LITT. Our results demonstrate that by maintaining select displacements in the deformation field, AnSR outperforms both GaSR and no regularizer (NoR) in terms of normalized sum of squared differences (NSSD) with values such as 0.743, 0.807, and 1.000, respectively, for GBM.
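
    The contrast between GaSR and AnSR is essentially the contrast between isotropic and edge-stopping smoothing of the deformation field. The sketch below applies Perona-Malik-style anisotropic diffusion to a 1D displacement profile with a genuine interface jump; the kappa, time step, and iteration count are illustrative assumptions, not the paper's settings, and the real AnSR operates on 2D/3D Demons deformation fields.

```python
import numpy as np

def anisotropic_smooth(d, n_iter=50, kappa=0.1, dt=0.2):
    """Perona-Malik-style smoothing of a 1D displacement field: strong
    smoothing where gradients are small, weak smoothing across sharp
    transitions (a stand-in for AnSR's edge-aware regularization)."""
    d = d.copy()
    for _ in range(n_iter):
        g = np.diff(d)                          # forward differences
        w = np.exp(-(g / kappa) ** 2)           # edge-stopping weights
        flux = w * g
        d[1:-1] += dt * (flux[1:] - flux[:-1])  # discrete divergence
    return d

# Displacement field with a tissue-interface jump plus noise.
rng = np.random.default_rng(3)
d = np.concatenate([np.zeros(50), 0.8 * np.ones(50)])
d += 0.05 * rng.standard_normal(100)
print(anisotropic_smooth(d)[48:52])  # jump preserved, noise reduced
```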

  4. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results on the condition. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbations, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture process of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  5. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor network (WSN) into the emerging computing paradigm, e.g., cyber-physical social sensing (CPSS), has witnessed a growing interest, and WSN can serve as a social network while receiving more attention from the social computing research field. Then, the localization of sensor nodes has become an essential requirement for many applications over WSN. Meanwhile, the localization information of unknown nodes has strongly affected the performance of WSN. The received signal strength indication (RSSI) as a typical range-based algorithm for positioning sensor nodes in WSN could achieve accurate location with hardware saving, but is sensitive to environmental noises. Moreover, the original distance vector hop (DV-HOP) as an important range-free localization algorithm is simple, inexpensive and not related to the environment factors, but performs poorly when lacking anchor nodes. Motivated by these, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM featured with a fast learning speed with a good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) on the basis of ELM-RCC is used to implement the optimization task for obtaining the location of unknown nodes. Since the RSSI may be influenced by the environmental noises and may bring estimation error, the RCC instead of the mean square error (MSE) estimation, which is sensitive to noises, is exploited in ELM. Hence, it may make the estimation more robust against outliers. Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
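
    The ELM core of the scheme is compact enough to sketch: a random, untrained hidden layer followed by a closed-form ridge solve for the output weights. The toy regression data below is an assumption standing in for the DV-HOP/RSSI distance features; as noted in the comments, ELM-RCC would replace the plain MSE/ridge solve with a correntropy-based half-quadratic iteration that downweights outliers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression: map 2-D "distance estimates" to a 1-D target, in the
# spirit of using an SLFN/ELM to refine DV-HOP node positions.
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

# ELM: random hidden layer, closed-form (ridge) output weights. A plain MSE
# ridge solve is shown here; ELM-RCC would instead iterate a correntropy-
# weighted least squares via half-quadratic optimization.
n_hidden, reg = 50, 1e-3
W = rng.standard_normal((2, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                               # random feature map
beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)

pred = H @ beta
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```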

  6. 3D CSEM data inversion using Newton and Halley class methods

    NASA Astrophysics Data System (ADS)

    Amaya, M.; Hansen, K. R.; Morten, J. P.

    2016-05-01

    For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is either similar or slightly superior to that of the GN scheme, close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.

  7. Chains of benzenes with lithium-atom adsorption: Vibrations and spontaneous symmetry breaking

    NASA Astrophysics Data System (ADS)

    Ortiz, Yenni P.; Stegmann, Thomas; Klein, Douglas J.; Seligman, Thomas H.

    2017-09-01

    We study effects of different configurations of adsorbates on the vibrational modes as well as symmetries of polyacenes and poly-p-phenylenes focusing on lithium atom adsorption. We found that the spectra of the vibrational modes distinguish the different configurations. For more regular adsorption schemes the lowest states are bending and torsion modes of the skeleton, which are essentially followed by the adsorbate. On poly-p-phenylenes we found that lithium adsorption reduces and often eliminates the torsion between rings thus increasing symmetry. There is spontaneous symmetry breaking in poly-p-phenylenes due to double adsorption of lithium atoms on alternating rings.

  8. Pseudodynamic systems approach based on a quadratic approximation of update equations for diffuse optical tomography.

    PubMed

    Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish

    2011-08-01

    We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with the pseudotime integration appears to yield not only faster convergence but also a muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.
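
    The pseudodynamic idea, evolving the parameters in pseudo-time rather than solving the ill-posed update in one shot, can be illustrated with a linear toy problem where early stopping of the pseudo-time integration acts as the regularizer. The matrix A below is an arbitrary ill-conditioned stand-in for a DOT Jacobian; the quadratic (second-order) update and semianalytical integration of the paper are not reproduced.

```python
import numpy as np

# Pseudo-time integration of dp/dt = -grad J(p) for J(p) = 0.5*||A p - y||^2.
# Early stopping of the explicit Euler march plays the role of regularization.
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 40))
A[:, -1] *= 1e-4                      # make the problem poorly conditioned
p_true = rng.standard_normal(40)
y = A @ p_true + 0.01 * rng.standard_normal(40)

p = np.zeros(40)
dt = 0.1 / np.linalg.norm(A, 2) ** 2  # stable pseudo-time step
for _ in range(2000):                 # explicit Euler in pseudo-time
    p = p - dt * A.T @ (A @ p - y)    # the unstable mode stays near zero

print("relative error:", np.linalg.norm(p - p_true) / np.linalg.norm(p_true))
```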

  9. Simulation of forming a flat forging

    NASA Astrophysics Data System (ADS)

    Solomonov, K.; Tishchuk, L.; Fedorinin, N.

    2017-11-01

    The metal flow in several metal-shaping processes (rolling, pressing, die forging) obeys regularities that determine the deformation scheme in the upsetting of metal samples. The object of the study was the metal flow pattern, including the contour of the part, the demarcation lines of the metal flow, and the flow lines. We have created an algorithm for constructing the metal flow pattern, which is based on representing the metal flow demarcation line as an equidistant curve. Computer and physical simulation of the metal flow pattern with the help of various software systems confirms the suggested hypothesis.

  10. Baryon octet electromagnetic form factors in a confining NJL model

    NASA Astrophysics Data System (ADS)

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloët, Ian C.; Thomas, Anthony W.

    2016-08-01

    Electromagnetic form factors of the baryon octet are studied using a Nambu-Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that r_E^p < r_E^{Σ+} and |r_E^n| < |r_E^{Ξ0}|, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  11. Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.

    PubMed

    Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André

    2016-08-12

    Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.

  12. The theory and implementation of a high quality pulse width modulated waveform synthesiser applicable to voltage FED inverters

    NASA Astrophysics Data System (ADS)

    Lower, Kim Nigel

    1985-03-01

    Modulation processes associated with the digital implementation of pulse width modulation (PWM) switching strategies were examined. A software package based on a portable turnkey structure is presented. Waveform synthesizer implementation techniques are reviewed. A three phase PWM waveform synthesizer for voltage fed inverters was realized. It is based on a constant carrier frequency of 18 kHz and a regular sample, single edge, asynchronous PWM switching scheme. With high carrier frequencies, it is possible to utilize simple switching strategies and as a consequence, many advantages are highlighted, emphasizing the importance to industrial and office markets.

  13. High-order FDTD methods for transverse electromagnetic systems in dispersive inhomogeneous media.

    PubMed

    Zhao, Shan

    2011-08-15

    This Letter introduces a novel finite-difference time-domain (FDTD) formulation for solving transverse electromagnetic systems in dispersive media. Based on the auxiliary differential equation approach, the Debye dispersion model is coupled with Maxwell's equations to derive a supplementary ordinary differential equation for describing the regularity changes in electromagnetic fields at the dispersive interface. The resulting time-dependent jump conditions are rigorously enforced in the FDTD discretization by means of the matched interface and boundary scheme. High-order convergences are numerically achieved for the first time in the literature in the FDTD simulations of dispersive inhomogeneous media.

  14. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  15. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  17. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) methods into iterative reconstruction (IR) methods, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and improves image quality relative to both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  18. Group-regularized individual prediction: theory and application to pain.

    PubMed

    Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D

    2017-01-15

    Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study.
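
    The variance-based weighting that GRIP motivates reduces, in its simplest form, to precision (inverse-variance) weighting of the two predictions. The sketch below is that minimal case with made-up numbers; the paper derives the weights from the estimated variances of the biomarker-based and cross-validated individual predictions.

```python
import numpy as np

def grip_combine(pred_pop, pred_ind, var_pop, var_ind):
    """Precision-weighted blend of a population-level biomarker prediction
    with an individual's cross-validated prediction, in the spirit of GRIP:
    the noisier source receives the smaller weight."""
    w_ind = var_pop / (var_pop + var_ind)
    return w_ind * pred_ind + (1 - w_ind) * pred_pop

# Example: the individual map is person-specific but noisy (little data);
# the population biomarker is stable but less tailored.
print(grip_combine(pred_pop=5.2, pred_ind=6.1, var_pop=0.4, var_ind=1.6))
# -> 0.2 * 6.1 + 0.8 * 5.2 = 5.38
```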

  19. Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

    PubMed Central

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-01-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268

  20. Computer-aided classification of breast masses using contrast-enhanced digital mammograms

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin

    2018-02-01

    By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a new promising imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme of CEDM images to classify between malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers integrated with a correlation-based feature subset evaluator and the leave-one-case-out cross-validation method were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentation of DES and LE images, the areas under the ROC curves were 0.7585+/-0.0526 and 0.7534+/-0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477+/-0.0376 (p<0.01). Since DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy was significantly improved compared to regular mammograms, and the study demonstrated that computer-aided classification of breast masses using CEDM images yields higher performance.
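
    A minimal scikit-learn sketch of the evaluation loop described here, an MLP classifier scored with leave-one-case-out cross-validation, is shown below on synthetic features with the study's class sizes (33 benign, 78 malignant). The feature values, network size, and the omission of the correlation-based feature selection step are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

# Synthetic stand-in for 109 shape/heterogeneity features of 111 masses.
X = rng.standard_normal((111, 109))
y = np.r_[np.zeros(33), np.ones(78)].astype(int)   # 33 benign, 78 malignant
X[y == 1] += 0.25                                  # weak class signal

# Leave-one-case-out cross-validation of an MLP classifier.
scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                        random_state=0)
    clf.fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]

print("LOO AUC:", roc_auc_score(y, scores))
```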

  1. The Caspian Sea water dynamics based on satellite imagery and altimetry

    NASA Astrophysics Data System (ADS)

    Kostianoy, Andrey G.; Lebedev, Sergey

    The Caspian Sea water dynamics is poorly known due to a lack of special hydrographic measurements. The known schemes of general circulation of the sea proposed by N.M. Knipovich in 1914-1915 and 1921, A.I. Mikhalevskiy (1931), G.N. Zaitsev (1935) and V.N. Zenin (1942) represent the basin-scale cyclonic gyres in the Middle and Southern Caspian, and give no clear scheme for the shallow Northern Caspian. Later numerical models could move forward from these simple circulation schemes to more detailed seasonal or climatic schemes of currents, but different approaches and models give results which significantly differ from each other (Trukhchev et al., 1995; Ibrayev et al., 2003, 2010; Popov, 2004, 2009; Knysh et al., 2008). Satellite monitoring of the Caspian Sea, which we have performed since 2000, is a useful tool for investigation of water dynamics in the Caspian Sea. To determine mesoscale water structure and dynamics, we used different kinds of physical (SST and ice), chemical (suspended matter and water turbidity) and biological (chlorophyll concentration and algal bloom) tracers on satellite imagery. Satellite altimetry (sea level anomalies in combination with the mean dynamic level derived from numerical modeling) provides fields of currents in the whole Caspian Sea on a regular basis (every 10 days). Seasonal fields of currents derived from satellite altimetry also differ from those obtained in numerical models. Finally, we show the results of the first drifter experiment performed in the Caspian Sea in 2006-2008 in the framework of the MACE Project. Special attention is paid to the seasonal upwelling along the eastern coast of the sea, coastal currents, and a giant intrusion of warm water from the Southern to the Middle Caspian Sea.

  2. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach for medical image reconstruction has been through sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system due to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task-specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. The proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.

  3. Universal dual amplitudes and asymptotic expansions for gg→ H and H→ γ γ in four dimensions

    NASA Astrophysics Data System (ADS)

    Driencourt-Mangin, Félix; Rodrigo, Germán; Sborlini, Germán F. R.

    2018-03-01

    Though the one-loop amplitudes of the Higgs boson to massless gauge bosons are finite because there is no direct interaction at tree level in the Standard Model, a well-defined regularization scheme is still required for their correct evaluation. We reanalyze these amplitudes in the framework of the four-dimensional unsubtraction and the loop-tree duality (FDU/LTD), and show how a local renormalization solves potential regularization ambiguities. The Higgs boson interactions are also used to illustrate new additional advantages of this formalism. We show that LTD naturally leads to very compact integrand expressions in four space-time dimensions of the one-loop amplitude with virtual electroweak gauge bosons. They exhibit the same functional form as the amplitudes with top quarks and charged scalars, thus opening further possibilities for simplifications in higher-order computations. Another outstanding application is the straightforward implementation of asymptotic expansions by using dual amplitudes. One of the main benefits of the LTD representation is that it is supported in a Euclidean space. This characteristic feature naturally leads to simpler asymptotic expansions.

  4. Spatially multiplexed interferometric microscopy with partially coherent illumination

    NASA Astrophysics Data System (ADS)

    Picazo-Bueno, José Ángel; Zalevsky, Zeev; García, Javier; Ferreira, Carlos; Micó, Vicente

    2016-10-01

    We have recently reported on a simple, low cost, and highly stable way to convert a standard microscope into a holographic one [Opt. Express 22, 14929 (2014)]. The method, named spatially multiplexed interferometric microscopy (SMIM), proposes an off-axis holographic architecture implemented onto a regular (nonholographic) microscope with minimum modifications: the use of coherent illumination and a properly placed and selected one-dimensional diffraction grating. In this contribution, we report on the implementation of partially (temporally reduced) coherent illumination in SMIM as a way to improve quantitative phase imaging. The use of low coherence sources forces the application of phase shifting algorithm instead of off-axis holographic recording to recover the sample's phase information but improves phase reconstruction due to coherence noise reduction. In addition, a less restrictive field of view limitation (1/2) is implemented in comparison with our previously reported scheme (1/3). The proposed modification is experimentally validated in a regular Olympus BX-60 upright microscope considering a wide range of samples (resolution test, microbeads, swine sperm cells, red blood cells, and prostate cancer cells).
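
    The phase-shifting route to the sample phase (used here in place of off-axis demodulation) is a short calculation: with four frames at quarter-period shifts, the wrapped phase follows from an arctangent of frame differences. The synthetic fringe profile below is an assumption standing in for SMIM interferograms.

```python
import numpy as np

# Four-step phase-shifting demodulation: with frames I_k recorded at phase
# shifts 0, pi/2, pi, 3pi/2, the wrapped phase is atan2(I4 - I2, I1 - I3).
x = np.linspace(0, 4 * np.pi, 256)
phi_true = 1.2 * np.sin(x / 3)                       # sample phase profile
frames = [1 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
I1, I2, I3, I4 = frames

phi = np.arctan2(I4 - I2, I1 - I3)                   # wrapped phase recovery
print("max wrapped-phase error:", np.max(np.abs(phi - phi_true)))
```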

  5. Multitask SVM learning for remote sensing data classification

    NASA Astrophysics Data System (ADS)

    Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo

    2010-10-01

    Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted in the challenging remote sensing problems of cloud screening from multispectral MERIS images and of landmine detection.

  6. Similarity-based Regularized Latent Feature Model for Link Prediction in Bipartite Networks.

    PubMed

    Wang, Wenjun; Chen, Xue; Jiao, Pengfei; Jin, Di

    2017-12-05

    Link prediction is an attractive research topic in the field of data mining and has significant applications in improving the performance of recommendation systems and exploring the evolving mechanisms of complex networks. A variety of complex systems in the real world can be abstractly represented as bipartite networks, in which there are two types of nodes and no links connect nodes of the same type. In this paper, we propose a framework for link prediction in bipartite networks by combining the similarity-based structure and the latent feature model from a new perspective. The framework is called Similarity Regularized Nonnegative Matrix Factorization (SRNMF), which explicitly takes the local characteristics into consideration and encodes the geometrical information of the networks by constructing a similarity-based matrix. We also develop an iterative scheme to solve the objective function based on gradient descent. Extensive experiments on a variety of real-world bipartite networks show that the proposed link prediction framework achieves more competitive and stable performance than the state-of-the-art methods.

  7. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on sticky bottom boards of the hive. It is based on the spatial sampling theory that recommends using regular grid stratification in the case of a spatially structured process. The distribution of varroa mites on the sticky board being observed as spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are exposed on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement of varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.
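
    The proposed procedure, a regular grid of strata with a circular counting zone centred in each cell and an area-ratio scale-up, can be sketched directly. The board dimensions, cell size, circle radius, and the column-clustered mite positions below are illustrative assumptions, not the paper's protocol values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated sticky board: mite positions clustered along "frames" (columns).
mites = np.column_stack([
    rng.choice(np.linspace(2, 38, 10), 2000) + 0.3 * rng.standard_normal(2000),
    rng.uniform(0, 20, 2000),
])

# Stratified scheme: regular grid over a 40 x 20 cm board, one circular
# counting zone centred in each cell; scale counts by the area ratio.
cell, radius = 5.0, 1.5
centres = [(cx, cy) for cx in np.arange(cell / 2, 40, cell)
                    for cy in np.arange(cell / 2, 20, cell)]
in_circle = sum(
    np.sum((mites[:, 0] - cx) ** 2 + (mites[:, 1] - cy) ** 2 < radius ** 2)
    for cx, cy in centres)
estimate = in_circle * cell ** 2 / (np.pi * radius ** 2)
print(f"estimated {estimate:.0f} of {len(mites)} mites")
```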

  8. An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.

    PubMed

    French, D A; Gruenstein, E I

    2006-12-01

    It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire-type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated, through a small-world connection scheme, into a two-dimensional excitatory network similar to the cultured networks. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact of the IB neurons, the geometry, and the small-world connection scheme on network behavior.
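
    A minimal sketch of the simplest building block of such models, a leaky integrate-and-fire neuron with a slow after-spike adaptation current that paces burst-like firing; all constants are illustrative, not the authors' fitted RS/IB parameters:

    ```python
    # Hedged sketch: leaky integrate-and-fire neuron with spike-triggered
    # adaptation. Constant drive; a threshold crossing resets the potential.
    import numpy as np

    dt, T = 0.1, 500.0                          # time step, duration (ms)
    v, w = -65.0, 0.0                           # potential (mV), adaptation
    spikes = []
    for ti in np.arange(0.0, T, dt):
        I = 1.8                                 # constant input (illustrative)
        v += (-(v + 65.0) / 10.0 + I - w) * dt  # leak + drive - adaptation
        w += (-w / 120.0) * dt                  # slow adaptation decay
        if v >= -50.0:                          # threshold -> spike
            spikes.append(ti)
            v = -65.0                           # reset potential
            w += 0.12                           # adaptation increment
    print(len(spikes), "spikes")
    ```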

  9. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
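
    For contrast, a minimal sketch of the standard pairwise-gossip baseline on a ring, whose slow mixing is exactly what geographic gossip is designed to beat (the geographic routing and resampling steps are not implemented here):

    ```python
    # Hedged sketch: standard randomized gossip averaging on a ring.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = rng.normal(size=n)                   # initial sensor readings
    target = x.mean()
    for _ in range(20000):                   # gossip rounds
        i = rng.integers(n)
        j = (i + 1) % n                      # ring neighbour
        x[i] = x[j] = 0.5 * (x[i] + x[j])    # pairwise average
    print(np.abs(x - target).max())          # converges slowly on the ring
    ```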

  10. Numerical 3+1 General Relativistic Magnetohydrodynamics: A Local Characteristic Approach

    NASA Astrophysics Data System (ADS)

    Antón, Luis; Zanotti, Olindo; Miralles, Juan A.; Martí, José M.; Ibáñez, José M.; Font, José A.; Pons, José A.

    2006-01-01

    We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the 3+1 formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics (Banyuls et al. 1997) where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e., Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic MHD equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic MHD in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick disks accreting onto a black hole and subject to the magnetorotational instability.

  11. Optimization of hybrid model on hajj travel

    NASA Astrophysics Data System (ADS)

    Cahyandari, R.; Ariany, R. L.; Sukono

    2018-03-01

    Hajj travel insurance is an insurance product offered by insurance companies to help prospective pilgrims prepare funds for the pilgrimage. The product helps would-be pilgrims set aside hajj savings regularly, while also providing profit-sharing (mudharabah) funds and insurance protection. Fund management for this product largely follows the hybrid model, in which the funds from would-be pilgrims are divided across three accounts: a personal account, tabarru’, and ujrah. The hybrid model for hajj travel insurance was discussed in an earlier paper titled “The Hybrid Model Algorithm on Sharia Insurance”, using the Mitra Mabrur Plus product from the Bumiputera company as an example case. In this follow-up paper, the previous model is optimized by partitioning the benefits of the tabarru’ account. Benefits such as compensation for 40 critical illnesses, which were initially available to the insured participant only, are extended under the optimized design to the participant and their heirs, and to cover hospital bills. Meanwhile, the death benefit is paid if the participant dies.
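
    A small illustration of the three-way split in the hybrid model; the percentages below are invented for illustration, as the product's actual proportions are not stated here:

    ```python
    # Hedged illustration: dividing one premium payment across the three
    # accounts of the hybrid model (all proportions are assumptions).
    premium = 1_000_000                        # one premium payment (illustrative)
    shares = {"personal": 0.80, "tabarru": 0.15, "ujrah": 0.05}
    accounts = {name: premium * frac for name, frac in shares.items()}
    print(accounts)                            # {'personal': 800000.0, ...}
    ```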

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Shaohua; Sun, Quanping

    This paper addresses chaos control of the micro-electro-mechanical resonator by using adaptive dynamic surface technology with an extended state observer. To reveal the mechanism of the micro-electro-mechanical resonator, phase diagrams and corresponding time histories are given to investigate the nonlinear dynamics and chaotic behavior, and homoclinic and heteroclinic chaos, which relate closely to the appearance of chaos, are presented based on the potential function. To eliminate the effect of chaos, an adaptive dynamic surface control scheme with an extended state observer is designed to convert random motion into regular motion without precise system model parameters and measured variables. Introducing a tracking differentiator into the chaos controller resolves the ‘explosion of complexity’ of backstepping and the poor precision of first-order filters. Meanwhile, to obtain high performance, a neural network with an adaptive law is employed to approximate the unknown nonlinear function in the controller design process. The boundedness of all signals of the closed-loop system is proved in the theoretical analysis. Finally, numerical simulations are executed, and extensive results illustrate the effectiveness and robustness of the proposed scheme.

  13. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
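
    A minimal sketch of plain Richardson-Lucy deconvolution (without the total-variation term found to work best in the paper) for a 2-D image d blurred by a PSF h:

    ```python
    # Hedged sketch: Richardson-Lucy iterations via FFT convolution.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(d, h, n_iter=30, eps=1e-12):
        u = np.full_like(d, d.mean())              # flat initial estimate
        h_mirror = h[::-1, ::-1]                   # adjoint of convolution
        for _ in range(n_iter):
            ratio = d / (fftconvolve(u, h, mode="same") + eps)
            u *= fftconvolve(ratio, h_mirror, mode="same")
        return u
    ```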

  14. Development of a simple, low cost chronoamperometric assay for fructose based on a commercial graphite-nanoparticle modified screen-printed carbon electrode.

    PubMed

    Nicholas, Phil; Pittson, Robin; Hart, John P

    2018-02-15

    This paper describes the development of a simple, low-cost chronoamperometric assay for the measurement of fructose, using a graphite-nanoparticle modified screen-printed electrode (SPCE-G-COOH). Cyclic voltammetry showed that the SPCE-G-COOH offered enhanced sensitivity and precision towards the enzymatically generated ferrocyanide species compared with a plain SPCE; therefore, the former was employed in subsequent studies. Calibration studies were carried out using chronoamperometry with a 40 µl mixture containing fructose, mediator and FDH deposited onto the SPCE-G-COOH. The response was linear from 0.1 mM to 1.0 mM. A commercial fruit juice sample was analysed using the developed assay, and the fructose concentration was calculated to be 477 mM with a precision of 3.03% (n=5). Following fortification (477 mM fructose), the mean recovery was found to be 97.12% with a coefficient of variation of 6.42% (n=5); consequently, the method holds promise for the analysis of commercial fruit juices.

  15. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. The highest correlations in efficiency scores are found between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at the hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
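
    A minimal sketch of the input-oriented DEA model under constant returns to scale (the CCR envelopment linear program), solved per hospital with scipy; X and Y are assumed to be plain arrays with one row per hospital:

    ```python
    # Hedged sketch: efficiency of hospital o is min theta subject to
    # Y lam >= y_o and X lam <= theta x_o, lam >= 0 (CRS envelopment form).
    import numpy as np
    from scipy.optimize import linprog

    def dea_crs(X, Y):
        """X: (n, m) inputs, Y: (n, s) outputs; returns efficiency scores."""
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                  # minimise theta
            A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # -Y lam <= -y_o
            A_in = np.hstack([-X[o][:, None], X.T])      # X lam - theta x_o <= 0
            A_ub = np.vstack([A_out, A_in])
            b_ub = np.r_[-Y[o], np.zeros(m)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] + [(0, None)] * n)
            scores.append(res.x[0])                      # 1.0 = efficient
        return np.array(scores)
    ```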

  16. Payday, ponchos, and promotions: a qualitative analysis of perspectives from non-governmental organization programme managers on community health worker motivation and incentives.

    PubMed

    B-Lajoie, Marie-Renée; Hulme, Jennifer; Johnson, Kirsten

    2014-12-05

    Community health workers (CHWs) have been central to broadening the access and coverage of preventative and curative health services worldwide. Much has been debated about how best to remunerate and incentivize this workforce, which ranges from volunteers to full-time workers. Policy bodies, including the WHO and USAID, now advocate for regular stipends. This qualitative study examines the perspective of health programme managers from 16 international non-governmental organizations (NGOs) who directly oversee programmes in resource-limited settings. It aimed to explore institutional guidelines and approaches to designing CHW incentives, and to inquire how NGO managers are adapting their approaches to working with CHWs in this shifting political and funding climate. Second, it aimed to understand the position of stakeholders who design and manage NGO-run CHW programmes on what they consider priorities to boost CHW motivation. Individuals were recruited using typical case sampling through chain referral at the semi-annual CORE Group meeting in the spring of 2012. Semi-structured interviews were guided by a peer-reviewed tool. Two reviewers analyzed the transcripts for thematic saturation. Six key factors influenced programme manager decision-making: national-level government policy; donor practice; implicit organizational approaches; programmatic, cultural, and community contexts; the experiences and values of managers; and the nature of the work asked of CHWs. Programme managers relied strongly on national governments to provide clear guidance on CHW incentive schemes. Perspectives on remuneration varied greatly, from fears that it is unsustainable to the view that it is a basic human right and a mechanism to achieve greater gender equity. Programme managers were interested in exploring career paths and innovative financing schemes for CHWs, such as endowment funds or material sales, to heighten local ownership and sustainability of programmes. Participants also supported the creation of both national-level and global interfaces for sharing practical experience and best practices with other CHW programmes. Prescriptive recommendations for monetary remuneration, aside from those coming from national governments, will likely continue to meet resistance from NGOs, as contexts are nuanced. There is growing consensus that incentives should reflect the nature of the work asked of CHWs, and the potential for motivation through sustainable financial schemes other than regular salaries. Programme managers advocate for greater transparency and information sharing among organizations.

  17. Adaptive non-local means on local principle neighborhood for noise/artifacts reduction in low-dose CT images.

    PubMed

    Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying

    2017-09-01

    Low-dose CT (LDCT) techniques can reduce the x-ray radiation exposure of patients at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential in improving LDCT image quality. However, most current NLM-based approaches employ a weighted average operation directly on all neighbor pixels with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary nature of the noise in LDCT images. In this paper, an adaptive NLM filtering scheme on local principle neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifact reduction in LDCT images. Instead of using neighboring patches directly, the PC-NLM scheme first applies principal component analysis (PCA) to the local neighboring patches of the target patch to decompose them into uncorrelated principal components (PCs); NLM filtering is then used to regularize each PC of the target patch, and finally the regularized components are transformed back to obtain the target patch in the image domain. In particular, the filtering parameter of the NLM step is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees "weaker" NLM filtering on PCs with higher SNR and "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is performed iteratively several times for better removal of noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed in the next round of PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, and adaptive NLM filtering of the PCs of the target patch with a filtering parameter estimated from the local noise level and corresponding SNR, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images.
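
    A heavily simplified sketch of the core idea, decompose a neighbourhood of patches with PCA and attenuate each principal component according to a crude SNR estimate; the paper's adaptive filtering-parameter rule and iteration strategy are more elaborate:

    ```python
    # Hedged sketch: PCA of vectorised neighbouring patches, then a
    # Wiener-like gain per principal component based on estimated SNR.
    import numpy as np

    def pc_shrink(patches, sigma):
        """patches: (N, p), rows are vectorised neighbouring patches."""
        mean = patches.mean(axis=0)
        Xc = patches - mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        var = (s ** 2) / len(patches)                      # component energy
        snr = np.maximum(var - sigma ** 2, 0.0) / sigma ** 2
        gain = snr / (1.0 + snr)                           # "weaker" filtering
        coeffs = (U * s) * gain                            # for high-SNR PCs
        return coeffs @ Vt + mean                          # back to image domain
    ```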

  18. Global High Resolution Crustal Magnetic Field Mapping at the Surface of the Moon from Lunar Prospector and SELENE/Kaguya Satellites

    NASA Astrophysics Data System (ADS)

    Ravat, D.; Purucker, M.; Olsen, N.; Finlay, C.

    2017-12-01

    We derive new models of the lunar crustal magnetic field at the lunar surface with data from the Lunar Prospector (LP) and SELENE/Kaguya (K) satellites, using a global set of 35820 1° equal-area monopoles (O'Brien and Parker, 1994; Olsen et al., 2017). The resulting fields have features similar to the surface fields obtained by Tsunakawa et al. (2015) using 230 subset regions, and the primary differences are due to our stringent data selection (see below). The use of monopoles allows closer spacing than dipoles, with a smaller amount of regularization and moderate cluster computer resources. We use an iteratively reweighted least-squares inversion scheme to compute the initial model. The amplitudes of these monopoles are then determined by minimizing the misfit to the components together with the global average of |Br| at the ellipsoid surface (i.e., applying an L1 model regularization of Br). In a final step we transform the point-source representation into a spherical harmonic expansion. We extract high-quality data segments using a processing scheme based on internal/external dipole field removal, low-order polynomial removal, and a new processing scheme called Joint Equivalent Source Cross-validation. In the cross-validation procedure we analyze the fit of modeled components to data in 10° latitudinal segments from an inversion of triplets of nearby passes to a single set of dipoles along the passes. We evaluate the fit using four criteria in each segment: correlation coefficient, amplitude ratio, RMS of the misfit, and standard deviation of the field values themselves. We fine-tune the criteria to match the choices we would have made in visually retaining pass segments, and this yields a global dataset of more than 2.87 million (× 3 components) points at altitudes <60 km. The selected Lunar Prospector and Kaguya magnetic data independently show similar features and statistics for altitudes, observed and modeled components, and their misfit (number of observation locations: LP 1.8 million and K 1.07 million, × 3 components). We use these data to make a regional assessment of key magnetic features on the Moon (including impacts and swirls), the depth of magnetization of regional sources, and the source parameters of isolated anomalies.
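
    A minimal sketch of generic iteratively reweighted least squares for an L1-regularized equivalent-source fit, the textbook version of the inversion described above; G is a hypothetical design matrix mapping monopole amplitudes m to field observations b:

    ```python
    # Hedged sketch: IRLS for min ||G m - b||^2 + lam * ||m||_1, where the
    # L1 term is handled by reweighted ridge solves.
    import numpy as np

    def irls_l1(G, b, lam=1.0, n_iter=20, eps=1e-6):
        m = np.linalg.lstsq(G, b, rcond=None)[0]       # least-squares start
        for _ in range(n_iter):
            W = np.diag(lam / (np.abs(m) + eps))       # reweighting of |m|_1
            m = np.linalg.solve(G.T @ G + W, G.T @ b)  # small amplitudes -> 0
        return m
    ```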

  19. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    PubMed

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for simultaneous bias correction and brain extraction. The model is composed of a data fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, with acquisition field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate the automatic processing of large-scale brain studies.

  20. Structure-Function Network Mapping and Its Assessment via Persistent Homology

    PubMed Central

    2017-01-01

    Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514 × 2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127

  1. Touchscreen everywhere: on transferring a normal planar surface to a touch-sensitive display.

    PubMed

    Dai, Jingwen; Chung, Chi-Kit Ronald

    2014-08-01

    We address how a human-computer interface with small device size, large display, and touch-input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, which is embedded into the regular projection display in such a way that the user perceives only the regular display and not the structured pattern hidden within it. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3-D workspace immediately above the table surface: deciding whether a finger is present, whether the finger touches the table surface, and, if so, at what position on the table surface the contact is made. All the decisions hinge upon a careful calibration of the projector-camera-table surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping existing between the projector's display panel and the camera's image plane. Extensive experiments, including evaluations of display quality, hand segmentation accuracy, touch detection accuracy, trajectory tracking accuracy, multitouch capability and system efficiency, illustrate the feasibility of the proposed realization.

  2. Pragmatic approach to gravitational radiation reaction in binary black holes

    PubMed

    Lousto

    2000-06-05

    We study the relativistic orbit of binary black holes in systems with small mass ratio. The trajectory of the smaller object (another black hole or a neutron star), represented as a particle, is determined by the geodesic equation on the perturbed massive black hole spacetime. Here we study perturbations around a Schwarzschild black hole using Moncrief's gauge-invariant formalism. We decompose the perturbations into l multipoles to show that all l-metric coefficients are C^0 at the location of the particle. Summing over l, to reconstruct the full metric, gives a formally divergent result. We succeed in bringing this sum into a Riemann zeta-function regularization scheme and numerically compute the first-order geodesics.

  3. Casimir self-entropy of a spherical electromagnetic δ-function shell

    NASA Astrophysics Data System (ADS)

    Milton, Kimball A.; Kalauni, Pushpa; Parashar, Prachi; Li, Yang

    2017-10-01

    In this paper we continue our program of computing Casimir self-entropies of idealized electrical bodies. Here we consider an electromagnetic δ-function sphere ("semitransparent sphere") whose electric susceptibility has a transverse polarization with arbitrary strength. Dispersion is incorporated by a plasma-like model. In the strong-coupling limit, a perfectly conducting spherical shell is realized. We compute the entropy for both low and high temperatures. The transverse electric self-entropy is negative as expected, but the transverse magnetic self-entropy requires ultraviolet and infrared renormalization (subtraction), and, surprisingly, is only positive for sufficiently strong coupling. Results are robust under different regularization schemes. These rather surprising findings require further investigation.

  4. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome ill-posedness. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
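
    A minimal sketch of the overall pattern, with scipy's SLSQP standing in for the paper's SQP solver, a toy linear operator standing in for the TD-RTE forward model, and a simple quadratic roughness penalty standing in for the GGMRF prior:

    ```python
    # Hedged sketch: bounded, regularized misfit minimisation by SQP.
    import numpy as np
    from scipy.optimize import minimize

    def objective(mu, measured, forward, lam=1e-2):
        resid = forward(mu) - measured
        rough = np.diff(mu)                    # neighbour differences
        return resid @ resid + lam * rough @ rough

    rng = np.random.default_rng(0)
    Fwd = rng.normal(size=(40, 20))            # toy "forward model" matrix
    mu_true = np.clip(rng.normal(0.5, 0.1, 20), 0.0, None)
    data = Fwd @ mu_true + 0.01 * rng.normal(size=40)

    res = minimize(objective, x0=np.full(20, 0.5),
                   args=(data, lambda m: Fwd @ m),
                   method="SLSQP", bounds=[(0.0, 2.0)] * 20)
    print(np.abs(res.x - mu_true).max())
    ```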

  5. Design of Cancelable Palmprint Templates Based on Look Up Table

    NASA Astrophysics Data System (ADS)

    Qiu, Jian; Li, Hengjian; Dong, Jiwen

    2018-03-01

    A novel scheme for generating cancelable palmprint templates is proposed in this paper. First, a Gabor filter and a chaotic matrix are used to extract palmprint features. The features are then arranged into a row vector and divided into equal-sized blocks. These blocks are converted to the corresponding decimals and mapped to lookup tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative-representation-based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates can achieve very high performance and security levels. Meanwhile, the scheme can also satisfy the needs of real-time applications.

  6. New Leading Contribution to Neutrinoless Double-β Decay

    NASA Astrophysics Data System (ADS)

    Cirigliano, Vincenzo; Dekens, Wouter; de Vries, Jordy; Graesser, Michael L.; Mereghetti, Emanuele; Pastore, Saori; van Kolck, Ubirajara

    2018-05-01

    Within the framework of chiral effective field theory, we discuss the leading contributions to the neutrinoless double-beta decay transition operator induced by light Majorana neutrinos. Based on renormalization arguments in both dimensional regularization with minimal subtraction and a coordinate-space cutoff scheme, we show the need to introduce a leading-order short-range operator, missing in all current calculations. We discuss strategies to determine the finite part of the short-range coupling by matching to lattice QCD or by relating it via chiral symmetry to isospin-breaking observables in the two-nucleon sector. Finally, we speculate on the impact of this new contribution on nuclear matrix elements of relevance to experiment.

  7. Baryon octet electromagnetic form factors in a confining NJL model

    DOE PAGES

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloet, Ian C.; ...

    2016-05-25

    Electromagnetic form factors of the baryon octet are studied using a Nambu–Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that $r_E^p < r_E^{\Sigma^+}$ and $|r_E^n| < |r_E^{\Xi^0}|$, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  8. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.

  9. Teaching parents to look after children's teeth.

    PubMed

    Lloyd, S

    1994-03-01

    Children's toothpastes with fluoride help to prevent decay, but parents should ask their dentist before giving fluoride supplements to children. Overdosage is harmful. Sugars eaten as part of a meal do less harm to teeth than those eaten frequently as snacks. Sugar-free infant drinks and children's confectionery are now on the market and are more "tooth friendly". Look out for the "happy tooth" symbol. Babies can be registered with NHS dentists as soon as the first teeth start to come through, and should be taken regularly to the dentist throughout childhood. Under the NHS scheme, dentists are paid a capitation fee to provide continuing preventive care and treatment for children free of charge.

  10. Self-adjoint realisations of the Dirac-Coulomb Hamiltonian for heavy nuclei

    NASA Astrophysics Data System (ADS)

    Gallone, Matteo; Michelangeli, Alessandro

    2018-02-01

    We derive a classification of the self-adjoint extensions of the three-dimensional Dirac-Coulomb operator in the critical regime of the Coulomb coupling. Our approach is based solely upon the Kreĭn-Višik-Birman extension scheme, or equivalently on Grubb's universal classification theory, as opposed to previous works within the standard von Neumann framework. This lets the boundary condition of self-adjointness emerge, neatly and intrinsically, as a multiplicative constraint between the regular and singular parts of the functions in the domain of the extension, with the multiplicative constant also giving immediate information on the invertibility properties and on the resolvent and spectral gap of the extension.

  11. Phase-shift detection in a Fourier-transform method for temperature sensing using a tapered fiber microknot resonator.

    PubMed

    Larocque, Hugo; Lu, Ping; Bao, Xiaoyi

    2016-04-01

    Phase-shift detection in a fast-Fourier-transform (FFT)-based spectrum analysis technique for temperature sensing using a tapered fiber microknot resonator is proposed and demonstrated. Multiple transmission peaks in the FFT spectrum of the device were identified as optical modes having completed different numbers of round trips within the ring structure. Phase shifts induced by temperature variation were characterized for each set of peaks, and experimental results show that different peaks have distinct temperature sensitivities, reaching values up to -0.542 rad/°C, which is about 10 times greater than that of a regular adiabatic taper Mach-Zehnder interferometer using similar phase-tracking schemes.
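
    A minimal sketch of the FFT side of such a scheme: locate one round-trip peak in the FFT of a transmission spectrum and track its phase between two acquisitions (the synthetic cosine spectra below stand in for the microknot's):

    ```python
    # Hedged sketch: the phase of the dominant FFT peak tracks the fringe shift.
    import numpy as np

    def peak_phase(spectrum):
        F = np.fft.rfft(spectrum - spectrum.mean())
        k = np.argmax(np.abs(F[1:])) + 1           # dominant interference peak
        return k, np.angle(F[k])

    wl = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    base = np.cos(12 * wl)                         # 12 "round trips"
    shifted = np.cos(12 * wl + 0.3)                # temperature-induced shift
    _, p0 = peak_phase(base)
    _, p1 = peak_phase(shifted)
    print(np.angle(np.exp(1j * (p1 - p0))))        # recovered shift ~ 0.3 rad
    ```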

  12. Fifty Years of the Index to Dental Literature: A Critical Appraisal

    PubMed Central

    1971-01-01

    The year 1971 marks the fiftieth anniversary of the Index to Dental Literature. The Index had a slow and stormy birth, with twenty-three years of hard work put in until the first volume was issued. The first Index is described and the changes in its contents and format are traced through the years until its production in 1965 by the National Library of Medicine. The current Index is analyzed with attention paid to nomenclature, classification scheme, quality of the index entries and cross references. The results of a survey of regular users of the Index are interpreted, and suggestions gleaned for the improvement of this most useful tool in dental research. PMID:4947815

  13. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a denoising method for CT images based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse-coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the obtained sparse coefficients. We tested this denoising technology using phantom, brain and abdominal CT images. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.

  14. Book Review:

    NASA Astrophysics Data System (ADS)

    Louko, Jorma

    2007-04-01

    Bastianelli and van Nieuwenhuizen's monograph `Path Integrals and Anomalies in Curved Space' collects in one volume the results of the authors' 15-year research programme on anomalies that arise in Feynman diagrams of quantum field theories on curved manifolds. The programme was spurred by the path-integral techniques introduced in Alvarez-Gaumé and Witten's renowned 1983 paper on gravitational anomalies which, together with the anomaly cancellation paper by Green and Schwarz, led to the string theory explosion of the 1980s. The authors have produced a tour de force, giving a comprehensive and pedagogical exposition of material that is central to current research. The first part of the book develops from scratch a formalism for defining and evaluating quantum mechanical path integrals in nonlinear sigma models, using time slicing regularization, mode regularization and dimensional regularization. The second part applies this formalism to quantum fields of spin 0, 1/2, 1 and 3/2 and to self-dual antisymmetric tensor fields. The book concludes with a discussion of gravitational anomalies in 10-dimensional supergravities, for both classical and exceptional gauge groups. The target audience is researchers and graduate students in curved spacetime quantum field theory and string theory, and the aims, style and pedagogical level have been chosen with this audience in mind. Path integrals are treated as calculational tools, and the notation and terminology are throughout tailored to calculational convenience, rather than to mathematical rigour. The style is closer to that of an exceedingly thorough and self-contained review article than to that of a textbook. As the authors mention, the first part of the book can be used as an introduction to path integrals in quantum mechanics, although in a classroom setting perhaps more likely as supplementary reading than a primary class text. Readers outside the core audience, including this reviewer, will gain from the book a heightened appreciation of the central role of regularization as a defining ingredient of a quantum field theory and will be impressed by the agreement of results arising from different regularization schemes. The readers may in particular enjoy the authors' `brief history of anomalies' in quantum field theory, as well as a similar historical discussion of path integrals in quantum mechanics.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase-space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase-space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes such as compact finite-difference schemes, the discontinuous Galerkin method, or finite element residual schemes, which are well suited for parallel domain decomposition techniques.

  16. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them, and their application to filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with the analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei, yielding the spatio-temporal cell lineage tree of embryogenesis. Filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original data and after filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to the optimal filtering parameters for the studied models and numerical schemes. Further comparisons address the ability to split very close objects which are artificially connected due to an acquisition error intrinsically linked to the physics of LSM. In all studied aspects it turned out that the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance.
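
    A minimal sketch of one explicit time step of the regularized Perona-Malik model studied here; the edge-stopping diffusivity is computed from a Gaussian-presmoothed gradient, which is what regularizes the model (the paper itself uses semi-implicit finite-volume schemes, and the constants below are placeholders):

    ```python
    # Hedged sketch: explicit update u <- u + tau * div(g(|grad G_sigma*u|) grad u).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perona_malik_step(u, K=0.05, sigma=1.0, tau=0.1):
        us = gaussian_filter(u, sigma)                   # regularisation
        gx, gy = np.gradient(us)
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / K ** 2)   # edge-stopping weight
        fx, fy = np.gradient(u)
        div = np.gradient(g * fx, axis=0) + np.gradient(g * fy, axis=1)
        return u + tau * div                             # explicit Euler step
    ```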

  17. Vouchers as demand side financing instruments for health care: a review of the Bangladesh maternal voucher scheme.

    PubMed

    Schmidt, Jean-Olivier; Ensor, Tim; Hossain, Atia; Khan, Salam

    2010-07-01

    Demand side financing (DSF) mechanisms transfer purchasing power to specified groups for defined goods and services in order to increase access to specified services. This is an important innovation in health care systems where access remains poor despite substantial subsidies on the supply side. In Bangladesh, a maternal health DSF pilot in 33 sub-districts was launched in 2007. We report the results of a rapid review of this scheme undertaken during 2008, after one year of operation. Quantitative data collected by DSF committees, facilities and national information systems were assessed alongside qualitative data, i.e. key informant interviews and focus group discussions with beneficiaries and health service providers on the operation of the scheme in 6 sub-districts. The scheme provides vouchers, distributed by health workers, that entitle mainly poor women to receive skilled care at home or at a facility, and also provides payments for transport and food. After initial setbacks, voucher distribution rose quickly. The data also suggest that the rise in facility-based delivery appeared to be more rapid in DSF areas than in non-DSF areas, although the methods do not allow strict causal attribution, as there might be confounding effects. Fears that the financial incentives for surgical delivery would lead to an over-emphasis on Caesarean section appear to be unfounded, although the trends need further monitoring. DSF provides substantial additional funding to facilities but remains complex to administer, requiring a parallel administrative mechanism that puts an additional work burden on the health workers. There is little evidence that the mechanism encourages competition, due to the limited provision of health care services. The main outstanding question is whether the achievements of the DSF scheme could be achieved more efficiently by adapting the regular government funding rather than creating an entirely new mechanism. Also, improving the quality of health care services cannot be expected from the DSF mechanism alone within an environment lacking the prerequisites for competition. Quality assurance mechanisms need to be put in place. A large-scale impact evaluation is currently underway.

  18. Converting point-wise nuclear cross sections to pole representation using regularized vector fitting

    NASA Astrophysics Data System (ADS)

    Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord

    2018-03-01

    Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models, together with a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has in the past been achieved by converting the resonance parameters in the evaluated nuclear data library into poles and residues. However, the cross sections of some isotopes are only provided as point-wise data in the ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach, however, requires the number of poles to be specified ahead of time. This article addresses the issue by adding a pole-and-residue filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles and thus enabling the conversion of point-wise nuclear cross sections.
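
    A minimal sketch of the bookkeeping involved: evaluate a cross section from poles and residues, then drop poles whose contribution on the energy grid is negligible; the simple rational form below is generic, not the exact multipole kernel used for resonance cross sections:

    ```python
    # Hedged sketch: pole-residue evaluation plus a residue-filtering pass.
    import numpy as np

    def xs_from_poles(E, poles, residues):
        E = np.asarray(E, dtype=complex)[:, None]
        return np.sum((residues / (E - poles)).real, axis=1)

    def filter_poles(poles, residues, E_grid, tol=1e-6):
        full = xs_from_poles(E_grid, poles, residues)
        keep = []
        for j in range(len(poles)):
            part = xs_from_poles(E_grid, poles[j:j + 1], residues[j:j + 1])
            if np.max(np.abs(part)) > tol * np.max(np.abs(full)):
                keep.append(j)                 # retain physically relevant poles
        return poles[keep], residues[keep]
    ```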

  19. A Pseudorange Measurement Scheme Based on Snapshot for Base Station Positioning Receivers.

    PubMed

    Mo, Jun; Deng, Zhongliang; Jia, Buyun; Bian, Xinmei

    2017-12-01

    Digital multimedia broadcasting signals are promising candidates for wireless positioning. This paper mainly studies a multimedia broadcasting technology named China Mobile Multimedia Broadcasting (CMMB) in the context of positioning. Theoretical and practical analysis of the CMMB signal suggests that the existing CMMB signal does not have meter-level positioning capability. The CMMB system has therefore been modified to achieve meter-level positioning capability by multiplexing the CMMB signal and pseudo codes in the same frequency band. The time difference of arrival (TDOA) estimation method is used in base station positioning receivers. Due to the influence of a complex fading channel and the limited bandwidth of receivers, regular tracking methods based on pseudo-code ranging struggle to provide continuous and accurate TDOA estimates. A pseudorange measurement scheme based on snapshots is proposed to solve this problem. The algorithm extracts the TDOA estimate from stored signal fragments, and utilizes a Taylor expansion of the autocorrelation function to improve the TDOA estimation accuracy. Monte Carlo simulations and real data tests show that the proposed algorithm can significantly reduce the TDOA estimation error for base station positioning receivers, with which the modified CMMB system achieves meter-level positioning accuracy.
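
    A minimal sketch of the generic sub-sample TDOA step: cross-correlate two received snapshots and refine the correlation peak with a second-order (parabolic) fit, in the spirit of the Taylor-expansion refinement described above:

    ```python
    # Hedged sketch: cross-correlation TDOA with parabolic peak refinement.
    import numpy as np
    from scipy.signal import correlate

    def tdoa(x, y, fs):
        c = correlate(y, x, mode="full")
        k = int(np.argmax(c))
        num = c[k - 1] - c[k + 1]                       # parabolic (2nd-order
        den = 2.0 * (c[k - 1] - 2.0 * c[k] + c[k + 1])  # Taylor) refinement
        delta = num / den if den != 0 else 0.0
        lag = k - (len(x) - 1) + delta                  # sub-sample lag
        return lag / fs                                 # seconds
    ```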

  20. Electromagnetically-induced-absorption resonance with high contrast and narrow width in the Hanle configuration

    NASA Astrophysics Data System (ADS)

    Brazhnikov, D. V.; Taichenachev, A. V.; Tumaikin, A. M.; Yudin, V. I.

    2014-12-01

    A method for observing high-contrast, narrow-width resonances of electromagnetically induced absorption (EIA) in the Hanle configuration under counter-propagating pump and probe light waves is proposed. Here, as an example, we study a ‘dark’ type of atomic dipole transition, $F_g = 1 \to F_e = 1$ in the D1 line of 87Rb, where usually electromagnetically induced transparency can be observed. To obtain the EIA signal one should properly choose the polarizations and intensities of the light waves. In contrast to regular schemes for observing EIA signals (under a single traveling light wave in the Hanle configuration, or under a bichromatic light field consisting of two traveling waves), the proposed scheme allows one to use buffer gas to significantly improve the properties of the resonance. The dramatic influence of the openness of the atomic transition on the contrast of the resonance is also revealed, which is advantageous in comparison with cyclic atomic transitions. Nonlinear resonances in the probe-wave transmission signal with contrast close to 100% and sub-kHz widths can be obtained. The results are of interest for high-resolution spectroscopy, nonlinear optics and magneto-optics.

  1. Corner-transport-upwind lattice Boltzmann model for bubble cavitation

    NASA Astrophysics Data System (ADS)

    Sofonea, V.; Biciuşcǎ, T.; Busuioc, S.; Ambruş, Victor E.; Gonnella, G.; Lamura, A.

    2018-02-01

    Aiming to study the bubble cavitation problem in quiescent and sheared liquids, a third-order isothermal lattice Boltzmann model that describes a two-dimensional (2D) fluid obeying the van der Waals equation of state is introduced. The evolution equations for the distribution functions in this off-lattice model with 16 velocities are solved using the corner-transport-upwind (CTU) numerical scheme on large square lattices (up to 6144 × 6144 nodes). The numerical viscosity and the regularization of the model are discussed for first- and second-order CTU schemes, finding that the latter choice allows one to obtain a very accurate phase diagram of a nonideal fluid. In a quiescent liquid, the present model allows us to recover the solution of the 2D Rayleigh-Plesset equation for a growing vapor bubble. In a sheared liquid, we investigated the evolution of the total bubble area, the bubble deformation, and the bubble tilt angle, for various values of the shear rate. A linear relation between the dimensionless deformation coefficient D and the capillary number Ca is found at small Ca, but with a different factor than in equilibrium liquids. A nonlinear regime is observed for Ca ≳ 0.2.

  2. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article proposes an original strategy for solving hydrodynamic flows. The motivations for this strategy are developed in the introduction: it aims at modeling viscous and turbulent flows with complex moving geometries while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.

  3. Interaction of ozone and carbon dioxide with polycrystalline potassium bromide and its atmospheric implication

    NASA Astrophysics Data System (ADS)

    Levanov, Alexander V.; Isaikina, Oksana Ya.; Maksimov, Ivan B.; Lunin, Valerii V.

    2017-03-01

    It has been discovered for the first time that gaseous ozone in the presence of carbon dioxide and water vapor interacts with crystalline potassium bromide, giving gaseous Br2 and the solid salts KHCO3 and KBrO3. Molecular bromine and hydrocarbonate ion are the products of one and the same reaction, described by the stoichiometric equation 2KBr(cr.) + O3(gas) + 2CO2(gas) + H2O(gas) → 2KHCO3(cr.) + Br2(gas) + O2(gas). The dependencies of the Br2, KHCO3 and KBrO3 formation rates on the concentrations of O3 and CO2, the humidity of the initial gas mixture, and temperature have been investigated. A kinetic scheme has been proposed that explains the experimental regularities found in this work on the quantitative level. According to the scheme, the formation of molecular bromine and hydrocarbonate is due to the reaction of hypobromite BrO-, the primary product of bromide oxidation by ozone, with carbon dioxide and water; bromate results from the consecutive oxidation of bromide ion by ozone: Br- →(+O3, −O2) BrO- →(+O3, −O2) BrO2- →(+O3, −O2) BrO3-.

  4. A New Eddy Dissipation Rate Formulation for the Terminal Area PBL Prediction System(TAPPS)

    NASA Technical Reports Server (NTRS)

    Charney, Joseph J.; Kaplan, Michael L.; Lin, Yuh-Lang; Pfeiffer, Karl D.

    2000-01-01

    The TAPPS employs the MASS model to produce mesoscale atmospheric simulations in support of the Wake Vortex project at Dallas/Fort Worth International Airport (DFW). A post-processing scheme uses the simulated three-dimensional atmospheric characteristics in the planetary boundary layer (PBL) to calculate the turbulence quantities most important to the dissipation of vortices: turbulent kinetic energy and eddy dissipation rate. TAPPS will ultimately be employed to enhance terminal area productivity by providing weather forecasts for the Aircraft Vortex Spacing System (AVOSS). The post-processing scheme utilizes experimental data and similarity theory to determine the turbulence quantities from the simulated horizontal wind field and stability characteristics of the atmosphere. Characteristic PBL quantities important to these calculations are determined based on formulations from the Blackadar PBL parameterization, which is regularly employed in the MASS model to account for PBL processes in mesoscale simulations. The TAPPS forecasts are verified against high-resolution observations of the horizontal winds at DFW. Statistical assessments of the error in the wind forecasts suggest that TAPPS captures the essential features of the horizontal winds with considerable skill. Additionally, the turbulence quantities produced by the post-processor are shown to compare favorably with corresponding tower observations.

  5. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    PubMed

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed, with constraints on coverage, connectivity and transmission success rate. Based on an analysis of data transmission in a data-gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in lifetime optimization. In particular, our investigations indicate that, for the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.

  6. Women and finance.

    PubMed

    Seaforth, W

    1995-12-01

    This article discusses women's rights to inherit land, the impact of flexible loan schemes for women, and the paucity of available loan schemes for women. The poor without land ownership have many problems obtaining credit for shelter from conventional finance markets. The poor must limit loans to small amounts that banks do not want to bother with. Eligibility criteria for loans usually require collateral, such as a high and regular income, savings, or land. The poor, and particularly poor women, cannot acquire credit or can do so only through a husband or male relative. Female heads of households are discriminated against when the man is assumed to be the major income source. Most housing purchases in developing countries are made through cash payments from family savings or informal loans. Even small loans may place a heavy burden on women. The author gives several examples of credit groups and their success in generating income and housing security. There remains a need to provide training and education for women and to improve women's access to credit and land. Information should be available to women on how to obtain credit. Governments and nongovernmental organizations have a responsibility to provide support for women's efforts to provide housing and support for their families.

  7. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.
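
    The L-BFGS-versus-conjugate-gradient comparison can be reproduced in miniature with off-the-shelf optimizers. In the sketch below the Rosenbrock function stands in for a waveform misfit, so it illustrates only the relative optimizer behaviour, not the paper's test problems.

        # Compare L-BFGS and nonlinear conjugate gradient on a toy "misfit".
        import numpy as np
        from scipy.optimize import minimize, rosen, rosen_der

        x0 = np.full(50, -1.2)                 # common starting model
        for method in ("L-BFGS-B", "CG"):
            res = minimize(rosen, x0, jac=rosen_der, method=method,
                           options={"maxiter": 10000})
            print(f"{method:8s}: misfit {res.fun:.3e} after {res.nfev} evaluations")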

  8. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement as a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently, by means of a non-linear complementarity function, as a system of equations. Although this system is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual-based and equilibration-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential-algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
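
    A minimal sketch of the primal-dual active set strategy on a one-dimensional obstacle problem, a simple stand-in for the contact setting above; the discretization, load and obstacle values are illustrative choices, not taken from this work.

        # Primal-dual active set (semi-smooth Newton) for the discrete
        # obstacle problem:  A u = f + lam,  u >= g,  lam >= 0,  lam*(u-g) = 0,
        # with A the 1-D finite-difference Laplacian and u(0) = u(1) = 0.
        import numpy as np

        n = 99
        h = 1.0 / (n + 1)
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        f = np.full(n, -50.0)             # downward load
        g = np.full(n, -0.05)             # obstacle (floor)
        u, lam, c = np.zeros(n), np.zeros(n), 1.0

        for it in range(50):
            active = (lam + c * (g - u)) > 0
            M, rhs = A.copy(), f.copy()
            idx = np.flatnonzero(active)
            M[idx, :] = 0.0               # active rows enforce u_i = g_i
            M[idx, idx] = 1.0
            rhs[idx] = g[idx]
            u = np.linalg.solve(M, rhs)
            lam = A @ u - f               # contact force on the active set
            lam[~active] = 0.0
            if np.array_equal((lam + c * (g - u)) > 0, active):
                break                     # active set settled: converged

        print(f"{int(active.sum())} contact nodes of {n} after {it + 1} iterations")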

  9. Lung cancer risk of airborne particles for Italian population

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buonanno, G., E-mail: buonanno@unicas.it; International Laboratory for Air Quality and Health, Queensland University of Technology, 2 George Street 2, 4001 Brisbane, Qld.; Giovinco, G., E-mail: giovinco@unicas.it

    Airborne particles, including both ultrafine and supermicrometric particles, contain various carcinogens. Exposure and risk-assessment studies regularly use particle mass concentration as the dosimetry parameter, therefore neglecting the potential impact of ultrafine particles due to their negligible mass compared to supermicrometric particles. The main purpose of this study was the characterization of lung cancer risk due to exposure to polycyclic aromatic hydrocarbons and some heavy metals associated with particle inhalation by Italian non-smoking people. A risk-assessment scheme, modified from an existing risk model, was applied to estimate the cancer risk contribution from both ultrafine and supermicrometric particles. Exposure assessment was carried out on the basis of particle number distributions measured in 25 smoke-free microenvironments in Italy. The predicted lung cancer risk was then compared to the cancer incidence rate in Italy to assess the number of lung cancer cases attributed to airborne particle inhalation, which represents one of the main causes of lung cancer, apart from smoking. Ultrafine particles are associated with a much higher risk than supermicrometric particles, and the modified risk-assessment scheme provided a more accurate estimate than the conventional scheme. Great attention has to be paid to indoor microenvironments and, in particular, to cooking and eating times, which represent the major contributors to lung cancer incidence in the Italian population. The modified risk assessment scheme can serve as a tool for assessing environmental quality, as well as setting up exposure standards for particulate matter. - Highlights: • Lung cancer risk for the non-smoking Italian population due to particle inhalation. • The average lung cancer risk for the Italian population is equal to 1.90×10^−2. • Ultrafine particles are the aerosol metric contributing most to lung cancer risk. • B(a)P is the main (particle-bound) compound contributing to lung cancer risk. • Cooking activities represent the principal contributor to the lung cancer risk.

  10. The "Universal" in UHC and Ghana's National Health Insurance Scheme: policy and implementation challenges and dilemmas of a lower middle income country.

    PubMed

    Agyepong, Irene Akua; Abankwah, Daniel Nana Yaw; Abroso, Angela; Chun, ChangBae; Dodoo, Joseph Nii Otoe; Lee, Shinye; Mensah, Sylvester A; Musah, Mariam; Twum, Adwoa; Oh, Juwhan; Park, Jinha; Yang, DoogHoon; Yoon, Kijong; Otoo, Nathaniel; Asenso-Boadi, Francis

    2016-09-21

    Despite universal population coverage and equity being a stated policy goal of its NHIS, over a decade since passage of the first law in 2003, Ghana continues to struggle with how to attain it. The predominantly (about 70%) tax-funded NHIS currently has active enrolment hovering around 40% of the population. This study explored in depth the enablers of and barriers to enrolment in the NHIS to provide lessons and insights, for Ghana and other low- and middle-income countries (LMIC), into attaining the goal of universality in Universal Health Coverage (UHC). We conducted a cross-sectional mixed-methods study of an urban and a rural district in one region of Southern Ghana. Data came from document review, analysis of routine data on enrolment, key informant in-depth interviews with local government, regional and district insurance scheme and provider staff, and community member in-depth interviews and focus group discussions. Population coverage in the NHIS in the study districts was not growing towards near-universal levels because many of those who had ever enrolled failed to renew annually as required by the NHIS policy. Factors facilitating and enabling enrolment were driven by the design details of the scheme that emanate from national-level policy and program formulation, frontline purchaser and provider staff implementation arrangements, and contextual factors. These factors inter-related and worked together to affect clients' experience of the scheme, which was not always the same as the declared policy intent. This in turn affected the decision to enrol and stay enrolled. UHC policy and program design needs to be such that enrolment is effectively compulsory in practice. It also requires careful attention and responsiveness to actual and potential subscriber, purchaser and provider (stakeholder) incentives and related behaviour generated at implementation levels.

  11. Efficient sampling of reversible cross-linking polymers: Self-assembly of single-chain polymeric nanoparticles

    NASA Astrophysics Data System (ADS)

    Oyarzún, Bernardo; Mognetti, Bortolo Matteo

    2018-03-01

    We present a new simulation technique to study systems of polymers functionalized by reactive sites that bind/unbind forming reversible linkages. Functionalized polymers feature self-assembly and responsive properties that are unmatched by systems lacking selective interactions. The scales at which the functional properties of these materials emerge are difficult to model, especially in the reversible regime where such properties result from many binding/unbinding events. This difficulty is related to large entropic barriers associated with the formation of intra-molecular loops. In this work, we present a simulation scheme that sidesteps configurational costs by dedicated Monte Carlo moves capable of binding/unbinding reactive sites in a single step. Cross-linking reactions are implemented by trial moves that reconstruct chain sections while attempting, at the same time, a dimerization reaction between pairs of reactive sites. The model is parametrized by the reaction equilibrium constant of the reactive species free in solution. This quantity can be obtained by means of experiments or atomistic/quantum simulations. We use the proposed methodology to study the self-assembly of single-chain polymeric nanoparticles, starting from flexible precursors carrying regularly or randomly distributed reactive sites. We focus on understanding differences in the morphology of chain nanoparticles when linkages are reversible as compared to the well-studied case of irreversible reactions. Intriguingly, we find that the size of regularly functionalized chains, in good solvent conditions, is non-monotonic as a function of the degree of functionalization. We clarify how this result follows from excluded volume interactions and is peculiar to reversible linkages and regular functionalizations.
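
    The acceptance rule at the heart of such bind/unbind moves can be illustrated with a stripped-down Metropolis scheme in which an effective equilibrium constant (here a hypothetical lumped value, with the chain-reconstruction part of the published move deliberately omitted) sets the bound/unbound statistics.

        # Metropolis bind/unbind moves for independent reactive-site pairs.
        # K_eff is a hypothetical lumped equilibrium constant; beta*DeltaG of
        # forming one linkage follows from it by detailed balance.
        import math, random

        random.seed(2)
        n_pairs, K_eff = 200, 0.5
        bound = [False] * n_pairs
        beta_dG = -math.log(K_eff)        # binding free energy in units of kT

        for step in range(20000):
            i = random.randrange(n_pairs)
            dE = beta_dG if not bound[i] else -beta_dG   # cost of flipping pair i
            if dE <= 0.0 or random.random() < math.exp(-dE):
                bound[i] = not bound[i]

        frac = sum(bound) / n_pairs
        print(f"bound fraction {frac:.2f} (detailed-balance target "
              f"{K_eff / (1 + K_eff):.2f})")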

  12. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. The program allows the user to compute 38 various functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
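
    The standard forward column method mentioned in (i) is compact enough to sketch. Below is a plain-Python version (GrafLab itself is MATLAB) of the usual recursion for fully normalized associated Legendre functions in the 4π/geodetic convention, adequate only for modest degrees where no underflow guarding is needed.

        # Forward column recursion for fully normalized associated Legendre
        # functions P_nm(cos theta), 4pi/geodetic normalization.
        import math

        def fnALF(nmax, theta):
            """Return dict {(n, m): P_nm(cos theta)} up to degree nmax."""
            t, u = math.cos(theta), math.sin(theta)
            P = {(0, 0): 1.0}
            if nmax >= 1:
                P[(1, 0)] = math.sqrt(3.0) * t
                P[(1, 1)] = math.sqrt(3.0) * u
            for n in range(2, nmax + 1):
                # sectorial seed, then the non-sectorial forward column step
                P[(n, n)] = u * math.sqrt((2*n + 1) / (2.0*n)) * P[(n-1, n-1)]
                for m in range(0, n):
                    a = math.sqrt((2*n - 1) * (2*n + 1) / ((n - m) * (n + m)))
                    b = math.sqrt((2*n + 1) * (n + m - 1) * (n - m - 1)
                                  / ((n - m) * (n + m) * (2*n - 3.0)))
                    P[(n, m)] = a * t * P[(n-1, m)] - b * P.get((n-2, m), 0.0)
            return P

        # quick check against the closed form of degree 2, order 0
        P = fnALF(4, math.radians(60.0))
        print(P[(2, 0)],
              math.sqrt(5.0) * 0.5 * (3.0 * math.cos(math.radians(60.0))**2 - 1.0))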

  13. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is little direct evidence on optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered as special cases of this more generalized LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
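
    One linearized GLS update step is compact enough to show directly. In the sketch below, J, W, L and the residual r are random stand-ins, not DOT quantities; the step solves (JᵀWJ + L)Δx = JᵀWr, with W the inverse data covariance and L a structured parameter-space weight.

        # One generalized least-squares update for a linearized inversion:
        #   dx = (J^T W J + L)^(-1) J^T W r
        import numpy as np

        rng = np.random.default_rng(0)
        m_data, n_par = 120, 40
        J = rng.normal(size=(m_data, n_par))                   # forward-model Jacobian
        W = np.diag(1.0 / rng.uniform(0.5, 2.0, m_data) ** 2)  # inverse data covariance
        L = 0.1 * np.eye(n_par)                                # structured prior weight
        r = rng.normal(size=m_data)                            # data-model misfit

        dx = np.linalg.solve(J.T @ W @ J + L, J.T @ W @ r)
        print("update norm:", np.linalg.norm(dx))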

  14. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but these methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  15. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may recover artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, the results of which show that the source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
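
    The hybrid L1/L2 idea can be illustrated with a few lines of proximal-gradient (ISTA) code on a synthetic linear inversion; the matrix G, data d and penalty weights below are random stand-ins, not the Kilauea Green's functions or the paper's actual algorithm.

        # Hybrid L1/L2-regularized inversion by proximal gradient (ISTA),
        # minimizing 0.5*||G m - d||^2 + 0.5*a2*||m||^2 + a1*||m||_1.
        import numpy as np

        rng = np.random.default_rng(1)
        G = rng.normal(size=(80, 200))                    # stand-in kernel matrix
        m_true = np.zeros(200)
        m_true[[20, 75, 140]] = [1.0, -0.7, 0.5]          # sparse "source"
        d = G @ m_true + 0.01 * rng.normal(size=80)

        a1, a2 = 0.05, 0.01
        step = 1.0 / (np.linalg.norm(G, 2) ** 2 + a2)     # 1 / Lipschitz constant
        m = np.zeros(200)
        for _ in range(500):
            grad = G.T @ (G @ m - d) + a2 * m
            z = m - step * grad
            m = np.sign(z) * np.maximum(np.abs(z) - step * a1, 0.0)  # soft threshold

        print("recovered support:", np.flatnonzero(np.abs(m) > 0.1))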

  16. Characterizing the Effects of Washing by Different Detergents on the Wavelength-Scale Microstructures of Silk Samples Using Mueller Matrix Polarimetry.

    PubMed

    Dong, Yang; He, Honghui; He, Chao; Zhou, Jialing; Zeng, Nan; Ma, Hui

    2016-08-10

    Silk fibers suffer from microstructural changes due to various external environmental conditions including daily washings. In this paper, we take the backscattering Mueller matrix images of silk samples for non-destructive and real-time quantitative characterization of the wavelength-scale microstructure and examination of the effects of washing by different detergents. The 2D images of the 16 Mueller matrix elements are reduced to the frequency distribution histograms (FDHs) whose central moments reveal the dominant structural features of the silk fibers. A group of new parameters are also proposed to characterize the wavelength-scale microstructural changes of the silk samples during the washing processes. Monte Carlo (MC) simulations are carried out to better understand how the Mueller matrix parameters are related to the wavelength-scale microstructure of silk fibers. The good agreement between experiments and simulations indicates that the Mueller matrix polarimetry and FDH based parameters can be used to quantitatively detect the wavelength-scale microstructural features of silk fibers. Mueller matrix polarimetry may be used as a powerful tool for non-destructive and in situ characterization of the wavelength-scale microstructures of silk based materials.
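
    The FDH reduction itself is simple to demonstrate: the sketch below histograms a synthetic 2D image (a stand-in for a Mueller matrix element, not real polarimetry data) and computes the low-order central moments that summarize the distribution, in the spirit of the parameters proposed above.

        # Reduce a 2-D image to its frequency distribution histogram (FDH)
        # and the first central moments. The image is synthetic noise.
        import numpy as np

        rng = np.random.default_rng(3)
        img = rng.normal(loc=0.2, scale=0.05, size=(256, 256))

        hist, edges = np.histogram(img, bins=100, density=True)
        x = 0.5 * (edges[:-1] + edges[1:])
        dx = np.diff(edges)
        mean = np.sum(x * hist * dx)
        var = np.sum((x - mean) ** 2 * hist * dx)            # 2nd central moment
        skew = np.sum((x - mean) ** 3 * hist * dx) / var ** 1.5
        print(f"FDH mean {mean:.3f}, variance {var:.4f}, skewness {skew:.3f}")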

  17. Characterizing the Effects of Washing by Different Detergents on the Wavelength-Scale Microstructures of Silk Samples Using Mueller Matrix Polarimetry

    PubMed Central

    Dong, Yang; He, Honghui; He, Chao; Zhou, Jialing; Zeng, Nan; Ma, Hui

    2016-01-01

    Silk fibers suffer from microstructural changes due to various external environmental conditions including daily washings. In this paper, we take the backscattering Mueller matrix images of silk samples for non-destructive and real-time quantitative characterization of the wavelength-scale microstructure and examination of the effects of washing by different detergents. The 2D images of the 16 Mueller matrix elements are reduced to the frequency distribution histograms (FDHs) whose central moments reveal the dominant structural features of the silk fibers. A group of new parameters are also proposed to characterize the wavelength-scale microstructural changes of the silk samples during the washing processes. Monte Carlo (MC) simulations are carried out to better understand how the Mueller matrix parameters are related to the wavelength-scale microstructure of silk fibers. The good agreement between experiments and simulations indicates that the Mueller matrix polarimetry and FDH based parameters can be used to quantitatively detect the wavelength-scale microstructural features of silk fibers. Mueller matrix polarimetry may be used as a powerful tool for non-destructive and in situ characterization of the wavelength-scale microstructures of silk based materials. PMID:27517919

  18. Monitoring temporal microstructural variations of skeletal muscle tissues by multispectral Mueller matrix polarimetry

    NASA Astrophysics Data System (ADS)

    Dong, Yang; He, Honghui; He, Chao; Ma, Hui

    2017-02-01

    Mueller matrix polarimetry is a powerful tool for detecting microscopic structures and can therefore be used to monitor physiological changes of tissue samples. Meanwhile, spectral features of scattered light can also provide abundant microstructural information about tissues. In this paper, we take 2D multispectral backscattering Mueller matrix images of bovine skeletal muscle tissues and analyze their temporal variation using multispectral Mueller matrix parameters. The 2D images of the Mueller matrix elements are reduced to multispectral frequency distribution histograms (mFDHs) to reveal the dominant structural features of the muscle samples more clearly. For quantitative analysis, the multispectral Mueller matrix transformation (MMT) parameters are calculated to characterize the microstructural variations during the rigor mortis and proteolysis processes of the skeletal muscle tissue samples. The experimental results indicate that the multispectral MMT parameters can be used to distinguish different physiological stages of bovine skeletal muscle tissue within 24 hours; combined with the multispectral technique, Mueller matrix polarimetry and FDH analysis can monitor the microstructural variation features of skeletal muscle samples. These techniques may be used for quick assessment and quantitative monitoring of meat quality in the food industry.

  19. Membrane cytochromes of Escherichia coli chl mutants.

    PubMed Central

    Hackett, N R; Bragg, P D

    1983-01-01

    The cytochromes present in the membranes of Escherichia coli cells having defects in the formate dehydrogenase-nitrate reductase system have been analyzed by spectroscopic, redox titration, and enzyme fractionation techniques. Four phenotypic classes differing in cytochrome composition were recognized. Class I is represented by strains with defects in the synthesis or insertion of molybdenum cofactor. Cytochromes of the formate dehydrogenase-nitrate reductase pathway are present. Class II strains map in the chlC-chlI region. The cytochrome associated with nitrate reductase (cytochrome bnr) is absent in these strains, whereas that associated with formate dehydrogenase (cytochrome bfdh) is the major cytochrome in the membranes. Class III strains lack both cytochromes bfdh and bnr but overproduce cytochrome d of the aerobic pathway even under anaerobic conditions in the presence of nitrate. Class III strains have defects in the regulation of cytochrome synthesis. An fdhA mutant produced cytochrome bnr but lacked cytochrome bfdh. These results support the view that chlI (narI) is the structural gene for cytochrome bnr and that chlC (narG) and chlI(narI) are in the same operon, and they provide evidence of the complexity of the regulation of cytochrome synthesis. PMID:6302081

  20. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
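
    For readers unfamiliar with FISTA, the following is a generic sketch of its accelerated proximal step on a synthetic l1-regularized least-squares problem; the sensitivity matrix and data are random stand-ins, not an ECT forward model, and the split Bregman outer loop is omitted.

        # FISTA: ISTA's proximal step plus a Nesterov momentum combination.
        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(60, 150))            # stand-in sensitivity matrix
        x_true = np.zeros(150)
        x_true[[10, 80]] = [1.0, -1.0]
        b = A @ x_true
        lam = 0.05
        L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient

        def prox_l1(v, t):                        # soft-thresholding operator
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        x = np.zeros(150)
        y, t = x.copy(), 1.0
        for _ in range(300):
            x_new = prox_l1(y - (A.T @ (A @ y - b)) / L, lam / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new

        print("nonzeros found:", np.flatnonzero(np.abs(x) > 0.1))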

  1. Simultaneous scanning tunneling microscopy and synchrotron X-ray measurements in a gas environment.

    PubMed

    Mom, Rik V; Onderwaater, Willem G; Rost, Marcel J; Jankowski, Maciej; Wenzel, Sabine; Jacobse, Leon; Alkemade, Paul F A; Vandalon, Vincent; van Spronsen, Matthijs A; van Weeren, Matthijs; Crama, Bert; van der Tuijn, Peter; Felici, Roberto; Kessels, Wilhelmus M M; Carlà, Francesco; Frenken, Joost W M; Groot, Irene M N

    2017-11-01

    A combined X-ray and scanning tunneling microscopy (STM) instrument is presented that enables the local detection of X-ray absorption on surfaces in a gas environment. To suppress the collection of ion currents generated in the gas phase, coaxially shielded STM tips were used. The conductive outer shield of the coaxial tips can be biased to deflect ions away from the tip core. When tunneling, the X-ray-induced current is separated from the regular, 'topographic' tunneling current using a novel high-speed separation scheme. We demonstrate the capabilities of the instrument by measuring the local X-ray-induced current on Au(1 1 1) in 800 mbar Ar.

  2. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately, by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer artifacts compared to state-of-the-art methods.

  3. Increasing cellular coverage within integrated terrestrial/satellite mobile networks

    NASA Technical Reports Server (NTRS)

    Castro, Jonathan P.

    1995-01-01

    When applying the hierarchical cellular concept, the satellite acts as a giant umbrella cell covering a region containing some terrestrial cells. If a mobile terminal traversing the region arrives at the boundary of regular cellular ground service, a network transition occurs and the satellite system continues the mobile coverage. To adequately assess the service boundaries of a mobile satellite system and a cellular network within an integrated environment, this paper provides an optimized scheme to predict when a network transition may be necessary. Under the assumption of a classified propagation phenomenon and lognormal shadowing, the study applies an analytical approach to estimate the location of a mobile terminal based on reception of the signal strength emitted by a base station.
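
    A toy version of the underlying calculation: under lognormal shadowing, the probability that the received power drops below a handover threshold follows from the Gaussian tail of the shadowing term. The path-loss model and all parameter values below are illustrative placeholders, not those of the paper.

        # Outage probability under lognormal shadowing (toy path-loss model).
        import math

        def outage_prob(d_km, p_tx_dbm=43.0, thresh_dbm=-100.0,
                        exponent=3.5, sigma_db=8.0):
            """P(received power < threshold) with lognormal shadowing."""
            path_loss_db = 120.0 + 10.0 * exponent * math.log10(d_km)
            mean_rx = p_tx_dbm - path_loss_db
            z = (thresh_dbm - mean_rx) / sigma_db
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        for d in (1.0, 3.0, 5.0, 8.0):
            print(f"d = {d:4.1f} km -> outage {outage_prob(d):.2%}")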

  4. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the chi-squared-based estimate is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This guarantees an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Programs No. 4 of the Presidium RAS and No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and Integration projects of SD RAS No. 8 and 35. Our studies are in line with the goals of COST Action ES1004.
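
    The matrix sweep method mentioned above is the classical Thomas algorithm for tridiagonal systems; a generic sketch (with arbitrary test coefficients rather than a convection-diffusion discretization) is:

        # Thomas algorithm: direct O(n) solver for tridiagonal systems, as
        # arise in each 1-D convection-diffusion splitting stage.
        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                     # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):            # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 8
        a = np.full(n, -1.0); a[0] = 0.0      # sub-diagonal (a[0] unused)
        b = np.full(n, 2.0)                   # main diagonal
        c = np.full(n, -1.0); c[-1] = 0.0     # super-diagonal (c[-1] unused)
        d = np.ones(n)
        x = thomas(a, b, c, d)
        M = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print("max residual:", np.max(np.abs(M @ x - d)))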

  5. Unimodular Gravity and General Relativity UV divergent contributions to the scattering of massive scalar particles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Martin, S.; Martin, C. P.

    2018-01-01

    We work out the one-loop and order κ²m_φ² UV divergent contributions, coming from Unimodular Gravity and General Relativity, to the S matrix element of the scattering process φ + φ → φ + φ in a λφ⁴ theory with mass m_φ. We show that both Unimodular Gravity and General Relativity give rise to the same UV divergent contributions in Dimensional Regularization. This seems to be at odds with the known result that in a multiplicative MS dimensional regularization scheme the General Relativity corrections, in the de Donder gauge, to the beta function β_λ of the λ coupling do not vanish, whereas the Unimodular Gravity corrections, in a certain gauge, do vanish. Actually, by comparing the UV divergent contributions calculated in this paper with those which give rise to the non-vanishing gravitational corrections to β_λ, one readily concludes that the UV divergent contributions that yield the just-mentioned non-vanishing gravitational corrections to β_λ do not contribute to the UV divergent behaviour of the S matrix element of φ + φ → φ + φ. This shows that any physical consequence—such as the existence of asymptotic freedom due to gravitational interactions—drawn from the value of β_λ is not physically meaningful.

  6. Hypothesis of Lithocoding: Origin of the Genetic Code as a "Double Jigsaw Puzzle" of Nucleobase-Containing Molecules and Amino Acids Assembled by Sequential Filling of Apatite Mineral Cellules.

    PubMed

    Skoblikow, Nikolai E; Zimin, Andrei A

    2016-05-01

    The hypothesis of direct coding, which assumes direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a mineral with a regular crystal structure, is proposed. The coding nucleobase-containing molecules in each cellule (named a "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the filling side chain. Apatite-group minerals are considered the most suitable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules, is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility of coding by a single combination of them for a group of stereochemically similar amino acids. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation"), preventing a shift of the coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, the abiogenic synthesis of organic molecules, and the polymerization events are considered within the framework of the proposed "volcanic scenario".

  7. Hydrologic analysis of the challenges facing water resources and sustainable development of Wadi Feiran basin, southern Sinai, Egypt

    NASA Astrophysics Data System (ADS)

    Ahmed, Ayman A.; Diab, Maghawri S.

    2018-04-01

    Wadi Feiran basin is one of the most promising areas in southern Sinai (Egypt) for establishing new communities and for growth in agriculture, tourism, and industry. The present challenges against development include water runoff hazards (flash flooding), the increasing water demand, and water scarcity and contamination. These challenges could be mitigated by efficient use of runoff and rainwater through appropriate management, thereby promoting sustainable development. Strategies include the mitigation of runoff hazards and promoting the natural and artificial recharge of aquifers. This study uses a watershed modeling system, geographic information system, and classification scheme to predict the effects of various mitigation options on the basin's water resources. Rainwater-harvesting techniques could save more than 77% of the basin's runoff (by volume), which could be used for storage and aquifer recharge. A guide map is provided that shows possible locations for the proposed mitigation options in the study basin. Appropriate measures should be undertaken urgently: mitigation of groundwater contamination (including effective sewage effluent management); regular monitoring of the municipal, industrial and agricultural processes that release contaminants; rationalization and regulation of the application of agro-chemicals to farmland; and regular monitoring of contaminants in groundwater. Stringent regulations should be implemented to prevent wastewater disposal to the aquifers in the study area.

  8. Massive photons: An infrared regularization scheme for lattice QCD + QED

    DOE PAGES

    Endres, Michael G.; Shindler, Andrea; Tiburzi, Brian C.; ...

    2016-08-10

    The commonly adopted approach for including electromagnetic interactions in lattice QCD simulations relies on using finite volume as the infrared regularization for QED. The long-range nature of the electromagnetic interaction, however, implies that physical quantities are susceptible to power-law finite volume corrections, which must be removed by performing costly simulations at multiple lattice volumes, followed by an extrapolation to the infinite volume limit. In this work, we introduce a photon mass as an alternative means for gaining control over infrared effects associated with electromagnetic interactions. We present findings for hadron mass shifts due to electromagnetic interactions (i.e., for the proton, neutron, charged and neutral kaon) and corresponding mass splittings, and compare the results with those obtained from conventional QCD+QED calculations. Results are reported for numerical studies of three-flavor electroquenched QCD using ensembles corresponding to 800 MeV pions, ensuring that the only appreciable volume corrections arise from QED effects. The calculations are performed with three lattice volumes with spatial extents ranging from 3.4 to 6.7 fm. We find that for equal computing time (not including the generation of the lattice configurations), the electromagnetic mass shifts can be extracted from computations on a single (our smallest) lattice volume with comparable or better precision than the conventional approach.

  9. Quantum implications of a scale invariant regularization

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2018-04-01

    We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at the three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of φ) by the associated Goldstone mode (the dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (φ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators φ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of φ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).

  10. On the chiral magnetic effect in Weyl superfluid 3He-A

    NASA Astrophysics Data System (ADS)

    Volovik, G. E.

    2017-01-01

    In the theory of the chiral anomaly in relativistic quantum field theories (RQFTs), some results depend on the regularization scheme in the ultraviolet. In the chiral superfluid 3He-A, which contains two Weyl points and also experiences the effects of the chiral anomaly, the "trans-Planckian" physics is known and the results can be obtained without regularization. We discuss this using the example of the chiral magnetic effect (CME), which was observed in 3He-A in the 1990s [1]. There are two forms of the contribution of the CME to the Chern-Simons term in the free energy, perturbative and non-perturbative. The perturbative term comes from the fermions living in the vicinity of the Weyl points, where the fermions are "relativistic" and obey the Weyl equation. The non-perturbative term originates from the deep vacuum, being determined by the separation of the two Weyl points in momentum space. Both terms are obtained using the Adler-Bell-Jackiw equation for the chiral anomaly, and both agree with the results of microscopic calculations in the "trans-Planckian" region. The existence of two nonequivalent forms of the Chern-Simons term demonstrates that results obtained within RQFT depend on the specific properties of the underlying quantum vacuum and may reflect different physical phenomena in the same vacuum.

  11. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.

  12. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    PubMed

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

    Electrical impedance tomography (EIT) produces an image of the internal conductivity distribution of a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and the image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and the robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing data with known electrode errors, which require rigorous regularization and cause reconstructions with an ℓ2-norm solution to fail. The results showed that an ℓ1 solution is not only more robust to the unavoidable measurement errors of a clinical setting, but also provides high contrast resolution at organ boundaries.

  13. Fault Diagnosis Strategies for SOFC-Based Power Generation Plants

    PubMed Central

    Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea

    2016-01-01

    The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make it very difficult to identify damaged plant components starting from the physical variables measured in the plant. In this context, we assess two classical FDI strategies (model-based with a fault signature matrix, and data-driven with statistical classification) and their combination. For this assessment, a quantitative model of the SOFC-based plant, which is able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations, due to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty of performing such measurements.
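
    As a minimal illustration of the data-driven half of such a hybrid scheme, the sketch below trains a random-forest classifier to separate "regular" from "faulty" states; the six monitored variables and the fault signature are synthetic stand-ins, not SOFC plant simulations.

        # Random-forest discrimination of regular vs. faulty plant states.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        X_ok = rng.normal(0.0, 1.0, size=(500, 6))      # 6 monitored variables
        X_fault = rng.normal(0.8, 1.3, size=(500, 6))   # shifted/spread when faulty
        X = np.vstack([X_ok, X_fault])
        y = np.array([0] * 500 + [1] * 500)             # 0 = regular, 1 = faulty

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("detection accuracy:", clf.score(X_te, y_te))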

  14. Unified Alignment of Protein-Protein Interaction Networks.

    PubMed

    Malod-Dognin, Noël; Ban, Kristina; Pržulj, Nataša

    2017-04-19

    Paralleling the increasing availability of protein-protein interaction (PPI) network data, several network alignment methods have been proposed. Network alignments have been used to uncover functionally conserved network parts and to transfer annotations. However, due to the computational intractability of the network alignment problem, aligners are heuristics that provide divergent solutions, and no consensus exists on a gold standard or on which scoring scheme should be used to evaluate alignments. We comprehensively evaluate the alignment scoring schemes and global network aligners on large-scale PPI data and observe that three methods, HUBALIGN, L-GRAAL and NATALIE, regularly produce the most topologically and biologically coherent alignments. We study the collective behaviour of network aligners and observe that PPI networks are almost entirely aligned with a handful of aligners that we unify into a new tool, Ulign. Ulign enables complete alignment of two networks, which traditional global and local aligners fail to do. Also, the multiple mappings of Ulign define biologically relevant soft clusterings of proteins in PPI networks, which may be used for refining the transfer of annotations across networks. Hence, PPI networks are already well investigated by current aligners, so to gain additional biological insights a paradigm shift is needed. We propose that such a shift should come from aligning all available data types collectively rather than any particular data type in isolation from the others.

  15. Theoretical and experimental study on multimode optical fiber grating

    NASA Astrophysics Data System (ADS)

    Yunming, Wang; Jingcao, Dai; Mingde, Zhang; Xiaohan, Sun

    2005-06-01

    The characteristics of multimode optical fiber Bragg gratings (MMFBGs) are studied theoretically and experimentally. For the first time, the analysis of an MMFBG based on a novel quasi-three-dimensional (Q-3D) finite-difference time-domain beam propagation method (Q-FDTD-BPM) is described: separating the angular component of the vector field solution in the cylindrical coordinate system yields several discrete two-dimensional (2D) equations, which simplify the 3D equations. These equations are then developed using an alternating-direction implicit method and the generalized Douglas scheme, which achieves higher accuracy than the regular FD scheme. The 2D solutions for the field intensities, weighted by power coefficients for the different angular mode order numbers, are summed to obtain the 3D field distributions in the MMFBG. The presented method is demonstrated to be a suitable simulation tool for analyzing MMFBGs. In addition, based on hydrogen loading and phase mask techniques, a series of Bragg gratings were written into a multimode silica optical fiber that had been hydrogen-loaded for a month, and their spectra were measured, showing good agreement with the simulation results. Group delay/differential group delay spectra were obtained using an Agilent 81910A Photonic All-Parameter Analyzer.

  16. Implementation of non-axisymmetric mesh system in the gyrokinetic PIC code (XGC) for Stellarators

    NASA Astrophysics Data System (ADS)

    Moritaka, Toseo; Hager, Robert; Cole, Micheal; Chang, Choong-Seock; Lazerson, Samuel; Ku, Seung-Hoe; Ishiguro, Seiji

    2017-10-01

    Gyrokinetic simulation is a powerful tool to investigate turbulent and neoclassical transport based on the first principles of plasma kinetics. The gyrokinetic PIC code XGC has been developed for integrated simulations that cover the entire region of tokamaks. Complicated field-line and boundary structures must be taken into account to demonstrate edge plasma dynamics under the influence of the X-point and vessel components. XGC employs a gyrokinetic Poisson solver on an unstructured triangle mesh to deal with this difficulty. We introduce numerical schemes newly developed for XGC simulation in non-axisymmetric stellarator geometry. Triangle meshes in each poloidal plane are defined by the PEST poloidal angle in the VMEC equilibrium so that they have the same regular structure in the straight-field-line coordinate. The electric charge of a marker particle is distributed to the triangles specified by the field-following projection to the neighboring poloidal planes. 3D spline interpolation on a cylindrical mesh is also used to obtain the equilibrium magnetic field at the particle position. These schemes capture the anisotropic plasma dynamics and the resulting potential structure with high accuracy. The triangle meshes can connect smoothly to unstructured meshes in the edge region. We will present a validation test in the core region of the Large Helical Device and discuss future challenges toward edge simulations.

  17. Pion-nucleon scattering in covariant baryon chiral perturbation theory with explicit Delta resonances

    NASA Astrophysics Data System (ADS)

    Yao, De-Liang; Siemens, D.; Bernard, V.; Epelbaum, E.; Gasparyan, A. M.; Gegelia, J.; Krebs, H.; Meißner, Ulf-G.

    2016-05-01

    We present the results of a third-order calculation of the pion-nucleon scattering amplitude in a chiral effective field theory with pions, nucleons and delta resonances as explicit degrees of freedom. We work in a manifestly Lorentz invariant formulation of baryon chiral perturbation theory using dimensional regularization and the extended on-mass-shell renormalization scheme. In the delta resonance sector, the on-mass-shell renormalization is realized as a complex-mass scheme. By fitting the low-energy constants of the effective Lagrangian to the S- and P-partial waves, a satisfactory description of the phase shifts from the analysis of the Roy-Steiner equations is obtained. We predict the phase shifts for the D and F waves and compare them with the results of the analysis of the George Washington University group. The threshold parameters are calculated both in the delta-less and delta-full cases. Based on the determined low-energy constants, we discuss the pion-nucleon sigma term. Additionally, in order to determine the strangeness content of the nucleon, we calculate the octet baryon masses in the presence of decuplet resonances up to next-to-next-to-leading order in SU(3) baryon chiral perturbation theory. The octet baryon sigma terms are predicted as a byproduct of this calculation.

  18. Cognitive Achievement and Motivation in Hands-on and Teacher-Centred Science Classes: Does an additional hands-on consolidation phase (concept mapping) optimise cognitive learning at work stations?

    NASA Astrophysics Data System (ADS)

    Gerstner, Sabine; Bogner, Franz X.

    2010-05-01

    Our study monitored the cognitive and motivational effects within different educational instruction schemes: On the one hand, teacher-centred versus hands-on instruction; on the other hand, hands-on instruction with and without a knowledge consolidation phase (concept mapping). All the instructions dealt with the same content. For all participants, the hands-on approach as well as the concept mapping adaptation were totally new. Our hands-on approach followed instruction based on "learning at work stations". A total of 397 high-achieving fifth graders participated in our study. We used a pre-test, post-test, retention test design both to detect students' short-term learning success and long-term learning success, and to document their decrease rates of newly acquired knowledge. Additionally, we monitored intrinsic motivation. Although the teacher-centred approach provided higher short-term learning success, hands-on instruction resulted in relatively lower decrease rates. However, after six weeks, all students reached similar levels of newly acquired knowledge. Nevertheless, concept mapping as a knowledge consolidation phase positively affected short-term increase in knowledge. Regularly placed in instruction, it might increase long-term retention rates. Scores of interest, perceived competence and perceived choice were very high in all the instructional schemes.

  19. A study on quantifying COPD severity by combining pulmonary function tests and CT image analysis

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2011-03-01

    This paper describes a novel method that can evaluate chronic obstructive pulmonary disease (COPD) severity by combining measurements from pulmonary function tests with measurements obtained from CT image analysis. There is no cure for COPD. However, with regular medical care and consistent patient compliance with treatments and lifestyle changes, the symptoms of COPD can be minimized and progression of the disease can be slowed. Therefore, many diagnosis methods based on CT image analysis have been proposed for quantifying COPD. Most diagnosis methods for COPD extract the lesions as low-attenuation areas (LAA) by thresholding and evaluate COPD severity by calculating the proportion of LAA in the lung (LAA%). However, COPD is usually the result of a combination of two conditions, emphysema and chronic obstructive bronchitis. Therefore, previous methods based only on LAA% do not work well. The proposed method utilizes both kinds of information, the measurements from pulmonary function tests and the results of chest CT image analysis, to evaluate COPD severity. In this paper, we utilize a multi-class AdaBoost classifier to combine the two kinds of information and classify the COPD severity into five stages automatically. The experimental results revealed that the accuracy rate of the proposed method was 88.9% (resubstitution scheme) and 64.4% (leave-one-out scheme).
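
    A minimal sketch of the classification step under stated assumptions: synthetic feature vectors (a pulmonary-function value and a CT-derived LAA%, generated here with arbitrary stage-dependent trends) are mapped to five severity stages by a multi-class AdaBoost classifier from scikit-learn. This is not the authors' implementation or data.

        # Multi-class AdaBoost mapping combined features to 5 severity stages.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n = 250
        stage = rng.integers(0, 5, size=n)                  # severity stages 0..4
        fev1 = 100 - 15 * stage + rng.normal(0, 8, n)       # toy % predicted FEV1
        laa = 3 + 7 * stage + rng.normal(0, 5, n)           # toy LAA% from CT
        X = np.column_stack([fev1, laa])

        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        print("CV accuracy:", cross_val_score(clf, X, stage, cv=5).mean())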

  20. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
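
    A toy version of the basic idea, under stated assumptions: a synthetic biased forecast (not Lorenz '63 output) is corrected by OLS regression against observations, and a Tikhonov-regularized (ridge) variant shows how the regularization enters; TDTR's time dependence is not reproduced here.

        # Forecast post-processing by OLS vs. Tikhonov-regularized regression.
        import numpy as np

        rng = np.random.default_rng(7)
        truth = rng.normal(size=300)
        forecast = 0.8 * truth + 0.4 + 0.3 * rng.normal(size=300)  # biased model

        X = np.column_stack([np.ones_like(forecast), forecast])
        beta_ols = np.linalg.solve(X.T @ X, X.T @ truth)
        lam = 5.0                                        # toy Tikhonov weight
        beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ truth)

        for name, b in (("OLS", beta_ols), ("ridge", beta_ridge)):
            err = truth - X @ b
            print(f"{name:6s} corrected RMSE: {np.sqrt(np.mean(err**2)):.3f}")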
