Science.gov

Sample records for solution-phase parallel synthesis

  1. Sucrose and KF quenching system for solution phase parallel synthesis.

    PubMed

    Chavan, Sunil; Watpade, Rahul; Toche, Raghunath

    2016-01-01

    KF and sucrose (table sugar) were exploited as a quenching system for solution-phase parallel synthesis. Excess electrophiles were covalently trapped by the hydroxyl functionality of sucrose and, owing to the polar nature of the resulting sucrose derivatives, were solubilized in water. Potassium fluoride was used to convert excess electrophilic reagents such as acid chlorides, sulfonyl chlorides, and isocyanates to the corresponding fluorides, which are less susceptible to hydrolysis; sucrose then traps these fluorides and dissolves them in water, removing them from the reaction mixture. Excess acid chlorides, sulfonyl chlorides, and isocyanates were quenched successfully to give pure products in excellent yields. PMID:27462506

  2. Functionalized Polymers-Emerging Versatile Tools for Solution-Phase Chemistry and Automated Parallel Synthesis.

    PubMed

    Kirschning, Andreas; Monenschein, Holger; Wittenberg, Rüdiger

    2001-02-16

    As part of the dramatic changes associated with the need for preparing compound libraries in pharmaceutical and agrochemical research laboratories, industry searches for new technologies that allow for the automation of synthetic processes. Since the pioneering work by Merrifield, polymeric supports have been identified as playing a key role in this field; however, polymer-assisted solution-phase synthesis, which utilizes immobilized reagents and catalysts, has only recently begun to flourish. Polymer-assisted solution-phase synthesis has various advantages over conventional solution-phase chemistry, such as the ease of separation of the supported species from a reaction mixture by filtration and washing, the opportunity to use an excess of the reagent to force the reaction to completion without causing workup problems, and the adaptability to continuous-flow processes. Various strategies for employing functionalized polymers stoichiometrically have been developed. Apart from reagents that are covalently or ionically attached to the polymeric backbone and which are released into solution in the presence of a suitable substrate, scavenger reagents play an increasingly important role in purifying reaction mixtures. Employing functionalized polymers in solution-phase synthesis has been shown to be extremely useful in automated parallel synthesis and multistep sequences. So far, compound libraries containing as many as 88 members have been generated by using several polymer-bound reagents one after another. Furthermore, it has been demonstrated that complex natural products like the alkaloids (+/-)-oxomaritidine and (+/-)-epimaritidine can be prepared by sequences of five and six consecutive polymer-assisted steps, respectively, and the potent analgesic compound (+/-)-epibatidine in twelve linear steps, ten of which are based on functionalized polymers. These developments reveal the great future prospects of polymer-assisted solution-phase synthesis.

  3. Solution-Phase Parallel Synthesis of Acyclic Nucleoside Libraries of Purine, Pyrimidine, and Triazole Acetamides

    PubMed Central

    2015-01-01

    Molecular diversity plays a pivotal role in modern drug discovery against phenotypic or enzyme-based targets using high throughput screening technology. Under the auspices of the Pilot Scale Library Program of the NIH Roadmap Initiative, we produced and report herein a diverse library of 181 purine, pyrimidine, and 1,2,4-triazole-N-acetamide analogues, which were prepared in a parallel high throughput solution-phase reaction format. A set of assorted amines was reacted with several nucleic acid N-acetic acids utilizing HATU as the coupling reagent to produce diverse acyclic nucleoside N-acetamide analogues. These reactions were performed using 24-well reaction blocks and an automatic reagent-dispensing platform under an inert atmosphere. The targeted compounds were purified on an automated purification system using solid sample loading prepacked cartridges and prepacked silica gel columns. All compounds were characterized by NMR and HRMS and were analyzed for purity by HPLC before submission to the Molecular Libraries Small Molecule Repository (MLSMR) at NIH. Initial screening through the Molecular Libraries Probe Production Centers Network (MLPCN) program indicates that several analogues showed diverse and interesting biological activities. PMID:24933643

  4. Parallel solution-phase synthesis and general biological activity of a uridine antibiotic analog library.

    PubMed

    Moukha-chafiq, Omar; Reynolds, Robert C

    2014-05-12

    A small library of ninety-four uridine antibiotic analogs was synthesized, under the Pilot Scale Library (PSL) Program of the NIH Roadmap initiative, from amine 2 and carboxylic acids 33 and 77 in solution-phase fashion. Diverse aldehyde, sulfonyl chloride, and carboxylic acid reactant sets were condensed with 2, leading, after acid-mediated hydrolysis, to the targeted compounds 3-32 in good yields and high purity. Similarly, treatment of 33 with diverse amines and sulfonamides gave 34-75. The coupling of the amino terminus of d-phenylalanine methyl ester to the free 5'-carboxylic acid moiety of 33 followed by sodium hydroxide treatment led to carboxylic acid analog 77. Hydrolysis of this material gave analog 78. The intermediate 77 served as the precursor for the preparation of novel dipeptidyl uridine analogs 79-99 through peptide coupling reactions with diverse amine reactants. None of the described compounds show significant anticancer or antimalarial activity. A number of samples exhibited a variety of promising inhibitory, agonist, antagonist, or activator properties with enzymes and receptors in primary screens supplied and reported through the NIH MLPCN program. PMID:24661222

  5. Solution-phase microwave assisted parallel synthesis of N,N'-disubstituted thioureas derived from benzoic acid: biological evaluation and molecular docking studies.

    PubMed

    Rauf, Muhammad Khawar; Talib, Ammara; Badshah, Amin; Zaib, Sumera; Shoaib, Khurram; Shahid, Mohammad; Flörke, Ulrich; Imtiaz-ud-Din; Iqbal, Jamshed

    2013-01-01

    An efficient and facile microwave-assisted solution-phase parallel synthesis of a 26-member library of N,N'-disubstituted thiourea analogs was accomplished. The reaction time for synthesis of the analogs was drastically reduced from a reported 8-12 h to only 10 min. The compounds were more than 95% pure, as characterized by modern analytical techniques, i.e. ¹H and ¹³C NMR and FT-IR. Solid-state structural analysis was also performed by single-crystal XRD. The synthesized compounds were preliminarily screened for their in vitro urease inhibition and antifungal activity. Most of the compounds were found to be potent inhibitors of urease, and the most significant activity was found for 11, with an IC₅₀ of 1.67 μM. The docking scores correlate with the IC₅₀ values of the inhibitors. PMID:24185379

  6. Solution-phase parallel synthesis of a pharmacophore library of HUN-7293 analogues: a general chemical mutagenesis approach to defining structure-function properties of naturally occurring cyclic (depsi)peptides.

    PubMed

    Chen, Yan; Bilban, Melitta; Foster, Carolyn A; Boger, Dale L

    2002-05-15

    HUN-7293 (1), a naturally occurring cyclic heptadepsipeptide, is a potent inhibitor of cell adhesion molecule expression (VCAM-1, ICAM-1, E-selectin), the overexpression of which is characteristic of chronic inflammatory diseases. Representative of a general approach to defining structure-function relationships of such cyclic (depsi)peptides, the parallel synthesis and evaluation of a complete library of key HUN-7293 analogues are detailed enlisting solution-phase techniques and simple acid-base liquid-liquid extractions for isolation and purification of intermediates and final products. Significant to the design of the studies and unique to solution-phase techniques, the library was assembled superimposing a divergent synthetic strategy onto a convergent total synthesis. An alanine scan and N-methyl deletion of each residue of the cyclic heptadepsipeptide identified key sites responsible for or contributing to the biological properties. The simultaneous preparation of a complete set of individual residue analogues further simplifying the structure allowed an assessment of each structural feature of 1, providing a detailed account of the structure-function relationships in a single study. Within this pharmacophore library prepared by systematic chemical mutagenesis of the natural product structure, simplified analogues possessing comparable potency and, in some instances, improved selectivity were identified. One potent member of this library proved to be an additional natural product in its own right, which we have come to refer to as HUN-7293B (8), being isolated from the microbial strain F/94-499709.

  7. Solution-Phase Synthesis of Dipeptides: A Capstone Project That Employs Key Techniques in an Organic Laboratory Course

    ERIC Educational Resources Information Center

    Marchetti, Louis; DeBoef, Brenton

    2015-01-01

    A contemporary approach to the synthesis and purification of several UV-active dipeptides has been developed for the second-year organic laboratory. This experiment exposes students to the important technique of solution-phase peptide synthesis and allows an instructor to highlight the parallel between what they are accomplishing in the laboratory…

  8. Solution-phase synthesis of nanomaterials at low temperature

    NASA Astrophysics Data System (ADS)

    Zhu, Yongchun; Qian, Yitai

    2009-01-01

    This paper reviews the solution-phase synthesis of nanoparticles via several low-temperature routes, such as room-temperature synthesis, wave-assisted synthesis (γ-irradiation and sonochemical routes), direct heating at low temperatures, and hydrothermal/solvothermal methods. A number of strategies were developed to control the shape, the size, and the dispersion of the nanostructures. Using diethylamine or n-butylamine as solvent, semiconductor nanorods were obtained. By hydrothermal treatment of amorphous colloids, Bi2S3 nanorods and Se nanowires were obtained. CdS nanowires were prepared in the presence of polyacrylamide. ZnS nanowires were obtained using liquid crystals. Poly(vinyl acetate) tubules acted as both nanoreactor and template for CdSe nanowire growth. Assisted by the surfactant sodium dodecyl benzenesulfonate (SDBS), nickel nanobelts were synthesized. In addition, Ag nanowires, Te nanotubes, and ZnO nanorod arrays could be prepared without adding any additives or templates.

  9. Fluorous tagging strategy for solution-phase synthesis of small molecules, peptides and oligosaccharides

    PubMed Central

    Zhang, Wei

    2005-01-01

    The purification of reaction mixtures is a slow process in organic synthesis, especially during the production of large numbers of analogs and compound libraries. Phase-tag methods, such as solid-phase synthesis and fluorous synthesis, provide efficient ways of addressing the separation issue. Fluorous synthesis employs functionalized perfluoroalkyl groups attached to substrates or reagents. The separation of the resulting fluorous molecules can be achieved using strong and selective fluorous liquid-liquid extraction, fluorous silica gel-based solid-phase extraction or high-performance liquid chromatography. Fluorous technology is a novel solution-phase method, which has the advantages of fast reaction times in homogeneous environments, being readily adaptable to literature conditions, having easy intermediate analysis, and having flexibility in reaction scale and scope. In principle, any synthetic method that uses a solid support could be conducted in solution phase by replacing the polymer linker with a corresponding fluorous tag. This review summarizes the progress of fluorous tags in the solution-phase synthesis of small molecules, peptides and oligosaccharides. PMID:15595439

  10. Synthesis of stable C-linked ferrocenyl amino acids and their use in solution-phase peptide synthesis.

    PubMed

    Philip, Anijamol T; Chacko, Shibin; Ramapanicker, Ramesh

    2015-12-01

    Incorporation of a ferrocenyl group into peptides is an efficient method to alter their hydrophobicity. The ferrocenyl group can also act as an electrochemical probe when incorporated into functional peptides. Most often, ferrocene is incorporated into peptides post-synthesis via amide, ester or triazole linkages. Stable amino acids containing ferrocene as a C-linked side chain are potentially useful building units for the synthesis of ferrocene-containing peptides. We report here an efficient route to synthesize ferrocene-containing amino acids that are stable and can be used in peptide synthesis. Coupling of 2-ferrocenyl-1,3-dithiane with iodides derived from aspartic acid or glutamic acid using n-butyllithium leads to the incorporation of a ferrocenyl unit at the δ-position or ε-position of an α-amino acid. Reduction or hydrolysis of the dithiane group yields an alkyl or an oxo derivative. The usability of the synthesized amino acids is demonstrated by incorporating one of them at both the C-terminus and the N-terminus of tripeptides in solution phase.

  11. Solution-phase-peptide synthesis via the group-assisted purification (GAP) chemistry without using chromatography and recrystallization.

    PubMed

    Wu, Jianbin; An, Guanghui; Lin, Siqi; Xie, Jianbo; Zhou, Wei; Sun, Hao; Pan, Yi; Li, Guigen

    2014-02-01

    The solution-phase synthesis of N-protected amino acids and peptides has been achieved through Group-Assisted Purification (GAP) chemistry, avoiding disadvantages of other methods such as difficult scale-up and the expense of solid and soluble polymers. GAP synthesis can reduce the use of solvents, silica gel, energy and manpower. In addition, the GAP auxiliary can be conveniently recovered for re-use, is environmentally benign, and substantially reduces waste production in academic labs and industry.

  12. A novel solution-phase route for the synthesis of crystalline silver nanowires

    SciTech Connect

    Liu Yang; Chu Ying; Yang Likun; Han Dongxue; Lue Zhongxian

    2005-10-06

    A unique solution-phase route was devised to synthesize crystalline Ag nanowires with a high aspect ratio (8-10 nm in diameter and length up to 10 μm) by the reduction of AgNO3 with vitamin C in SDS/ethanol solution. The resultant nanoproducts were characterized by transmission electron microscopy (TEM), X-ray diffraction (XRD) and electron diffraction (ED). A soft-template mechanism was put forward to interpret the formation of the metallic Ag nanowires.

  13. An Efficient Solution-Phase Synthesis of 4,5,7-Trisubstituted Pyrrolo[3,2-d]pyrimidines

    PubMed Central

    Zhang, Weihe; Liu, Jing; Stashko, Michael A.; Wang, Xiaodong

    2013-01-01

    We have developed an efficient and robust route to synthesize 4,5,7-trisubstituted pyrrolo[3,2-d]pyrimidines as potent kinase inhibitors. This solution-phase synthesis features an SNAr substitution reaction, a cross-coupling reaction, a one-pot reduction/reductive amination, and an N-alkylation reaction. These reactions occur rapidly with high yields and have broad substrate scope. A variety of groups can be selectively introduced into the N5 and C7 positions of 4,5,7-trisubstituted pyrrolopyrimidines at a late stage of the synthesis, thereby providing a highly efficient approach to explore the structure-activity relationships of pyrrolopyrimidine derivatives. Four synthetic analogs have been profiled against a panel of 48 kinases, and a new and selective FLT3 inhibitor, 9, was identified. PMID:23181516

  14. Solution phase synthesis of short oligoribonucleotides on a precipitative tetrapodal support

    PubMed Central

    Gimenez Molina, Alejandro; Jabgunde, Amit M; Virta, Pasi

    2014-01-01

    An effective method for the synthesis of short oligoribonucleotides in solution has been elaborated. Novel 2'-O-(2-cyanoethyl)-5'-O-(1-methoxy-1-methylethyl)-protected ribonucleoside 3'-phosphoramidites have been prepared and their usefulness as building blocks in RNA synthesis on a soluble support has been demonstrated. As a proof of concept, a pentameric oligoribonucleotide, 3'-UUGCA-5', has been prepared on a precipitative tetrapodal tetrakis(4-azidomethylphenyl)pentaerythritol support. The 3'-terminal nucleoside was coupled to the support as a 3'-O-(4-pentynoyl) derivative by Cu(I)-promoted 1,3-dipolar cycloaddition. Couplings were carried out with 1.5 equiv of the building block. In each coupling cycle, the small-molecule reagents and byproducts were removed by two quantitative precipitations from MeOH, one after oxidation and the second after the 5'-deprotection. After completion of the chain assembly, treatment with triethylamine, ammonia and TBAF released the pentamer in high yield. PMID:25298795

  15. Strong interactive growth behaviours in solution-phase synthesis of three-dimensional metal oxide nanostructures

    NASA Astrophysics Data System (ADS)

    Lee, Jung Min; No, You-Shin; Kim, Sungwoong; Park, Hong-Gyu; Park, Won Il

    2015-02-01

    Wet-chemical synthesis is a promising alternative to the conventional vapour-phase method owing to its advantages for commercial-scale production at low cost. Studies on nanocrystallization in solution have suggested that growth rate is commonly affected by the size and density of surrounding crystals. However, systematic investigation of the mutual interaction among neighbouring crystals is still lacking. Here we report on strong interactive growth behaviours observed during the anisotropic growth of zinc oxide hexagonal nanorod arrays. In particular, we found multiple growth regimes in which the height of a rod depends on its diameter: local interactions among the growing rods result in cases where the height is independent of the diameter, increases with increasing diameter, or is inversely proportional to the diameter. These phenomena originate from material diffusion and the size-dependent Gibbs-Thomson effect, which are universally applicable to a variety of material systems, thereby providing bottom-up strategies for diverse three-dimensional nanofabrication.
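
    For context, the size-dependent Gibbs-Thomson effect invoked above is commonly expressed (in its standard Ostwald-Freundlich form, not quoted from this abstract) as the curvature dependence of the equilibrium solute concentration at a crystal surface,

      c(r) = c_\infty \exp\!\left(\frac{2\gamma V_m}{r R T}\right),

    where r is the crystal radius, c_\infty the bulk solubility, \gamma the surface energy, V_m the molar volume, R the gas constant and T the temperature. Smaller rods thus sustain a higher local equilibrium concentration, which, together with material diffusion between neighbours, is consistent with the diameter-dependent growth regimes reported here.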

  16. Automated Solution-Phase Synthesis of Insect Glycans to Probe the Binding Affinity of Pea Enation Mosaic Virus

    PubMed Central

    2016-01-01

    Pea enation mosaic virus (PEMV)—a plant RNA virus transmitted exclusively by aphids—causes disease in multiple food crops. However, the aphid-virus interactions required for disease transmission are poorly understood. For virus transmission, PEMV binds to a heavily glycosylated receptor aminopeptidase N in the pea aphid gut and is transcytosed across the gut epithelium into the aphid body cavity prior to release in saliva as the aphid feeds. To investigate the role of glycans in PEMV–aphid interactions and explore the possibility of viral control through blocking a glycan interaction, we synthesized insect N-glycan terminal trimannosides by automated solution-phase synthesis. The route features a mannose building block with C-5 ester enforcing a β-linkage, which also provides a site for subsequent chain extension. The resulting insect N-glycan terminal trimannosides with fluorous tags were used in a fluorous microarray to analyze binding with fluorescein isothiocyanate-labeled PEMV; however, no specific binding between the insect glycan and PEMV was detected. To confirm these microarray results, we removed the fluorous tag from the trimannosides for isothermal titration calorimetry studies with unlabeled PEMV. The ITC studies confirmed the microarray results and suggested that this particular glycan–PEMV interaction is not involved in virus uptake and transport through the aphid. PMID:26457763

  17. Automated Solution-Phase Synthesis of Insect Glycans to Probe the Binding Affinity of Pea Enation Mosaic Virus.

    PubMed

    Tang, Shu-Lun; Linz, Lucas B; Bonning, Bryony C; Pohl, Nicola L B

    2015-11-01

    Pea enation mosaic virus (PEMV)--a plant RNA virus transmitted exclusively by aphids--causes disease in multiple food crops. However, the aphid-virus interactions required for disease transmission are poorly understood. For virus transmission, PEMV binds to a heavily glycosylated receptor aminopeptidase N in the pea aphid gut and is transcytosed across the gut epithelium into the aphid body cavity prior to release in saliva as the aphid feeds. To investigate the role of glycans in PEMV-aphid interactions and explore the possibility of viral control through blocking a glycan interaction, we synthesized insect N-glycan terminal trimannosides by automated solution-phase synthesis. The route features a mannose building block with C-5 ester enforcing a β-linkage, which also provides a site for subsequent chain extension. The resulting insect N-glycan terminal trimannosides with fluorous tags were used in a fluorous microarray to analyze binding with fluorescein isothiocyanate-labeled PEMV; however, no specific binding between the insect glycan and PEMV was detected. To confirm these microarray results, we removed the fluorous tag from the trimannosides for isothermal titration calorimetry studies with unlabeled PEMV. The ITC studies confirmed the microarray results and suggested that this particular glycan-PEMV interaction is not involved in virus uptake and transport through the aphid. PMID:26457763

  19. Solution-phase synthesis and photoluminescence characterization of quaternary Cu2ZnSnS4 nanocrystals

    SciTech Connect

    Hamanaka, Yasushi; Tsuzuki, Masakazu; Ozawa, Kohei; Kuzuya, Toshihiro

    2013-12-04

    Cu2ZnSnS4 (CZTS) nanocrystals were synthesized via a solution-phase route and their lattice defects were investigated by photoluminescence measurements. Ionization energies of the defect levels were estimated to be 10 and 72 meV from the thermal quenching behavior of the photoluminescence spectra. These values are quite different from those experimentally estimated for vapor-grown CZTS films and crystals and theoretically calculated for bulk CZTS. The results indicate that the defects are characteristic of CZTS nanocrystals synthesized in the solution phase.

  20. Preparation of a disulfide-linked precipitative soluble support for solution-phase synthesis of trimeric oligodeoxyribonucleotide 3'-(2-chlorophenylphosphate) building blocks

    PubMed Central

    Molina, Alejandro Gimenez; Virta, Pasi; Lönnberg, Harri

    2015-01-01

    The preparation of a disulfide-tethered precipitative soluble support and its use for the solution-phase synthesis of trimeric oligodeoxyribonucleotide 3'-(2-chlorophenylphosphate) building blocks is described. To obtain the building blocks, N-acyl-protected 2'-deoxy-5'-O-(4,4'-dimethoxytrityl)ribonucleosides were phosphorylated with bis(benzotriazol-1-yl) 2-chlorophenyl phosphate. The “outdated” phosphotriester strategy, based on coupling of P(V) building blocks in conjunction with quantitative precipitation of the oligodeoxyribonucleotide with MeOH, is applied. Subsequent release of the resulting phosphate- and base-protected oligodeoxyribonucleotide trimer 3'-pTpdC(Bz)pdG(iBu)-5' as its 3'-(2-chlorophenyl phosphate) was achieved by reductive cleavage of the disulfide bond. PMID:26664575

  1. Organometallic complexes with biological molecules. XVIII. Alkyltin(IV) cephalexinate complexes: synthesis, solid state and solution phase investigations.

    PubMed

    Di Stefano, R; Scopelliti, M; Pellerito, C; Casella, G; Fiore, T; Stocco, G C; Vitturi, R; Colomba, M; Ronconi, L; Sciacca, I D; Pellerito, L

    2004-03-01

    Dialkyltin(IV) and trialkyltin(IV) complexes of the deacetoxycephalosporin antibiotic cephalexin [7-(d-2-amino-2-phenylacetamido)-3-methyl-3-cephem-4-carboxylic acid] (Hceph) have been synthesized and investigated both in the solid state and in the solution phase. Analytical and thermogravimetric data supported the general formulae Alk2SnOHceph·H2O and Alk3Snceph·H2O (Alk = Me, n-Bu), while structural information has been gained from FT-IR, ¹¹⁹Sn Mössbauer and ¹H, ¹³C, ¹¹⁹Sn NMR data. In particular, the IR results suggested polymeric structures for both Alk2SnOHceph·H2O and Alk3Snceph·H2O. Moreover, cephalexin appears to behave as a monoanionic tridentate ligand coordinating the tin(IV) atom through the ester-type carboxylate, as well as through the beta-lactam carbonyl oxygen and the amino nitrogen donor atoms, in the Alk2SnOHceph·H2O complexes. On the basis of ¹¹⁹Sn Mössbauer spectroscopy it could be inferred that tin(IV) was hexacoordinated in such complexes in the solid state, showing a skew-trapezoidal configuration. As far as the Alk3Sn(IV)ceph·H2O derivatives are concerned, cephalexin coordinated the Alk3Sn moiety through the carboxylate acting as a bridging bidentate monoanionic group. Again, ¹¹⁹Sn Mössbauer spectroscopy led us to propose a trigonal configuration around the tin(IV) atom, with an equatorial R3Sn disposition and bridging carboxylate oxygen atoms in the axial positions. The nature of the complexes in solution was investigated by ¹H, ¹³C and ¹¹⁹Sn NMR spectroscopy. Finally, the cytotoxic activity of the organotin(IV) cephalexinate derivatives has been tested, using two different chromosome-staining techniques (Giemsa and CMA3), towards spermatocyte chromosomes of the mussel Brachidontes pharaonis (Mollusca: Bivalvia). Colchicinized-like mitoses (c-mitoses) on slides obtained from animals exposed to the organotin(IV) cephalexinate compounds demonstrated the high mitotic spindle-inhibiting potential of these chemicals.

  2. Pyrazole CCK1 receptor antagonists. Part 1: Solution-phase library synthesis and determination of Free-Wilson additivity.

    PubMed

    McClure, Kelly; Hack, Michael; Huang, Liming; Sehon, Clark; Morton, Magda; Li, Lina; Barrett, Terrance D; Shankley, Nigel; Breitenbucher, J Guy

    2006-01-01

    High throughput screening revealed compound 1 as a potent antagonist of the CCK1 receptor. Evaluation of the CCK1 SAR in a series of these diarylpyrazole antagonists was conducted in a matrix synthesis format, revealing both additive (Free-Wilson) and non-additive SAR. This use of additive QSAR modeling in conjunction with combinatorial libraries represents a unique approach to the evaluation of SAR interactions between the variables of any combinatorial matrix.
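
    As an illustration of the Free-Wilson additivity analysis mentioned above, the sketch below fits an additive model in which each substituent at each variation site of a combinatorial matrix contributes a fixed increment to activity; the substituent labels and pIC50 values are hypothetical and are not data from this study.

      import numpy as np

      # Hypothetical matrix-library data: (R1 substituent, R2 substituent, measured pIC50).
      compounds = [
          ("Me", "Cl", 7.1), ("Me", "OMe", 6.4),
          ("Et", "Cl", 7.5), ("Et", "OMe", 6.9),
          ("Ph", "Cl", 8.0), ("Ph", "OMe", 7.2),
      ]

      # One indicator column per (site, substituent) pair, plus an intercept column.
      features = sorted({("R1", r1) for r1, _, _ in compounds} |
                        {("R2", r2) for _, r2, _ in compounds})
      X = np.array([[1.0] + [1.0 if (site == "R1" and sub == r1) or
                                    (site == "R2" and sub == r2) else 0.0
                             for site, sub in features]
                    for r1, r2, _ in compounds])
      y = np.array([act for _, _, act in compounds])

      # Least-squares fit: coefficients are the additive substituent contributions.
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("intercept:", round(float(coef[0]), 2))
      for (site, sub), c in zip(features, coef[1:]):
          print(f"{site}={sub}: {c:+.2f}")

      # Large deviations between fitted and measured values for particular compounds
      # would flag non-additive SAR, i.e. interactions between the R1 and R2 choices.
      print("max |fit error|:", round(float(np.max(np.abs(X @ coef - y))), 2))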

  3. Controllable synthesis and growth mechanism of {alpha}-Co(OH){sub 2} nanorods and nanoplates by a facile solution-phase route

    SciTech Connect

    Wang Wenzhong; Feng Kai; Wang Zhi; Ma Yunyan; Zhang Suyun; Liang Yujie

    2011-12-15

    A facile chemical precipitation route has been developed to control the synthesis of α-cobalt hydroxide nanostructures with rod-like and plate-like morphologies. The α-Co(OH)2 nanorods were obtained in large quantity when the experiments were carried out in the presence of a suitable shape-controlling reagent, polyvinylpyrrolidone (PVP), while the α-Co(OH)2 nanoplates were obtained when the experiments were conducted in the absence of PVP, whilst keeping the other experimental conditions constant. The chemical composition and morphologies of the as-prepared α-Co(OH)2 nanoparticles were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The effect of the polymer PVP on the morphologies of the α-Co(OH)2 nanoparticles is discussed in detail; the results indicated that PVP played a key role in the formation of the α-Co(OH)2 nanorods. A possible growth mechanism, based on the experimental results, has been proposed to illustrate the growth of the α-Co(OH)2 nanorods and nanoplates. Graphical abstract: A facile solution-phase route has been developed to synthesize α-Co(OH)2 nanorods and nanoplates; the possible growth mechanism of the nanorods and nanoplates is proposed. Highlights: a facile controllable route is described for α-Co(OH)2 nanowires and nanoplates; the α-Co(OH)2 nanowires were achieved in the presence of the shape controller PVP; the α-Co(OH)2 nanoplates were obtained in its absence; the shape controller PVP played a key role in the formation of the α-Co(OH)2 nanowires.

  4. New methods in combinatorial chemistry-robotics and parallel synthesis.

    PubMed

    Cargill, J F; Lebl, M

    1997-06-01

    Technological advances in the automation of parallel synthesis are following the model set by the semiconductor industry: miniaturization, increasing speed, lower costs. Recent work includes preparation of high-density reaction blocks, development of ink-jet dispensing to polypropylene sheets and synthesis inside customized microchips.

  5. Solution-phase synthesis of single-crystal Cu3Si nanowire arrays on diverse substrates with dual functions as high-performance field emitters and efficient anti-reflective layers

    NASA Astrophysics Data System (ADS)

    Yuan, Fang-Wei; Wang, Chiu-Yen; Li, Guo-An; Chang, Shu-Hao; Chu, Li-Wei; Chen, Lih-Juann; Tuan, Hsing-Yu

    2013-09-01

    There is strong and growing interest in applying metal silicide nanowires as building blocks for a new class of silicide-based applications, including spintronics, nano-scale interconnects, thermoelectronics, and anti-reflective coating materials. Solution-phase environments provide versatile materials chemistry as well as significantly lower production costs compared to gas-phase synthesis. However, solution-phase synthesis of silicide nanowires remains challenging owing to the lack of fundamental understanding of silicidation reactions. In this study, single-crystalline Cu3Si nanowire arrays were synthesized in an organic solvent. Self-catalyzed, dense single-crystalline Cu3Si nanowire arrays were synthesized by thermal decomposition of monophenylsilane in the presence of copper films or copper substrates at 420 to 475 °C and 10.3 MPa in supercritical benzene. The solution-grown Cu3Si nanowire arrays serve dual functions as field emitters and anti-reflective layers, reported for copper silicide materials for the first time. The Cu3Si nanowires exhibit superior field-emission properties, with a turn-on voltage as low as 1.16 V μm⁻¹, an emission current density of 8 mA cm⁻² at 4.9 V μm⁻¹, and a field enhancement factor (β) of 1500. The Cu3Si nanowire arrays appear black, with optical reflectance of less than 5% between 400 and 800 nm, serving as highly efficient anti-reflective layers. Moreover, the Cu3Si nanowires could be grown on either rigid or flexible (PI) substrates. This study shows that solution-phase silicide reactions are adaptable for high-quality silicide nanowire growth and demonstrates their promise for the fabrication of metal silicide-based devices.

  6. A Remarkably High-Speed Solution-Phase Combinatorial Synthesis of 2-Substituted-Amino-4-Aryl Thiazoles in Polar Solvents in the Absence of a Catalyst under Ambient Conditions and Study of Their Antimicrobial Activities

    PubMed Central

    Dighe, Satish N.; Chaskar, Pratip K.; Jain, Kishor S.; Phoujdar, Manisha S.; Srinivasan, Kumar V.

    2011-01-01

    Remarkably high-speed synthesis of 2-substituted amino-4-aryl thiazoles in polar solvents with a minimum threshold polarity index of 4.8 was found to proceed to completion in just 30-40 s, affording excellent yields of thiazoles under ambient temperature conditions without the use of any additional catalyst. The purification-free procedure afforded libraries based around a known pharmacophore, namely substituted aryl thiazoles, and generated samples of high purity. In terms of combinatorial synthesis in a single solution phase, our protocol is significantly better than those hitherto reported and is amenable to HTS. In vitro biological tests of some of the thiazoles showed good activity towards gram-positive bacteria, gram-negative bacteria and fungi, comparable with the standard drugs nitrofurantoin and griseofulvin for antibacterial and antifungal activity, respectively. PMID:24052822

  7. Synthesis and solution-phase conformation of the RG-I fragment of the plant polysaccharide pectin reveals a modification-modulated assembly mechanism.

    PubMed

    Scanlan, Eoin M; Mackeen, Mukram M; Wormald, Mark R; Davis, Benjamin G

    2010-06-01

    The syntheses of pure RG-I fragments of key plant matrix biomolecule pectin using a counterintuitive late-stage convergent cis-glycosylation has allowed detailed analyses of their solution-phase conformations, metal binding affinities, pKa values, self-assembly equilibria, and diffusional kinetics. These reveal a striking, right-handed 3₁-helix that provides an effective and repeating lateral display of putative liganding carboxylates. Moreover, these heteropolymeric structures allow units as short as tetrasaccharides to self-assemble through carbohydrate-carbohydrate interactions that are induced by the presence of Ca(II), a known dynamic trigger in planta. These self-assembly properties can be switched simply by the addition or removal of a single methyl group in this repeating unit through methyl (de)esterification, another known dynamic trigger in planta. Together, the combined effect of Ca(II) and methylation revealed here suggests a concerted molecular basis for these two major dynamic modifications in planta.

  8. Building blocks for the solution phase synthesis of oligonucleotides: regioselective hydrolysis of 3',5'-Di-O-levulinylnucleosides using an enzymatic approach.

    PubMed

    García, Javier; Fernández, Susana; Ferrero, Miguel; Sanghvi, Yogesh S; Gotor, Vicente

    2002-06-28

    A short and convenient synthesis of 3'- and 5'-O-levulinyl-2'-deoxynucleosides has been developed from the corresponding 3',5'-di-O-levulinyl derivatives by regioselective enzymatic hydrolysis, avoiding several tedious chemical protection/deprotection steps. Thus, Candida antarctica lipase B (CAL-B) was found to selectively hydrolyze the 5'-levulinate esters, furnishing 3'-O-levulinyl-2'-deoxynucleosides 3 in >80% isolated yields. On the other hand, immobilized Pseudomonas cepacia lipase (PSL-C) and Candida antarctica lipase A (CAL-A) exhibit the opposite selectivity toward the hydrolysis at the 3'-position, affording 5'-O-levulinyl derivatives 4 in >70% yields. A similar hydrolysis procedure was successfully extended to the synthesis of 3'- and 5'-O-levulinyl-protected 2'-O-alkylribonucleosides 7 and 8. This work demonstrates for the first time application of commercial CAL-B and PSL-C toward regioselective hydrolysis of levulinyl esters with excellent selectivity and yields. It is noteworthy that protected cytidine and adenosine base derivatives were not adequate substrates for the enzymatic hydrolysis with CAL-B, whereas PSL-C was able to accommodate protected bases during selective hydrolysis. In addition, we report an improved synthesis of dilevulinyl esters using a polymer-bound carbodiimide as a replacement for dicyclohexylcarbodiimide (DCC), thus considerably simplifying the workup for esterification reactions. PMID:12076150

  10. Parallel solution combustion synthesis for combinatorial materials studies.

    PubMed

    Luo, Zhen-Lin; Geng, Bin; Bao, Jun; Gao, Chen

    2005-01-01

    A parallel solution combustion synthesis technique was developed for combinatorial materials studies. The vigorous combustion reactions were successfully confined to the microreactors by using a substrate-net-mask microreactor system and the lowest adoptable furnace temperature. Using this technique, a luminescent materials library of Y3Al5O12/Tb(χ) was synthesized with the aid of an ink-jet delivery system. Structure and luminescence characterizations were carried out using X-ray diffraction and UV/X-ray spectroscopies, respectively. The results show that this technique is reliable and applicable to the combinatorial study of powder materials with high synthesis temperatures.

  11. Solution phase synthesis of Na{sub 0.28}V{sub 2}O{sub 5} nanobelts into nanorings and the electrochemical performance in Li battery

    SciTech Connect

    Nagaraju, Ganganagappa; Chandrappa, Gujjarahalli Thimmanna

    2012-11-15

    Graphical abstract: A hydrothermal method has been adopted for the first time to prepare Na0.28V2O5 nanorings/nanobelts without using any organic surfactants or solvents, at 130–160 °C for 1–2 days. TEM analyses reveal that the products consist of nanorings about 500 nm in width and about 100 nm in thickness, with inner diameters of 5–7 μm; nanobelts 70–100 nm in width and several tens of micrometers in length are also observed. The electrochemical results show that Na0.28V2O5 exhibits an initial discharge capacity of 320 mAh g⁻¹ and still retains 175 mAh g⁻¹ even after 69 cycles. Highlights: Na0.28V2O5 nanorings/nanobelts are reported for the first time by a solution method; the synthesis proceeds via a hydrothermal route at 130–160 °C for 1–2 days in acidic medium; no surfactants, templates or organic solvents are used; the material shows a discharge capacity of 320 mAh g⁻¹, retaining 175 mAh g⁻¹ after 69 cycles; a probable reaction mechanism for Na0.28V2O5 nanoring formation is also proposed. Abstract: In this paper, we report for the first time a simple one-step hydrothermal method to synthesize Na0.28V2O5 nanorings/nanobelts without using any organic surfactants or solvents at 130–160 °C for 1–2 days. The obtained products have been characterized by X-ray diffraction (XRD), energy dispersive X-ray spectroscopy (EDS), Fourier transform infrared spectroscopy (FTIR) and Raman spectroscopy, their morphology by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and their lithium-battery behaviour by electrochemical discharge–charge tests. The XRD pattern exhibits a monoclinic Na0.28V2O5 structure. The FTIR spectrum shows a band at 958 cm⁻¹ assigned to the V=O stretching vibration, which is sensitive to intercalation and suggests that Na⁺ ions are inserted between the vanadium oxide layers. TEM analyses reveal that the products consist of the nanorings and nanobelts described above.

  12. A Laboratory Preparation of Aspartame Analogs Using Simultaneous Multiple Parallel Synthesis Methodology

    ERIC Educational Resources Information Center

    Qvit, Nir; Barda, Yaniv; Gilon, Chaim; Shalev, Deborah E.

    2007-01-01

    This laboratory experiment provides a unique opportunity for students to synthesize three analogues of aspartame, a commonly used artificial sweetener. The students are introduced to the powerful and useful method of parallel synthesis while synthesizing three dipeptides in parallel using solid-phase peptide synthesis (SPPS) and simultaneous…

  13. A versatile and inexpensive apparatus for rapid parallel synthesis on solid support: description and synthesis illustration.

    PubMed

    Saha, A K; Liu, L; Simoneaux, R L

    2001-01-01

    A new inexpensive and practical apparatus for solid-phase chemistry and parallel synthesis is described. This new apparatus fills an important void in the availability of portable tools for the synthesis of libraries of compounds in multi-milligram amounts. Individual reaction tube capacities range in size from 4 mL to 500 mL of operating liquid volume. Reaction blocks of 36 tubes x 4 mL or 24 tubes x 150 mL allow flexibility of operation. Insert tubes with frit ends function as filter sticks for resin wash and for maintenance of inert atmosphere. An electronic controller device connects to the reaction tubes for programmable entry of pulses of inert gas for resin mixing or vacuum for resin wash. The utility of this apparatus is illustrated by the synthesis of libraries based on 4-methaneamine imidazoles.

  14. The analysis and synthesis of a parallel sorting engine

    SciTech Connect

    Ahn, B.

    1989-01-01

    This thesis is concerned with the development of a unique parallel sort-merge system suitable for implementation in VLSI. Two new sorting subsystems, a high performance VLSI sorter and a four-way merger, were also realized during the development process. In addition, the analysis of several existing parallel sorting architectures and algorithms was carried out. Algorithmic time complexity, VLSI processor performance, and chip area requirements for the existing sorting systems were evaluated. The rebound sorting algorithm was determined to be the most efficient among those considered. The rebound sorter algorithm was implemented in hardware as a systolic array with external expansion capability. The second phase of the research involved analyzing several parallel merge algorithms and their buffer management schemes. The dominant considerations for this phase of the research were the achievement of minimum VLSI chip area, design complexity, and logic delay. It was determined that the proposed merger architecture could be implemented in several ways. Selecting the appropriate microarchitecture for the merge, given the constraints of chip area and performance, was the major problem. The tradeoffs associated with this process are outlined. Finally, a pipelined sort-merge system was implemented in VLSI by combining a rebound sorter and a four-way merger on a single chip. The final chip size was 416 mils by 432 mils. Two micron CMOS technology was utilized in this chip realization. An overall throughput rate of 10M bytes/sec was achieved.
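
    As a software-level illustration of the four-way merger described above, the sketch below models its function only (not the systolic/VLSI datapath or its buffer management): four already-sorted input runs are merged through a small priority queue. The data and names are illustrative.

      import heapq
      from typing import Iterable, Iterator, List

      def four_way_merge(streams: List[Iterable[int]]) -> Iterator[int]:
          """Merge up to four sorted input streams into one sorted output stream.

          Functional model of a hardware four-way merger: at each step the smallest
          head element among the streams is emitted and that stream is advanced.
          """
          assert len(streams) <= 4, "this sketch models a 4-input merger"
          iterators = [iter(s) for s in streams]
          heap = []
          for idx, it in enumerate(iterators):
              head = next(it, None)
              if head is not None:
                  heapq.heappush(heap, (head, idx))  # (head value, stream index)
          while heap:
              value, idx = heapq.heappop(heap)
              yield value
              nxt = next(iterators[idx], None)
              if nxt is not None:
                  heapq.heappush(heap, (nxt, idx))

      if __name__ == "__main__":
          # Example: four sorted runs, e.g. outputs of upstream sorter stages.
          runs = [[1, 5, 9], [2, 6], [3, 7, 8], [4]]
          print(list(four_way_merge(runs)))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]

    In the pipelined sort-merge system described in the thesis, sorted runs produced by the rebound sorter would feed such a merge stage; the sketch deliberately ignores the chip-area and buffering trade-offs that the thesis analyzes.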

  15. Parallel Combinatorial Synthesis of Azo Dyes: A Combinatorial Experiment Suitable for Undergraduate Laboratories

    ERIC Educational Resources Information Center

    Gung, Benjamin W.; Taylor, Richard T.

    2004-01-01

    An experiment in the parallel synthesis of azo dyes that demonstrates the concepts of structure-activity relationships and chemical diversity with vivid colors is described. This experiment is suitable for the second-semester organic chemistry laboratory and also for the one-semester organic laboratory.

  16. Solution-Phase Processes of Macromolecular Crystallization

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.; Minamitani, Elizabeth Forsythe

    2004-01-01

    We have proposed, for the tetragonal form of chicken egg lysozyme, that solution-phase assembly processes are needed to form the growth units for crystal nucleation and growth. The starting point for the self-association process is the monomeric protein, and the final crystallographic symmetry is defined by the initial dimerization interactions of the monomers and the subsequent n-mers formed, which in turn are a function of the crystallization conditions. It has been suggested that multimeric proteins generally incorporate the underlying multimer symmetry into the final crystallographic symmetry. We posed the question of what happens to a protein that is known to grow as an n-mer when it is placed in solution conditions where it is monomeric. The trypsin-treated, or cut, form of the protein canavalin (CCAN) has been shown to nucleate and grow crystals as a trimer from neutral to slightly acidic solutions. Under these conditions the solution is composed almost wholly of trimers. The insoluble protein can be readily dissolved in weakly basic solution, which results in a solution that is monomeric. There are three possible outcomes of an attempt at crystallization of the protein under monomeric (high pH) conditions: 1) we will obtain the same crystals as under trimer conditions, but at different protein concentrations governed by the self-association equilibria; 2) we will obtain crystals having a different symmetry, based upon a monomeric growth unit; 3) we will not obtain crystals. Obtaining the first result would indicate that the solution-phase self-association process is critical to the crystal nucleation and growth process. The second result would be less clear, as it may also reflect a pH-dependent shift in the trimer-trimer molecular interactions. The third result, particularly for experiments at transition pHs between trimeric and monomeric CCAN, would indicate that the monomer does not crystallize and that solution-phase self-association is not part of the crystal nucleation and growth process.

  17. Synthesis of spherical parallel manipulator for dexterous medical task

    NASA Astrophysics Data System (ADS)

    Chaker, Abdelbadiâ; Mlika, Abdelfattah; Laribi, Med Amine; Romdhane, Lotfi; Zeghloul, Saïd

    2012-06-01

    This paper deals with the design and analysis of a spherical parallel manipulator (SPM) for a haptic minimally invasive surgery application. First, the medical task was characterized with the help of a surgeon who performed a suture technique called anastomosis. A Vicon system was used to capture the motion of the surgeon, which yielded the volume swept by the tool during the anastomosis operation. The identified workspace can be represented by a cone with a half vertex angle of 26°. A multi-objective optimization procedure based on genetic algorithms was then carried out to find the optimal SPM. Two criteria were considered, i.e., task workspace and mechanism dexterity. The optimized SPM was then analyzed to determine the error in the orientation of the end effector as a function of the manufacturing errors of the different links of the mechanism.
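
    Dexterity in optimizations of this kind is commonly quantified by the inverse condition number of the manipulator Jacobian, averaged over the task workspace (here, orientations inside the identified cone with a 26° half vertex angle). The sketch below assumes a user-supplied Jacobian function for the SPM and is a generic illustration of such a criterion, not the authors' exact formulation.

      import numpy as np

      def dexterity_index(jacobian: np.ndarray) -> float:
          """Inverse condition number of the Jacobian: 1 = isotropic, 0 = singular."""
          s = np.linalg.svd(jacobian, compute_uv=False)
          return float(s.min() / s.max()) if s.max() > 0.0 else 0.0

      def global_dexterity(jacobian_fn, orientations) -> float:
          """Average dexterity over sampled task-workspace orientations; one of the
          objectives a genetic algorithm could maximize alongside workspace coverage."""
          return float(np.mean([dexterity_index(jacobian_fn(q)) for q in orientations]))

      if __name__ == "__main__":
          # Stand-in Jacobian for demonstration only; a real SPM kinematic model goes here.
          rng = np.random.default_rng(0)
          fake_jacobian = lambda q: np.eye(3) + 0.1 * rng.standard_normal((3, 3))
          samples = np.linspace(0.0, np.deg2rad(26.0), 20)  # placeholder orientation samples
          print(round(global_dexterity(fake_jacobian, samples), 3))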

  18. Parallel chemistry in the 21st century.

    PubMed

    Long, Alan

    2012-09-01

    The tool chest of techniques, methodologies, and equipment for conducting parallel chemistry is larger than ever before. Improvements in the laboratory and developments in computational chemistry have enabled compound library design at the desks of medicinal chemists. This unit includes a brief background in combinatorial/parallel synthesis chemistry, along with a discussion of evolving technologies for both solid- and solution-phase chemistry. In addition, there are discussions on designing compound libraries, acquisition/procurement of compounds and/or reagents, the chemistry and equipment used for chemical production, purification, sample handling, and data analysis.

  19. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full-circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of the geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with the goal of minimizing a global comprehensive performance index. The dimensional parameters giving the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for prototype building of this kind of parallel robot.
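
    The first-order error mapping described above (end-effector position error as the error Jacobian times the geometric-parameter errors) can be illustrated numerically; the forward-position model below is a stand-in placeholder, not the kinematics of the 2T1R mechanism in the paper.

      import numpy as np

      def error_jacobian(fk, params: np.ndarray, eps: float = 1e-6) -> np.ndarray:
          """Finite-difference error Jacobian: sensitivity of the end-effector position
          to each geometric parameter (a first-order perturbation of the kinematics)."""
          x0 = np.asarray(fk(params), dtype=float)
          J = np.zeros((x0.size, params.size))
          for j in range(params.size):
              p = params.copy()
              p[j] += eps
              J[:, j] = (np.asarray(fk(p), dtype=float) - x0) / eps
          return J

      if __name__ == "__main__":
          # Placeholder forward-position model x = f(p); a real mechanism model goes here.
          fk = lambda p: np.array([p[0] + 0.5 * p[2], p[1] - 0.2 * p[2], 0.1 * p[0] * p[1]])
          p_nominal = np.array([0.30, 0.25, 0.10])   # nominal geometric parameters
          dp = np.array([1e-4, -2e-4, 5e-5])         # manufacturing errors of the links
          dx = error_jacobian(fk, p_nominal) @ dp    # first-order end-effector position error
          print(dx)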

  20. Pfizer Global Virtual Library (PGVL): a chemistry design tool powered by experimentally validated parallel synthesis information.

    PubMed

    Hu, Qiyue; Peng, Zhengwei; Sutton, Scott C; Na, Jim; Kostrowicki, Jaroslav; Yang, Bo; Thacher, Thomas; Kong, Xianjun; Mattaparti, Sarathy; Zhou, Joe Zhongxiang; Gonzalez, Javier; Ramirez-Weinhouse, Michele; Kuki, Atsuo

    2012-11-12

    An unprecedented amount of parallel synthesis information was accumulated within Pfizer over the past 12 years. This information was captured by an informatics tool known as PGVL (Pfizer Global Virtual Library). PGVL was used for many aspects of drug discovery including automated reactant mining and reaction product formation to build a synthetically feasible virtual compound collection. In this report, PGVL is discussed in detail. The chemistry information within PGVL has been used to extract synthesis and design information using an intuitive desktop Graphic User Interface, PGVL Hub. Several real-case examples of PGVL are also presented.
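
    The core virtual-library idea (enumerating products from validated reactant pools for a given synthesis protocol) can be sketched generically; the reactant names and protocol label below are hypothetical, and this is not the PGVL implementation or its data model.

      from itertools import product

      # Hypothetical reactant pools for a two-component protocol (e.g. an amide coupling);
      # in a PGVL-style system these pools would come from automated reactant mining
      # against experimentally validated parallel-synthesis protocols.
      acids = ["benzoic acid", "nicotinic acid", "cyclopropanecarboxylic acid"]
      amines = ["aniline", "piperidine", "benzylamine"]

      def enumerate_library(protocol_id, *reactant_pools):
          """Yield one virtual product record per combination of reactants."""
          for combo in product(*reactant_pools):
              yield {"protocol": protocol_id, "reactants": combo}

      library = list(enumerate_library("amide_coupling_v1", acids, amines))
      print(len(library))   # 3 x 3 = 9 virtual products
      print(library[0])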

  1. Confined synthesis of graphene wrapped LiMn0.5Fe0.5PO4 composite via two step solution phase method as high performance cathode for Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Xiang, Wei; Wu, Zhen-Guo; Wang, En-Hui; Chen, Ming-Zhe; Song, Yang; Zhang, Ji-Bin; Zhong, Yan-Jun; Chou, Shu-Lei; Luo, Jian-Hong; Guo, Xiao-Dong

    2016-10-01

    A novel strategy for the confined synthesis of a graphene-wrapped nano-sized LiMn0.5Fe0.5PO4 hybrid composite has been developed, combining co-precipitation and solvothermal reactions. The LiMn0.5Fe0.5PO4 nanoparticles, with a constrained diameter of 20 nm, are homogeneously wrapped by continuous, interconnected graphene sheets. The mechanism and the evolution of the composite structure during the process are carefully investigated and discussed. With shortened Li+ diffusion paths and enhanced electronic conductivity, the hybrid composite shows high discharge capacity and superior rate performance, with discharge capacities of 166 mA h g⁻¹ at 0.1 C and 90 mA h g⁻¹ at 20 C. Excellent cycling stability is also demonstrated, with only about 7.8% capacity decay after 500 cycles at 1 C.

  2. Parallel Chemoenzymatic Synthesis of Sialosides Containing a C5-Diversified Sialic Acid

    PubMed Central

    Cao, Hongzhi; Muthana, Saddam; Li, Yanhong; Cheng, Jiansong; Chen, Xi

    2009-01-01

    A convenient chemoenzymatic strategy for synthesizing sialosides containing a C5-diversified sialic acid was developed. The α2,3- and α2,6-linked sialosides containing a 5-azido neuraminic acid synthesized by a highly efficient one-pot three-enzyme approach were converted to C5″-amino sialosides, which were used as common intermediates for chemical parallel synthesis to quickly generate a series of sialosides containing various sialic acid forms. PMID:19740656

  3. Sequence-Defined Oligomers from Hydroxyproline Building Blocks for Parallel Synthesis Applications.

    PubMed

    Kanasty, Rosemary L; Vegas, Arturo J; Ceo, Luke M; Maier, Martin; Charisse, Klaus; Nair, Jayaprakash K; Langer, Robert; Anderson, Daniel G

    2016-08-01

    The functionality of natural biopolymers has inspired significant effort to develop sequence-defined synthetic polymers for applications including molecular recognition, self-assembly, and catalysis. Conjugation of synthetic materials to biomacromolecules has played an increasingly important role in drug delivery and biomaterials. We developed a controlled synthesis of novel oligomers from hydroxyproline-based building blocks and conjugated these materials to siRNA. Hydroxyproline-based monomers enable the incorporation of broad structural diversity into defined polymer chains. Using a perfluorocarbon purification handle, we were able to purify diverse oligomers through a single solid-phase extraction method. The efficiency of synthesis was demonstrated by building 14 unique trimers and 4 hexamers from 6 diverse building blocks. We then adapted this method to the parallel synthesis of hundreds of materials in 96-well plates. This strategy provides a platform for the screening of libraries of modified biomolecules. PMID:27365192

  4. In situ solution-phase Raman spectroscopy under forced convection.

    PubMed

    Zhu, Huanfeng; Wu, Jun; Shi, Qingfang; Wang, Zhenghao; Scherson, Daniel A

    2007-11-01

    In situ Raman spectra of solution-phase electrogenerated species have been recorded in a channel-type electrochemical cell incorporating a flat, optically transparent window placed parallel to the channel plane that contains the embedded working electrode. A microscope objective with its main axis (Z) aligned normal to the direction of flow was used to focus the excitation laser beam (λexc = 532 nm) in the solution and also to collect the Raman-scattered light from species present therein. Judicious adjustment of the cell position along Z allowed the depth of focus to overlap the diffusion boundary layer to achieve maximum detection sensitivity. Measurements were performed using a Au working electrode in hexacyanoferrate(II), [Fe(CN)6]⁴⁻, and nitrite, NO2⁻, containing aqueous solutions as a function of the applied potential, E. Linear correlations were found between both the gain and the loss of the integrated Raman intensity, IR, of bands attributed to [Fe(CN)6]³⁻ and [Fe(CN)6]⁴⁻, respectively, recorded downstream from the edge of the working electrode, and the current measured at the Au electrode as a function of E. The same overall trend was found for the gain in the IR of the NO3⁻ band in the nitrite solution. Also included in this work is a ray-trace analysis of the optical system.

  5. Rapid parallel synthesis of bioactive folded cyclotides using a tea-bag approach

    PubMed Central

    Aboye, Teshome; Kuang, Yuting; Neamati, Nouri

    2015-01-01

    We report here for the first time the rapid parallel production of bioactive folded cyclotides by using Fmoc-based solid-phase peptide synthesis in combination with a tea-bag approach. Using this approach we efficiently synthesized 15 different analogs of the CXCR4 antagonist cyclotide MCo-CVX-5c. Cyclotides were cyclized using a single-pot cyclization/folding reaction in the presence of reduced glutathione. Natively folded cyclotides were quickly purified from the cyclization/folding crude by activated thiol sepharose-based chromatography. The different folded cyclotide analogs were finally tested for their ability to inhibit the CXCR4 receptor in a cell-based assay. These results indicate that this approach can be used for the efficient chemical synthesis of cyclotide-based libraries that can be easily interfaced with solution or cell-based assays for the rapid screening of novel cyclotides with improved biological properties. PMID:25663016

  6. Solution-Phase Synthesis of a Highly Substituted Furan Library

    PubMed Central

    Cho, Chul-Hee; Shi, Feng; Jung, Dai-Il; Neuenswander, Benjamin; Lushington, Gerald H.; Larock, Richard C.

    2012-01-01

    A library of furans has been synthesized by iodocyclization and further diversified by palladium-catalyzed coupling processes. The key intermediate 3-iodofurans have been prepared by the electrophilic iodocyclization of 2-iodo-2-alken-1-ones in the presence of various nucleophiles in good to excellent yields under mild reaction conditions. These 3-iodofurans are the key components for library generation through subsequent elaboration by palladium-catalyzed processes, such as Suzuki–Miyaura, Sonogashira, Heck, aminocarbonylation and carboalkoxylation chemistry, to afford a diverse set of 2,3,4,5-tetrasubstituted furans. PMID:22612549

  7. Aryl azoles with neuroprotective activity--parallel synthesis and attempts at target identification.

    PubMed

    Cocconcelli, Giuseppe; Diodato, Enrica; Caricasole, Andrea; Gaviraghi, Giovanni; Genesio, Eva; Ghiron, Chiara; Magnoni, Letizia; Pecchioli, Elena; Plazzi, Pier Vincenzo; Terstappen, Georg C

    2008-02-15

    A parallel synthesis of aryl azoles with neuroprotective activity is described. All compounds obtained were evaluated in an in vitro assay using an NMDA toxicity paradigm and showed neuroprotective activity of between 15% and 40%. The potential biological target of the active compounds was investigated by extensive literature searches based around similar scaffolds with reported neuroprotective activity. The most interesting molecules active in the NMDA toxicity assay (3a and 2g) showed moderate but significant activity in the inhibition of the Site 2 sodium channel binding assay at 10 microM. To confirm our hypothesis, compounds 3a, 3c, 3f, and 2g were tested in the veratridine assay, one of the excitotoxicity assays of relevance to NaV channels. The compounds tested showed activity of between 40% and 70%. The identification of neuroprotective small molecules and of NaV channels as their potential site of action were the most important goals of this work.

  8. Parallel Synthesis of Poly(amino ether)-Templated Plasmonic Nanoparticles for Transgene Delivery

    PubMed Central

    2015-01-01

    Plasmonic nanoparticles have been increasingly investigated for numerous applications in medicine, sensing, and catalysis. In particular, gold nanoparticles have been investigated for separations, sensing, drug/nucleic acid delivery, and bioimaging. In addition, silver nanoparticles demonstrate antibacterial activity, resulting in potential application in treatments against microbial infections, burns, diabetic skin ulcers, and medical devices. Here, we describe the facile, parallel synthesis of both gold and silver nanoparticles using a small set of poly(amino ethers), or PAEs, derived from linear polyamines, under ambient conditions and in the absence of additional reagents. The kinetics of nanoparticle formation were dependent on PAE concentration and chemical composition. In addition, yields were significantly greater for PAEs than for 25 kDa poly(ethylene imine), which was used as a standard cationic polymer. Ultraviolet radiation enhanced the kinetics and the yield of both gold and silver nanoparticles, likely by means of a coreduction effect. PAE-templated gold nanoparticles demonstrated the ability to deliver plasmid DNA, resulting in transgene expression, in 22Rv1 human prostate cancer and MB49 murine bladder cancer cell lines. Taken together, our results indicate that chemically diverse poly(amino ethers) can be employed for rapidly templating the formation of metal nanoparticles under ambient conditions. The simplicity of synthesis and chemical diversity make PAE-templated nanoparticles useful tools for several applications in biotechnology, including nucleic acid delivery. PMID:25084138

  9. A Novel and Efficient One-Step Parallel Synthesis of Dibenzopyranones via Suzuki-Miyaura Cross Coupling

    PubMed Central

    Vishnumurthy, Kodumuru; Makriyannis, Alexandros

    2010-01-01

    A novel and efficient microwave-promoted, one-step parallel synthesis of dibenzopyranones and heterocyclic analogues from bromo arylcarboxylates and o-hydroxyarylboronic acids via the Suzuki-Miyaura cross-coupling reaction is described. Spontaneous lactonization gave dibenzopyranones and heterocyclic analogues bearing electron-donating and electron-withdrawing groups on both aromatic rings in good to excellent yields. PMID:20831265

  10. Parallel microfluidic synthesis of size-tunable polymeric nanoparticles using 3D flow focusing towards in vivo study

    PubMed Central

    Lim, Jong-Min; Bertrand, Nicolas; Valencia, Pedro M.; Rhee, Minsoung; Langer, Robert; Jon, Sangyong; Farokhzad, Omid C.; Karnik, Rohit

    2014-01-01

    Microfluidic synthesis of nanoparticles (NPs) can enhance the controllability and reproducibility in physicochemical properties of NPs compared to bulk synthesis methods. However, applications of microfluidic synthesis are typically limited to in vitro studies due to low production rates. Herein, we report the parallelization of NP synthesis by 3D hydrodynamic flow focusing (HFF) using a multilayer microfluidic system to enhance the production rate without losing the advantages of reproducibility, controllability, and robustness. Using parallel 3D HFF, polymeric poly(lactide-co-glycolide)-b-polyethyleneglycol (PLGA-PEG) NPs with sizes tunable in the range of 13–150 nm could be synthesized reproducibly with high production rate. As a proof of concept, we used this system to perform in vivo pharmacokinetic and biodistribution study of small (20 nm diameter) PLGA-PEG NPs that are otherwise difficult to synthesize. Microfluidic parallelization thus enables synthesis of NPs with tunable properties with production rates suitable for both in vitro and in vivo studies. PMID:23969105

  11. Solution phase van der Waals epitaxy of ZnO wire arrays

    NASA Astrophysics Data System (ADS)

    Zhu, Yue; Zhou, Yong; Bakti Utama, Muhammad Iqbal; Mata, María De La; Zhao, Yanyuan; Zhang, Qing; Peng, Bo; Magen, Cesar; Arbiol, Jordi; Xiong, Qihua

    2013-07-01

    As an incommensurate epitaxy, van der Waals epitaxy allows defect-free crystals to grow on substrates even with a large lattice mismatch. Furthermore, van der Waals epitaxy is proposed as a universal platform where heteroepitaxy can be achieved irrespective of the nature of the overlayer material and the method of crystallization. Here we demonstrate van der Waals epitaxy in solution phase synthesis for seedless and catalyst-free growth of ZnO wire arrays on phlogopite mica at low temperature. A unique incommensurate interface is observed even with the incomplete initial wetting of ZnO onto the substrate. Interestingly, the imperfect contacting layer does not affect the crystalline and optical properties of other parts of the wires. In addition, we present patterned growth of a well-ordered array with hexagonal facets and in-plane alignment. We expect our seedless and catalyst-free solution phase van der Waals epitaxy synthesis to be widely applicable in other materials and structures.

  12. Kinematic Analysis and Synthesis of a 3-URU Pure Rotational Parallel Mechanism with Respect to Singularity and Workspace

    NASA Astrophysics Data System (ADS)

    Huda, Syamsul; Takeda, Yukio

    This paper concerns the kinematics and dimensional synthesis of a three universal-revolute-universal (3-URU) pure rotational parallel mechanism. The mechanism is composed of a base, a platform, and three symmetric limbs consisting of U-R-U joints. This mechanism is a spatial, non-overconstrained mechanism with three degrees of freedom. The joints in each limb are arranged so as to produce pure rotational motion of the platform around a specific point. Equations for the inverse displacement analysis and for singularities were derived to investigate how the kinematic constants affect the inverse kinematic solutions and the singularities. Based on the results, a dimensional synthesis procedure for the 3-URU parallel mechanism considering singularities and the workspace was proposed. A numerical example was also presented to illustrate the synthesis method.
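
    As an illustration of the kind of singularity analysis described above (not the paper's closed-form 3-URU equations, which are not reproduced here), a generic numerical sketch in Python flags platform orientations at which the Jacobian of an inverse-displacement map loses rank; the orientation-to-joint map used below is a hypothetical stand-in.

      import numpy as np

      def inverse_displacement(orientation):
          """Hypothetical stand-in for the limb-wise inverse-displacement
          relation q = f(roll, pitch, yaw) of a 3-DOF rotational parallel mechanism."""
          r, p, y = orientation
          return np.array([np.sin(r) + 0.5 * np.cos(p),
                           np.sin(p) + 0.5 * np.cos(y),
                           np.sin(y) + 0.5 * np.cos(r)])

      def jacobian(f, x, h=1e-6):
          """Central-difference estimate of the Jacobian of f at x."""
          x = np.asarray(x, dtype=float)
          J = np.zeros((len(f(x)), len(x)))
          for i in range(len(x)):
              dx = np.zeros_like(x)
              dx[i] = h
              J[:, i] = (f(x + dx) - f(x - dx)) / (2 * h)
          return J

      def is_singular(orientation, tol=1e-3):
          """Flag a configuration as singular when det(J) falls below tol."""
          return abs(np.linalg.det(jacobian(inverse_displacement, orientation))) < tol

      print(is_singular([0.1, 0.2, 0.3]))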

  13. Parallel evolution of Nitric Oxide signaling: Diversity of synthesis & memory pathways

    PubMed Central

    Moroz, Leonid L.; Kohn, Andrea B.

    2014-01-01

    The origin of NO signaling can be traced back to the origin of life, with large-scale parallel evolution of NO synthases (NOSs). Inducible-like NOSs may be the most basal prototype of all NOSs, and neuronal-like NOSs might have evolved several times from this prototype. Other enzymatic and non-enzymatic pathways for NO synthesis have been discovered using the reduction of nitrites, an alternative source of NO. Diverse synthetic mechanisms can co-exist within the same cell, providing a complex NO-oxygen microenvironment tightly coupled with cellular energetics. The dissection of multiple sources of NO formation is crucial in the analysis of complex biological processes such as neuronal integration and learning mechanisms, where NO can act as a volume transmitter within memory-forming circuits. In particular, the molecular analysis of learning mechanisms (most notably in insects and gastropod molluscs) opens conceptually different perspectives to understand the logic of recruiting evolutionarily conserved pathways for novel functions. Giant, uniquely identified cells from Aplysia and related species present unique opportunities for integrative analysis of NO signaling at the single-cell level. PMID:21622160

  14. Type synthesis of two-degrees-of-freedom rotational parallel mechanism with two continuous rotational axes

    NASA Astrophysics Data System (ADS)

    Xu, Yundou; Zhang, Dongsheng; Wang, Min; Yao, Jiantao; Zhao, Yongsheng

    2016-07-01

    The two-rotational-degrees-of-freedom (2R) parallel mechanism (PM) with two continuous rotational axes (CRAs) has a simple kinematic model. It is therefore easy to implement trajectory planning, parameter calibration, and motion control, which allows for a variety of application prospects. However, no systematic analysis of the structural constraints of the 2R-PM with two CRAs has been performed, and there are only a few types of 2R-PM with two CRAs. Thus, a theory for the type synthesis of the 2R-PM with two CRAs is systematically established. First, combining the theories of reciprocal screws and space geometry, the spatial arrangement relationships of the constraint forces applied to the moving platform by the branches, which give the 2R-PM its two CRAs, are explored. The different distributions of the constraint forces in each branch are also studied. On the basis of the obtained structural constraints of the branches, and considering the geometric relationships of the constraint forces in each branch, appropriate kinematic chains are constructed. Through the reasonable configuration of branch kinematic chains corresponding to every structural constraint, a series of new 2R-PMs with two CRAs is finally obtained.

  15. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared-memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break up large circuits and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.
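
    To make the partition-and-rebalance idea above concrete, a minimal, generic Python sketch (placeholder partitioning and per-partition optimization, not the actual PTRANS algorithms) distributes gate partitions across worker processes and merges the results.

      from multiprocessing import Pool

      def optimize_partition(gates):
          """Placeholder for per-partition optimization; the real transduction
          method applies pruning, gate substitution and merging here."""
          return [g for g in gates if not g.get("redundant", False)]

      def partition(network, n_parts):
          """Naive round-robin split; real tools partition along circuit
          structure to limit cross-partition dependencies."""
          return [network[i::n_parts] for i in range(n_parts)]

      if __name__ == "__main__":
          network = [{"id": i, "redundant": (i % 7 == 0)} for i in range(100)]
          parts = partition(network, n_parts=8)
          with Pool(processes=8) as pool:
              # The pool's task scheduling provides a coarse form of the
              # dynamic load balancing described in the abstract.
              optimized = pool.map(optimize_partition, parts)
          merged = [g for part in optimized for g in part]
          print(len(network), "->", len(merged), "gates")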

  16. Automated parallel synthesis of 5'-triphosphate oligonucleotides and preparation of chemically modified 5'-triphosphate small interfering RNA.

    PubMed

    Zlatev, Ivan; Lackey, Jeremy G; Zhang, Ligang; Dell, Amy; McRae, Kathy; Shaikh, Sarfraz; Duncan, Richard G; Rajeev, Kallanthottathil G; Manoharan, Muthiah

    2013-02-01

    A fully automated chemical method for the parallel and high-throughput solid-phase synthesis of 5'-triphosphate and 5'-diphosphate oligonucleotides is described. The desired full-length oligonucleotides were first constructed using standard automated DNA/RNA solid-phase synthesis procedures. Then, on the same column and instrument, efficient implementation of an uninterrupted sequential cycle afforded the corresponding unmodified or chemically modified 5'-triphosphates and 5'-diphosphates. The method was readily translated into a scalable and high-throughput synthesis protocol compatible with current DNA/RNA synthesizers, yielding a large variety of unique 5'-polyphosphorylated oligonucleotides. Using this approach, we accomplished the synthesis of chemically modified 5'-triphosphate oligonucleotides that were annealed to form small interfering RNAs (ppp-siRNAs), a potentially interesting class of novel RNAi therapeutic tools. The attachment of the 5'-triphosphate group to the passenger strand of a siRNA construct did not significantly improve the in vitro RNAi-mediated gene-silencing activity, nor did it induce strong, specific in vitro RIG-I activation. The reported method will enable the screening of many chemically modified ppp-siRNAs, resulting in a novel bi-functional RNAi therapeutic platform. PMID:23260577

  17. Parallel synthesis of a series of potentially brain penetrant aminoalkyl benzoimidazoles.

    PubMed

    Micco, Iolanda; Nencini, Arianna; Quinn, Joanna; Bothmann, Hendrick; Ghiron, Chiara; Padova, Alessandro; Papini, Silvia

    2008-03-01

    Alpha7 agonists were identified via GOLD (CCDC) docking in the putative agonist binding site of an alpha7 homology model and a series of aminoalkyl benzoimidazoles was synthesised to obtain potentially brain penetrant drugs. The array was prepared starting from the reaction of ortho-fluoronitrobenzenes with a selection of diamines, followed by reduction of the nitro group to obtain a series of monoalkylated phenylene diamines. N,N'-Carbonyldiimidazole (CDI) mediated acylation, followed by a parallel automated work-up procedure, afforded the monoacylated phenylenediamines which were cyclised under acidic conditions. Parallel work-up and purification afforded the array products in good yields and purities with a robust parallel methodology which will be useful for other libraries. Screening for alpha7 activity revealed compounds with agonist activity for the receptor.

  18. Parallel synthesis of a series of potentially brain penetrant aminoalkyl benzoimidazoles.

    PubMed

    Micco, Iolanda; Nencini, Arianna; Quinn, Joanna; Bothmann, Hendrick; Ghiron, Chiara; Padova, Alessandro; Papini, Silvia

    2008-03-01

    Alpha7 agonists were identified via GOLD (CCDC) docking in the putative agonist binding site of an alpha7 homology model and a series of aminoalkyl benzoimidazoles was synthesised to obtain potentially brain penetrant drugs. The array was prepared starting from the reaction of ortho-fluoronitrobenzenes with a selection of diamines, followed by reduction of the nitro group to obtain a series of monoalkylated phenylene diamines. N,N'-Carbonyldiimidazole (CDI) mediated acylation, followed by a parallel automated work-up procedure, afforded the monoacylated phenylenediamines which were cyclised under acidic conditions. Parallel work-up and purification afforded the array products in good yields and purities with a robust parallel methodology which will be useful for other libraries. Screening for alpha7 activity revealed compounds with agonist activity for the receptor. PMID:18078760

  19. Parallel synthesis of 1,3-dihydro-1,4-benzodiazepine-2-ones employing catch and release.

    PubMed

    Laustsen, Line S; Sams, Christian K

    2007-01-01

    An efficient solid-phase method has been developed for the parallel synthesis of 1,3-dihydro-1,4-benzodiazepine-2-one derivatives. A key step in this procedure involves catching crude 2-aminobenzoimine products 4 on an amino acid Wang resin 10. Mild acidic conditions then promote a ring closure and in the same step cleavage from the resin to give pure benzodiazepine products 12. The 2-aminobenzoimines 4 can be synthesized from either 2-aminobenzonitriles 1 and Grignard reagents 2 or from iodoanilines 5 and nitriles 7 allowing a range of diversification. Further diversification can be introduced to the benzodiazepine products by N-alkylation promoted by a resin bound base and alkylating agents 13.

  20. A general stereocontrolled, convergent synthesis of oligoprenols that parallels the biosynthetic pathway.

    PubMed

    Radetich, Branko; Corey, E J

    2002-03-20

    A solution is reported to the classic unsolved problem of stereoselective synthesis of all-E oligoprenols, such as E-farnesylfarnesol, by a cationic coupling analogous to the biosynthetic pathway. The simplicity and efficacy of the method, which is outlined in Scheme 1, are demonstrated by the synthesis of a series of all-E oligoprenols from C(20) to C(35) in uniformly excellent overall yield. The success of the approach is due not only to the highly E-stereoselective C-C coupling that forms the oligoprenyl chain but also to the development of efficient syntheses of allylic secondary silanes and E-oligoprenal acetals, and to a selective allylic demethoxylation reaction.

  1. Solution-Phase Epitaxial Growth of Quasi-Monocrystalline Cuprous Oxide on Metal Nanowires

    PubMed Central

    2014-01-01

    The epitaxial growth of monocrystalline semiconductors on metal nanostructures is interesting from both fundamental and applied perspectives. The realization of nanostructures with excellent interfaces and material properties that also have controlled optical resonances can be very challenging. Here we report the synthesis and characterization of metal–semiconductor core–shell nanowires. We demonstrate a solution-phase route to obtain stable core–shell metal–Cu2O nanowires with outstanding control over the resulting structure, in which the noble metal nanowire is used as the nucleation site for epitaxial growth of quasi-monocrystalline Cu2O shells at room temperature in aqueous solution. We use X-ray and electron diffraction, high-resolution transmission electron microscopy, energy dispersive X-ray spectroscopy, photoluminescence spectroscopy, and absorption spectroscopy, as well as density functional theory calculations, to characterize the core–shell nanowires and verify their structure. Metal–semiconductor core–shell nanowires offer several potential advantages over thin film and traditional nanowire architectures as building blocks for photovoltaics, including efficient carrier collection in radial nanowire junctions and strong optical resonances that can be tuned to maximize absorption. PMID:25233392

  2. Production of complex nucleic acid libraries using highly parallel in situ oligonucleotide synthesis.

    PubMed

    Cleary, Michele A; Kilian, Kristopher; Wang, Yanqun; Bradshaw, Jeff; Cavet, Guy; Ge, Wei; Kulkarni, Amit; Paddison, Patrick J; Chang, Kenneth; Sheth, Nihar; Leproust, Eric; Coffey, Ernest M; Burchard, Julja; McCombie, W Richard; Linsley, Peter; Hannon, Gregory J

    2004-12-01

    Generation of complex libraries of defined nucleic acid sequences can greatly aid the functional analysis of protein and gene function. Previously, such studies relied either on individually synthesized oligonucleotides or on cellular nucleic acids as the starting material. As each method has disadvantages, we have developed a rapid and cost-effective alternative for construction of small-fragment DNA libraries of defined sequences. This approach uses in situ microarray DNA synthesis for generation of complex oligonucleotide populations. These populations can be recovered and either used directly or immortalized by cloning. From a single microarray, a library containing thousands of unique sequences can be generated. As an example of the potential applications of this technology, we have tested the approach for the production of plasmids encoding short hairpin RNAs (shRNAs) targeting numerous human and mouse genes. We achieved high-fidelity clone retrieval with a uniform representation of intended library sequences. PMID:15782200

  3. Parallel Synthesis and Biological Evaluation of 837 Analogues of Procaspase-Activating Compound 1 (PAC-1)

    PubMed Central

    Hsu, Danny C.; Roth, Howard S.; West, Diana C.; Botham, Rachel C.; Novotny, Chris J.; Schmid, Steven C.; Hergenrother, Paul J.

    2011-01-01

    Procaspase-Activating Compound 1 (PAC-1) is an ortho-hydroxy N-acyl hydrazone that enhances the enzymatic activity of procaspase-3 in vitro and induces apoptosis in cancer cells. An analogue of PAC-1, called S-PAC-1, was evaluated in a veterinary clinical trial in pet dogs with lymphoma and found to have considerable potential as an anticancer agent. With the goal of identifying more potent compounds in this promising class of experimental therapeutics, a combinatorial library based on PAC-1 was created, and the compounds were evaluated for their ability to induce death of cancer cells in culture. For library construction, 31 hydrazides were condensed in parallel with 27 aldehydes to create 837 PAC-1 analogues, with an average purity of 91%. The compounds were evaluated for their ability to induce apoptosis in cancer cells, and through this work, six compounds were discovered to be substantially more potent than PAC-1 and S-PAC-1. These six hits were further evaluated for their ability to relieve zinc-mediated inhibition of procaspase-3 in vitro. In general, the newly identified hit compounds are two- to four-fold more potent than PAC-1 and S-PAC-1 in cell culture, and thus have promise as experimental therapeutics for treatment of the many cancers that have elevated expression levels of procaspase-3. PMID:22007686
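
    The library size follows directly from exhaustive pairwise condensation: 31 hydrazides x 27 aldehydes = 837 analogues. A minimal enumeration sketch in Python, using placeholder reagent labels rather than actual structures, is:

      from itertools import product

      # Hypothetical labels; the real library condensed 31 specific hydrazides
      # with 27 specific aldehydes to form N-acyl hydrazones.
      hydrazides = [f"Hz{i:02d}" for i in range(1, 32)]   # 31 hydrazides
      aldehydes = [f"Al{j:02d}" for j in range(1, 28)]    # 27 aldehydes

      # Each (hydrazide, aldehyde) pair corresponds to one PAC-1 analogue.
      library = [f"{hz}+{al}" for hz, al in product(hydrazides, aldehydes)]
      assert len(library) == 31 * 27 == 837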

  4. Structure-based Design and In-Parallel Synthesis of Inhibitors of AmpC b-lactamase

    SciTech Connect

    Tondi, D.; Powers, R.A.; Negri, M.C.; Caselli, M.C.; Blazquez, J.; Costi, M.P.; Shoichet, B.K.

    2010-03-08

    Group I β-lactamases are a major cause of antibiotic resistance to β-lactams such as penicillins and cephalosporins. These enzymes are only modestly affected by classic β-lactam-based inhibitors, such as clavulanic acid. Conversely, small arylboronic acids inhibit these enzymes at sub-micromolar concentrations. Structural studies suggest these inhibitors bind to a well-defined cleft in the group I β-lactamase AmpC; this cleft binds the ubiquitous R1 side chain of β-lactams. Intriguingly, much of this cleft is left unoccupied by the small arylboronic acids. To investigate if larger boronic acids might take advantage of this cleft, structure-guided in-parallel synthesis was used to explore new inhibitors of AmpC. Twenty-eight derivatives of the lead compound, 3-aminophenylboronic acid, led to an inhibitor with 80-fold better binding (2; Ki = 83 nM). Molecular docking suggested orientations for this compound in the R1 cleft. Based on the docking results, 12 derivatives of 2 were synthesized, leading to inhibitors with Ki values of 60 nM and with improved solubility. Several of these inhibitors reversed the resistance of nosocomial Gram-positive bacteria, though they showed little activity against Gram-negative bacteria. The X-ray crystal structure of compound 2 in complex with AmpC was subsequently determined to 2.1 Å resolution. The placement of the proximal two-thirds of the inhibitor in the experimental structure corresponds with the docked structure, but a bond rotation leads to a distinctly different placement of the distal part of the inhibitor. In the experimental structure, the inhibitor interacts with conserved residues in the R1 cleft whose role in recognition has not been previously explored. Combining structure-based design with in-parallel synthesis allowed for the rapid exploration of inhibitor functionality in the R1 cleft of AmpC. The resulting inhibitors differ considerably from β-lactams but

  5. Vertical Single-Crystalline Organic Nanowires on Graphene: Solution-Phase Epitaxy and Optical Microcavities.

    PubMed

    Zheng, Jian-Yao; Xu, Hongjun; Wang, Jing Jing; Winters, Sinéad; Motta, Carlo; Karademir, Ertuğrul; Zhu, Weigang; Varrla, Eswaraiah; Duesberg, Georg S; Sanvito, Stefano; Hu, Wenping; Donegan, John F

    2016-08-10

    Vertically aligned nanowires (NWs) of single-crystal semiconductors have attracted a great deal of interest in the past few years. They have strong potential to be used in device structures with high density and with intriguing optoelectronic properties. However, fabricating such nanowire structures using organic semiconducting materials remains technically challenging. Here we report a simple procedure for the synthesis of crystalline 9,10-bis(phenylethynyl)anthracene (BPEA) NWs on a graphene surface utilizing a solution-phase van der Waals (vdW) epitaxial strategy. The wires are found to grow preferentially in a vertical direction on the surface of graphene. Structural characterization and first-principles ab initio simulations were performed to investigate the epitaxial growth, and the molecular orientation of the BPEA molecules on graphene was studied, revealing the role of interactions at the graphene-BPEA interface in determining the molecular orientation. These free-standing NWs showed not only efficient optical waveguiding with low loss along the NW but also confinement of light between the two end facets of the NW, forming a microcavity Fabry-Pérot resonator. From an analysis of the optical dispersion within such NW microcavities, we observed strong slowing of the waveguided light, with a group velocity reduced to one-tenth the speed of light. Applications of the vertical single-crystalline organic NWs grown on graphene will benefit from a combination of the unique electronic properties and flexibility of graphene and the tunable optical and electronic properties of organic NWs. Therefore, these vertical organic NW arrays on graphene offer the potential for realizing future on-chip light sources. PMID:27438189

  6. Doping and Alloying in the Solution-Phase Synthesis of Germanium Nanocrystals

    SciTech Connect

    Ruddy, D. A.; Neale, N. R.

    2012-01-01

    Group IV nanocrystals (NCs) are receiving increased attention as a potentially non-toxic nanomaterial for use in a number of important optoelectronic applications (e.g., solar photoconversion, photodetectors, LEDs, biological imaging). With these goals in mind, doping and alloying with Group III, IV, and V elements may play a major role in tailoring the NC properties, such as developing n-type and p-type conductivity through substitutional doping, as well as affecting the optical absorption, emission, and overall charge transport in a NC film. Here we present an extension of the mixed-valence iodide precursor methodology to incorporate Group III, IV, and V elements to produce E-GeNC materials. All main-group elements (E) that surround Ge on the periodic table (i.e., E = Al, Si, P, Ga, As, In, Sn, and Sb) can be incorporated via this methodology. The extent to which the dopant elements are included will be discussed, along with the optical absorbance, emission, and related properties of the NCs. In addition, the effect of the dopant elements on the NC growth kinetics will be discussed.

  7. A New Application of Parallel Synthesis Strategy for Discovery of Amide-Linked Small Molecules as Potent Chondroprotective Agents in TNF-α-Stimulated Chondrocytes

    PubMed Central

    Lee, Chia-Chung; Lo, Yang; Ho, Ling-Jun; Lai, Jenn-Haung; Lien, Shiu-Bii; Lin, Leou-Chyr; Chen, Chun-Liang; Chen, Tsung-Chih; Liu, Feng-Cheng; Huang, Hsu-Shan

    2016-01-01

    As part of an effort to profile potential therapeutics for the treatment of inflammation-related diseases, a diverse set of amide-linked small molecules was synthesized using a parallel synthesis strategy. These new compounds were also evaluated for their inhibitory effects on nitric oxide (NO) production using tumor necrosis factor alpha (TNF-α)-induced inflammatory responses in chondrocytes. Among the tested compounds, N-(3-chloro-4-fluorophenyl)-2-hydroxybenzamide (HS-Ck) was the most potent inhibitor of NO production and inducible nitric oxide synthase (iNOS) expression in TNF-α-stimulated chondrocytes. In addition, our biological results indicated that HS-Ck might suppress the expression levels of iNOS and matrix metalloproteinase-13 (MMP-13) activities through downregulating the activation of the nuclear factor kappa B (NF-κB) and signal transducer and activator of transcription 3 (STAT-3) transcription factors. Therefore, parallel synthesis was successfully used to develop a new class of potential anti-inflammatory agents as chondroprotective candidates for the treatment of osteoarthritis. PMID:26963090

  8. A New Application of Parallel Synthesis Strategy for Discovery of Amide-Linked Small Molecules as Potent Chondroprotective Agents in TNF-α-Stimulated Chondrocytes.

    PubMed

    Lee, Chia-Chung; Lo, Yang; Ho, Ling-Jun; Lai, Jenn-Haung; Lien, Shiu-Bii; Lin, Leou-Chyr; Chen, Chun-Liang; Chen, Tsung-Chih; Liu, Feng-Cheng; Huang, Hsu-Shan

    2016-01-01

    As part of an effort to profile potential therapeutics for the treatment of inflammation-related diseases, a diverse set of amide-linked small molecules was synthesized using a parallel synthesis strategy. These new compounds were also evaluated for their inhibitory effects on nitric oxide (NO) production using tumor necrosis factor alpha (TNF-α)-induced inflammatory responses in chondrocytes. Among the tested compounds, N-(3-chloro-4-fluorophenyl)-2-hydroxybenzamide (HS-Ck) was the most potent inhibitor of NO production and inducible nitric oxide synthase (iNOS) expression in TNF-α-stimulated chondrocytes. In addition, our biological results indicated that HS-Ck might suppress the expression levels of iNOS and matrix metalloproteinase-13 (MMP-13) activities through downregulating the activation of the nuclear factor kappa B (NF-κB) and signal transducer and activator of transcription 3 (STAT-3) transcription factors. Therefore, parallel synthesis was successfully used to develop a new class of potential anti-inflammatory agents as chondroprotective candidates for the treatment of osteoarthritis. PMID:26963090

  9. Supramolecular chemistry: from aromatic foldamers to solution-phase supramolecular organic frameworks

    PubMed Central

    2015-01-01

    Summary: This mini-review covers the growth, education, career, and research activities of the author. In particular, the developments of various folded, helical and extended secondary structures from aromatic backbones driven by different noncovalent forces (including hydrogen bonding, donor–acceptor, solvophobicity, and dimerization of conjugated radical cations) and solution-phase supramolecular organic frameworks driven by hydrophobically initiated aromatic stacking in the cavity of cucurbit[8]uril (CB[8]) are highlighted. PMID:26664626

  10. Comparison of photoluminescence of carbon nanotube/ZnO nanostructures synthesized by gas- and solution-phase transport

    NASA Astrophysics Data System (ADS)

    Jin, Changhyun; Lee, Seawook; Kim, Chang-Wan; Park, Suyoung; Lee, Chongmu; Lee, Dongjin

    2014-09-01

    Multiwalled carbon nanotube (MWCNT)/ZnO heterostructures were synthesized by two different processes: (1) gas-phase transport (GPT) and nucleation of Zn powders and (2) solution-phase transport (SPT) chemical reaction of zinc nitrate solution on the MWCNTs. Transmission electron microscopy and X-ray diffraction analysis indicated that the ZnO nanostructures on the MWCNTs from the GPT and SPT processes were poly- and single-crystal hexagonal wurtzite structures, respectively. The major photoluminescence (PL) spectra of our MWCNT/ZnO hybrids, excited at 380 nm and 550 nm, are presented. The PL intensity of the MWCNT/ZnO coaxial nanostructures behaves differently depending on the ZnO synthesis method used on the MWCNTs. The MWCNT/ZnO heterostructures synthesized using the GPT process were more efficient than those synthesized by the SPT process in enhancing the PL intensity around the near-band-edge emission region. However, the emission enhancement around the defect region was mostly attributed to an increase in the O vacancy concentration in the ZnO on the MWCNTs during the SPT process.

  11. Comparison of photoluminescence of carbon nanotube/ZnO nanostructures synthesized by gas- and solution-phase transport

    NASA Astrophysics Data System (ADS)

    Jin, Changhyun; Lee, Seawook; Kim, Chang-Wan; Park, Suyoung; Lee, Chongmu; Lee, Dongjin

    2015-02-01

    Multiwalled carbon nanotube (MWCNT)/ZnO heterostructures were synthesized by two different processes: (1) gas-phase transport (GPT) and nucleation of Zn powders and (2) solution-phase transport (SPT) chemical reaction of zinc nitrate solution on the MWCNTs. Transmission electron microscopy and X-ray diffraction analysis indicated that the ZnO nanostructures on the MWCNTs from the GPT and SPT processes were poly- and single-crystal hexagonal wurtzite structures, respectively. The major photoluminescence (PL) spectra of our MWCNT/ZnO hybrids, excited at 380 nm and 550 nm, are presented. The PL intensity of the MWCNT/ZnO coaxial nanostructures behaves differently depending on the ZnO synthesis method used on the MWCNTs. The MWCNT/ZnO heterostructures synthesized using the GPT process were more efficient than those synthesized by the SPT process in enhancing the PL intensity around the near-band-edge emission region. However, the emission enhancement around the defect region was mostly attributed to an increase in the O vacancy concentration in the ZnO on the MWCNTs during the SPT process.

  12. Solution phase space and conserved charges: A general formulation for charges associated with exact symmetries

    NASA Astrophysics Data System (ADS)

    Hajian, K.; Sheikh-Jabbari, M. M.

    2016-02-01

    We provide a general formulation for calculating conserved charges for solutions to generally covariant gravitational theories with possibly other internal gauge symmetries, in any dimension and with generic asymptotic behavior. These solutions are generically specified by a number of exact (continuous, global) symmetries and some parameters. We define "parametric variations" as field perturbations generated by variations of the solution parameters. Employing the covariant phase space method, we establish that the set of these solutions (up to pure gauge transformations) forms a phase space, the solution phase space, and that the tangent space of this phase space includes the parametric variations. We then compute conserved charge variations associated with the exact symmetries of the family of solutions, caused by parametric variations. Integrating the charge variations over a path in the solution phase space, we define the conserved charges. In particular, we revisit "black hole entropy as a conserved charge" and the derivation of the first law of black hole thermodynamics. We show that the solution phase space setting enables us to define black hole entropy by an integration over any compact, codimension-2, smooth spacelike surface encircling the hole, as well as to obtain a natural generalization of the Wald and Iyer-Wald analysis to cases involving gauge fields.
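
    Schematically, in standard covariant phase space notation (a generic summary of the construction, not an equation quoted from the paper), the charge associated with an exact symmetry generator χ is obtained by integrating a surface charge variation over a codimension-2 surface Σ and then along a path γ in the solution phase space:

      \delta Q_\chi = \oint_{\Sigma} \boldsymbol{k}_\chi(\delta\Phi, \Phi),
      \qquad
      Q_\chi = \int_{\gamma} \delta Q_\chi + Q_{\mathrm{ref}},

    where the δΦ are parametric variations; surface-independence of the result is what allows, for example, black hole entropy to be computed over any compact codimension-2 spacelike surface encircling the hole.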

  13. Parallel computers

    SciTech Connect

    Treveaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  14. Processing of organic electro-optic materials: solution-phase assisted reorientation of chromophores

    NASA Astrophysics Data System (ADS)

    Olbricht, Benjamin C.; Eng, David L. K.; Kozacik, Stephen T.; Ross, Dylan; Prather, Dennis W.

    2013-03-01

    Organic EO materials, sometimes called EO polymers, offer a variety of very promising properties that have improved at remarkable rates over the last decade, and will continue to improve. However, these materials rely on a "poling" process to afford EO activity, which is commonly cited as the bottleneck for the widespread implementation of organic EO material-containing devices. The Solution Phase-Assisted Reorientation of Chromophores (SPARC) is a process that utilizes the mobility of chromophores in the solution phase to afford acentric molecular order during deposition. The electric field can be generated by a corona discharge in a carefully-controlled gas environment. The absence of a poling director during conventional spin deposition forms centric pairs of chromophores which may compromise the efficacy of thermal poling. Direct spectroscopic evidence of linear dichroism in modern organic EO materials has estimated the poling-induced order of the chromophores to be 10-15% of its theoretical maximum, offering the potential for a manyfold enhancement in EO activity if poling is improved. SPARC is designed to overcome these limitations and also to allow the poling of polymeric hosts with temporal thermal (alignment) stabilities greater than the decomposition temperature of the guest chromophore. In this report evidence supporting the theory motivating the SPARC process and the resulting EO activities will be presented. Additionally, the results of trials towards a device demonstration of the SPARC process will be discussed.

  15. Microsomal triglyceride transfer protein (MTP) inhibitors: discovery of clinically active inhibitors using high-throughput screening and parallel synthesis paradigms.

    PubMed

    Chang, George; Ruggeri, Roger B; Harwood, H James

    2002-07-01

    The inhibition of microsomal triglyceride transfer protein (MTP) blocks the hepatic secretion of very low density lipoproteins (VLDL) and the intestinal secretion of chylomicrons. Consequently, this mechanism provides a highly efficacious pharmacological target for the lowering of low density lipoprotein (LDL) cholesterol and reduction of postprandial lipemia. The combination of these effects could afford unprecedented benefit in the treatment of atherosclerosis and consequent cardiovascular disease. The promise of this therapeutic target has attracted widespread interest in the pharmaceutical industry. Independent efforts have yielded strikingly similar series of lipophilic amide inhibitors. The way in which the evolutionary paths of distinct inhibitor series have tended to converge through the course of robotics-assisted synthesis efforts is illustrated with candidates from Bristol-Myers Squibb and Pfizer. Hanging in the balance with the exceptional potency of the compounds presented are the potential adverse effects due to blockage of intestinal fat absorption and hepatic lipid secretion. Finding a degree of efficacy that can be safely tolerated defines the dilemma surrounding the advancement of these compounds to clinical practice.

  16. Solution-phase photochemistry of a [FeFe]hydrogenase model compound: Evidence of photoinduced isomerisation

    SciTech Connect

    Kania, Rafal; Hunt, Neil T.; Frederix, Pim W. J. M.; Wright, Joseph A.; Pickett, Christopher J.; Ulijn, Rein V.

    2012-01-28

    The solution-phase photochemistry of the [FeFe] hydrogenase subsite model (μ-S(CH2)3S)Fe2(CO)4(PMe3)2 has been studied using ultrafast time-resolved infrared spectroscopy supported by density functional theory calculations. In three different solvents, n-heptane, methanol, and acetonitrile, relaxation of the tricarbonyl intermediate formed by UV photolysis of a carbonyl ligand leads to geminate recombination with a bias towards a thermodynamically less stable isomeric form, suggesting that facile interconversion of the ligand groups at the Fe center is possible in the unsaturated species. In a polar or hydrogen-bonding solvent, this process competes with solvent substitution, leading to the formation of stable solvent adduct species. The data provide further insight into the effect of incorporating non-carbonyl ligands on the dynamics and photochemistry of hydrogenase-derived biomimetic compounds.

  17. Solution-phase photochemistry of a [FeFe]hydrogenase model compound: evidence of photoinduced isomerisation.

    PubMed

    Kania, Rafal; Frederix, Pim W J M; Wright, Joseph A; Ulijn, Rein V; Pickett, Christopher J; Hunt, Neil T

    2012-01-28

    The solution-phase photochemistry of the [FeFe] hydrogenase subsite model (μ-S(CH(2))(3)S)Fe(2)(CO)(4)(PMe(3))(2) has been studied using ultrafast time-resolved infrared spectroscopy supported by density functional theory calculations. In three different solvents, n-heptane, methanol, and acetonitrile, relaxation of the tricarbonyl intermediate formed by UV photolysis of a carbonyl ligand leads to geminate recombination with a bias towards a thermodynamically less stable isomeric form, suggesting that facile interconversion of the ligand groups at the Fe center is possible in the unsaturated species. In a polar or hydrogen bonding solvent, this process competes with solvent substitution leading to the formation of stable solvent adduct species. The data provide further insight into the effect of incorporating non-carbonyl ligands on the dynamics and photochemistry of hydrogenase-derived biomimetic compounds.

  18. Accelerated exploration of multi-principal element alloys with solid solution phases

    PubMed Central

    Senkov, O.N.; Miller, J.D.; Miracle, D.B.; Woodward, C.

    2015-01-01

    Recent multi-principal element, high entropy alloy (HEA) development strategies vastly expand the number of candidate alloy systems, but also pose a new challenge—how to rapidly screen thousands of candidate alloy systems for targeted properties. Here we develop a new approach to rapidly assess structural metals by combining calculated phase diagrams with simple rules based on the phases present, their transformation temperatures and useful microstructures. We evaluate over 130,000 alloy systems, identifying promising compositions for more time-intensive experimental studies. We find the surprising result that solid solution alloys become less likely as the number of alloy elements increases. This contradicts the major premise of HEAs—that increased configurational entropy increases the stability of disordered solid solution phases. As the number of elements increases, the configurational entropy rises slowly while the probability of at least one pair of elements favouring formation of intermetallic compounds increases more rapidly, explaining this apparent contradiction. PMID:25739749
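
    The apparent contradiction noted above has a simple quantitative illustration (textbook expressions, not values taken from the paper): for an equiatomic alloy of N elements the ideal configurational entropy grows only logarithmically, while the number of element pairs that could favour an intermetallic compound grows quadratically,

      \Delta S_{\mathrm{conf}} = R \ln N, \qquad N_{\mathrm{pairs}} = \binom{N}{2} = \frac{N(N-1)}{2},

    so going from N = 5 to N = 10 raises the entropy term only from about 1.61R to 2.30R, while the number of candidate compound-forming pairs climbs from 10 to 45.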

  19. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    PubMed

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  20. Sample-Averaged Biexciton Quantum Yield Measured by Solution-Phase Photon Correlation

    PubMed Central

    Beyler, Andrew P.; Bischof, Thomas S.; Cui, Jian; Coropceanu, Igor; Harris, Daniel K.; Bawendi, Moungi G.

    2015-01-01

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals. PMID:25409496
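
    For context, in the low-excitation-flux limit of a pulsed photon-correlation experiment the biexciton-to-exciton quantum-yield ratio is commonly read off as the ratio of the central to the side peak areas of the second-order correlation function (a standard relation for such measurements, not a formula quoted from the abstracts above):

      \frac{\eta_{\mathrm{BX}}}{\eta_{\mathrm{X}}} \approx \frac{\int_{\mathrm{center}} g^{(2)}(\tau)\, d\tau}{\int_{\mathrm{side}} g^{(2)}(\tau)\, d\tau} \quad (\text{low-flux limit}).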

  1. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
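
    As a deliberately simplified illustration of the image-space decomposition and load-balancing concepts surveyed above (the per-pixel shading function is a placeholder, not a real renderer), a Python sketch splits the framebuffer into tiles, renders them in parallel, and reassembles the image.

      import numpy as np
      from multiprocessing import Pool

      WIDTH, HEIGHT, TILE = 256, 256, 64

      def shade(x, y):
          """Placeholder per-pixel work standing in for real rendering."""
          return (x * 31 + y * 17) % 255

      def render_tile(bounds):
          x0, y0, x1, y1 = bounds
          tile = np.empty((y1 - y0, x1 - x0), dtype=np.uint8)
          for y in range(y0, y1):
              for x in range(x0, x1):
                  tile[y - y0, x - x0] = shade(x, y)
          return bounds, tile

      if __name__ == "__main__":
          tiles = [(x, y, min(x + TILE, WIDTH), min(y + TILE, HEIGHT))
                   for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
          image = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
          with Pool() as pool:
              # Independent tiles are farmed out to workers (image-space
              # parallelism); unordered collection tolerates uneven tile cost.
              for (x0, y0, x1, y1), tile in pool.imap_unordered(render_tile, tiles):
                  image[y0:y1, x0:x1] = tile
          print("assembled image:", image.shape)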

  2. Dynamics of organic and inorganic arsenic in the solution phase of an acidic fen in Germany

    NASA Astrophysics Data System (ADS)

    Huang, J.-H.; Matzner, E.

    2006-04-01

    Wetland soils play a key role in the transformation of heavy metals in forested watersheds, influencing their mobility and ecotoxicity. Our goal was to investigate the mechanisms of release from the solid to the solution phase, the mobility, and the transformation of arsenic species in a fen soil. In methanol-water extracts, monomethylarsonic acid, dimethylarsinic acid, trimethylarsine oxide, arsenobetaine, and two unknown organic arsenic species were found, with concentrations up to 14 ng As g-1 in the surface horizon. Arsenate is the dominant species at 0-30 cm depth, whereas arsenite predominated at 30-70 cm depth. Only up to 2.2% of the total arsenic in the fen was extractable with methanol-water. In porewaters, a depth-gradient spatial variation of arsenic species, pH, redox potentials, and the other chemical parameters along the profile was observed in June, together with a high proportion of organic arsenic species (up to 1.2 μg As L-1, 70% of total arsenic). Tetramethylarsonium ion and an unknown organic arsenic species were additionally detected in porewaters at deeper horizons. In comparison, the arsenic speciation in porewaters in April was homogeneous with depth, and no organic arsenic species were found. Thus, the occurrence of microbial methylation of arsenic in the fen was demonstrated for the first time. Total arsenic concentrations in porewaters were about 10 times higher in June than in April and were accompanied by elevated concentrations of total iron, lower concentrations of sulfate, and the presence of ammonium and phosphate. The low proportion of methanol-water-extractable total arsenic suggests a generally low mobility of arsenic in fen soils. The release of arsenic from the solid to the solution phase in the fen is dominantly controlled by the dissolution of iron oxides, redox transformation, and methylation of arsenic, driven by microbial activity in the growing season. As a result, increased concentrations of total arsenic and potentially toxic arsenic species in fen

  3. Scalable solution-phase epitaxial growth of symmetry-mismatched heterostructures on two-dimensional crystal soft template

    PubMed Central

    Lin, Zhaoyang; Yin, Anxiang; Mao, Jun; Xia, Yi; Kempf, Nicholas; He, Qiyuan; Wang, Yiliu; Chen, Chih-Yen; Zhang, Yanliang; Ozolins, Vidvuds; Ren, Zhifeng; Huang, Yu; Duan, Xiangfeng

    2016-01-01

    Epitaxial heterostructures with precisely controlled composition and electronic modulation are of central importance for electronics, optoelectronics, thermoelectrics, and catalysis. In general, epitaxial material growth requires identical or nearly identical crystal structures with a small misfit in lattice symmetry and parameters and is typically achieved by vapor-phase deposition in vacuum. We report a scalable solution-phase growth of symmetry-mismatched PbSe/Bi2Se3 epitaxial heterostructures by using two-dimensional (2D) Bi2Se3 nanoplates as soft templates. The dangling bond-free surface of 2D Bi2Se3 nanoplates guides the growth of PbSe crystal without requiring a one-to-one match in the atomic structure, which exerts minimal restriction on the epitaxial layer. With a layered structure and weak van der Waals interlayer interaction, the interface layer in the 2D Bi2Se3 nanoplates can deform to accommodate the incoming layer, thus functioning as a soft template for symmetry-mismatched epitaxial growth of cubic PbSe crystal on rhombohedral Bi2Se3 nanoplates. We show that a solution chemistry approach can be readily used for the synthesis of gram-scale PbSe/Bi2Se3 epitaxial heterostructures, in which the square PbSe (001) layer forms on the trigonal/hexagonal (0001) plane of the Bi2Se3 nanoplates. We further show that the resulting PbSe/Bi2Se3 heterostructures can be readily processed into a bulk pellet with considerably suppressed thermal conductivity (0.30 W/m·K at room temperature) while retaining respectable electrical conductivity, together delivering a thermoelectric figure of merit ZT three times higher than that of the pristine Bi2Se3 nanoplates at 575 K. Our study demonstrates a unique epitaxy mode enabled by the 2D nanocrystal soft template via an affordable and scalable solution chemistry approach. It opens up new opportunities for the creation of diverse epitaxial heterostructures with highly disparate structures and functions. PMID:27730211
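
    For reference, the thermoelectric figure of merit mentioned above is the standard dimensionless combination of the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ (a textbook definition, not a formula from the paper),

      ZT = \frac{S^{2} \sigma T}{\kappa},

    so suppressing κ (here to 0.30 W/m·K) while retaining respectable electrical conductivity raises ZT in direct proportion.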

  4. Novel solution-phase structures of gallium-containing pyrogallol[4]arene scaffolds

    PubMed Central

    Kumari, Harshita; Kline, Steven R.; Wycoff, Wei G.; Paul, Rick L.; Mossine, Andrew V.; Deakyne, Carol A.; Atwood, Jerry L.

    2012-01-01

    The variations in architecture of gallium-seamed (PgC4Ga) and gallium-zinc-seamed (PgC4GaZn) C-butylpyrogallol[4]arene nanoassemblies in solution (SANS/NMR) versus the solid state (XRD) have been investigated. Rearrangement from the solid-state spheroidal to the solution-phase toroidal shape differentiates the gallium-containing pyrogallol[4]arene nanoassemblies from all other PgCnM nanocapsules studied thus far. Different structural arrangements of the metals and arenes of PgC4Ga versus PgC4GaZn have been deduced from the different toroidal dimensions, C–H proton environments and guest encapsulation of the two toroids. PGAA of mixed-metal hexamers reveals a decrease in gallium-to-metal ratio as the second metal varies from cobalt to zinc. Overall, the combined study demonstrates the versatility of gallium in directing the self-assembly of pyrogallol[4]arenes into novel nanoarchitectures. PMID:22511521

  5. Solution-Phase Deposition and Nanopatterning of GeSbSe Phase-Change Materials

    SciTech Connect

    Milliron,D.; Raoux, S.; Shelby, R.; Jordan-Sweet, J.

    2007-01-01

    Chalcogenide films with reversible amorphous-crystalline phase transitions have been commercialized as optically rewritable data-storage media, and intensive effort is now focused on integrating them into electrically addressed non-volatile memory devices (phase-change random-access memory or PCRAM). Although optical data storage is accomplished by laser-induced heating of continuous films, electronic memory requires integration of discrete nanoscale phase-change material features with read/write electronics. Currently, phase-change films are most commonly deposited by sputter deposition, and patterned by conventional lithography. Metal chalcogenide films for transistor applications have recently been deposited by a low-temperature, solution-phase route. Here, we extend this methodology to prepare thin films and nanostructures of GeSbSe phase-change materials. We report the ready tuneability of phase-change properties in GeSbSe films through composition variation achieved by combining novel precursors in solution. Rapid, submicrosecond phase switching is observed by laser-pulse annealing. We also demonstrate that prepatterned holes can be filled to fabricate phase-change nanostructures from hundreds down to tens of nanometres in size, offering enhanced flexibility in fabricating PCRAM devices with reduced current requirements.

  6. Magnesium-solution phase catholyte semi-fuel cell for undersea vehicles

    NASA Astrophysics Data System (ADS)

    Medeiros, Maria G.; Bessette, Russell R.; Deschenes, Craig M.; Patrissi, Charles J.; Carreiro, Louis G.; Tucker, Steven P.; Atwater, Delmas W.

    A magnesium-solution phase catholyte semi-fuel cell (SFC) is under development at the Naval Undersea Warfare Center (NUWC) as an energetic electrochemical system for low rate, long endurance undersea vehicle applications. This electrochemical system consists of a magnesium anode, a sodium chloride anolyte, a conductive membrane, a catalyzed carbon current collector, and a catholyte of sodium chloride, sulfuric acid and hydrogen peroxide. Bipolar electrode fabrication to minimize cell stack volume, long duration testing, and scale-up of electrodes from 77 to 1000 cm2 have been the objectives of this project. Single cell and multi-cell testing at the 77 cm2 configuration have been utilized to optimize all testing parameters including start-up conditions, flow rates, temperatures, and electrolyte concentrations while maintaining high voltages and efficiencies. The fabrication and testing of bipolar electrodes and operating parameter optimization for large electrode area cells will be presented. Designs for 1000 cm2 electrodes, electrolyte flow patterns and current/voltage distribution across these large area cells will also be discussed.

  7. Promoting solution phase discharge in Li-O2 batteries containing weakly solvating electrolyte solutions

    NASA Astrophysics Data System (ADS)

    Gao, Xiangwen; Chen, Yuhui; Johnson, Lee; Bruce, Peter G.

    2016-08-01

    On discharge, the Li-O2 battery can form a Li2O2 film on the cathode surface, leading to low capacities, low rates and early cell death, or it can form Li2O2 particles in solution, leading to high capacities at relatively high rates and avoiding early cell death. Achieving discharge in solution is important and may be encouraged by the use of high donor or acceptor number solvents or salts that dissolve the LiO2 intermediate involved in the formation of Li2O2. However, the characteristics that make high donor or acceptor number solvents good (for example, high polarity) result in them being unstable towards LiO2 or Li2O2. Here we demonstrate that introduction of the additive 2,5-di-tert-butyl-1,4-benzoquinone (DBBQ) promotes solution phase formation of Li2O2 in low-polarity and weakly solvating electrolyte solutions. Importantly, it does so while simultaneously suppressing direct reduction to Li2O2 on the cathode surface, which would otherwise lead to Li2O2 film growth and premature cell death. It also halves the overpotential during discharge, increases the capacity 80- to 100-fold and enables rates >1 mA cm⁻² (areal) for cathodes with areal capacities of >4 mAh cm⁻². The DBBQ additive operates by a new mechanism that avoids the reactive LiO2 intermediate in solution.

  8. Comparative Study of Solution Phase and Vapor Phase Deposition of Aminosilanes on Silicon Dioxide Surfaces

    PubMed Central

    Yadav, Amrita R.; Sriram, Rashmi; Carter, Jared A.; Miller, Benjamin L.

    2014-01-01

    The uniformity of aminosilane layers typically used for the modification of hydroxyl bearing surfaces such as silicon dioxide is critical for a wide variety of applications, including biosensors. However, in spite of many studies that have been undertaken on surface silanization, there remains a paucity of easy-to-implement deposition methods reproducibly yielding smooth aminosilane monolayers. In this study, solution- and vapor-phase deposition methods for three aminoalkoxysilanes differing in the number of reactive groups (3-aminopropyl triethoxysilane (APTES), 3-aminopropyl methyl diethoxysilane (APMDES) and 3-aminopropyl dimethyl ethoxysilane (APDMES)) were assessed with the aim of identifying methods that yield highly uniform and reproducible silane layers that are resistant to minor procedural variations. Silane film quality was characterized based on measured thickness, hydrophilicity and surface roughness. Additionally, hydrolytic stability of the films was assessed via these thickness and contact angle values following desorption in water. We found that two simple solution-phase methods, an aqueous deposition of APTES and a toluene based deposition of APDMES, yielded high quality silane layers that exhibit comparable characteristics to those deposited via vapor-phase methods. PMID:24411379

  9. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  10. Development of anodes for aluminum/air batteries: Solution phase inhibition of corrosion: Final report

    SciTech Connect

    Macdonald, D.D.; English, C.; Urquidi-Macdonald, M.

    1989-03-01

    Solution-phase inhibition is a promising strategy for controlling the corrosion of the aluminum fuel in alkaline aluminum/air batteries. Development of effective inhibitors would permit the use of scrap aluminum as fuel and thereby significantly improve the economics of the battery, because the cost of the fuel would have been partly or wholly defrayed by its previous use. In this study, we explored the discharge characteristics of aluminum in inhibited and uninhibited 4 M KOH at 50 °C and compared the performance of the fuel with that of two leading alloy fuels that had been evaluated in our previous work, Alloy BDW (Al-1Mg-0.1In-0.2Mn) and Alloy 21 (Al-0.2Ga-0.1In-0.1Tl). The inhibitors employed in this study, SnO₃²⁻, In(OH)₃, Ga(OH)₄⁻, MnO₄²⁻, and binary combinations thereof, are either alloying elements of Alloys BDW and 21 or have been investigated previously. We found that potassium manganate and Na₂SnO₃ + In(OH)₃ are effective inhibitor systems, particularly at high discharge rates; at low discharge rates only manganate offers a significant advantage in coulombic efficiency over the uninhibited solution. Alloy BDW exhibits a very low open-circuit (standby) corrosion rate, but its coulombic efficiency under discharge, as determined by delineating the partial anodic and cathodic reactions, was found to be no better than that of aluminum in the same uninhibited solution. Alloy 21 was found to exhibit comparable performance to Alloy BDW under open-circuit conditions and a much higher coulombic efficiency at low discharge rates, but the performance of this alloy under high-discharge-rate conditions was not determined. Alloy 21 has the significant disadvantage that it contains thallium. 36 refs., 14 figs., 2 tabs.

  11. A liquid flatjet system for solution phase soft-x-ray spectroscopy

    PubMed Central

    Ekimova, Maria; Quevedo, Wilson; Faubel, Manfred; Wernet, Philippe; Nibbering, Erik T. J.

    2015-01-01

    We present a liquid flatjet system for solution phase soft-x-ray spectroscopy. The flatjet set-up utilises the phenomenon of formation of stable liquid sheets upon collision of two identical laminar jets. Colliding the two single water jets, coming out of the nozzles with 50 μm orifices, under an impact angle of 48° leads to double sheet formation, of which the first sheet is 4.6 mm long and 1.0 mm wide. The liquid flatjet is fully functional under vacuum conditions (<10−3 mbar), allowing soft-x-ray spectroscopy of aqueous solutions in transmission mode. We analyse the liquid water flatjet thickness under atmospheric pressure using interferometric or mid-infrared transmission measurements, and under vacuum conditions by measuring the absorbance of the O K-edge of water in transmission, and compare our results with previously published data obtained with standing cells with Si3N4 membrane windows. The thickness of the first liquid sheet is found to vary between 1.4 and 3 μm, depending on the transverse and longitudinal position in the liquid sheet. We observe that the derived thickness is of similar magnitude under 1 bar and under vacuum conditions. A catcher unit facilitates the recycling of the solutions, allowing measurements on small sample volumes (∼10 ml). We demonstrate the applicability of this approach by presenting measurements on the N K-edge of aqueous NH4+. Our results suggest the high potential of using liquid flatjets in steady-state and time-resolved studies in the soft-x-ray regime. PMID:26798824

  12. Polarization Sensitive THz TDS and Fabrication of Alignment Cells for Solution Phase THz Spectroscopy

    NASA Astrophysics Data System (ADS)

    George, Deepu Koshy

    sense that it makes use of the polarization state of the THz pulse, which is also the case for the alignment spectroscopy. The PMOTS technique detects the rotation and change in ellipticity of the incident polarization, from which the Hall coefficients of the sample can be calculated. The final section deals with the fabrication of Dynamical Alignment Terahertz Spectroscopy cells for solution phase measurements. Design, fabrication and process optimization are detailed. Micro-fabrication based on optical lithography and SU-8 negative photoresist has been explored.

  13. Total Synthesis of Teixobactin.

    PubMed

    Giltrap, Andrew M; Dowman, Luke J; Nagalingam, Gayathri; Ochoa, Jessica L; Linington, Roger G; Britton, Warwick J; Payne, Richard J

    2016-06-01

    The first total synthesis of the cyclic depsipeptide natural product teixobactin is described. Synthesis was achieved by solid-phase peptide synthesis, incorporating the unusual l-allo-enduracididine as a suitably protected synthetic cassette and employing a key on-resin esterification and solution-phase macrolactamization. The synthetic natural product was shown to possess potent antibacterial activity against a range of Gram-positive pathogenic bacteria, including a virulent strain of Mycobacterium tuberculosis and methicillin-resistant Staphylococcus aureus (MRSA). PMID:27191730

  14. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
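
    As a hedged illustration of the estimate quoted above (not code from the paper), the following Python sketch evaluates N = (R/r)^α for the laminar (α = 4) and turbulent (α = 19/7) cases; the radii used are placeholders chosen only for the example.

```python
# Sketch of the pipe-count estimate N = (R/r)**alpha from the abstract above.
# alpha = 4 when the lubricating water flow is laminar, 19/7 when turbulent.

def equivalent_pipe_count(R: float, r: float, turbulent: bool = False) -> float:
    """Number of small pipes of radius r needed to match one pipe of radius R."""
    alpha = 19.0 / 7.0 if turbulent else 4.0
    return (R / r) ** alpha

if __name__ == "__main__":
    R, r = 0.30, 0.10  # metres; hypothetical radii for illustration only
    print(f"laminar:   N ~ {equivalent_pipe_count(R, r):.0f}")        # ~81
    print(f"turbulent: N ~ {equivalent_pipe_count(R, r, True):.0f}")  # ~20
```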

  15. Synthesis of Chemiluminescent Esters: A Combinatorial Synthesis Experiment for Organic Chemistry Students

    ERIC Educational Resources Information Center

    Duarte, Robert; Nielson, Janne T.; Dragojlovic, Veljko

    2004-01-01

    A group of techniques aimed at synthesizing a large number of structurally diverse compounds is called combinatorial synthesis. The synthesis of chemiluminescent esters using parallel combinatorial synthesis and mix-and-split combinatorial synthesis is described.
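
    The contrast between the two strategies named in this record can be sketched combinatorially: parallel synthesis runs one vessel per product, while mix-and-split reuses pooled material so the number of reactions grows with the sum, not the product, of the building-block counts. The building-block names below are placeholders, not the esters used in the experiment.

```python
from itertools import product

# Hypothetical building-block sets for two coupling rounds.
acids = ["acid_A", "acid_B", "acid_C"]
alcohols = ["alcohol_1", "alcohol_2"]

# Parallel combinatorial synthesis: one discrete vessel per product.
parallel_library = [f"{a}+{b}" for a, b in product(acids, alcohols)]
reactions_parallel = len(acids) * len(alcohols)   # 6 reaction vessels

# Mix-and-split: supports are split across the acids, pooled, then re-split
# across the alcohols; every bead ends up carrying one member of the same
# library, but only len(acids) + len(alcohols) reactions are performed.
reactions_mix_split = len(acids) + len(alcohols)  # 5 reaction vessels

print(parallel_library)
print(reactions_parallel, reactions_mix_split)
```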

  16. Large-scale solution-phase growth of Cu-doped ZnO nanowire networks.

    PubMed

    Xu, Chunju; Koo, Tae-Woong; Kim, Byung-Sung; Lee, Jae-Hyun; Hwang, Sung Woo; Whang, Dongmok

    2011-07-01

    Film-like networks of Cu-doped (0.8-2.5 at.%) ZnO nanowires were successfully synthesized through a facile solution process at a low temperature (<100 °C). The pH value of the solution plays a key role in controlling the density and quality of the Cu-doped ZnO nanowires, and the dopant concentration of the ZnO nanowires was controlled by adjusting the Cu2+/Zn2+ concentration ratio during the synthesis. The structural study showed that the as-prepared Cu-doped ZnO nanowires with a narrow diameter range of 20-30 nm were single-crystalline and grew along the [0001] direction. Photoluminescence and electrical conductivity measurements showed that Cu doping can lead to a redshift in bandgap energy and an increase in the resistivity of ZnO. Thermal annealing of the as-grown nanowires at a low temperature (300 °C) decreased the defect-related emission within the visible range and increased the electrical conductivity. The high-quality ZnO nanowire network with controlled doping will enable further application to flexible and transparent electronics.

  17. Fragment-based domain shuffling approach for the synthesis of pyran-based macrocycles

    PubMed Central

    Comer, Eamon; Liu, Haibo; Joliton, Adrien; Clabaut, Alexandre; Johnson, Christopher; Akella, Lakshmi B.; Marcaurelle, Lisa A.

    2011-01-01

    Complexity and the presence of stereogenic centers have been correlated with success as compounds transition from discovery through the clinic. Here we describe the synthesis of a library of pyran-containing macrocycles with a high degree of structural complexity and up to five stereogenic centers. A key feature of the design strategy was to use a modular synthetic route with three fragments that can be readily interchanged or “shuffled” to produce subtly different variants with distinct molecular shapes. A total of 352 macrocycles were synthesized ranging in size from 14- to 16-membered rings. In order to facilitate the generation of stereostructure-activity relationships, the complete matrix of stereoisomers was prepared for each macrocycle. Solid-phase assisted parallel solution-phase techniques were employed to allow for rapid analogue generation. An intramolecular nitrile-activated nucleophilic aromatic substitution reaction was used for the key macrocyclization step. PMID:21383141

  18. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  19. Importance of Solvation in Understanding the Chiroptical Spectra of Natural Products in Solution Phase: Garcinia Acid Dimethyl Ester

    PubMed Central

    Polavarapu, Prasad L.; Scalmani, Giovanni; Hawkins, Edward K.; Rizzo, Carmelo; Jeirath, Neha; Ibnusaud, Ibrahim; Habel, Deenamma; Nair, Divya Sadasivan; Haleema, Simimole

    2013-01-01

    The optical rotatory dispersion (ORD), electronic circular dichroism (ECD), and vibrational circular dichroism (VCD) spectra of (+)-garcinia acid dimethyl ester have been measured and analyzed by comparison with the corresponding spectra predicted by quantum chemical methods for (2S,3S)-garcinia acid dimethyl ester. For solution-phase calculations the recently developed continuous surface charge polarizable continuum model (PCM) has been used. It is found that gas-phase predictions and PCM predictions at the B3LYP/aug-cc-pVDZ level yield nearly mirror-image ECD spectra in the 190–250 nm region for the same absolute configuration and that gas-phase ECD predictions lead to incorrect absolute configuration. At the CAM-B3LYP/aug-cc-pVDZ level, however, gas-phase predictions and PCM predictions of ECD in the 190–250 nm region are not so different, but PCM predictions provide better agreement with the experimental observations. For carbonyl stretching vibrations, the vibrational band positions predicted at the B3LYP/aug-cc-pVDZ level in gas-phase calculations differ significantly from the corresponding experimentally observed band positions, and this discrepancy has also been corrected by the use of PCM. In addition, the solution-phase VCD predictions provided better agreement (with experimental VCD observations) than gas-phase VCD predictions. These observations underscore the importance of including solvent effects in quantum chemical calculations of chiroptical spectroscopic properties. PMID:21114277

  20. Importance of solvation in understanding the chiroptical spectra of natural products in solution phase: garcinia acid dimethyl ester.

    PubMed

    Polavarapu, Prasad L; Scalmani, Giovanni; Hawkins, Edward K; Rizzo, Carmelo; Jeirath, Neha; Ibnusaud, Ibrahim; Habel, Deenamma; Nair, Divya Sadasivan; Haleema, Simimole

    2011-03-25

    The optical rotatory dispersion (ORD), electronic circular dichroism (ECD), and vibrational circular dichroism (VCD) spectra of (+)-garcinia acid dimethyl ester have been measured and analyzed by comparison with the corresponding spectra predicted by quantum chemical methods for (2S,3S)-garcinia acid dimethyl ester. For solution-phase calculations the recently developed continuous surface charge polarizable continuum model (PCM) has been used. It is found that gas-phase predictions and PCM predictions at the B3LYP/aug-cc-pVDZ level yield nearly mirror-image ECD spectra in the 190-250 nm region for the same absolute configuration and that gas-phase ECD predictions lead to incorrect absolute configuration. At the CAM-B3LYP/aug-cc-pVDZ level, however, gas-phase predictions and PCM predictions of ECD in the 190-250 nm region are not so different, but PCM predictions provide better agreement with the experimental observations. For carbonyl stretching vibrations, the vibrational band positions predicted at the B3LYP/aug-cc-pVDZ level in gas-phase calculations differ significantly from the corresponding experimentally observed band positions, and this discrepancy has also been corrected by the use of PCM. In addition, the solution-phase VCD predictions provided better agreement (with experimental VCD observations) than gas-phase VCD predictions. These observations underscore the importance of including solvent effects in quantum chemical calculations of chiroptical spectroscopic properties. PMID:21114277

  1. Understanding the solution phase chemistry and solid state thermodynamic behavior of pharmaceutical cocrystals

    NASA Astrophysics Data System (ADS)

    Maheshwari, Chinmay

    Cocrystals have drawn a lot of research interest in the last decade due to their potential to favorably alter the physicochemical and biopharmaceutical properties of active pharmaceutical ingredients. This dissertation focuses on the thermodynamic stability and solubility of pharmaceutical cocrystals. Specifically, the objectives are to: (i) investigate the influence of coformer properties such as solubility and ionization characteristics on cocrystal solubility and stability as a function of pH, (ii) measure the thermodynamic solubility of metastable cocrystals, and study the solubility differences measured by kinetic and equilibrium methods, (iii) investigate the role of surfactants on the solubility and synthesis of cocrystals, (iv) investigate the solid state phase transformation of reactants to cocrystals and the factors that influence the reaction kinetics, and (v) provide models that enable the prediction of cocrystal formation by calculating the free energy of formation for a solid to solid transformation of reactants to cocrystals. Cocrystal solubilities were measured directly when cocrystals were thermodynamically stable, while solubilities were calculated from eutectic concentration measurements when cocrystals were of higher solubility than their components. Cocrystal solubility was highly dependent on coformer solubilities for gabapentin-lactam and lamotrigine cocrystals. It was found that melting point is not a good indicator of cocrystal solubility, as solute-solvent interactions quantified by the activity coefficient play a major role in the observed solubility. Similar to salts, cocrystals also exhibit a pHmax; however, salts and cocrystals have different dependencies on the parameters that govern the value of pHmax. It is also shown that cocrystals could provide a solubility advantage over salts, as lamotrigine-nicotinamide cocrystal hydrate has about 6-fold higher solubility relative to lamotrigine-saccharin salt. In the case of mixtures of solid

  2. 2'-O-Methyl- and 2'-O-propargyl-5-methylisocytidine: synthesis, properties and impact on the isoCd-dG and the isoCd-isoGd base pairing in nucleic acids with parallel and antiparallel strand orientation.

    PubMed

    Jana, Sunit K; Leonard, Peter; Ingale, Sachin A; Seela, Frank

    2016-06-01

    Oligonucleotides containing 2'-O-methylated 5-methylisocytidine (3) and 2'-O-propargyl-5-methylisocytidine (4) as well as the non-functionalized 5-methyl-2'-deoxyisocytidine (1b) were synthesized. MALDI-TOF mass spectra of oligonucleotides containing 1b are susceptible to a stepwise depyrimidination. In contrast, oligonucleotides incorporating 2'-O-alkylated nucleosides 3 and 4 are stable. This is supported by acid catalyzed hydrolysis experiments performed on nucleosides in solution. 2'-O-Alkylated nucleoside 3 was synthesized from 2'-O-5-dimethyluridine via tosylation, anhydro nucleoside formation and ring opening. The corresponding 4 was obtained by direct regioselective alkylation of 5-methylisocytidine (1d) with propargyl bromide under phase-transfer conditions. Both compounds were converted to phosphoramidites and employed in solid-phase oligonucleotide synthesis. Hybridization experiments resulted in duplexes with antiparallel or parallel chains. In parallel duplexes, methylation or propargylation of the 2'-hydroxyl group of isocytidine leads to destabilization while in antiparallel DNA this effect is less pronounced. 2'-O-Propargylated 4 was used to cross-link nucleosides and oligonucleotides to homodimers by a stepwise click ligation with a bifunctional azide. PMID:27221215

  3. High resolution ion mobility measurements for gas phase proteins: correlation between solution phase and gas phase conformations

    NASA Astrophysics Data System (ADS)

    Hudgins, Robert R.; Woenckhaus, Jürgen; Jarrold, Martin F.

    1997-11-01

    Our high resolution ion mobility apparatus has been modified by attaching an electrospray source to perform measurements for biological molecules. While the greater resolving power permits the resolution of more conformations for BPTI and cytochrome c, the resolved features are generally much broader than expected for a single rigid conformation. A major advantage of the new experimental configuration is the much gentler introduction of ions into the drift tube, so that the observed gas phase conformations appear to more closely reflect those present in solution. For example, it is possible to distinguish between the native state of cytochrome c and the methanol-denatured form on the basis of the ion mobility measurements; the mass spectra alone are not sensitive enough to detect this change. Thus this approach may provide a quick and sensitive tool for probing the solution phase conformations of biological molecules.

  4. Ultrasensitive Analysis of Binding Affinity of HIV Receptor and Neutralizing Antibody Using Solution-Phase Electrochemiluminescence Assay

    PubMed Central

    Xu, Xiao-Hong Nancy; Wen, Zhaoyang; Brownlow, William J.

    2012-01-01

    Binding of a few ligand molecules with their receptors on the cell surface can initiate cellular signaling transduction pathways and trigger viral infection of host cells. HIV-1 infects host T-cells by binding its viral envelope protein (gp120) with its receptor (a glycoprotein, CD4) on T cells. A primary strategy to prevent and treat HIV infection is to develop therapies (e.g., neutralizing antibodies) that can block specific binding of CD4 with gp120. The infection often leads to lower counts of CD4 cells, which makes CD4 an effective biomarker to monitor AIDS progression and treatment. Despite research over decades, quantitative assays for effective measurement of the binding affinities of protein-protein (ligand-receptor, antigen-antibody) interactions remain highly sought. Solid-phase electrochemiluminescence (ECL) immunoassay has been commonly used to capture analytes from solution for analysis, which involves immobilization of antibody on solid surfaces (micron-sized beads), but it cannot quantitatively measure binding affinities of molecular interactions. In this study, we have developed a solution-phase ECL assay with a wide dynamic range (0–2 nM) and high sensitivity and specificity for quantitative analysis of CD4 at the femtomolar level and of its binding affinity with gp120 and monoclonal antibodies (MABs). We found that the binding affinities of CD4 with gp120 and MAB (Q4120) are 9.5×108 and 1.2×109 M−1, respectively. The results also show that MAB (Q4120) of CD4 can completely block the binding of gp120 with CD4, while MAB (17b) of gp120 can only partially block their interaction. This study demonstrates that the solution-phase ECL assay can be used for ultrasensitive and quantitative analysis of binding affinities of protein-protein interactions in solution for better understanding of protein functions and identification of effective therapies to block their interactions. PMID:23565071

  5. Design, synthesis and biological evaluation of paralleled Aza resveratrol-chalcone compounds as potential anti-inflammatory agents for the treatment of acute lung injury.

    PubMed

    Chen, Wenbo; Ge, Xiangting; Xu, Fengli; Zhang, Yali; Liu, Zhiguo; Pan, Jialing; Song, Jiao; Dai, Yuanrong; Zhou, Jianmin; Feng, Jianpeng; Liang, Guang

    2015-08-01

    Acute lung injury (ALI) is a major cause of acute respiratory failure in critically ill patients. It has been reported that both resveratrol and chalcone derivatives can ameliorate lung injury induced by inflammation. A series of paralleled Aza resveratrol-chalcone compounds (5a-5m, 6a-6i) were designed, synthesized and screened for anti-inflammatory activity. A majority showed potent inhibition of LPS-stimulated IL-6 and TNF-α expression in macrophages, and compound 6b was the most potent analog, inhibiting LPS-induced IL-6 release in a dose-dependent manner. Moreover, 6b exhibited protection against LPS-induced acute lung injury in vivo. These results offer further insight into the use of Aza resveratrol-chalcone compounds for the treatment of inflammatory diseases, and the use of compound 6b as a lead compound for the development of anti-ALI agents.

  6. A nanofluidic system for massively parallel PCR

    NASA Astrophysics Data System (ADS)

    Brenan, Colin; Morrison, Tom; Roberts, Douglas; Hurley, James

    2008-02-01

    Massively parallel nanofluidic systems are lab-on-a-chip devices where solution phase biochemical and biological analyses are implemented in high density arrays of nanoliter holes micro-machined in a thin platen. Polymer coatings make the interior surfaces of the holes hydrophilic and the exterior surface of the platen hydrophobic for precise and accurate self-metered loading of liquids into each hole without cross-contamination. We have created a "nanoplate" based on this concept, equivalent in performance to standard microtiter plates, having 3072 thirty-three nanoliter holes in a stainless steel platen the dimensions of a microscope slide. We report on the performance of this device for PCR-based single nucleotide polymorphism (SNP) genotyping or quantitative measurement of gene expression by real-time PCR in applications ranging from plant and animal diagnostics, agricultural genetics and human disease research.

  7. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  8. Discovery of a thermally persistent h.c.p. solid-solution phase in the Ni-W system

    SciTech Connect

    Kurz, S. J. B. Leineweber, A.; Maisel, S. B.; Höfler, M.; Müller, S.; Mittemeijer, E. J.

    2014-08-28

    Although the accepted Ni-W phase diagram does not reveal the existence of h.c.p.-based phases, h.c.p.-like stacking sequences were observed in magnetron-co-sputtered Ni-W thin films at W contents of 20 to 25 at. %, by using transmission electron microscopy and X-ray diffraction. The occurrence of this h.c.p.-like solid-solution phase could be rationalized by first-principles calculations, showing that the vicinity of the system's ground-state line is populated with metastable h.c.p.-based superstructures in the intermediate concentration range from 20 to 50 at. % W. The h.c.p.-like stacking in Ni-W films was observed to be thermally persistent up to temperatures of at least 850 K, as evidenced by extensive X-ray diffraction analyses on specimens before and after annealing treatments. The tendency of Ni-W for excessive planar faulting is discussed in the light of these new findings.

  9. Comparative study of solution-phase and vapor-phase deposition of aminosilanes on silicon dioxide surfaces.

    PubMed

    Yadav, Amrita R; Sriram, Rashmi; Carter, Jared A; Miller, Benjamin L

    2014-02-01

    The uniformity of aminosilane layers typically used for the modification of hydroxyl bearing surfaces such as silicon dioxide is critical for a wide variety of applications, including biosensors. However, in spite of many studies that have been undertaken on surface silanization, there remains a paucity of easy-to-implement deposition methods reproducibly yielding smooth aminosilane monolayers. In this study, solution- and vapor-phase deposition methods for three aminoalkoxysilanes differing in the number of reactive groups (3-aminopropyl triethoxysilane (APTES), 3-aminopropyl methyl diethoxysilane (APMDES) and 3-aminopropyl dimethyl ethoxysilane (APDMES)) were assessed with the aim of identifying methods that yield highly uniform and reproducible silane layers that are resistant to minor procedural variations. Silane film quality was characterized based on measured thickness, hydrophilicity and surface roughness. Additionally, hydrolytic stability of the films was assessed via these thickness and contact angle values following desorption in water. We found that two simple solution-phase methods, an aqueous deposition of APTES and a toluene based deposition of APDMES, yielded high quality silane layers that exhibit comparable characteristics to those deposited via vapor-phase methods.

  10. Solid-Phase Combinatorial Synthesis and Biological Evaluation of Destruxin E Analogues.

    PubMed

    Yoshida, Masahito; Ishida, Yoshitaka; Adachi, Kenta; Murase, Hayato; Nakagawa, Hiroshi; Doi, Takayuki

    2015-12-01

    The solid-phase combinatorial synthesis of cyclodepsipeptide destruxin E has been demonstrated. The combinatorial synthesis of cyclization precursors 8 was achieved by using a split and pool method on SynPhase Lanterns. The products were successfully macrolactonized in parallel in the solution phase by using 2-methyl-6-nitrobenzoic anhydride and 4-(dimethylamino)pyridine N-oxide to afford macrolactones 9, and the subsequent formation of an epoxide in the side chain gave 18 member destruxin E analogues 6. Biological evaluation of analogues 6 indicated that the N-MeAla residue was crucial to the induction of morphological changes in osteoclast-like multinuclear cells (OCLs). Based on structure-activity relationships, azido-containing analogues 15 were then designed for use as a molecular probe. The synthesis and biological evaluation of analogues 15 revealed that 15 b, in which the Ile residue was replaced with a Lys(N3 ) residue, induced morphological changes in OCLs at a sufficient concentration, and modification around the Ile residue would be tolerated for attachment of a chemical tag toward the target identification of destruxin E (1).

  11. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  12. Side-chain effects on the solution-phase conformations and charge photogeneration dynamics of low-bandgap copolymers

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Ming; Liang, Ran; Xing, Ya-Dong; Hu, Rong; Zhao, Ning-Jiu; Zhang, Wei; Fu, Li-Min; Ai, Xi-Cheng; Zhang, Jian-Ping; Hou, Jian-Hui

    2013-09-01

    Solution-phase conformations and charge photogeneration dynamics of a pair of low-bandgap copolymers based on benzo[1,2-b:4,5-b']dithiophene (BDT) and thieno[3,4-b]thiophene (TT), differing in the respective carbonyl (-C) and ester (-E) substituents at the TT units, were comparatively investigated by using near-infrared time-resolved absorption (TA) spectroscopy at 25 °C and 120 °C. Steady-state and TA spectroscopic results corroborated by quantum chemical analyses prove that both PBDTTT-C and PBDTTT-E in chlorobenzene solutions are self-aggregated; however, the former bears a relatively higher packing order. Specifically, PBDTTT-C aggregates with more π-π stacked domains, whereas PBDTTT-E does so with more random coils interacting strongly at the chain intersections. At 25 °C, the copolymers exhibit comparable exciton lifetimes (˜1 ns) and fluorescence quantum yields (˜2%), but distinctly different charge photogeneration dynamics: PBDTTT-C on photoexcitation gives rise to a branching ratio of charge separated (CS) over charge transfer (CT) states more than 20% higher than PBDTTT-E does, correlating with their photovoltaic performance. Temperature and excitation-wavelength dependent exciton/charge dynamics suggest that the CT states localize at the chain intersections, which survive up to 120 °C, and that the excitons and the CS states inhabit the stretched strands and the also thermally robust, orderly stacked domains. The stable self-aggregation structures and the associated primary charge dynamics of the PBDTTT copolymers in solution are suggested to bear directly on the morphologies and the charge photogeneration efficiency of the solid-state photoactive layers.

  13. Amorphous and nanocrystalline titanium nitride and carbonitride materials obtained by solution phase ammonolysis of Ti(NMe2)4

    NASA Astrophysics Data System (ADS)

    Jackson, Andrew W.; Shebanova, Olga; Hector, Andrew L.; McMillan, Paul F.

    2006-05-01

    Solution phase reactions between tetrakisdimethylamidotitanium (Ti(NMe2)4) and ammonia yield precipitates with composition TiC0.5N1.1H2.3. Thermogravimetric analysis (TGA) indicates that decomposition of these precursor materials proceeds in two steps to yield rocksalt-structured TiN or Ti(C,N), depending upon the gas atmosphere. Heating to above 700 °C in NH3 yields nearly stoichiometric TiN. However, heating in N2 atmosphere leads to isostructural carbonitrides, approximately TiC0.2N0.8 in composition. The particle sizes of these materials range between 4 and 12 nm. Heating to a temperature that corresponds to the intermediate plateau in the TGA curve (450 °C) results in a black powder that is X-ray amorphous and is electrically conducting. The bulk chemical composition of this material is found to be TiC0.22N1.01H0.07, or Ti3(C0.17N0.78H0.05)3.96, close to Ti3(C,N)4. Previous workers have suggested that the intermediate compound was an amorphous form of Ti3N4. TEM investigation of the material indicates the presence of nanocrystalline regions <5 nm in dimension embedded in an amorphous matrix. Raman and IR reflectance data indicate some structural similarity with the rocksalt-structured TiN and Ti(C,N) phases, but with disorder and substantial vacancies or other defects. XAS indicates that the local structure of the amorphous solid is based on the rocksalt structure, but with a large proportion of vacancies on both the cation (Ti) and anion (C,N) sites. The first-shell Ti coordination is approximately 4.5 and the second-shell coordination ~5.5, compared with expected values of 6 and 12, respectively, for the ideal rocksalt structure. The material is thus approximately 50% less dense than known Tix(C,N)y crystalline phases.

  14. Thermodynamics of carbothermic synthesis of actinide mononitrides

    NASA Astrophysics Data System (ADS)

    Ogawa, Toru; Shirasu, Yoshiro; Minato, Kazuo; Serizawa, Hiroyuki

    1997-08-01

    Carbothermic synthesis will be further applied to the fabrication of nitride fuels containing minor actinides (MA) such as neptunium, americium and curium. A thorough understanding of the carbothermic synthesis of UN will be beneficial in the development of the MA-containing fuels. Thermodynamic analysis was carried out for conditions of practical interest in order to better understand recent fabrication experience. Two types of solution phases, oxynitride and carbonitride phases, were taken into account. The Pu-N-O ternary isotherm was assessed for the modelling of M(C,N,O). With this understanding of the UN synthesis, the fabrication problems of Am-containing nitrides are discussed.

  15. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how Parallel FORTH is implemented on the MPP.

  16. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using Field Programmable Gate Arrays (FPGAs).
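
    As a minimal sketch of the analysis/synthesis idea described above (not the paper's FPGA implementation), the following Python/NumPy example uses a critically sampled M-channel DFT filterbank: the analysis bank splits the input into M low-rate subband signals and the synthesis bank recombines them, here with perfect reconstruction because a rectangular prototype window is assumed.

```python
import numpy as np

def analysis(x: np.ndarray, M: int) -> np.ndarray:
    """Split x into M subband sequences, each at 1/M of the input rate."""
    x = x[: len(x) - len(x) % M]        # trim to a whole number of blocks
    blocks = x.reshape(-1, M)           # one block per low-rate output sample
    return np.fft.fft(blocks, axis=1)   # rows: time, columns: subbands

def synthesis(subbands: np.ndarray) -> np.ndarray:
    """Recombine the subband sequences into a fullband output signal."""
    return np.fft.ifft(subbands, axis=1).real.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)
    S = analysis(x, M=8)                # 8 parallel low-rate channels
    # ... independent per-subband processing would happen here ...
    y = synthesis(S)
    print(np.allclose(x, y))            # True: perfect reconstruction
```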

  17. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  18. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  19. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  20. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  1. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
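
    A hedged sketch of strategy (1), explicit message passing written directly into the source code; it uses the mpi4py bindings rather than FORTRAN 77, so the library choice is an assumption made only to keep the example short.

```python
# Each rank sums its own strided slice of 0..N-1 and the partial results are
# combined with an explicit collective call, rather than relying on a
# parallelizing compiler.  Run with e.g.:  mpiexec -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000
local = sum(range(rank, N, size))               # this rank's share of the work
total = comm.reduce(local, op=MPI.SUM, root=0)  # explicit message passing

if rank == 0:
    print(total)   # equals N*(N-1)//2
```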

  2. Synthesis and solid-phase purification of anthranilic sulfonamides as CCK-2 ligands.

    PubMed

    Woods, Craig R; Hack, Michael D; Allison, Brett D; Phuong, Victor K; Rosen, Mark D; Morton, Magda F; Prendergast, Clodagh E; Barrett, Terrance D; Shankley, Nigel P; Rabinowitz, Michael H

    2007-12-15

    A novel strategy for the synthesis of cholecystokinin-2 receptor ligands was developed. The route employs a solution-phase synthesis of a series of anthranilic sulfonamides followed by a resin capture purification strategy to produce multi-milligram quantities of compounds for bioassay. The synthesis was used to produce >100 compounds containing various functional groups, highlighting the general applicability of this strategy and to address specific metabolism issues in our CCK-2 program.

  3. SOLUTION PHASE SYNTHESIS OF A DIVERSE LIBRARY OF BENZISOXAZOLES UTILIZING THE [3 + 2] CYCLOADDITION OF IN SITU GENERATED NITRILE OXIDES AND ARYNES

    PubMed Central

    Dubrovskiy, Anton V.; Jain, Prashi; Shi, Feng; Lushington, Gerald H.; Santini, Conrad; Porubusky, Patrick; Larock, Richard C.

    2013-01-01

    A library of benzisoxazoles has been synthesized by the [3 + 2] cycloaddition of nitrile oxides with arynes and further diversified by acylation/sulfonylation and palladium-catalyzed coupling processes. The eight key intermediate benzisoxazoles have been prepared by the reaction of o-(trimethylsilyl)aryl triflates and chlorooximes in the presence of CsF in good to excellent yields under mild reaction conditions. These building blocks have been used as the key components of a diverse set of 3,5,6-trisubstituted benzisoxazoles. PMID:23472819

  4. The synthesis and evaluation of a solution phase indexed combinatorial library of non-natural polyenes for reversal of P-glycoprotein mediated multidrug resistance.

    PubMed

    Andrus, M B; Turner, T M; Sauna, Z E; Ambudkar, S V

    2000-08-11

    A combinatorial library of polyenes, based on (-)-stipiamide, has been constructed and evaluated for the discovery of new multidrug resistance reversal agents. A palladium coupling was used to react each individual vinyl iodide with a mixture of the seven acetylenes at near 1:1 stoichiometry. The coupling was also used to react each individual acetylene with the mixture of six vinyl iodides to create 13 pools indexed in two dimensions for a total of 42 compounds. Individual compounds were detected at equimolar concentration. The vinyl iodides, made initially using a crotylborane addition to generate the anti-1,2-hydroxymethyl products, were now made using a more efficient norephedrine propionate boron enolate aldol reaction. The indexed approach, ideally suited for cellular assays that involve membrane-bound targets, allowed for the rapid identification of reversal agents using assays with drug-resistant human breast cancer MCF7-adrR cells. Intersections of potent pools identified new compounds with promising activity. Aryl dimension pools showed R = Ph and naphthyl as the most potent. The acetylene dimension had R' = phenylalaninol and alaninol as the most potent. Isolated individual compounds, both active and nonpotent, were assayed to confirm the library results. The most potent new compound was 4ek (R = naphthyl, R' = phenylalaninol) at 1.45 μM. Other nonnatural individual naphthyl-amide compounds showed potent MDR reversal, including the morpholino-amide 4ej (1.69 μM). Synergistic activities attributed to the two ends of the molecule were also identified. Direct interaction with Pgp was established by ATPase and photoaffinity displacement assays. The results indicate that both ends of the polyene reversal agent are involved in Pgp interaction and can be further modified for increased potency. PMID:10956480
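
    The two-dimensional indexing can be made concrete with a short sketch (compound names are placeholders, not the paper's codes): six row pools and seven column pools cover all 42 products, and a hit is deconvoluted by intersecting the most potent row pool with the most potent column pool.

```python
from itertools import product

vinyl_iodides = [f"I{i}" for i in range(1, 7)]   # 6 vinyl iodides (placeholders)
acetylenes = [f"A{j}" for j in range(1, 8)]      # 7 acetylenes (placeholders)

# One pool per vinyl iodide (coupled with the acetylene mixture) and one pool
# per acetylene (coupled with the vinyl iodide mixture): 6 + 7 = 13 pools
# covering all 6 x 7 = 42 compounds.
row_pools = {v: [(v, a) for a in acetylenes] for v in vinyl_iodides}
col_pools = {a: [(v, a) for v in vinyl_iodides] for a in acetylenes}
assert len(row_pools) + len(col_pools) == 13
assert sum(len(p) for p in row_pools.values()) == 42

# Deconvolution: if one row pool and one column pool both score as potent,
# the compound at their intersection is the candidate to resynthesize and test.
active_rows, active_cols = {"I3"}, {"A5"}
hits = [(v, a) for v, a in product(active_rows, active_cols)]
print(hits)   # [('I3', 'A5')]
```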

  5. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  6. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  7. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
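
    A minimal NumPy sketch of the undersampling/aliasing relationship discussed above, using a toy phantom rather than real MRI data: skipping every other k-space line halves the nominal scan time but folds the image onto a half-field-of-view-shifted copy of itself, which is exactly the ambiguity that coil-sensitivity-based reconstructions such as SENSE and GRAPPA (not shown here) resolve.

```python
import numpy as np

# Toy phantom: a disc in a 128 x 128 field of view.
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
image = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)

kspace = np.fft.fft2(image)
undersampled = kspace.copy()
undersampled[1::2, :] = 0          # acquire only every other phase-encode line

aliased = np.fft.ifft2(undersampled).real
# The aliased image is the half-FOV-shifted copy superimposed on the original:
expected = 0.5 * (image + np.roll(image, ny // 2, axis=0))
print(np.allclose(aliased, expected))   # True
```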

  8. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures

  9. Multifunctional nanocomposites constructed from Fe3O4-Au nanoparticle cores and a porous silica shell in the solution phase.

    PubMed

    Chen, Fenghua; Chen, Qingtao; Fang, Shaoming; Sun, Yu'an; Chen, Zhijun; Xie, Gang; Du, Yaping

    2011-11-01

    This work is directed towards the synthesis of multifunctional nanoparticles composed of Fe3O4-Au nanocomposite cores and a porous silica shell (Fe3O4-Au/pSiO2), aimed at simultaneously ensuring the stability and the magnetic and optical properties of the magnetic-gold nanocomposite. The prepared Fe3O4-Au/pSiO2 core/shell nanoparticles are characterized by means of TEM, N2 adsorption-desorption isotherms, FTIR, XRD, UV-vis, and VSM. Meanwhile, as an example of the applications, the catalytic activity of the porous silica shell-encapsulated Fe3O4-Au nanoparticles is investigated by choosing a model reaction, the reduction of o-nitroaniline to benzenediamine by NaBH4. Due to the existence of the porous silica shells, the reaction with Fe3O4-Au/pSiO2 core/shell nanoparticles as a catalyst follows second-order kinetics with a rate constant (k) of about 0.0165 L mol−1 s−1, remarkably different from the first-order kinetics with a k of about 0.002 s−1 for the reduction reaction with the core Fe3O4-Au nanoparticles as a catalyst. PMID:21637876
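
    To make the kinetic contrast concrete, the sketch below integrates the standard first- and second-order rate laws with the rate constants quoted in the abstract; the initial concentration is a hypothetical value chosen only for illustration.

```python
import numpy as np

# First order:  C(t) = C0 * exp(-k1 * t),       k1 = 0.002 s^-1 (core nanoparticles)
# Second order: C(t) = C0 / (1 + k2 * C0 * t),  k2 = 0.0165 L mol^-1 s^-1 (core/shell)
C0 = 0.01                      # mol/L, hypothetical starting concentration
k1, k2 = 0.002, 0.0165
t = np.linspace(0, 600, 7)     # seconds

first_order = C0 * np.exp(-k1 * t)
second_order = C0 / (1 + k2 * C0 * t)
for ti, c1, c2 in zip(t, first_order, second_order):
    print(f"t = {ti:4.0f} s   first-order C = {c1:.2e}   second-order C = {c2:.2e}")
```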

  10. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  11. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  12. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
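
    A hedged sketch of the convergence criterion behind the spectral-radius remarks above: for a stationary iteration x_{k+1} = M x_k + c, the error is multiplied by M at every step, so a spectral radius below 1 guarantees convergence and a spectral radius of 0 means the iteration terminates exactly (a direct method). The matrices below are hypothetical examples, not the TPMA iteration operators themselves.

```python
import numpy as np

def spectral_radius(M: np.ndarray) -> float:
    """Largest eigenvalue magnitude of the iteration matrix M."""
    return float(max(abs(np.linalg.eigvals(M))))

M = np.array([[0.0, 0.25],
              [0.0, 0.0]])   # nilpotent: spectral radius 0, "direct" behaviour
J = np.array([[0.0, 0.5],
              [0.5, 0.0]])   # spectral radius 0.5 < 1: the error keeps shrinking

print(spectral_radius(M))    # 0.0
print(spectral_radius(J))    # 0.5
```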

  13. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
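
    For orientation, the following Python sketch shows a parallel Sieve of Eratosthenes; it distributes contiguous blocks across worker processes rather than using the paper's scattered decomposition on a hypercube, so it only illustrates the general base-primes-then-segments structure.

```python
from math import isqrt
from multiprocessing import Pool

def sieve_block(args):
    """Sieve the half-open range [lo, hi) using the supplied base primes."""
    lo, hi, base_primes = args
    flags = bytearray([1]) * (hi - lo)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)   # first multiple of p >= lo
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i, f in enumerate(flags) if f]

def parallel_sieve(N: int, workers: int = 4):
    """Primes up to N: serial base sieve to sqrt(N), then parallel block sieving."""
    limit = isqrt(N)
    base = [n for n in range(2, limit + 1)
            if all(n % p for p in range(2, isqrt(n) + 1))]
    step = (N - limit) // workers + 1
    tasks = [(lo, min(lo + step, N + 1), base)
             for lo in range(limit + 1, N + 1, step)]
    with Pool(workers) as pool:
        blocks = pool.map(sieve_block, tasks)
    return base + [p for block in blocks for p in block]

if __name__ == "__main__":
    print(parallel_sieve(100))   # primes up to 100
```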

  14. Parallel tetrahedral mesh adaptation with dynamic load balancing

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    2000-06-28

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  15. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  16. The importance of fixation procedures on DNA template and its suitability for solution-phase polymerase chain reaction and PCR in situ hybridization.

    PubMed

    O'Leary, J J; Browne, G; Landers, R J; Crowley, M; Healy, I B; Street, J T; Pollock, A M; Murphy, J; Johnson, M I; Lewis, F A

    1994-04-01

    Conventional solution-phase polymerase chain reaction (PCR) and in situ PCR/PCR in situ hybridization are powerful tools for retrospective analysis of fixed paraffin wax-embedded material. Amplification failure using these techniques is now encountered in some centres using archival fixed tissues. Such 'failures' may not only be due to absent target DNA sequences in the tissues, but may be a direct effect of the type of fixative, fixation time and/or fixation temperature used. The type of nucleic acid extraction procedure applied will also influence amplification results. This is particularly true with in situ PCR/PCR in situ hybridization. To examine these effects in solution-phase PCR, the beta-globin gene was amplified in 100 mg pieces of tonsillar tissue fixed in Formal saline, 10% formalin, neutral buffered formaldehyde, Carnoy's, Bouin's, buffered formaldehyde sublimate, Zenker's, Helly's and glutaraldehyde at 0 to 4 degrees C, room temperature and 37 degrees C fixation temperatures and for fixation periods of 6, 24, 48 and 72 hours and 1 week. DNA extraction procedures used were simple boiling and 5 days' proteinase K digestion at 37 degrees C. Amplified product was visible primarily yet variably from tissue fixed in neutral buffered formaldehyde and Carnoy's, whereas fixation in mercuric chloride-based fixatives produced consistently negative results. Room temperature and 37 degrees C fixation temperature appeared most conducive to yielding amplifiable DNA template. Fixation times of 24 and 48 hours in neutral buffered formaldehyde and Carnoy's again favoured amplification.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Synthesis and library construction of privileged tetra-substituted Δ5-2-oxopiperazine as β-turn structure mimetics.

    PubMed

    Kim, Jonghoon; Lee, Won Seok; Koo, Jaeyoung; Lee, Jeongae; Park, Seung Bum

    2014-01-13

    In this study, we developed an efficient and practical procedure for the synthesis of tetra-substituted Δ5-2-oxopiperazine that mimics the bioactive β-turn structural motif of proteins. This synthetic route is robust and modular enough to accommodate four different substituents to obtain a high level of molecular diversity without any deterioration in stereochemical enrichment of the natural and unnatural amino acids. Through the in silico studies, including a distance calculation of side chains and a conformational overlapping of our model compound with a native β-turn structure, we successfully demonstrated the conformational similarity of tetra-substituted Δ5-2-oxopiperazine to the β-turn motif. For the library construction in a high-throughput manner, the fluorous tag technology was adopted with the use of a solution-phase parallel synthesis platform. A 140-membered pilot library of tetra-substituted Δ5-2-oxopiperazines was achieved with an average purity of 90% without further purification.

  18. Liquid-phase combinatorial library synthesis: recent advances and future perspectives.

    PubMed

    Barot, Kuldipsinh P; Nikolova, Stoyanka; Ivanov, Illiyan; Ghate, Manjunath D

    2014-01-01

    Liquid-phase combinatorial library synthesis has developed into a viable alternative or adjunct across the broad spectrum of polymer-supported organic chemistry. It includes the use of soluble polymer supports, which act as catalyst and reagent supports, in the combinatorial synthesis of peptides and small-molecule library compounds. It also includes high-throughput biological screening with generation and evaluation of chemical leads for drug discovery and development. In this review, liquid-phase combinatorial library synthesis is presented as an efficient method of choice for the synthesis of most combinatorial library compounds, with specific approaches from different groups that illustrate the potential of solution-phase combinatorial synthesis.

  19. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
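
    As a toy illustration of the bilingual idea (not the system described in this record), the sketch below keeps the control structure in a high-level language while the numeric kernel runs in a compiled, low-level component; here Python plays the high-level role and the platform's C math library, loaded through ctypes, plays the low-level one. Library discovery is platform dependent, so treat the lookup as an assumption.

```python
# A minimal sketch of "bilingual" layering: high-level coordination logic in
# Python, numeric kernel supplied by a compiled C library loaded via ctypes.
# Library discovery is platform dependent; this is illustrative only.
import ctypes
import ctypes.util

_libm = ctypes.CDLL(ctypes.util.find_library("m") or ctypes.util.find_library("c"))
_libm.cos.argtypes = [ctypes.c_double]
_libm.cos.restype = ctypes.c_double

def trapezoid_cos(a: float, b: float, n: int) -> float:
    """Integrate cos(x) on [a, b]; the loop structure is Python, cos() is C."""
    h = (b - a) / n
    total = 0.5 * (_libm.cos(a) + _libm.cos(b))
    for i in range(1, n):
        total += _libm.cos(a + i * h)
    return total * h

print(trapezoid_cos(0.0, 1.0, 10_000))   # close to sin(1) = 0.84147...
```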

  20. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  1. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  2. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  3. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  4. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
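
    The lossy half of the strategy rests on vector quantization. The numpy sketch below is an illustrative toy (not the MPP implementation): it encodes an image block by block against a small codebook, and the nearest-codeword search for each block is independent, which is what maps naturally onto a processor-per-block machine.

```python
import numpy as np

def encode_vq(image, codebook, block=4):
    """Vector-quantize `image` (H x W, both multiples of `block`) against
    `codebook` (K x block*block): return one codeword index per block."""
    h, w = image.shape
    blocks = (image.reshape(h // block, block, w // block, block)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, block * block)
                   .astype(float))
    # Squared distance from every block to every codeword (all independent).
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(h // block, w // block)

def decode_vq(indices, codebook, block=4):
    """Rebuild the lossy image from the codeword indices."""
    hb, wb = indices.shape
    tiles = codebook[indices].reshape(hb, wb, block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(hb * block, wb * block)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16))
cb = rng.integers(0, 256, size=(8, 16)).astype(float)   # 8 codewords of 4x4 blocks
idx = encode_vq(img, cb)
approx = decode_vq(idx, cb)
print(idx.shape, approx.shape)   # (4, 4) (16, 16)
```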

  5. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  6. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
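
    The continuous model generalizes the familiar discrete construction in which each data point becomes a polyline across the parallel axes. The short matplotlib sketch below shows only that underlying point-to-line mapping (it is illustrative and does not implement the paper's density model); the random data and output file name are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 4))                       # 50 samples, 4 dimensions
labels = ["x1", "x2", "x3", "x4"]

# Normalize each dimension to [0, 1] so the axes share a common vertical scale.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)

fig, ax = plt.subplots(figsize=(6, 3))
for row in norm:                                      # each point -> one polyline
    ax.plot(range(len(labels)), row, color="steelblue", alpha=0.3)
for k in range(len(labels)):                          # draw the parallel axes
    ax.axvline(k, color="black", linewidth=0.8)
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticks([])
plt.tight_layout()
plt.savefig("parallel_coordinates.png")
```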

  7. Unique role of ionic liquid in microwave-assisted synthesis of monodisperse magnetite nanoparticles.

    PubMed

    Hu, Hengyao; Yang, Hao; Huang, Peng; Cui, Daxiang; Peng, Yanqing; Zhang, Jingchang; Lu, Fengyuan; Lian, Jie; Shi, Donglu

    2010-06-14

    A small amount of ionic liquid [bmim][BF4] was found to be an efficient aid for microwave heating of nonpolar dibenzyl ether in high temperature solution-phase synthesis of monodisperse magnetite nanoparticles. It was found to act as both microwave absorber and assistant stabilizer in the reactive process and was recovered and reused in successive reactions.

  8. Rapid Screening for Potential Epitopes Reactive with a Polyclonal Antibody by Solution-Phase H/D Exchange Monitored by FT-ICR Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Noble, Kyle A.; Mao, Yuan; Young, Nicolas L.; Sathe, Shridhar K.; Roux, Kenneth H.; Marshall, Alan G.

    2013-07-01

    The potential epitopes of a recombinant food allergen protein, cashew Ana o 2, reactive to polyclonal antibodies, were mapped by solution-phase amide backbone H/D exchange (HDX) coupled with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). Ana o 2 polyclonal antibodies were purified in the serum from a goat immunized with cashew nut extract. Antibodies were incubated with recombinant Ana o 2 (rAna o 2) to form antigen:polyclonal antibody (Ag:pAb) complexes. Complexed and uncomplexed (free) rAna o 2 were then subjected to HDX-MS analysis. Four regions protected from H/D exchange upon pAb binding are identified as potential epitopes and mapped onto a homologous model.

  9. Matching Solid-State to Solution-Phase Photoluminescence for Near-Unity Down-Conversion Efficiency Using Giant Quantum Dots.

    PubMed

    Hanson, Christina J; Buck, Matthew R; Acharya, Krishna; Torres, Joseph A; Kundu, Janardan; Ma, Xuedan; Bouquin, Sarah; Hamilton, Christopher E; Htoon, Han; Hollingsworth, Jennifer A

    2015-06-24

    Efficient, stable, and narrowband red-emitting fluorophores are needed as down-conversion materials for next-generation solid-state lighting that is both efficient and of high color quality. Semiconductor quantum dots (QDs) are nearly ideal color-shifting phosphors, but solution-phase efficiencies have not traditionally extended to the solid-state, with losses from both intrinsic and environmental effects. Here, we assess the impacts of temperature and flux on QD phosphor performance. By controlling QD core/shell structure, we realize near-unity down-conversion efficiency and enhanced operational stability. Furthermore, we show that a simple modification of the phosphor-coated light-emitting diode device, incorporation of a thin spacer layer, can afford reduced thermal or photon-flux quenching at high driving currents (>200 mA).

  10. The Influence of the Linker Geometry in Bis(3-hydroxy-N-methyl-pyridin-2-one) Ligands on Solution-Phase Uranyl Affinity

    SciTech Connect

    Szigethy, Geza; Raymond, Kenneth

    2010-08-12

    Seven water-soluble, tetradentate bis(3-hydroxy-N-methyl-pyridin-2-one) (bis-Me-3,2-HOPO) ligands were synthesized that vary only in linker geometry and rigidity. Solution phase thermodynamic measurements were conducted between pH 1.6 and pH 9.0 to determine the effects of these variations on proton and uranyl cation affinity. Proton affinity decreases by introduction of the solubilizing triethylene glycol group as compared to un-substituted reference ligands. Uranyl affinity was found to follow no discernable trends with incremental geometric modification. The butyl-linked 4Li-Me-3,2-HOPO ligand exhibited the highest uranyl affinity, consistent with prior in vivo decorporation results. Of the rigidly-linked ligands, the o-phenylene linker imparted the best uranyl affinity to the bis-Me-3,2-HOPO ligand platform.

  11. Epitope mapping of 7S cashew antigen in complex with antibody by solution-phase H/D exchange monitored by FT-ICR mass spectrometry.

    PubMed

    Guan, Xiaoyan; Noble, Kyle A; Tao, Yeqing; Roux, Kenneth H; Sathe, Shridhar K; Young, Nicolas L; Marshall, Alan G

    2015-06-01

    The potential epitope of a recombinant food allergen protein, cashew Ana o 1, reactive to monoclonal antibody, mAb 2G4, has been mapped by solution-phase amide backbone H/D exchange (HDX) monitored by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). Purified mAb 2G4 was incubated with recombinant Ana o 1 (rAna o 1) to form antigen:monoclonal antibody (Ag:mAb) complexes. Complexed and uncomplexed (free) rAna o 1 were then subjected to HDX-MS analysis. Five regions protected from H/D exchange upon mAb binding are identified as potential conformational epitope-contributing segments. PMID:26169135

  12. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
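
    MGRIT itself is too involved for a short sketch, but the closely related two-level parareal iteration conveys the core idea of trading extra, parallelizable fine-propagator work for concurrency across time steps. The Python sketch below is an illustration written for this summary (it is not the packaged solver), applied to the scalar model problem u' = -u; within each iteration, all fine propagations are mutually independent and could run on separate processors.

```python
import numpy as np

lam, T, N = -1.0, 5.0, 50            # u' = lam*u on [0, T], N coarse intervals
t = np.linspace(0.0, T, N + 1)

def coarse(u, t0, t1):
    """One backward-Euler step over [t0, t1] (cheap, sequential propagator)."""
    return u / (1.0 - lam * (t1 - t0))

def fine(u, t0, t1, substeps=20):
    """Many backward-Euler substeps over [t0, t1] (accurate propagator).
    In a time-parallel run, each interval's fine solve is an independent task."""
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        u = u / (1.0 - lam * h)
    return u

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):                   # initial guess: pure coarse sweep
    U[n + 1] = coarse(U[n], t[n], t[n + 1])

for k in range(5):                   # parareal corrections
    F = np.array([fine(U[n], t[n], t[n + 1]) for n in range(N)])     # parallel part
    G_old = np.array([coarse(U[n], t[n], t[n + 1]) for n in range(N)])
    for n in range(N):               # cheap sequential update sweep
        U[n + 1] = coarse(U[n], t[n], t[n + 1]) + F[n] - G_old[n]
    # gap to exact exp(lam*T) shrinks toward the fine propagator's own error
    print(k, abs(U[-1] - np.exp(lam * T)))
```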

  14. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing impact to productivity.
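
    No code from the OLCF toolkit is reproduced in this record, but the basic pattern of splitting one large copy into many concurrent streams can be sketched with the Python standard library alone. The example below is a file-per-worker illustration with hypothetical paths; a real Lustre tool would additionally stripe within large files and drive many I/O paths in parallel.

```python
import shutil
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor

def copy_one(src_dst):
    """Copy a single file; each call is an independent stream of I/O."""
    src, dst = src_dst
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return src

def parallel_copy_tree(src_root, dst_root, workers=8):
    """Copy every regular file under src_root to dst_root using `workers`
    processes; the directory structure is recreated on the fly."""
    src_root, dst_root = Path(src_root), Path(dst_root)
    jobs = [(p, dst_root / p.relative_to(src_root))
            for p in src_root.rglob("*") if p.is_file()]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for done in pool.map(copy_one, jobs):
            print("copied", done)

if __name__ == "__main__":
    parallel_copy_tree("dataset_in", "dataset_out")   # hypothetical paths
```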

  15. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes a first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  16. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  17. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  18. SPINning parallel systems software.

    SciTech Connect

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  19. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  20. The impact of parallel chemistry in drug discovery.

    PubMed

    Edwards, Paul J

    2006-05-01

    With the application of parallel synthesis of single compounds to drug-discovery efforts, improvements in the efficiency of synthesis are possible. However, for improvements to occur in effective drug design - a critical requirement to increase productivity in the modern pharmaceutical industry - the implementation of in silico design hypotheses that incorporate comprehensive information on a target, including considerations of absorption, distribution, metabolism and excretion, is also necessary. Concomitantly, the use of automated methods of synthesis and purification is also required to improve drug design. Combining all of these elements allows the possibility to uncover unique insights into a biological target quickly and to therefore accelerate the rate of drug discovery.

  1. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  2. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  3. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  4. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  5. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
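
    Whatever the host language, such optimizations rely on the reduction operator being associative, so that partial results can be grouped freely. The Python sketch below (an illustration, not Sisal, which expresses reductions implicitly in its loop expressions) splits the data across worker processes, reduces each chunk locally, and then combines the per-chunk partial results.

```python
from functools import reduce
from multiprocessing import Pool

def chunk_reduce(args):
    """Reduce one chunk locally; valid because the operator is associative."""
    op, chunk = args
    return reduce(op, chunk)

def parallel_reduce(op, data, workers=4):
    """Two-phase reduction: local partials in parallel, then a final combine."""
    step = (len(data) + workers - 1) // workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(workers) as pool:
        partials = pool.map(chunk_reduce, [(op, c) for c in chunks])
    return reduce(op, partials)

if __name__ == "__main__":
    import operator
    data = list(range(1, 100_001))
    print(parallel_reduce(operator.add, data))        # 5000050000
    print(parallel_reduce(max, data))                 # 100000
```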

  6. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N²) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
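
    For reference, the quantity being accelerated is the discrete Gauss transform g(y_j) = sum_i w_i exp(-|y_j - x_i|^2 / h^2) (scaling conventions for the exponent vary). The direct numpy evaluation below is the O(N^2) baseline, useful for checking a fast implementation on small problems; it is an illustration, not the parallel algorithm of the record.

```python
import numpy as np

def direct_gauss_transform(targets, sources, weights, h):
    """Direct O(M*N) evaluation of g(y_j) = sum_i w_i exp(-|y_j - x_i|^2 / h^2).
    targets: (M, d), sources: (N, d), weights: (N,).  The bandwidth convention
    may differ from a given fast-transform library; adjust the exponent as needed."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / h**2) @ weights

rng = np.random.default_rng(0)
x = rng.random((1000, 3))          # sources
y = rng.random((500, 3))           # targets
w = rng.random(1000)
g = direct_gauss_transform(y, x, w, h=0.25)
print(g.shape)                     # (500,)
```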

  7. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  8. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  9. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  10. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  11. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  12. A new synthesis of ultrafine nanometre-sized bismuth particles

    NASA Astrophysics Data System (ADS)

    Balan, Lavinia; Schneider, Raphaël; Billaud, Denis; Fort, Yves; Ghanbaja, Jaafar

    2004-08-01

    A new synthesis of Bi(0) nanoparticles is reported. A low temperature solution phase reduction of BiCl3 with t-BuONa activated sodium hydride at 65 °C has been successfully used to prepare large quantities of colloidal Bi(0) nanoparticles with a diameter in the range 1.8-3.0 nm. The resulting Bi nanoparticles were characterized using transmission electron microscopy, XPS analysis and x-ray powder diffraction.

  13. Serial multiplier arrays for parallel computation

    NASA Technical Reports Server (NTRS)

    Winters, Kel

    1990-01-01

    Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.

  14. In vitro activation of T lymphocytes from human immunodeficiency virus (HIV)-seropositive blood donors. I. Soluble interleukin 2 receptor (IL2R) production parallels cellular IL2R expression and DNA synthesis.

    PubMed

    Prince, H E; Kleinman, S H; Maino, V C; Jackson, A L

    1988-03-01

    We investigated the relationship of soluble interleukin 2 receptor (sIL2R) production to cellular IL2R expression and DNA synthesis by mitogen-stimulated mononuclear cells from blood donors seropositive for human immunodeficiency virus (HIV). SIL2R was measured using an enzyme-linked immunosorbent assay which employed 2 anti-IL2R monoclonal antibodies recognizing distinct IL2R epitopes. Decreased phytohemagglutinin-induced DNA synthesis and cellular IL2R expression were accompanied by decreased levels of sIL2R in cell culture supernatants. Similar findings were observed for pokeweed mitogen-induced responses. There was no detectable spontaneous secretion of sIL2R into culture supernatants by unstimulated mononuclear cells from either HIV-seropositive or control seronegative donors. These findings indicate that the in vitro T-cell activation defects which characterize HIV infection include decreased sIL2R production, as well as decreased cellular IL2R expression and DNA synthesis. Further, they show that assessment of supernatant sIL2R levels can be used as a valid, reliable assay for T-cell activation.

  15. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  16. Combinatorial Synthesis and Discovery of an Antibiotic Compound. An Experiment Suitable for High School and Undergraduate Laboratories

    NASA Astrophysics Data System (ADS)

    Wolkenberg, Scott E.; Su, Andrew I.

    2001-06-01

    An exercise demonstrating solution-phase combinatorial chemistry and its application to drug discovery is described. The experiment involves the synthesis of six libraries of three hydrazones, screening the libraries for antibiotic activity, and deconvolution to determine the active individual compound. The laboratory was designed for a high school classroom, though it can easily be expanded to suit a college introductory organic laboratory course.

  17. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

  18. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
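
    The DFT-IDFT overlap-and-save step at the heart of these architectures can be prototyped in a few lines of numpy. The sketch below covers the block processing only (not the VLSI partitioning into subfilters); each block's transform-multiply-inverse-transform is independent of the others, which is what permits parallel processing at reduced data rates.

```python
import numpy as np

def overlap_save_filter(x, h, fft_size=256):
    """FIR-filter signal x with taps h using the overlap-save method.
    Returns the same result as np.convolve(x, h)[:len(x)]."""
    m = len(h)
    hop = fft_size - (m - 1)                   # new samples consumed per block
    H = np.fft.rfft(h, fft_size)
    padded = np.concatenate([np.zeros(m - 1), x, np.zeros(hop)])
    y = np.empty(0)
    for start in range(0, len(x), hop):        # blocks are mutually independent
        block = padded[start:start + fft_size]
        Y = np.fft.irfft(np.fft.rfft(block, fft_size) * H, fft_size)
        y = np.concatenate([y, Y[m - 1:]])     # discard the first m-1 samples
    return y[:len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
print(np.allclose(overlap_save_filter(x, h), np.convolve(x, h)[:len(x)]))  # True
```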

  19. Parallel grid population

    SciTech Connect

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
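
    The two assignment phases described above can be mocked up directly. The sketch below is an illustrative serial simulation of the per-processor work, with one-dimensional axis-aligned boxes standing in for the objects and equal slabs for the grid portions; none of the names are taken from the patent.

```python
from collections import defaultdict

# Grid: `n` equal slabs along x covering [0, 1); objects: axis-aligned boxes.
def slabs_overlapped(box, n):
    """Phase 1 work item: which of the n grid portions does this box touch?"""
    x0, x1 = box
    lo = max(0, min(n - 1, int(x0 * n)))
    hi = max(0, min(n - 1, int(x1 * n)))
    return range(lo, hi + 1)

def populate_grid(boxes, n):
    # Phase 1: objects are divided among processors; each processor reports
    # (portion, object) pairs for its share.  Simulated here with one loop.
    pairs = defaultdict(list)
    for obj_id, box in enumerate(boxes):
        for portion in slabs_overlapped(box, n):
            pairs[portion].append(obj_id)
    # Phase 2: grid portions are divided among processors; each processor
    # populates its own portion from the pairs addressed to it.
    return {portion: sorted(ids) for portion, ids in pairs.items()}

boxes = [(0.05, 0.10), (0.20, 0.55), (0.90, 0.99), (0.48, 0.52)]
print(populate_grid(boxes, n=4))
# {0: [0, 1], 1: [1, 3], 2: [1, 3], 3: [2]}
```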

  20. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  1. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  2. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  3. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  4. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  5. Collisionless parallel shocks

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  6. Oxidation does not (always) kill reactivity of transition metals: solution-phase conversion of nanoscale transition metal oxides to phosphides and sulfides.

    PubMed

    Muthuswamy, Elayaraja; Brock, Stephanie L

    2010-11-17

    Unexpected reactivity on the part of oxide nanoparticles that enables their transformation into phosphides or sulfides by solution-phase reaction with trioctylphosphine (TOP) or sulfur, respectively, at temperatures of ≤370 °C is reported. Impressively, single-phase phosphide products are produced, in some cases with controlled anisotropy and narrow polydispersity. The generality of the approach is demonstrated for Ni, Fe, and Co, and while manganese oxides are not sufficiently reactive toward TOP to form phosphides, they do yield MnS upon reaction with sulfur. The reactivity can be attributed to the small size of the precursor particles, since attempts to convert bulk oxides or even particles with sizes approaching 50 nm were unsuccessful. Overall, the use of oxide nanoparticles, which are easily accessed via reaction of inexpensive salts with air, in lieu of organometallic reagents (e.g., metal carbonyls), which may or may not be transformed into metal nanoparticles, greatly simplifies the production of nanoscale phosphides and sulfides. The precursor nanoparticles can easily be produced in large quantities and stored in the solid state without concern that "oxidation" will limit their reactivity.

  7. Correlation between computed gas-phase and experimentally determined solution-phase infrared spectra: models of the iron-iron hydrogenase enzyme active site.

    PubMed

    Tye, Jesse W; Darensbourg, Marcetta Y; Hall, Michael B

    2006-09-01

    Gas-phase density functional theory calculations (B3LYP, double zeta plus polarization basis sets) are used to predict the solution-phase infrared spectra for a series of CO- and CN-containing iron complexes. It is shown that simple linear scaling of the computed C--O and C--N stretching frequencies yields accurate predictions of the experimentally determined nu(CO) and nu(CN) values for a variety of complexes of different charges and in solvents of varying polarity. As examples of the technique, the resulting correlation is used to assign structures to spectroscopically observed but structurally ambiguous species in two different systems. For the (mu-SCH2CH2CH2S)[Fe(CO)3]2 complex in tetrahydrofuran solution, our calculations show that the initial electrochemical reduction process leads to a simple one-electron reduced product with a structure very similar to the (mu-SCH2CH2CH2S)[Fe(CO)3]2 parent complex. For the iron-iron hydrogenase enzyme active site, our computations show that the absence or presence of a water molecule near the distal iron center (the iron center further from the [4Fe4S] cluster and protein backbone) has very little effect on the predicted infrared spectra.

  8. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  9. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  10. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  11. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  12. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
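
    Such tables follow directly from the parallel-resistance formula R = R1*R2/(R1 + R2); the short enumeration below generates whole-number entries over an arbitrary value range (1-100 ohms), purely as an illustration.

        # List resistor pairs whose parallel combination is a whole number of ohms,
        # the kind of entry such a classroom table would contain.
        def parallel(r1, r2):
            return r1 * r2 / (r1 + r2)

        values = range(1, 101)  # candidate resistor values in ohms (illustrative range)
        pairs = [(r1, r2, parallel(r1, r2))
                 for r1 in values for r2 in values
                 if r1 <= r2 and parallel(r1, r2).is_integer()]

        for r1, r2, rt in pairs[:5]:
            print(f"{r1} ohm || {r2} ohm = {int(rt)} ohm")  # e.g. 3 || 6 = 2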

  13. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  14. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  15. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  16. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: E[-log t] = (N_r - r + 1)^{-1} + (N_r - r + 2)^{-1} + ⋯ + N_r^{-1}, for shrinkage t with N_r live samples and r samples discarded at each iteration. The equation for the variance, Var(-log t) = (N_r - r + 1)^{-2} + (N_r - r + 2)^{-2} + ⋯ + N_r^{-2}, is used to find the appropriate number of live samples N_r to use with r > 1 to match the variance achieved with N_1 live samples and r = 1. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
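
    The variance-matching rule quoted above can be checked numerically: choose the smallest N_r whose shrinkage variance with r discards per step does not exceed that of N_1 live points with r = 1. The sketch below evaluates exactly that series in plain Python; it is an illustration, not the authors' implementation.

        # Var(-log t) = (N_r - r + 1)^{-2} + ... + N_r^{-2}; find the N_r that matches
        # the variance obtained with N1 live samples and r = 1.
        def var_log_t(n_live, r):
            return sum(1.0 / k ** 2 for k in range(n_live - r + 1, n_live + 1))

        def matching_live_count(n1, r):
            target = var_log_t(n1, 1)        # precision with N1 live points, one discard
            n = r                            # smallest admissible live count
            while var_log_t(n, r) > target:  # variance decreases as n grows
                n += 1
            return n

        print(matching_live_count(100, 4))   # live points needed to discard 4 per step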

  17. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  18. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request is inserted for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
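
    A minimal sketch of the pattern described above (parse the feature, then hand one checkout task per plug-in to a bounded thread pool) is shown below. The <plugin id="..."/> layout of the feature file, the repository URL, and the svn command are placeholders, not PEPC's actual format or VCS calls.

        # Check out every plug-in listed in a feature file in parallel.
        import subprocess
        import xml.etree.ElementTree as ET
        from concurrent.futures import ThreadPoolExecutor

        def plugins_from_feature(feature_xml):
            # Assumes each plug-in appears as <plugin id="..."/> in the feature file.
            return [p.get("id") for p in ET.parse(feature_xml).getroot().iter("plugin")]

        def checkout(plugin_id):
            # Placeholder VCS command and repository URL.
            cmd = ["svn", "checkout", f"https://example.org/repo/{plugin_id}"]
            return subprocess.run(cmd, capture_output=True).returncode

        def parallel_checkout(feature_xml, workers=8):
            with ThreadPoolExecutor(max_workers=workers) as pool:   # configurable pool size
                return list(pool.map(checkout, plugins_from_feature(feature_xml)))

        # parallel_checkout("feature.xml", workers=8)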

  19. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on the architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD), which have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  20. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  1. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  2. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  3. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  4. Development of solution phase hybridisation PCR-ELISA for the detection and quantification of Enterococcus faecalis and Pediococcus pentosaceus in Nurmi-type cultures.

    PubMed

    Waters, Sinéad M; Doyle, Sean; Murphy, Richard A; Power, Ronan F G

    2005-12-01

    Nurmi-type cultures (NTCs), derived from the fermentation of caecal contents of specifically pathogen-free (SPF) birds, have been used successfully to control salmonella colonisation in chicks. These cultures are undefined in nature and, consequently, it is difficult to obtain approval from regulatory agencies for their use as direct fed microbials (DFMs) for poultry. Progress towards the generation of effective defined probiotics requires further knowledge of the composition of these cultures. As such, species-specific, culture-independent quantification methodologies need to be developed to elucidate the concentration of specific bacterial constituents of NTCs. Quantification of specific bacterial species in such ill-defined complex cultures using conventional culturing methods is inaccurate due to low levels of sensitivity and reproducibility, in addition to slow turnaround times. Furthermore, these methods lack selectivity due to the nature of the accompanying microflora. This study describes the development of a rapid, sensitive, reliable, reproducible, and species-specific culture-independent, solution phase hybridisation PCR-ELISA procedure for the detection and quantification of Enterococcus faecalis and Pediococcus pentosaceus in NTCs. In this technique, biotin-labelled primers were designed to amplify a species-specific fragment of a marker gene of known copy number, in both species. Resulting amplicons were hybridised with a dinitrophenol (DNP)-labelled oligonucleotide probe in solution and were subsequently captured on a streptavidin-coated microtitre plate. The degree of binding was determined by the addition of IgG (anti-DNP)-horseradish peroxidase conjugate, which was subsequently visualised using a chromogenic substrate, tetramethylbenzidine. This novel quantitative method was capable of detecting E. faecalis and P. pentosaceus at levels as low as 5 CFU per PCR reaction. PMID:15949857

  5. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  6. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  7. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  8. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  9. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

  10. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of building a systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  11. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  12. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  13. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
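
    As a concrete reference for the first of these methods, the sketch below implements cyclic (odd-even) reduction serially; within each level the updates are mutually independent, which is what an array machine would execute in lockstep. It assumes n = 2**k - 1 unknowns with a[0] = 0 and c[n-1] = 0, and is a generic sketch rather than the paper's formulation.

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            """Solve a tridiagonal system; a: sub-diagonal (a[0]=0), b: diagonal, c: super-diagonal (c[-1]=0)."""
            a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
            n = len(b)
            k = int(round(np.log2(n + 1)))
            assert n == 2 ** k - 1, "this sketch assumes n = 2**k - 1"
            for level in range(1, k):                            # forward elimination
                h = 2 ** (level - 1)
                for i in range(2 ** level - 1, n, 2 ** level):   # independent within a level
                    alpha, gamma = a[i] / b[i - h], c[i] / b[i + h]
                    b[i] -= alpha * c[i - h] + gamma * a[i + h]
                    d[i] -= alpha * d[i - h] + gamma * d[i + h]
                    a[i], c[i] = -alpha * a[i - h], -gamma * c[i + h]
            x = np.zeros(n)
            for level in range(k, 0, -1):                        # back substitution
                h = 2 ** (level - 1)
                for i in range(h - 1, n, 2 * h):                 # also independent per level
                    left = x[i - h] if i - h >= 0 else 0.0
                    right = x[i + h] if i + h < n else 0.0
                    x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
            return x

        # 7-point example: cyclic_reduction([0,-1,-1,-1,-1,-1,-1], [2]*7, [-1,-1,-1,-1,-1,-1,0], [1]*7)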

  14. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  15. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.
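
    The triangle-inequality pruning at the heart of this approach can be shown with a much-simplified sketch: assign each document to its nearest pivot, skipping any pivot that the precomputed pivot-to-pivot distances already rule out. This illustrates only the pruning principle, not the Anchors Hierarchy itself or its parallel extension; the distance function and pivot choice are left to the caller.

        import numpy as np

        def assign_to_pivots(X, pivot_idx, dist):
            pivots = [X[i] for i in pivot_idx]
            # Pairwise pivot distances, used for triangle-inequality pruning.
            pp = np.array([[dist(p, q) for q in pivots] for p in pivots])
            labels, skipped = np.empty(len(X), dtype=int), 0
            for i, x in enumerate(X):
                best, best_d = 0, dist(x, pivots[0])
                for j in range(1, len(pivots)):
                    if pp[best, j] >= 2 * best_d:   # pivot j cannot be closer than the current best
                        skipped += 1
                        continue
                    dj = dist(x, pivots[j])
                    if dj < best_d:
                        best, best_d = j, dj
                labels[i] = best
            return labels, skipped

        # labels, skipped = assign_to_pivots(docs, [0, 17, 42], lambda a, b: np.linalg.norm(a - b))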

  16. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of:
    o libups.a - C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF).
    o libuserd-HDF.so - EnSight user-defined reader for visualizing data files written with UPS File IO.
    o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl - Executables/scripts to get information from data files and to simplify the use of EnSight on those data files.
    o ups_io_rm/ups_io_cp - Manipulate data files written with UPS File IO.
    These tools are portable to a wide variety of Unix platforms.

  18. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  19. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.

  20. GEMAS: prediction of solid-solution phase partitioning coefficients (Kd) for oxoanions and boric acid in soils using mid-infrared diffuse reflectance spectroscopy.

    PubMed

    Janik, Leslie J; Forrester, Sean T; Soriano-Disla, José M; Kirby, Jason K; McLaughlin, Michael J; Reimann, Clemens

    2015-02-01

    The authors' aim was to develop rapid and inexpensive regression models for the prediction of partitioning coefficients (Kd), defined as the ratio of the total or surface-bound metal/metalloid concentration of the solid phase to the total concentration in the solution phase. Values of Kd were measured for boric acid (B(OH)3^0) and selected added soluble oxoanions: molybdate (MoO4^2-), antimonate (Sb(OH)6^-), selenate (SeO4^2-), tellurate (TeO4^2-) and vanadate (VO4^3-). Models were developed using approximately 500 spectrally representative soils of the Geochemical Mapping of Agricultural Soils of Europe (GEMAS) program. These calibration soils represented the major properties of the entire 4813 soils of the GEMAS project. Multiple linear regression (MLR) from soil properties, partial least-squares regression (PLSR) using mid-infrared diffuse reflectance Fourier-transformed (DRIFT) spectra, and models using DRIFT spectra plus analytical pH values (DRIFT + pH) were compared for the prediction of log(Kd + 1) values. Apart from selenate (R^2 = 0.43), the DRIFT + pH calibrations resulted in marginally better models for predicting log(Kd + 1) values (R^2 = 0.62-0.79), compared with those from PLSR-DRIFT (R^2 = 0.61-0.72) and MLR (R^2 = 0.54-0.79). The DRIFT + pH calibrations were applied to the prediction of log(Kd + 1) values in the remaining 4313 soils. An example map of predicted log(Kd + 1) values for added soluble MoO4^2- in soils across Europe is presented. The DRIFT + pH PLSR models provided a rapid and inexpensive tool to assess the risk of mobility and potential availability of boric acid and selected oxoanions in European soils. For these models to be used in the prediction of log(Kd + 1) values in soils globally, additional research will be needed to determine whether soil variability is accounted for in the calibration.
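
    The modelling step can be sketched with a generic PLS regression from spectra (plus pH) to log(Kd + 1); the arrays, component count, and use of scikit-learn below are illustrative stand-ins for the GEMAS data and the authors' chemometrics software, not a reproduction of them.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        n_soils, n_wavenumbers = 500, 1700
        spectra = np.random.rand(n_soils, n_wavenumbers)     # DRIFT absorbance spectra (synthetic here)
        ph = np.random.uniform(4.0, 8.5, (n_soils, 1))       # analytical pH values
        log_kd = np.random.rand(n_soils)                     # measured log(Kd + 1)

        X = np.hstack([spectra, ph])                         # the "DRIFT + pH" predictor block
        model = PLSRegression(n_components=10)
        r2 = cross_val_score(model, X, log_kd, cv=10, scoring="r2")
        print("cross-validated R^2: %.2f +/- %.2f" % (r2.mean(), r2.std()))

        model.fit(X, log_kd)                                 # final calibration
        predictions = model.predict(X)                       # would be applied to the remaining soils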

  1. Proline Editing: A General and Practical Approach to the Synthesis of Functionally and Structurally Diverse Peptides. Analysis of Steric versus Stereoelectronic Effects of 4-Substituted Prolines on Conformation within Peptides

    PubMed Central

    Pandey, Anil K.; Naduthambi, Devan; Thomas, Krista M.; Zondlo, Neal J.

    2013-01-01

    Functionalized proline residues have diverse applications. Herein we describe a practical approach, proline editing, for the synthesis of peptides with stereospecifically modified proline residues. Peptides are synthesized by standard solid-phase peptide synthesis to incorporate Fmoc-hydroxyproline (4R-Hyp). In an automated manner, the Hyp hydroxyl is protected and the remainder of the peptide synthesized. After peptide synthesis, the Hyp protecting group is orthogonally removed and Hyp selectively modified to generate substituted proline amino acids, with the peptide main chain functioning to “protect” the proline amino and carboxyl groups. In a model tetrapeptide (Ac-TYPN-NH2), 4R-Hyp was stereospecifically converted to 122 different 4-substituted prolyl amino acids, with 4R or 4S stereochemistry, via Mitsunobu, oxidation, reduction, acylation, and substitution reactions. 4-Substituted prolines synthesized via proline editing include incorporated structured amino acid mimetics (Cys, Asp/Glu, Phe, Lys, Arg, pSer/pThr), recognition motifs (biotin, RGD), electron-withdrawing groups to induce stereoelectronic effects (fluoro, nitrobenzoate), handles for heteronuclear NMR (19F: fluoro, pentafluorophenyl or perfluoro-tert-butyl ether, 4,4-difluoro; 77Se: SePh) and other spectroscopies (fluorescence, IR: cyanophenyl ether), leaving groups (sulfonate, halide, NHS, bromoacetate), and other reactive handles (amine, thiol, thioester, ketone, hydroxylamine, maleimide, acrylate, azide, alkene, alkyne, aryl halide, tetrazine, 1,2-aminothiol). Proline editing provides access to these proline derivatives with no solution-phase synthesis. All peptides were analyzed by NMR to identify stereoelectronic and steric effects on conformation. Proline derivatives were synthesized to permit bioorthogonal conjugation reactions, including azide-alkyne, tetrazine-trans-cyclooctene, oxime, reductive amination, native chemical ligation, Suzuki, Sonogashira, cross-metathesis, and Diels-Alder reactions.

  2. Facile synthesis of advanced photodynamic molecular beacon architectures.

    PubMed

    Lovell, Jonathan F; Chen, Juan; Huynh, Elizabeth; Jarvi, Mark T; Wilson, Brian C; Zheng, Gang

    2010-06-16

    Nucleic acid photodynamic molecular beacons (PMBs) are a class of activatable photosensitizers that increase singlet oxygen generation upon binding a specific target sequence. Normally, PMBs are functionalized with multiple solution-phase labeling and purification steps. Here, we make use of a flexible solid-phase approach for completely automated synthesis of PMBs. This enabled the creation of a new type of molecular beacon that uses a linear superquencher architecture. The 3' terminus was labeled with a photosensitizer by generating pyropheophorbide-labeled solid-phase support. The 5' terminus was labeled with up to three consecutive additions of a dark quencher phosphoramidite. These photosensitizing and quenching moieties were stable in the harsh DNA synthesis environment and their hydrophobicity facilitated PMB purification by HPLC. Linear superquenchers exhibited highly efficient quenching. This fully automated synthesis method simplifies not only the synthesis and purification of PMBs, but also the creation of new activatable photosensitizer designs.

  3. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.

  4. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  5. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs

  6. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.

  7. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
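
    The effect being exploited can be reproduced numerically: adding an auxiliary response that lifts the small singular values of an ill-conditioned PSF matrix lowers the condition number of the combined system. The matrices below are random stand-ins, not the optical systems of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        U, _, Vt = np.linalg.svd(rng.standard_normal((64, 64)))
        sigma = np.logspace(0, -6, 64)              # rapidly decaying singular values
        H = U @ np.diag(sigma) @ Vt                 # badly conditioned "PSF matrix"

        # Auxiliary system chosen (trivially, for illustration) to lift the small singular values.
        H_aux = U @ np.diag(np.full(64, 0.05)) @ Vt

        print("cond(H)         = %.3g" % np.linalg.cond(H))
        print("cond(H + H_aux) = %.3g" % np.linalg.cond(H + H_aux))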

  8. Plasmonic nanoshell synthesis in microfluidic composite foams.

    PubMed

    Duraiswamy, Suhanya; Khan, Saif A

    2010-09-01

    The availability of robust, scalable, and automated nanoparticle manufacturing processes is crucial for the viability of emerging nanotechnologies. Metallic nanoparticles of diverse shape and composition are commonly manufactured by solution-phase colloidal chemistry methods, where rapid reaction kinetics and physical processes such as mixing are inextricably coupled, and scale-up often poses insurmountable problems. Here we present the first continuous flow process to synthesize thin gold "nanoshells" and "nanoislands" on colloidal silica surfaces, which are nanoparticle motifs of considerable interest in plasmonics-based applications. We assemble an ordered, flowing composite foam lattice in a simple microfluidic device, where the lattice cells are alternately aqueous drops containing reagents for nanoparticle synthesis or gas bubbles. Microfluidic foam generation enables precisely controlled reagent dispensing and mixing, and the ordered foam structure facilitates compartmentalized nanoparticle growth. This is a general method for aqueous colloidal synthesis, enabling continuous, inherently digital, scalable, and automated production processes for plasmonic nanomaterials.

  9. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  10. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal-processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology and runs on SMP (Symmetric Multiprocessor) servers. Another high-speed prototype correlator, using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm, is realized on a small Beowulf cluster platform. Both correlators have a flexible structure, are scalable, and can correlate data from 10 stations.
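
    The core arithmetic of an FX-style software correlator (channelize each station's stream with an FFT, then accumulate the cross-spectrum) can be sketched briefly; delay tracking, fringe rotation, and the Pthreads/MPI parallelism described above are all omitted, and the segment loop is where such a correlator would farm work out to threads or MPI ranks.

        import numpy as np

        def cross_spectrum(station_a, station_b, nchan=256):
            nseg = len(station_a) // nchan
            acc = np.zeros(nchan, dtype=complex)
            for k in range(nseg):                         # segments are independent work units
                sa = np.fft.fft(station_a[k * nchan:(k + 1) * nchan])
                sb = np.fft.fft(station_b[k * nchan:(k + 1) * nchan])
                acc += sa * np.conj(sb)                   # accumulate the baseline visibility
            return acc / nseg

        # rng = np.random.default_rng(1); s = rng.standard_normal(256 * 100)
        # vis = cross_spectrum(s + 0.1 * rng.standard_normal(s.size),
        #                      np.roll(s, 3) + 0.1 * rng.standard_normal(s.size))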

  11. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/0 requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  12. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
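
    The parallel pattern behind such an engine (tabulate a partial contingency table per processor, then merge the partials) can be sketched with standard Python multiprocessing; the VTK/Titan classes themselves are not reproduced here.

        from collections import Counter
        from multiprocessing import Pool

        def local_table(records):
            # records: iterable of (category_a, category_b) pairs
            return Counter(records)

        def parallel_contingency(records, n_procs=4):
            chunks = [records[i::n_procs] for i in range(n_procs)]
            with Pool(n_procs) as pool:
                partials = pool.map(local_table, chunks)
            total = Counter()
            for t in partials:
                total.update(t)        # reduction: merge the partial tables
            return total

        if __name__ == "__main__":
            data = [("red", "small"), ("red", "large"), ("blue", "small")] * 1000
            print(parallel_contingency(data).most_common(3))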

  13. Theory and practice of parallel direct optimization.

    PubMed

    Janies, Daniel A; Wheeler, Ward C

    2002-01-01

    Our ability to collect and distribute genomic and other biological data is growing at a staggering rate (Pagel, 1999). However, the synthesis of these data into knowledge of evolution is incomplete. Phylogenetic systematics provides a unifying intellectual approach to understanding evolution but presents formidable computational challenges. A fundamental goal of systematics, the generation of evolutionary trees, is typically approached as two distinct NP-complete problems: multiple sequence alignment and phylogenetic tree search. The number of cells in a multiple alignment matrix is exponentially related to sequence length. In addition, the number of evolutionary trees expands combinatorially with respect to the number of organisms or sequences to be examined. Biologically interesting datasets are currently composed of hundreds of taxa and thousands of nucleotides and morphological characters. This standard will continue to grow with the advent of highly automated sequencing and the development of character databases. Three areas of innovation are changing how evolutionary computation can be addressed: (1) novel concepts for determination of sequence homology, (2) heuristics and shortcuts in tree-search algorithms, and (3) parallel computing. In this paper and the online software documentation we describe the basic usage of parallel direct optimization as implemented in the software POY (ftp://ftp.amnh.org/pub/molecular/poy).

  14. Magnetic nanoparticles: synthesis, functionalization, and applications in bioimaging and magnetic energy storage

    PubMed Central

    Frey, Natalie A.; Peng, Sheng; Cheng, Kai; Sun, Shouheng

    2009-01-01

    This tutorial review summarizes the recent advances in the chemical synthesis and potential applications of monodisperse magnetic nanoparticles. After a brief introduction to nanomagnetism, the review focuses on recent developments in solution phase syntheses of monodisperse MFe2O4, Co, Fe, CoFe, FePt and SmCo5 nanoparticles. The review further outlines the surface, structural, and magnetic properties of these nanoparticles for biomedicine and magnetic energy storage applications. PMID:19690734

  15. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  16. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.
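
    As a concrete, hedged illustration of the parallel-prefix technique the thesis leans on (not its actual algorithms), the sketch below scans source chunks concurrently and uses an exclusive prefix sum over per-chunk newline counts to recover each chunk's absolute starting line number, the kind of sub-task a parallel compiler front end must solve.

```python
# Illustrative sketch of the parallel-prefix idea applied to a compiler front end:
# chunks of source are scanned concurrently, and an exclusive prefix sum over the
# per-chunk newline counts yields each chunk's absolute starting line number.
# This is a toy example of the technique named above, not the thesis's algorithm.
from multiprocessing import Pool
from itertools import accumulate

def count_newlines(chunk: str) -> int:
    return chunk.count("\n")

def starting_line_numbers(chunks):
    with Pool() as pool:
        counts = pool.map(count_newlines, chunks)      # independent, parallel scans
    # Exclusive prefix sum: 1-based line number at which each chunk begins.
    return [1 + s for s in accumulate([0] + counts[:-1])]

if __name__ == "__main__":
    source = "a = 1\nb = 2\nc = a + b\nprint(c)\n"
    chunks = [source[:12], source[12:24], source[24:]]
    print(starting_line_numbers(chunks))               # -> [1, 3, 4]
```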

  17. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
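
    The following is a minimal sketch of the checksum-against-template idea, assuming SHA-1 block digests and zlib compression as stand-ins; it illustrates the principle only and omits the patented rsync-style protocol, the broadcast of the template, and all node coordination.

```python
# Minimal sketch of template-based checkpointing: only blocks whose checksums
# differ from a previously stored template are saved. Illustrative only; this is
# not the patented protocol or its broadcast mechanics.
import hashlib
import zlib

BLOCK = 4096

def checksums(data: bytes):
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_state: bytes, template_sums):
    """Return (index, compressed block) for every block that changed."""
    delta = []
    for i, digest in enumerate(checksums(node_state)):
        if i >= len(template_sums) or digest != template_sums[i]:
            block = node_state[i * BLOCK:(i + 1) * BLOCK]
            delta.append((i, zlib.compress(block)))   # non-lossy compression, as in the abstract
    return delta

template = b"\x00" * (4 * BLOCK)
state = bytearray(template)
state[5000:5004] = b"beef"                            # only the second block changes
print([i for i, _ in delta_checkpoint(bytes(state), checksums(template))])  # -> [1]
```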

  18. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  19. Experimental Parallel-Processing Computer

    NASA Technical Reports Server (NTRS)

    Mcgregor, J. W.; Salama, M. A.

    1986-01-01

    Master processor supervises slave processors, each with its own memory. Computer with parallel processing serves as inexpensive tool for experimentation with parallel mathematical algorithms. Speed enhancement obtained depends on both nature of problem and structure of algorithm used. In parallel-processing architecture, "bank select" and control signals determine which one, if any, of the N slave-processor memories is accessible to the master processor at any given moment. When so selected, slave memory operates as part of master computer memory. When not selected, slave memory operates independently of main memory. Slave processors communicate with each other via input/output bus.

  20. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  1. Parallel architectures and neural networks

    SciTech Connect

    Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  2. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
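
    For reference, the standard combination rules the demonstration is meant to convey (basic circuit theory, not taken from the record itself) are:

```latex
R_{\text{series}} = \sum_{i=1}^{n} R_i,
\qquad
\frac{1}{R_{\text{parallel}}} = \sum_{i=1}^{n} \frac{1}{R_i}
% e.g., two 100-ohm resistors give 200 ohms in series and 50 ohms in parallel
```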

  3. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  4. Metal structures with parallel pores

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  5. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.

  6. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
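
    A toy illustration of why such decoding parallelizes follows; it is not the authors' PRAM algorithm. Step 1 computes, independently for every bit position, where a codeword starting there would end (embarrassingly parallel); step 2 follows the resulting chain from position 0, which a PRAM implementation would instead perform with pointer jumping or prefix computations. The code and example prefix code are hypothetical.

```python
# Toy illustration of parallel prefix-code decoding (not the paper's PRAM algorithm).
CODE = {"0": "a", "10": "b", "11": "c"}                 # a small prefix code

def next_and_symbol(bits, i):
    """Where would a codeword starting at position i end, and which symbol is it?"""
    for word, sym in CODE.items():
        if bits.startswith(word, i):
            return i + len(word), sym
    return None, None   # i cannot start a codeword; such entries are never reached from 0

def decode(bits):
    # Step 1: each table entry depends only on a short local window of the input,
    # so all entries can be computed independently (i.e., in parallel).
    table = [next_and_symbol(bits, i) for i in range(len(bits))]
    # Step 2: follow the chain of codeword boundaries starting at position 0.
    out, i = [], 0
    while i < len(bits):
        i, sym = table[i]
        out.append(sym)
    return "".join(out)

print(decode("01011010"))      # -> "abcab"
```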

  7. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  8. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphic application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
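
    For context, the finite-difference update the work parallelizes looks like the following serial NumPy sketch (a minimal illustration with made-up parameters; the cited research distributes the spatial index range across processors and synchronizes at each time step, which is omitted here).

```python
# Minimal serial sketch of the finite-difference update for the 1-D wave equation
# (vibrating string). Parameters are illustrative; the cited work splits the index
# range i over processors and synchronizes each step, which this toy version omits.
import numpy as np

nx, nt = 101, 500
c, dx, dt = 1.0, 1.0 / 100, 0.005          # wave speed, grid spacing, time step
r2 = (c * dt / dx) ** 2                    # squared Courant number (must be <= 1)

x = np.linspace(0.0, 1.0, nx)
u_prev = np.sin(np.pi * x)                 # initial displacement: half-sine pluck
u = u_prev.copy()                          # zero initial velocity

for _ in range(nt):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0           # fixed string ends
    u_prev, u = u, u_next

print(float(u.max()))
```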

  9. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  10. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

  11. Selective Access to Heterocyclic Sulfonamides and Sulfonyl Fluorides via a Parallel Medicinal Chemistry Enabled Method.

    PubMed

    Tucker, Joseph W; Chenard, Lois; Young, Joseph M

    2015-11-01

    A sulfur-functionalized aminoacrolein derivative is used for the efficient and selective synthesis of heterocyclic sulfonyl chlorides, sulfonyl fluorides, and sulfonamides. The development of a 3-step parallel medicinal chemistry (PMC) protocol for the synthesis of pyrazole-4-sulfonamides effectively demonstrates the utility of this reagent. This reactivity was expanded to provide rapid access to other heterocyclic sulfonyl fluorides, including pyrimidines and pyridines, whose corresponding sulfonyl chlorides lack suitable chemical stability. PMID:26434694

  13. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  14. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  15. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
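
    A minimal petsc4py sketch of the Newton-Krylov idea is given below, assuming petsc4py is installed; the matrix-free option lets the Krylov solver see the Jacobian only through Jacobian-vector products, mirroring the description above. It is illustrative only, configures no Schwarz preconditioning, and is not the solver developed under this contract.

```python
# Minimal petsc4py sketch of a matrix-free Newton-Krylov solve (assumes petsc4py).
# The -snes_mf flag makes the Krylov method access the Jacobian only through
# finite-difference J*v products; no Schwarz preconditioner is set up here.
import sys
import petsc4py
petsc4py.init(sys.argv + ["-snes_mf", "-pc_type", "none"])
from petsc4py import PETSc

def residual(snes, x, f):
    # Toy nonlinear system F(x) = 0:  F0 = x0^2 - 2,  F1 = x0 + x1 - 3
    f[0] = x[0] * x[0] - 2.0
    f[1] = x[0] + x[1] - 3.0
    f.assemble()

snes = PETSc.SNES().create()
r = PETSc.Vec().createSeq(2)
snes.setFunction(residual, r)
snes.setFromOptions()                  # picks up the matrix-free options above

x = PETSc.Vec().createSeq(2)
x.set(1.0)                             # initial guess
snes.solve(None, x)
print(x.getArray())                    # approximately [sqrt(2), 3 - sqrt(2)]
```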

  16. Parallel computation and computers for artificial intelligence

    SciTech Connect

    Kowalik, J.S.

    1988-01-01

    This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Model; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of AI Application Oriented Parallel Processing Research in Japan.

  17. Large-scale, solution-phase growth of semiconductor nanocrystals into ultralong one-dimensional arrays and study of their electrical properties

    NASA Astrophysics Data System (ADS)

    Ma, Yuchao; Xue, Mengmeng; Shi, Jiahua; Tan, Yiwei

    2014-05-01

    One-dimensional (1D) assemblies of semiconductor nanocrystals (NCs) represent an important kind of 1D nanomaterial system due to their potential for exploring novel and enhanced electronic and photonic performances of devices. Herein, we present mass fabrication of a series of 1D arrays of CdSe and PbSe NCs on a large length scale with ultralong, aligned Se nanowires (NWs) as both the reactant and structure-directing template. The 1D self-assembly patterns are the anchored growth of CdSe quantum dots (QDs) on the surface of Se NWs (i.e., 1D Se NWs/CdSe QDs core-shell heterostructure) and 1D aggregates of unsupported PbSe NCs formed by substantially increased collective particle-particle interactions. The size of CdSe QDs and shape of PbSe NCs in the 1D arrays can be effectively controlled by varying the synthetic conditions. Room temperature electrical measurements on the 1D Se/CdSe heterostructure field effect transistors (FETs) exhibit a pronounced improvement in the on/off ratio, device carrier mobility, and transconductance compared to the Se NW FETs fabricated in parallel. Furthermore, upon visible light excitation, the photocurrent from the Se/CdSe heterostructure FETs responds sharply (small time constant) and increases linearly with increasing light intensity, indicating excellent photoconductive properties.

  18. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  19. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  20. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
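
    A worked example of the contrast drawn here: for moment-based statistics, each partition can be summarized by a fixed-size triple (count, mean, M2) and the triples combined pairwise, so inter-processor communication is independent of data size. The combination formulas below follow the standard pairwise update (Chan et al.); the sketch is illustrative, not the paper's implementation.

```python
# Illustration of the "robust online update" idea for moment-based statistics:
# each partition is reduced to a fixed-size (count, mean, M2) summary, and the
# summaries are merged pairwise. Formulas follow the standard pairwise update.
def partial_moments(xs):
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:                      # Welford's online update within a partition
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
    return n, mean, m2

def combine(a, b):
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    m2 = m2a + m2b + delta * delta * na * nb / n
    return n, mean, m2

# Two "processors" each summarize their slice; the root combines the summaries.
left = partial_moments([1.0, 2.0, 3.0])
right = partial_moments([4.0, 5.0])
n, mean, m2 = combine(left, right)
print(mean, m2 / (n - 1))             # mean = 3.0, sample variance = 2.5
```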

  1. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
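
    A toy sketch of the block-structured quad-tree refinement PARAMESH manages in two dimensions is shown below (in Python, with an invented refinement criterion; PARAMESH itself is Fortran 90 and additionally handles guard cells, parallel distribution, and flux conservation, none of which appear here).

```python
# Toy sketch of a 2-D block-structured quad-tree; names and the refinement
# criterion are illustrative, not the PARAMESH Fortran 90 API.
class Block:
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []            # empty list means this block is a leaf

    def refine(self):
        """Split this block into four child blocks at the next refinement level."""
        h = self.size / 2
        self.children = [Block(self.x0 + dx * h, self.y0 + dy * h, h, self.level + 1)
                         for dx in (0, 1) for dy in (0, 1)]

def refine_where(block, needs_refinement, max_level=3):
    # Recursively refine leaf blocks wherever the application's criterion demands it.
    if block.level < max_level and needs_refinement(block):
        block.refine()
        for child in block.children:
            refine_where(child, needs_refinement, max_level)

root = Block(0.0, 0.0, 1.0)
# Hypothetical criterion: resolve a feature near the point (0.1, 0.1).
refine_where(root, lambda b: b.x0 <= 0.1 <= b.x0 + b.size and b.y0 <= 0.1 <= b.y0 + b.size)
```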

  2. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical purposes where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source and parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI and VT using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, density gradient for each particle, and densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
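
    For orientation, the per-particle quantity PARAVT computes can be illustrated with a serial SciPy/Qhull sketch like the one below (an assumption-laden toy: it uses random points, skips unbounded boundary cells, and omits the MPI domain decomposition, ghost particles, and periodic conditions that PARAVT provides).

```python
# Serial illustration of Voronoi cell volumes and densities with SciPy/Qhull.
# PARAVT parallelizes this kind of computation with MPI; the decomposition,
# ghost particles, and periodic boundaries are omitted here.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
points = rng.random((200, 3))                 # mock particle positions in a unit box

vor = Voronoi(points)
volumes = np.full(len(points), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:      # unbounded boundary cells are skipped here;
        continue                              # PARAVT treats them via ghost/periodic particles
    volumes[i] = ConvexHull(vor.vertices[region]).volume

density = 1.0 / volumes                       # Voronoi density estimate per particle
print(np.nanmean(density))
```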

  3. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  4. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  5. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  6. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  7. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3)-time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements are reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
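
    To make the mapping problem concrete, the sketch below gives the textbook dynamic program for assigning a chain of m modules contiguously to n processors so as to minimize the bottleneck load; it is a generic O(nm^2) formulation for illustration, not the improved or parallel algorithms contributed by the paper.

```python
# Generic dynamic program for the chain-mapping problem: assign m pipeline modules,
# in order, to n processors so that the most heavily loaded processor is as light
# as possible. This is the classic formulation, not the paper's faster algorithms.
def map_chain(weights, n):
    m = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    def seg(i, j):                         # total load of modules i..j-1
        return prefix[j] - prefix[i]

    INF = float("inf")
    # best[k][j] = minimal bottleneck when the first j modules use k processors
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for k in range(1, n + 1):
        for j in range(m + 1):
            for i in range(j + 1):         # last processor takes modules i..j-1
                cand = max(best[k - 1][i], seg(i, j))
                if cand < best[k][j]:
                    best[k][j] = cand
    return best[n][m]

print(map_chain([4, 2, 7, 1, 3, 5], 3))    # -> 8.0, e.g. [4,2] | [7,1] | [3,5]
```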

  8. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  9. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  10. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  11. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  12. [Total Synthesis of Biologically Active Natural Products toward Elucidation of the Mode of Action].

    PubMed

    Yoshida, Masahito

    2015-01-01

    Total synthesis of biologically active cyclodepsipeptide destruxin E using solid- and solution-phase synthesis is described. The solid-phase synthesis of destruxin E was initially investigated for the efficient synthesis of destruxin analogues. Peptide elongation from polymer-supported β-alanine was efficiently performed using DIC/HOBt or PyBroP/DIEA, and subsequent cleavage from the polymer-support under weakly acidic conditions furnished a cyclization precursor in moderate yield. Macrolactonization of the cyclization precursor was smoothly performed using 2-methyl-6-nitrobenzoic anhydride (MNBA)/4-(dimethylamino)pyridine N-oxide (DMAPO) to afford macrolactone in moderate yield. Finally, formation of the epoxide in the side chain via three steps provided destruxin E, and the stereochemistry of the epoxide was determined to be S. Its diastereomer, epi-destruxin E, was also synthesized in the same manner used to synthesize the natural product. The stereochemistry of the epoxide was critical for the V-ATPase inhibition; natural product destruxin E exhibited 10-fold more potent V-ATPase inhibition than epi-destruxin E. Next, the scalable synthesis of destruxin E for in vivo study was also performed via solution-phase synthesis. The scalable synthesis of a key component, (S)-HA-Pro-OH, was achieved using osmium-catalyzed diastereoselective dihydroxylation with (DHQD)2PHAL as a chiral ligand; peptide synthesis using Cbz-protected amino acid derivatives furnished the cyclization precursor on a gram-scale. Macrolactonization smoothly provided the macrolactone without forming a dimerized product, even at 6 mM, and the synthesis of destruxin E was achieved via three steps on a gram scale in high purity (>98%). PMID:26423864

  13. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  14. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  15. The AIS-5000 parallel processor

    SciTech Connect

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared to two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways, and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  16. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000-word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
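
    A hedged sketch of the password-checker example follows, using the standard multiprocessing module and a SHA-256 comparison as a stand-in for the crypt()-style check described above; the word list, hash, and worker count are placeholders.

```python
# Sketch of the parallel password check: the dictionary is split across worker
# processes, each of which hashes its words and compares against the target.
# SHA-256 stands in for the crypt()-style check; all values are placeholders.
import hashlib
from multiprocessing import Pool

TARGET = hashlib.sha256(b"hunter2").hexdigest()   # hypothetical stolen password hash

def check_chunk(words):
    """Return the first word in this chunk whose hash matches, else None."""
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == TARGET:
            return w
    return None

def crack(dictionary, nworkers=4):
    chunks = [dictionary[i::nworkers] for i in range(nworkers)]   # distribute the word list
    with Pool(nworkers) as pool:
        for hit in pool.imap_unordered(check_chunk, chunks):
            if hit is not None:
                return hit
    return None

if __name__ == "__main__":
    words = ["password", "letmein", "hunter2", "qwerty"]          # stand-in for the 25,000-word list
    print(crack(words))                                           # -> "hunter2"
```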

  17. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated to PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
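
    As a flavor of the message-passing side of these packages, here is a minimal mpi4py sketch of a distributed dot product (illustrative only; the file name in the run command and the vector sizes are arbitrary).

```python
# Minimal mpi4py sketch (run with, e.g., `mpiexec -n 4 python dot.py`): each rank
# holds one slice of two vectors and the global dot product is assembled with an
# allreduce. It only illustrates the message-passing model the packages expose.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000 // size                  # each rank owns one slice of the vectors
rng = np.random.default_rng(rank)
x = rng.random(n_local)
y = rng.random(n_local)

total = comm.allreduce(float(x @ y), op=MPI.SUM)   # sum of per-rank partial dot products
if rank == 0:
    print("global dot product:", total)
```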

  18. Flow invariant droplet formation for stable parallel microreactors.

    PubMed

    Riche, Carson T; Roberts, Emily J; Gupta, Malancha; Brutchey, Richard L; Malmstadt, Noah

    2016-01-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis. PMID:26902825

  19. Flow invariant droplet formation for stable parallel microreactors

    NASA Astrophysics Data System (ADS)

    Riche, Carson T.; Roberts, Emily J.; Gupta, Malancha; Brutchey, Richard L.; Malmstadt, Noah

    2016-02-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis.

  20. Flow invariant droplet formation for stable parallel microreactors

    PubMed Central

    Riche, Carson T.; Roberts, Emily J.; Gupta, Malancha; Brutchey, Richard L.; Malmstadt, Noah

    2016-01-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis. PMID:26902825

  1. Parallel execution of LISP programs

    SciTech Connect

    Weening, J.S.

    1989-01-01

    This dissertation considers several issues in the execution of Lisp programs on shared-memory multiprocessors. An overview of constructs for explicit parallelism in Lisp is first presented. The problems of partitioning a program into processes and scheduling these processes are then described, and a number of methods for performing these are proposed. These include cutting off process creation based on properties of the computation tree of the program, and basing partitioning decisions on the state of the system at runtime instead of the program. An experimental study of these methods has been performed using a simulator for parallel Lisp. The simulator, written in Common Lisp using a continuation-passing style, is described in detail. This is followed by a description of the experiments that were performed and an analysis of the results. Two programs are used as illustrations: a Fast Fourier Transform, which has an abundance of parallelism, and the Cocke-Younger-Kasami parsing algorithm, for which good speedup is not as easy to obtain. The difficulty of using cutoff-based partitioning methods, and the differences between various scheduling methods, are shown. A combination of partitioning and scheduling methods which the author calls dynamic partitioning is analyzed in more detail. This method is based on examining the machine's runtime state; it requires that the programmer only identify parallelism in the program, without deciding which potential parallelism is actually useful. Several theorems are proved providing upper bounds on the amount of overhead produced by this method. He concludes that for programs whose computation trees have small height relative to their total size, dynamic partitioning can achieve asymptotically minimal overhead in the cost of process creation.

  2. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  3. Synthesis and characterization of hybrid nanostructures

    PubMed Central

    Mokari, Taleb

    2011-01-01

    There has been significant interest in the development of multicomponent nanocrystals formed by the assembly of two or more different materials with control over size, shape, composition, and spatial orientation. In particular, the selective growth of metals on the tips of semiconductor nanorods and wires can act to couple the electrical and optical properties of semiconductors with the unique properties of various metals. Here, we outline our progress on the solution-phase synthesis of metal-semiconductor heterojunctions formed by the growth of Au, Pt, or other binary catalytic metal systems on metal (Cd, Pb, Cu)-chalcogenide nanostructures. We show the ability to grow the metal on various shapes (spherical, rods, hexagonal prisms, and wires). Furthermore, manipulating the composition of the metal nanoparticles is also shown, where PtNi and PtCo alloys are our main focus. The magnetic and electrical properties of the developed hybrid nanostructures are shown. PMID:22110873

  4. Alternative fuels and chemicals from synthesis gas

    SciTech Connect

    Unknown

    1998-08-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  5. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    Unknown

    1999-01-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  6. Alternative Fuels and Chemicals From Synthesis Gas

    SciTech Connect

    1998-07-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  7. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    Unknown

    1999-07-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  8. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    Unknown

    2000-10-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  9. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    1999-10-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  10. Efficient Synthesis and Biological Evaluation of 5'-GalNAc Conjugated Antisense Oligonucleotides.

    PubMed

    Østergaard, Michael E; Yu, Jinghua; Kinberger, Garth A; Wan, W Brad; Migawa, Michael T; Vasquez, Guillermo; Schmidt, Karsten; Gaus, Hans J; Murray, Heather M; Low, Audrey; Swayze, Eric E; Prakash, Thazha P; Seth, Punit P

    2015-08-19

    Conjugation of triantennary N-acetyl galactosamine (GalNAc) to oligonucleotide therapeutics results in marked improvement in potency for reducing gene targets expressed in hepatocytes. In this report we describe a robust and efficient solution-phase conjugation strategy to attach triantennary GalNAc clusters (mol. wt. ∼2000) activated as PFP (pentafluorophenyl) esters onto 5'-hexylamino modified antisense oligonucleotides (5'-HA ASOs, mol. wt. ∼8000 Da). The conjugation reaction is efficient and was used to prepare GalNAc conjugated ASOs from milligram to multigram scale. The solution phase method avoids loading of GalNAc clusters onto solid-support for automated synthesis and will facilitate evaluation of GalNAc clusters for structure activity relationship (SAR) studies. Furthermore, we show that transfer of the GalNAc cluster from the 3'-end of an ASO to the 5'-end results in improved potency in cells and animals.

  11. CdS and Cd-Free Buffer Layers on Solution Phase Grown Cu2ZnSn(SxSe1-x)4: Band Alignments and Electronic Structure Determined with Femtosecond Ultraviolet Photoemission Spectroscopy

    SciTech Connect

    Haight, Richard; Barkhouse, Aaron; Wang, Wei; Yu, Luo; Shao, Xiaoyan; Mitzi, David; Hiroi, Homare; Sugimoto, Hiroki

    2013-12-02

    The heterojunctions formed between solution phase grown Cu2ZnSn(SxSe1-x)4 (CZTS,Se) and a number of important buffer materials, including CdS, ZnS, ZnO, and In2S3, were studied using femtosecond ultraviolet photoemission spectroscopy (fs-UPS) and photovoltage spectroscopy. With this approach we extract the magnitude and direction of the CZTS,Se band bending, locate the Fermi level within the band gaps of absorber and buffer, and measure the absorber/buffer band offsets under flatband conditions. We will also discuss two-color pump/probe experiments in which the band bending in the buffer layer can be independently determined. Finally, studies of the bare CZTS,Se surface will be discussed, including our observation of mid-gap Fermi level pinning and its relation to Voc limitations and bulk defects.

  12. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  13. A generalized parallel replica dynamics

    SciTech Connect

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming–Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.
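
    As a purely illustrative companion to the abstract (an assumption, not the paper's implementation), the branching step of a Fleming-Viot particle system can be sketched in a few lines: independent Brownian walkers are confined to an interval, and any walker that exits is immediately restarted from the current position of a randomly chosen survivor. The interval, time step, and walker count below are arbitrary choices.

      # Toy Fleming-Viot particle system (illustrative only): N Brownian walkers
      # confined to the interval (a, b); a walker that exits is immediately
      # restarted ("branched") from the current position of a random survivor.
      import random

      def fleming_viot(n_walkers=100, a=-1.0, b=1.0, dt=1e-3, n_steps=2_000, seed=0):
          rng = random.Random(seed)
          walkers = [0.0] * n_walkers          # all start at the centre of the interval
          sigma = dt ** 0.5                    # Brownian increment scale
          for _ in range(n_steps):
              for i in range(n_walkers):
                  walkers[i] += rng.gauss(0.0, sigma)
                  if not (a < walkers[i] < b):
                      # Branching step: copy a surviving walker chosen at random.
                      j = rng.randrange(n_walkers)
                      while j == i:
                          j = rng.randrange(n_walkers)
                      walkers[i] = walkers[j]
          return walkers                        # long-run positions approximate the quasi-stationary law

      if __name__ == "__main__":
          sample = fleming_viot()
          print(min(sample), max(sample))       # all survivors remain inside (a, b)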

  14. Scans as primitive parallel operations

    SciTech Connect

    Blelloch, G.E. . Dept. of Computer Science)

    1989-11-01

    In most parallel random access machine (PRAM) models, memory references are assumed to take unit time. In practice, and in theory, certain scan operations, also known as prefix computations, can execute in no more time than these parallel memory references. This paper outlines an extensive study of the effect of including such scan operations as unit-time primitives in the PRAM models. The study concludes that the primitives improve the asymptotic running time of many algorithms by an O(log n) factor, greatly simplify the description of many algorithms, and are significantly easier to implement than memory references. The authors argue that the algorithm designer should feel free to use these operations as if they were as cheap as a memory reference. This paper describes five algorithms that clearly illustrate how the scan primitives can be used in algorithm design. These all run on an EREW PRAM with the addition of two scan primitives.
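
    For readers unfamiliar with the primitive, the sketch below (illustrative only, not taken from the paper) gives a serial reference implementation of the +-scan semantics together with one classic use: computing destination offsets when packing the elements that satisfy a predicate.

      # Serial reference implementation of the +-scan (prefix sum) semantics that
      # the paper treats as a unit-time PRAM primitive, plus one classic use:
      # computing output offsets when packing elements that satisfy a predicate.
      def exclusive_plus_scan(values):
          total, out = 0, []
          for v in values:
              out.append(total)   # element i receives the sum of values[0..i-1]
              total += v
          return out

      def pack(values, keep):
          flags = [1 if keep(v) else 0 for v in values]
          offsets = exclusive_plus_scan(flags)          # destination index per kept item
          result = [None] * sum(flags)
          for v, f, o in zip(values, flags, offsets):
              if f:
                  result[o] = v
          return result

      if __name__ == "__main__":
          data = [5, 2, 9, 1, 7, 4]
          print(exclusive_plus_scan(data))    # [0, 5, 7, 16, 17, 24]
          print(pack(data, lambda x: x > 4))  # [5, 9, 7]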

  15. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable-length linear genome to govern the mapping of a Backus-Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE), the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and a comparison with the standard coding of GEs are presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. An application is also discussed, and results on a real-world application are described.

  16. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two close parallel laser beams, whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and an accuracy of 7.8 nm within a range of 100 μm.
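
    The final step from demodulated phase to displacement can be written down explicitly. The sketch below uses the standard round-trip relation Δd = λΔφ/(4π); the helium-neon wavelength plugged in is an assumption for illustration, not necessarily the laser used in the paper.

      # Illustrative conversion (standard round-trip interferometry relation,
      # not taken from the paper): the demodulated phase difference between
      # measurement and reference channels maps to displacement as
      #   delta_d = wavelength * delta_phi / (4 * pi)
      import math

      def displacement_nm(delta_phi_rad, wavelength_nm=632.8):   # wavelength is an assumption
          return wavelength_nm * delta_phi_rad / (4.0 * math.pi)

      if __name__ == "__main__":
          # A 2*pi phase change corresponds to half a wavelength of target motion.
          print(displacement_nm(2.0 * math.pi))   # ~316.4 nm for a HeNe wavelength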

  17. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  18. Parallelizing the XSTAR Photoionization Code

    NASA Astrophysics Data System (ADS)

    Noble, M. S.; Ji, L.; Young, A.; Lee, J. C.

    2009-09-01

    We describe two means by which XSTAR, a code which computes physical conditions and emission spectra of photoionized gases, has been parallelized. The first is pvmxstar, a wrapper which can be used in place of the serial xstar2xspec script to foster concurrent execution of the XSTAR command line application on independent sets of parameters. The second is pmodel, a plugin for the Interactive Spectral Interpretation System (ISIS) which allows arbitrary components of a broad range of astrophysical models to be distributed across processors during fitting and confidence limits calculations, by scientists with little training in parallel programming. Plugging the XSTAR family of analytic models into pmodel enables multiple ionization states (e.g., of a complex absorber/emitter) to be computed simultaneously, alleviating the often prohibitive expense of the traditional serial approach. Initial performance results indicate that these methods substantially enlarge the problem space to which XSTAR may be applied within practical timeframes.
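
    The first approach, concurrent execution of a command-line code over independent parameter sets, follows a very general pattern. The sketch below shows that pattern with Python's standard library only; a tiny Python one-liner stands in for the real solver, and the parameter names are placeholders rather than actual XSTAR inputs.

      # Generic pattern (not pvmxstar itself): farm a command-line code out over
      # independent parameter sets with a process pool.  A tiny Python one-liner
      # stands in for the real solver; the parameter names are placeholders too.
      import subprocess
      import sys
      from concurrent.futures import ProcessPoolExecutor

      STAND_IN_SOLVER = [sys.executable, "-c", "import sys; print('ran with', sys.argv[1:])"]

      def run_case(params):
          args = [f"{k}={v}" for k, v in params.items()]
          result = subprocess.run(STAND_IN_SOLVER + args, capture_output=True, text=True)
          return params, result.returncode, result.stdout.strip()

      if __name__ == "__main__":
          grid = [{"column_density": n, "ionization": x}          # placeholder parameter grid
                  for n in (1e20, 1e21, 1e22) for x in (0.1, 1.0, 10.0)]
          with ProcessPoolExecutor(max_workers=4) as pool:
              for params, rc, out in pool.map(run_case, grid):
                  print(rc, out)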

  19. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Congzhe; Fang, Yuefa; Guo, Sheng

    2015-07-01

    Dimensional synthesis is one of the most difficult issues in the field of parallel robots with actuation redundancy. To deal with the optimal design of a redundantly actuated parallel robot used for ankle rehabilitation, a methodology of dimensional synthesis based on multi-objective optimization is presented. First, the dimensional synthesis of the redundant parallel robot is formulated as a nonlinear constrained multi-objective optimization problem. Then four objective functions, separately reflecting occupied space, input/output transmission and torque performances, and multi-criteria constraints, such as dimension, interference and kinematics, are defined. In consideration of the passive exercise of plantar/dorsiflexion requiring large output moment, a torque index is proposed. To cope with the actuation redundancy of the parallel robot, a new output transmission index is defined as well. The multi-objective optimization problem is solved by using a modified Differential Evolution (DE) algorithm, which is characterized by new selection and mutation strategies. Meanwhile, a special penalty method is presented to tackle the multi-criteria constraints. Finally, numerical experiments for different optimization algorithms are implemented. The computation results show that the proposed indices of output transmission and torque, and constraint handling are effective for the redundant parallel robot; the modified DE algorithm is superior to the other tested algorithms, in terms of the ability of global search and the number of non-dominated solutions. The proposed methodology of multi-objective optimization can also be applied to the dimensional synthesis of other redundantly actuated parallel robots only with rotational movements.
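
    For orientation, the basic differential evolution skeleton that the paper modifies looks like the sketch below (DE/rand/1/bin on a toy objective). The paper's new selection and mutation strategies and its penalty method for the multi-criteria constraints are not reproduced here; every numeric setting is an arbitrary illustration.

      # Bare-bones DE/rand/1/bin on a toy objective (sphere function).  The
      # modified selection/mutation strategies and the penalty handling described
      # in the paper are NOT reproduced; this only shows the basic DE skeleton.
      import random

      def de(objective, bounds, pop_size=20, f=0.8, cr=0.9, generations=200, seed=0):
          rng = random.Random(seed)
          dim = len(bounds)
          pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
          scores = [objective(x) for x in pop]
          for _ in range(generations):
              for i in range(pop_size):
                  a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
                  j_rand = rng.randrange(dim)
                  trial = []
                  for j in range(dim):
                      if rng.random() < cr or j == j_rand:             # binomial crossover
                          v = pop[a][j] + f * (pop[b][j] - pop[c][j])  # rand/1 mutation
                      else:
                          v = pop[i][j]
                      lo, hi = bounds[j]
                      trial.append(min(max(v, lo), hi))                # clip to the box
                  s = objective(trial)
                  if s <= scores[i]:                                   # greedy selection
                      pop[i], scores[i] = trial, s
          best = min(range(pop_size), key=lambda k: scores[k])
          return pop[best], scores[best]

      if __name__ == "__main__":
          sphere = lambda x: sum(v * v for v in x)
          print(de(sphere, bounds=[(-5, 5)] * 4))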

  20. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for improving the computational processing of Synthetic Aperture Radar (SAR) signals, following the three usual lines of action for speeding up the execution of any computer program. On the one hand, the optimization of both the data structures and the application architecture is studied; on the other hand, hardware improvements are considered. For the former, the data structures usually employed in SAR processing are examined, the use of parallel ones is proposed, and the way the parallelization of the algorithms employed in the process is implemented is described. In addition, the parallel application architecture classifies processes as fine or coarse grained; these are assigned to individual processors or divided among processors, each on its corresponding architecture. For the latter, the hardware employed in the parallel computation used for SAR handling is studied. The improvement here concerns the kinds of platforms on which the SAR process is implemented, shared-memory multicomputers and distributed-memory multiprocessors. A comparison between them yields guidelines for obtaining maximum throughput with minimum latency and maximum effectiveness with minimum cost, together with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest future development for computationally demanding Synthetic Aperture Radar applications in the coming years.

  1. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  2. Parallel fabrication of nanogap electrodes.

    PubMed

    Johnston, Danvers E; Strachan, Douglas R; Johnson, A T Charlie

    2007-09-01

    We have developed a technique for simultaneously fabricating large numbers of nanogaps in a single processing step using feedback-controlled electromigration. Parallel nanogap formation is achieved by a balanced simultaneous process that uses a novel arrangement of nanoscale shorts between narrow constrictions where the nanogaps form. Because of this balancing, the fabrication of multiple nanoelectrodes is similar to that of a single nanogap junction. The technique should be useful for constructing complex circuits of molecular-scale electronic devices.

  3. Massively parallel femtosecond laser processing.

    PubMed

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM, N_SLM, that is imaged at the pupil plane of an objective lens, and by a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at N_SLM of 250 and p_d of 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  4. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  5. High pressure synthesis gas conversion

    SciTech Connect

    Not Available

    1992-01-01

    A high pressure gas phase fermentation system has been constructed for the biological production of ethanol from coal synthesis gas. The reactors in the system consist of a 650 mL continuous stirred tank reactor and a 1 L continuous column reactor. The reactors are designed for individual or dual operation in series or parallel, with continuous gas and liquid feed. The system is housed in a constant temperature, explosion-proof room, equipped with gas leak detectors.

  6. Parallel micromanipulation method for microassembly

    NASA Astrophysics Data System (ADS)

    Sin, Jeongsik; Stephanou, Harry E.

    2001-09-01

    Microassembly deals with micron or millimeter scale objects where the tolerance requirements are in the micron range. Typical applications include electronics components (silicon fabricated circuits), optoelectronics components (photo detectors, emitters, amplifiers, optical fibers, microlenses, etc.), and MEMS (Micro-Electro-Mechanical-System) dies. The assembly processes generally require not only high precision but also high throughput at low manufacturing cost. While conventional macroscale assembly methods have been utilized in scaled down versions for microassembly applications, they exhibit limitations on throughput and cost due to the inherently serialized process. Since the assembly process depends heavily on the manipulation performance, an efficient manipulation method for small parts will have a significant impact on the manufacturing of miniaturized products. The objective of this study on 'parallel micromanipulation' is to achieve these three requirements through the handling of multiple small parts simultaneously (in parallel) with high precision (micromanipulation). As a step toward this objective, a new manipulation method is introduced. The method uses a distributed actuation array for gripper free and parallel manipulation, and a centralized, shared actuator for simplified controls. The method has been implemented on a testbed 'Piezo Active Surface (PAS)' in which an actively generated friction force field is the driving force for part manipulation. Basic motion primitives, such as translation and rotation of objects, are made possible with the proposed method. This study discusses the design of the proposed manipulation method PAS, and the corresponding manipulation mechanism. The PAS consists of two piezoelectric actuators for X and Y motion, two linear motion guides, two sets of nozzle arrays, and solenoid valves to switch the pneumatic suction force on and off in individual nozzles. One array of nozzles is fixed relative to the surface on

  7. Bioinspired synthesis of magnetic nanoparticles

    SciTech Connect

    David, Anand

    2009-01-01

    The goal of this project is to understand the mechanism of magnetite particle synthesis in the presence of the biomineralization proteins mms6 and C25. Previous work has hypothesized that the mms6 protein helps to template magnetite and cobalt ferrite particle synthesis and that the C25 protein templates cobalt ferrite formation. However, the effect of parameters such as the protein concentration on particle formation is still unknown. It is expected that the protein concentration significantly affects the nucleation and growth of magnetite. Since the protein provides iron-binding sites, it is expected that magnetite crystals would nucleate at those sites. In addition, in the previous work, the reaction medium after completion of the reaction was in the solution phase, and magnetic particles had a tendency to fall to the bottom of the medium and aggregate. The research presented in this thesis involves solid Pluronic gel phase reactions, which can be studied readily using small-angle x-ray scattering, which is not possible for the solution phase experiments. In addition, the concentration effect of both of the proteins on magnetite crystal formation was studied.

  8. Bottom-up synthesis of chemically precise graphene nanoribbons.

    PubMed

    Narita, Akimitsu; Feng, Xinliang; Müllen, Klaus

    2015-02-01

    In this article, we describe our chemical approach, developed over the course of a decade, towards the bottom-up synthesis of structurally well-defined graphene nanoribbons (GNRs). GNR synthesis can be achieved through two different methods, one being a solution-phase process based on conventional organic chemistry and the other invoking surface-assisted fabrication, employing modern physics methodologies. In both methods, rationally designed monomers are polymerized to form non-planar polyphenylene precursors, which are "graphitized" and "planarized" by solution-mediated or surface-assisted cyclodehydrogenation. Through these methods, a variety of GNRs have been synthesized with different widths, lengths, edge structures, and degrees of heteroatom doping, featuring varying (opto)electronic properties. The ability to chemically tailor GNRs with tuned properties in a well-defined manner will contribute to the elucidation of the fundamental physics of GNRs, as well as pave the way for the development of GNR-based nanoelectronics and optoelectronics. PMID:25414146

  9. Gardimycin, a New Antibiotic Inhibiting Peptidoglycan Synthesis

    PubMed Central

    Somma, Sergio; Merati, Wilma; Parenti, Francesco

    1977-01-01

    Gardimycin, a new antibiotic, at 100 μg/ml, specifically inhibited cell wall synthesis and induced accumulation of uridine 5′-diphosphate-N-acetylmuramylpentapeptide in whole cells of Bacillus subtilis. The antibiotic was active in a particulate enzyme preparation from Bacillus stearothermophilus: 60 μg/ml caused 50%, and 200 μg/ml caused 100%, inhibition of peptidoglycan synthesis. Suppression of peptidoglycan synthesis was accompanied by parallel accumulation of the lipid intermediate. This mechanism of action is discussed in comparison with those of other antibiotics that are known to inhibit bacterial cell wall biosynthesis. PMID:404960

  10. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...

  11. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C language integrated production system (CLIPS) is a forward chaining rule based language to provide training and delivery for expert systems. Conceptually, rule based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism also can be employed for use with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.

  12. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  13. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN, on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  14. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  15. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

    The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both shared memory and distributed memory models have advantages and shortcomings. Shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message-passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message-passing or one-sided communication, offer performance and scalability but they are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be specified by the programmer and hence managed. GA is related to the global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. However, the Global Array programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving

  16. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. In as much as it seems clear that the application of such methods in nanotechnology will require powerful, highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  17. Parallel MRI at microtesla fields.

    PubMed

    Zotev, Vadim S; Volegov, Petr L; Matlashov, Andrei N; Espy, Michelle A; Mosher, John C; Kraus, Robert H

    2008-06-01

    Parallel imaging techniques have been widely used in high-field magnetic resonance imaging (MRI). Multiple receiver coils have been shown to improve image quality and allow accelerated image acquisition. Magnetic resonance imaging at ultra-low fields (ULF MRI) is a new imaging approach that uses SQUID (superconducting quantum interference device) sensors to measure the spatially encoded precession of pre-polarized nuclear spin populations at microtesla-range measurement fields. In this work, parallel imaging at microtesla fields is systematically studied for the first time. A seven-channel SQUID system, designed for both ULF MRI and magnetoencephalography (MEG), is used to acquire 3D images of a human hand, as well as 2D images of a large water phantom. The imaging is performed at 46 mu T measurement field with pre-polarization at 40 mT. It is shown how the use of seven channels increases imaging field of view and improves signal-to-noise ratio for the hand images. A simple procedure for approximate correction of concomitant gradient artifacts is described. Noise propagation is analyzed experimentally, and the main source of correlated noise is identified. Accelerated imaging based on one-dimensional undersampling and 1D SENSE (sensitivity encoding) image reconstruction is studied in the case of the 2D phantom. Actual threefold imaging acceleration in comparison to single-average fully encoded Fourier imaging is demonstrated. These results show that parallel imaging methods are efficient in ULF MRI, and that imaging performance of SQUID-based instruments improves substantially as the number of channels is increased.
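
    As a concrete illustration of the SENSE unfolding step mentioned above, the toy below reconstructs a twofold-undersampled 1D object from synthetic multi-coil data by solving a small least-squares problem per aliased pixel. The coil count, sensitivity maps, and phantom are all made up; a real reconstruction additionally handles noise correlation and regularization, and the paper itself demonstrates threefold acceleration.

      # Toy 1D SENSE unfolding for acceleration factor R = 2 (synthetic data).
      # Each coil measures an aliased image in which pixel i is the sensitivity-
      # weighted sum of true pixels i and i + N/2; least squares per pixel pair
      # recovers the unaliased image.  Coil sensitivities and phantom are made up.
      import numpy as np

      rng = np.random.default_rng(0)
      N, n_coils, R = 64, 7, 2

      x = np.zeros(N)
      x[20:40] = 1.0                                          # simple box "phantom"
      sens = np.abs(rng.normal(1.0, 0.3, size=(n_coils, N)))  # synthetic coil maps

      # Fully encoded coil images, then fold (alias) by summing the two halves.
      coil_images = sens * x                                        # (n_coils, N)
      aliased = coil_images[:, :N // R] + coil_images[:, N // R:]   # (n_coils, N/2)

      # Unfold: for every aliased pixel solve a small least-squares problem.
      recon = np.zeros(N)
      for i in range(N // R):
          A = sens[:, [i, i + N // R]]                 # (n_coils, R)
          y = aliased[:, i]                            # (n_coils,)
          rho, *_ = np.linalg.lstsq(A, y, rcond=None)
          recon[i], recon[i + N // R] = rho

      print("max reconstruction error:", np.abs(recon - x).max())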

  18. Parallel MRI at microtesla fields

    NASA Astrophysics Data System (ADS)

    Zotev, Vadim S.; Volegov, Petr L.; Matlashov, Andrei N.; Espy, Michelle A.; Mosher, John C.; Kraus, Robert H.

    2008-06-01

    Parallel imaging techniques have been widely used in high-field magnetic resonance imaging (MRI). Multiple receiver coils have been shown to improve image quality and allow accelerated image acquisition. Magnetic resonance imaging at ultra-low fields (ULF MRI) is a new imaging approach that uses SQUID (superconducting quantum interference device) sensors to measure the spatially encoded precession of pre-polarized nuclear spin populations at microtesla-range measurement fields. In this work, parallel imaging at microtesla fields is systematically studied for the first time. A seven-channel SQUID system, designed for both ULF MRI and magnetoencephalography (MEG), is used to acquire 3D images of a human hand, as well as 2D images of a large water phantom. The imaging is performed at 46 μT measurement field with pre-polarization at 40 mT. It is shown how the use of seven channels increases imaging field of view and improves signal-to-noise ratio for the hand images. A simple procedure for approximate correction of concomitant gradient artifacts is described. Noise propagation is analyzed experimentally, and the main source of correlated noise is identified. Accelerated imaging based on one-dimensional undersampling and 1D SENSE (sensitivity encoding) image reconstruction is studied in the case of the 2D phantom. Actual threefold imaging acceleration in comparison to single-average fully encoded Fourier imaging is demonstrated. These results show that parallel imaging methods are efficient in ULF MRI, and that imaging performance of SQUID-based instruments improves substantially as the number of channels is increased.

  19. The PARTY parallel runtime system

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    In the present automated system, which organizes the data and computational operations entailed by parallel problems in ways that optimize multiprocessor performance, general heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. An optimized static workload partitioning is computed for problems with repetitive computation patterns, such as the iterative methods employed in scientific computation.

  20. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm combined with a variable time step and an explicit method of integration provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  1. Parallel Assembly of LIGA Components

    SciTech Connect

    Christenson, T.R.; Feddema, J.T.

    1999-03-04

    In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

  2. PKDGRAV3: Parallel gravity code

    NASA Astrophysics Data System (ADS)

    Potter, Douglas; Stadel, Joachim

    2016-09-01

    Pkdgrav3 is an 𝒪(N) gravity calculation method; it uses a binary tree algorithm with fifth order fast multipole expansion of the gravitational potential, using cell-cell interactions. Periodic boundary conditions require very little data movement and allow a high degree of parallelism; the code includes GPU acceleration for all force calculations, leading to a significant speed-up with respect to previous versions (ascl:1305.005). Pkdgrav3 also has a sophisticated time-stepping criterion based on an estimation of the local dynamical time.

  3. True Shear Parallel Plate Viscometer

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin; Kaukler, William

    2010-01-01

    This viscometer (which can also be used as a rheometer) is designed for use with liquids over a large temperature range. The device consists of horizontally disposed, similarly sized, parallel plates with a precisely known gap. The lower plate is driven laterally with a motor to apply shear to the liquid in the gap. The upper plate is freely suspended from a double-arm pendulum with a sufficiently long radius to reduce height variations during the swing to negligible levels. A sensitive load cell measures the shear force applied by the liquid to the upper plate. Viscosity is measured by taking the ratio of shear stress to shear rate.
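
    The reduction from the measured quantities to viscosity is the usual parallel-plate relation, viscosity = shear stress / shear rate = (F/A)/(v/h). The sketch below simply evaluates that relation; the plate area, speed, gap, and force are arbitrary illustrative numbers, not values from the instrument.

      # Reduction from the measured quantities to viscosity for a parallel-plate
      # shear cell: shear stress = F / A, shear rate = v / h, viscosity = stress / rate.
      # The numerical values below are arbitrary, for illustration only.
      def viscosity(force_N, plate_area_m2, plate_speed_m_s, gap_m):
          shear_stress = force_N / plate_area_m2          # Pa
          shear_rate = plate_speed_m_s / gap_m            # 1/s
          return shear_stress / shear_rate                # Pa*s

      if __name__ == "__main__":
          # e.g. 0.02 N on a 10 cm^2 plate, lower plate moving 1 mm/s across a 0.5 mm gap
          print(viscosity(0.02, 10e-4, 1e-3, 0.5e-3), "Pa*s")   # -> 10.0 Pa*s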

  4. Scalable Parallel Algebraic Multigrid Solvers

    SciTech Connect

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  5. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  6. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  7. Exploring Parallel Concordancing in English and Chinese.

    ERIC Educational Resources Information Center

    Lixun, Wang

    2001-01-01

    Investigates the value of computer technology as a medium for the delivery of parallel texts in English and Chinese for language learning. An English-Chinese parallel corpus was created for use in parallel concordancing--a technique that has been developed to respond to the desire to study language in its natural contexts of use. (Author/VWL)

  8. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  9. Reservoir Thermal Recovery Simulation on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Li, Baoyan; Ma, Yuanle

    The rapid development of parallel computers has provided a hardware background for massive, refined reservoir simulation. However, the lack of parallel reservoir simulation software has blocked the application of parallel computers to reservoir simulation. Although a variety of parallel methods have been studied and applied to black oil, compositional, and chemical model numerical simulations, there has been limited parallel software available for reservoir simulation. In particular, the parallelization of reservoir thermal recovery simulation has not been fully carried out, because of the complexity of its models and algorithms. The authors make use of the message passing interface (MPI) standard communication library, the domain decomposition method, the block Jacobi iteration algorithm, and the dynamic memory allocation technique to parallelize their serial thermal recovery simulation software NUMSIP, which is being used in the petroleum industry in China. The parallel software PNUMSIP was tested on both IBM SP2 and Dawn 1000A distributed-memory parallel computers. The experimental results show that the parallelization of I/O has great effects on the efficiency of the parallel software PNUMSIP; the data communication bandwidth is also an important factor that influences software efficiency. Keywords: domain decomposition method, block Jacobi iteration algorithm, reservoir thermal recovery simulation, distributed-memory parallel computer
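
    The combination of domain decomposition with block Jacobi iteration has a simple serial mock-up, shown below on a toy 1D Laplace problem: each block is relaxed using only its neighbours' values from the previous outer sweep, which is the data-exchange pattern an MPI implementation distributes across processors (one block per rank, halo values exchanged between sweeps). No reservoir physics or dynamic memory management is modelled, and all sizes are arbitrary.

      # Serial mock-up of the block Jacobi / domain-decomposition pattern on a toy
      # 1D Laplace problem.  Each block is relaxed using only its neighbours'
      # values from the PREVIOUS outer sweep, the same data-exchange pattern an
      # MPI implementation distributes across processors.  No reservoir physics.
      import numpy as np

      n, n_blocks, outer_iters, inner_iters = 64, 4, 200, 20
      u = np.zeros(n)
      u[0], u[-1] = 1.0, 0.0            # fixed boundary values
      block = n // n_blocks

      def residual(v):                  # max interior residual of u'' = 0
          return np.abs(v[:-2] - 2 * v[1:-1] + v[2:]).max()

      print("initial residual:", residual(u))
      for _ in range(outer_iters):
          u_old = u.copy()              # "halo" data frozen for this outer sweep
          for b in range(n_blocks):
              lo, hi = b * block, (b + 1) * block
              for _ in range(inner_iters):
                  for i in range(max(lo, 1), min(hi, n - 1)):
                      left = u[i - 1] if i - 1 >= lo else u_old[i - 1]
                      right = u[i + 1] if i + 1 < hi else u_old[i + 1]
                      u[i] = 0.5 * (left + right)
      print("final residual:", residual(u))   # interface coupling converges over the outer sweeps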

  10. Depth-optimized reversible circuit synthesis

    NASA Astrophysics Data System (ADS)

    Arabzadeh, Mona; Saheb Zamani, Morteza; Sedighi, Mehdi; Saeedi, Mehdi

    2013-04-01

    In this paper, simultaneous reduction of circuit depth and synthesis cost of reversible circuits in quantum technologies with limited interaction is addressed. We developed a cycle-based synthesis algorithm which uses negative controls and limited distance between gate lines. To improve circuit depth, a new parallel structure is introduced in which before synthesis a set of disjoint cycles are extracted from the input specification and distributed into some subsets. The cycles of each subset are synthesized independently on different sets of ancillae. Accordingly, each disjoint set can be synthesized by different synthesis methods. Our analysis shows that the best worst-case synthesis cost of reversible circuits in the linear nearest neighbor architecture is improved by the proposed approach. Our experimental results reveal the effectiveness of the proposed approach to reduce cost and circuit depth for several benchmarks.

  11. A massively asynchronous, parallel brain.

    PubMed

    Zeki, Semir

    2015-05-19

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871

  12. Xyce parallel electronic simulator design.

    SciTech Connect

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to diversity of background, it is to be expected on long term projects that there will be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  13. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the number of messages, and the other uses network 'bulldozing'. Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
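
    A global weight (or credit) counting scheme for deciding when no traversal messages remain in flight can be sketched generically as below. This is the standard textbook device in which every message carries a share of a fixed total weight and processing a message returns its share; it is offered only as an illustration of the idea, not as the authors' exact algorithm.

      # Generic credit/weight-counting sketch for detecting that no traversal
      # messages are still in flight: the coordinator starts with a fixed amount
      # of weight, every message carries a share of its sender's weight, and
      # processing a message either passes the weight on or returns it.
      # Traversal is globally complete when all the weight has come back.
      from collections import deque

      TOTAL_WEIGHT = 1 << 30      # integer weight, so shares never vanish by rounding

      def traverse(root, neighbours):
          visited = {root}
          queue = deque([(root, TOTAL_WEIGHT)])   # stands in for asynchronous messages
          returned = 0
          while queue:
              node, weight = queue.popleft()
              children = [n for n in neighbours.get(node, ()) if n not in visited]
              visited.update(children)
              if children:
                  share = weight // len(children)
                  for k, child in enumerate(children):
                      # the last child absorbs the remainder so shares sum exactly
                      w = weight - share * (len(children) - 1) if k == len(children) - 1 else share
                      queue.append((child, w))
              else:
                  returned += weight              # leaf: return the whole share
          assert returned == TOTAL_WEIGHT         # all weight back <=> nothing pending
          return visited

      if __name__ == "__main__":
          graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: []}
          print(sorted(traverse(0, graph)))       # [0, 1, 2, 3, 4]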

  14. Implementation and performance of parallelized elegant.

    SciTech Connect

    Wang, Y.; Borland, M.; Accelerator Systems Division

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  15. Polymer solution phase separation: Microgravity simulation

    NASA Technical Reports Server (NTRS)

    Cerny, Lawrence C.; Sutter, James K.

    1989-01-01

    In many multicomponent systems, a transition from a single phase of uniform composition to a multiphase state with separated regions of different composition can be induced by changes in temperature and shear. The density difference between the phases, together with thermal and/or shear gradients within the system, results in buoyancy-driven convection. These differences affect the kinetics of the phase separation if the system has a sufficiently low viscosity. This investigation presents further preliminary developments of a theoretical model to describe the effects of buoyancy-driven convection on phase separation kinetics. Polymer solutions were employed as model systems because of the ease with which density differences can be systematically varied and because of the importance of phase separation in the processing and properties of polymeric materials. The results indicate that the kinetics of the phase separation can be followed viscometrically, using laser light scattering as a principal means of following the process quantitatively. Isopycnic polymer solutions were used to determine the viscosity and density difference limits for polymer phase separation.

  16. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed using a general-purpose platform for portable parallel programming called CARM, and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are also provided, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  17. Hybrid Optimization Parallel Search PACKage

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.

  18. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things. PMID:27534347

  19. Parallel network simulations with NEURON.

    PubMed

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
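
    The scheduling idea in this record -- integrate each rank's subnet for an interval equal to the minimum interprocessor connection delay, then exchange spikes -- can be pictured with the MPI skeleton below. It is an illustrative outline, not NEURON source code; integrate_subnet and deliver_spikes are placeholder names for a real simulator's internals.

```c
/* Illustrative skeleton of the "integrate for the minimum delay, then
 * exchange spikes" loop described above (not NEURON source code).
 * integrate_subnet() and deliver_spikes() are placeholders for a real
 * simulator's internals. Assumes at most 64 ranks for brevity.
 * Compile: mpicc netloop.c ; run: mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

#define MAX_SPIKES 1024
#define MAX_RANKS  64

/* Placeholder: advance this rank's cells by dt and append an encoded
 * record for each spike to buf; return the number of values written. */
static int integrate_subnet(double t, double dt, double *buf) {
    (void)t; (void)dt; (void)buf;
    return 0;
}

/* Placeholder: queue received spikes onto local synapses. */
static void deliver_spikes(const double *buf, int n) { (void)buf; (void)n; }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const double tstop = 1000.0;   /* ms */
    const double min_delay = 1.0;  /* minimum interprocessor synaptic delay */
    double local[MAX_SPIKES], global[MAX_SPIKES];
    int counts[MAX_RANKS], displs[MAX_RANKS];

    for (double t = 0.0; t < tstop; t += min_delay) {
        /* 1. Integrate the local subnet. Spikes generated in this window
         *    cannot reach another rank's cells before t + min_delay, so
         *    no finer-grained communication is required. */
        int nlocal = integrate_subnet(t, min_delay, local);

        /* 2. Exchange spike counts, then the spikes themselves. */
        MPI_Allgather(&nlocal, 1, MPI_INT, counts, 1, MPI_INT, MPI_COMM_WORLD);
        int total = 0;
        for (int r = 0; r < nranks; ++r) { displs[r] = total; total += counts[r]; }
        MPI_Allgatherv(local, nlocal, MPI_DOUBLE,
                       global, counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

        /* 3. Enqueue received spikes for delivery on local synapses. */
        deliver_spikes(global, total);
    }

    if (rank == 0) printf("simulated %g ms on %d ranks\n", tstop, nranks);
    MPI_Finalize();
    return 0;
}
```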

  1. Nanocapillary Adhesion between Parallel Plates.

    PubMed

    Cheng, Shengfeng; Robbins, Mark O

    2016-08-01

    Molecular dynamics simulations are used to study capillary adhesion from a nanometer scale liquid bridge between two parallel flat solid surfaces. The capillary force, Fcap, and the meniscus shape of the bridge are computed as the separation between the solid surfaces, h, is varied. Macroscopic theory predicts the meniscus shape and the contribution of liquid/vapor interfacial tension to Fcap quite accurately for separations as small as two or three molecular diameters (1-2 nm). However, the total capillary force differs in sign and magnitude from macroscopic theory for h ≲ 5 nm (8-10 diameters) because of molecular layering that is not included in macroscopic theory. For these small separations, the pressure tensor in the fluid becomes anisotropic. The components in the plane of the surface vary smoothly and are consistent with theory based on the macroscopic surface tension. Capillary adhesion is affected by only the perpendicular component, which has strong oscillations as the molecular layering changes. PMID:27413872

  2. Parallel spinors on flat manifolds

    NASA Astrophysics Data System (ADS)

    Sadowski, Michał

    2006-05-01

    Let p(M) be the dimension of the vector space of parallel spinors on a closed spin manifold M. We prove that every finite group G is the holonomy group of a closed flat spin manifold M(G) such that p(M(G))>0. If the holonomy group Hol(M) of M is cyclic, then we give an explicit formula for p(M) another than that given in [R.J. Miatello, R.A. Podesta, The spectrum of twisted Dirac operators on compact flat manifolds, Trans. Am. Math. Soc., in press]. We answer the question when p(M)>0 if Hol(M) is a cyclic group of prime order or dim⁡M≤4.

  3. Information hiding in parallel programs

    SciTech Connect

    Foster, I.

    1992-01-30

    A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.
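
    The separation described above can be illustrated with a toy reduction "template" whose combining operation and work-to-worker mapping are supplied separately, at composition time. This is only a loose C/OpenMP analogy chosen for brevity, not the compositional notation used in the paper; every function name is invented.

```c
/* Toy illustration of the "hide the mapping decision" idea: a generic
 * reduction template takes the combining operation as one parameter
 * and the work-to-worker mapping as another, so neither decision is
 * baked into the reusable structure. (A loose C/OpenMP analogy only;
 * not the compositional notation used in the paper.)
 * Compile: cc -fopenmp cells.c */
#include <stdio.h>
#include <omp.h>

typedef double (*combine_fn)(double, double);
typedef void   (*map_fn)(int nitems, int nworkers, int worker,
                         int *begin, int *end);

/* The reusable "template": it knows how to reduce, but not how work is
 * mapped onto workers or what the combining operation is. */
double reduce_template(const double *data, int n, combine_fn combine,
                       double identity, map_fn map) {
    double result = identity;
    #pragma omp parallel
    {
        int begin, end;
        map(n, omp_get_num_threads(), omp_get_thread_num(), &begin, &end);
        double local = identity;
        for (int i = begin; i < end; ++i) local = combine(local, data[i]);
        #pragma omp critical
        result = combine(result, local);
    }
    return result;
}

/* The decisions, supplied separately at composition time. */
static double add(double a, double b)  { return a + b; }
static double max2(double a, double b) { return a > b ? a : b; }
static void block_map(int n, int p, int w, int *b, int *e) {
    int chunk = (n + p - 1) / p;
    *b = w * chunk;
    *e = (*b + chunk < n) ? *b + chunk : n;
}

int main(void) {
    double data[1000];
    for (int i = 0; i < 1000; ++i) data[i] = i * 0.001;
    printf("sum = %g, max = %g\n",
           reduce_template(data, 1000, add,  0.0,     block_map),
           reduce_template(data, 1000, max2, data[0], block_map));
    return 0;
}
```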

  4. Self-testing in parallel

    NASA Astrophysics Data System (ADS)

    McKague, Matthew

    2016-04-01

    Self-testing allows us to determine, through classical interaction only, whether some players in a non-local game share particular quantum states. Most work on self-testing has concentrated on developing tests for small states like one pair of maximally entangled qubits, or on tests where there is a separate player for each qubit, as in a graph state. Here we consider the case of testing many maximally entangled pairs of qubits shared between two players. Previously such a test was shown where testing is sequential, i.e., one pair is tested at a time. Here we consider the parallel case where all pairs are tested simultaneously, giving considerably more power to dishonest players. We derive sufficient conditions for a self-test for many maximally entangled pairs of qubits shared between two players and also two constructions for self-tests where all pairs are tested simultaneously.

  5. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After doing an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  7. Parallel Performance Characterization of Columbia

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2004-01-01

    Using a collection of benchmark problems of increasing levels of realism and computational effort, we will characterize the strengths and limitations of the 10,240 processor Columbia system to deliver supercomputing value to application scientists. Scientists need to be able to determine if and how they can utilize Columbia to carry out extreme workloads, either in terms of ultra-large applications that cannot be run otherwise (capability), or in terms of very large ensembles of medium-scale applications to populate response matrices (capacity). We select existing application benchmarks that scale from a small number of processors to the entire machine, and that highlight different issues in running supercomputing-class applications, such as the various types of memory access, file I/O, inter- and intra-node communications and parallelization paradigms. http://www.nas.nasa.gov/Software/NPB/

  8. A parallel dipole line system

    NASA Astrophysics Data System (ADS)

    Gunawan, Oki; Virgus, Yudistira; Tai, Kong Fai

    2015-02-01

    We present a study of a parallel dipole line system, which can be realized using a pair of cylindrical diametric magnets and yields several interesting properties and applications. The system serves as a trap for a cylindrical diamagnetic object, produces a fascinating one-dimensional camelback potential profile at its center plane, yields a technique for measuring the magnetic susceptibility of the trapped object, and serves as an ideal system to implement highly sensitive Hall measurement utilizing a rotating magnetic field and lock-in detection. The latter application enables extraction of low carrier mobility in several materials of high interest, such as the world-record-quality, earth-abundant kesterite solar cell, and helps elucidate its fundamental performance limitation.

  9. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  10. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  11. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated

  12. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  13. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2013-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Preliminary results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  14. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  15. Development of parallel wire regenerator for cryocoolers

    NASA Astrophysics Data System (ADS)

    Nam, Kwanwoo; Jeong, Sangkwon

    2006-04-01

    This paper describes the development of a novel regenerator geometry for cryocoolers. The parallel wire type is a wire bundle stacked parallel to the flow in the housing, similar to a conventional parallel plate or tube regenerator. A simple and unique fabrication procedure is developed and fully described in this paper. Hydrodynamic and thermal experiments are performed to demonstrate the feasibility of the parallel wire regenerator. First, the pressure drop characteristic of the parallel wire regenerator is compared to that of the screen mesh regenerator. The experimental results show that the steady flow friction factor of the parallel wire type is three to five times smaller than that of the screen mesh type. Second, the thermal ineffectiveness is determined by measuring the instantaneous pressure, the flow rate and the gas temperature at the warm and cold ends of the regenerator. The measured ineffectiveness of the parallel wire regenerator is larger than that of the screen regenerator due to the excessive axial conduction loss. To alleviate the intrinsic axial conduction loss of the parallel wire regenerator, segmentation is introduced, and the experimental results reveal the favorable effect of the segmentation. An entropy generation calculation is adopted to compare the total losses between the screen regenerator and the parallel wire regenerator for various operating ranges. Simulation results show that the parallel wire regenerator can be an attractive candidate to improve cryocooler performance, especially for the case of smaller NTU and lower cold-end temperature.

  16. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
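
    The kind of transformation described above -- find an outer parallel loop, then classify its variables as private, reduction, or shared before inserting a directive -- looks roughly as follows. The fragment is an illustrative C analogue (CAPO itself targets Fortran source), and the loop and variable names are invented.

```c
/* Illustrative C analogue of directive insertion (CAPO itself operates
 * on Fortran source; the loop and variable names here are invented). */

/* Serial version: "tmp" is reused scratch storage and "sum" is an
 * accumulator -- exactly the classifications a directive-insertion
 * tool must infer from data-dependence analysis. */
double combine_serial(const double *a, const double *b, double *c,
                      int n, double offset) {
    double sum = 0.0, tmp;
    for (int i = 0; i < n; ++i) {
        tmp  = a[i] * b[i];
        c[i] = tmp + offset;
        sum += tmp;
    }
    return sum;
}

/* Parallel version with the kind of directive such a tool inserts:
 * the loop index and "tmp" become private, "sum" becomes a reduction
 * variable, and the arrays remain shared. */
double combine_parallel(const double *a, const double *b, double *c,
                        int n, double offset) {
    double sum = 0.0, tmp;
    #pragma omp parallel for private(tmp) reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        tmp  = a[i] * b[i];
        c[i] = tmp + offset;
        sum += tmp;
    }
    return sum;
}
```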

  17. The ParaScope parallel programming environment

    NASA Technical Reports Server (NTRS)

    Cooper, Keith D.; Hall, Mary W.; Hood, Robert T.; Kennedy, Ken; Mckinley, Kathryn S.; Mellor-Crummey, John M.; Torczon, Linda; Warren, Scott K.

    1993-01-01

    The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.

  18. Linearly exact parallel closures for slab geometry

    SciTech Connect

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-15

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  19. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures where previously only shared memory parallel support for this technique has been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large scale applications. As a result, user defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  20. Parallel reactor systems for bioprocess development.

    PubMed

    Weuster-Botz, Dirk

    2005-01-01

    Controlled parallel bioreactor systems allow fed-batch operation at early stages of process development. The characteristics of shaken bioreactors operated in parallel (shake flask, microtiter plate), sparged bioreactors (small-scale bubble column) and stirred bioreactors (stirred-tank, stirred column) are briefly summarized. Parallel fed-batch operation is achieved with an intermittent feeding and pH-control system for up to 16 bioreactors operated in parallel on a scale of 100 ml. Examples of the scale-up and scale-down of pH-controlled microbial fed-batch processes demonstrate that controlled parallel reactor systems can result in more effective bioprocess development. Future developments are also outlined, including units of 48 parallel stirred-tank reactors with individual pH- and pO2-controls and automation, as well as a liquid handling system, operated on the milliliter scale.

  1. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  2. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  3. A generic fine-grained parallel C

    NASA Technical Reports Server (NTRS)

    Hamet, L.; Dorband, John E.

    1988-01-01

    With the present availability of parallel processors of vastly different architectures, there is a need for a common language interface to multiple types of machines. The parallel C compiler, currently under development, is intended to be such a language. This language is based on the belief that an algorithm designed around fine-grained parallelism can be mapped relatively easily to different parallel architectures, since a large percentage of the parallelism has been identified. The compiler generates a FORTH-like machine-independent intermediate code. A machine-dependent translator will reside on each machine to generate the appropriate executable code, taking advantage of the particular architectures. The goal of this project is to allow a user to run the same program on such machines as the Massively Parallel Processor, the CRAY, the Connection Machine, and the CYBER 205 as well as serial machines such as VAXes, Macintoshes and Sun workstations.

  4. Total Synthesis and Stereochemical Revision of the Anti-Tuberculosis Peptaibol Trichoderin A.

    PubMed

    Kavianinia, Iman; Kunalingam, Lavanya; Harris, Paul W R; Cook, Gregory M; Brimble, Margaret A

    2016-08-01

    The first total synthesis of the postulated structure of the aminolipopeptide trichoderin A and its epimer are reported. A late-stage solution phase C-terminal coupling was employed to introduce the C-terminal aminoalcohol moiety. This methodology provides a foundation to prepare analogues of trichoderin A to establish a structure-activity relationship. NMR spectroscopic analysis established that the C-6 position of the 2-amino-6-hydroxy-4-methyl-8-oxodecanoic acid (AHMOD) residue in trichoderin A possesses an (R)-configuration as opposed to the originally proposed (S)-configuration. PMID:27467118

  5. Running Geant on T. Node parallel computer

    SciTech Connect

    Jejcic, A.; Maillard, J.; Silva, J. ); Mignot, B. )

    1990-08-01

    An Inmos transputer-based computer has been utilized to overcome the difficulties due to the limitations on the processing abilities of event parallelism and multiprocessor farms (i.e., the so-called bus crisis) and the concern regarding the growing sizes of databases typical in High Energy Physics. This study was done on the T.Node parallel computer manufactured by TELMAT. Detailed figures are reported concerning the event parallelization. (AIP)

  6. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

  7. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction of the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer and the impact of the approach presented for applications in other disciplines than aerospace industry is assessed.

  8. Parallel Genetic Algorithm for Alpha Spectra Fitting

    NASA Astrophysics Data System (ADS)

    García-Orellana, Carlos J.; Rubio-Montero, Pilar; González-Velasco, Horacio

    2005-01-01

    We present a performance study of alpha-particle spectra fitting using a parallel Genetic Algorithm (GA). The method uses a two-step approach. In the first step we run the parallel GA to find an initial solution for the second step, in which we use the Levenberg-Marquardt (LM) method for a precise final fit. GA is a highly resource-demanding method, so we use a Beowulf cluster for parallel simulation. The relationship between simulation time (and parallel efficiency) and the number of processors is studied using several alpha spectra, with the aim of obtaining a method to estimate the optimal number of processors that must be used in a simulation.
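
    A compact sketch of the two-step strategy is given below: a genetic algorithm whose fitness evaluations run in parallel supplies a rough peak fit, and a local refinement pass then polishes it. This is an illustrative C/OpenMP toy with a synthetic single-peak spectrum; a simple coordinate search stands in for the Levenberg-Marquardt step, and threads replace the MPI/Beowulf setup for brevity.

```c
/* Compact illustration of the two-step strategy in this record: a
 * genetic algorithm with parallel fitness evaluation produces a rough
 * fit of one synthetic peak, and a local coordinate search (standing in
 * for Levenberg-Marquardt) then polishes it. Illustrative toy only.
 * Compile: cc -fopenmp ga_fit.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NPOP 64
#define NGEN 200
#define NPAR 3                      /* amplitude, centroid, width */
#define NCH  128                    /* spectrum channels */

static double spectrum[NCH];

/* Sum-of-squares misfit between a Gaussian peak and the stored spectrum. */
static double misfit(const double *p) {
    double s = 0.0;
    for (int ch = 0; ch < NCH; ++ch) {
        double model = p[0] * exp(-0.5 * pow((ch - p[1]) / p[2], 2));
        s += pow(model - spectrum[ch], 2);
    }
    return s;
}

static double urand(void) { return rand() / (RAND_MAX + 1.0); }

int main(void) {
    /* Synthetic "measured" peak: amplitude 100, centroid 60, width 4. */
    for (int ch = 0; ch < NCH; ++ch)
        spectrum[ch] = 100.0 * exp(-0.5 * pow((ch - 60.0) / 4.0, 2));

    /* Random initial population. */
    double pop[NPOP][NPAR], fit[NPOP];
    for (int i = 0; i < NPOP; ++i) {
        pop[i][0] = 200.0 * urand();            /* amplitude */
        pop[i][1] = NCH * urand();              /* centroid  */
        pop[i][2] = 1.0 + 9.0 * urand();        /* width     */
    }

    for (int g = 0; g < NGEN; ++g) {
        /* Step 1: fitness evaluation is the expensive part -> parallel. */
        #pragma omp parallel for
        for (int i = 0; i < NPOP; ++i)
            fit[i] = misfit(pop[i]);

        /* Tournament selection, blend crossover, multiplicative mutation. */
        double next[NPOP][NPAR];
        for (int i = 0; i < NPOP; ++i) {
            int a = rand() % NPOP, b = rand() % NPOP;
            int c = rand() % NPOP, d = rand() % NPOP;
            const double *p1 = fit[a] < fit[b] ? pop[a] : pop[b];
            const double *p2 = fit[c] < fit[d] ? pop[c] : pop[d];
            for (int k = 0; k < NPAR; ++k) {
                double w = urand();
                next[i][k] = w * p1[k] + (1.0 - w) * p2[k];
                if (urand() < 0.05)
                    next[i][k] *= 0.8 + 0.4 * urand();
            }
        }
        for (int i = 0; i < NPOP; ++i)
            for (int k = 0; k < NPAR; ++k)
                pop[i][k] = next[i][k];
    }

    /* Best GA individual becomes the starting point for refinement. */
    int best = 0;
    for (int i = 1; i < NPOP; ++i)
        if (misfit(pop[i]) < misfit(pop[best])) best = i;

    /* Step 2: local refinement (in place of Levenberg-Marquardt). */
    double *x = pop[best], fx = misfit(x), step = 1.0;
    while (step > 1e-6) {
        int improved = 0;
        for (int k = 0; k < NPAR; ++k)
            for (int s = -1; s <= 1; s += 2) {
                double old = x[k];
                x[k] = old + s * step;
                double f = misfit(x);
                if (f < fx) { fx = f; improved = 1; }
                else        { x[k] = old; }
            }
        if (!improved) step *= 0.5;
    }
    printf("fit: A=%.2f centroid=%.2f width=%.2f (misfit %.3g)\n",
           x[0], x[1], x[2], fx);
    return 0;
}
```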

  9. Parallel auto-correlative statistics with VTK.

    SciTech Connect

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.

  10. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  11. Data-parallel algorithms for image computing

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    1990-11-01

    Data-parallel algorithms for image computing on the Connection Machine are described. After a brief review of some basic programming concepts in *Lisp, a parallel extension of Common Lisp, data-parallel programming paradigms based on a local (diffusion-like) model of computation, the scan model of computation, a general interprocessor communications model, and a region-based model are introduced. Algorithms for connected component labeling, distance transformation, Voronoi diagrams, finding minimum cost paths, local means, shape-from-shading, hidden surface calculations, affine transformation, oblique parallel projection, and spatial operations over regions are presented. A new algorithm for interpolating irregularly spaced data via Voronoi diagrams is also described.
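
    One of the listed operations, local means, shows the data-parallel neighborhood pattern in its simplest form: every output pixel depends only on a small window of the input, so all pixels can be computed concurrently. The version below is an illustrative C/OpenMP rendering, not the original *Lisp/Connection Machine code.

```c
/* Local-mean filtering as a data-parallel neighborhood operation:
 * every output pixel depends only on a 3x3 window of the input, so all
 * pixels can be computed concurrently. (Illustrative C/OpenMP version
 * of one operation from the list above; the original work used *Lisp
 * on the Connection Machine.) */
void local_mean_3x3(const float *in, float *out, int w, int h) {
    #pragma omp parallel for collapse(2)
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int   n   = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                        sum += in[yy * w + xx];
                        ++n;
                    }
                }
            out[y * w + x] = sum / n;
        }
}
```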

  12. Analysis of the numerical effects of parallelism on a parallel genetic algorithm

    SciTech Connect

    Hart, W.E.; Belew, R.K.; Kohn, S.; Baden, S.

    1995-09-18

    This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments show that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Furthermore, we demonstrate that this improvement in performance is partly due to the fact that the numerical efficiency of the asynchronous genetic algorithm is better than that of the synchronous genetic algorithm. Our analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs, and we evaluate the claims made by several researchers that parallel GAs can have superlinear speedup.

  13. Chemical Synthesis of Novel Plasmonic Nanoparticles

    NASA Astrophysics Data System (ADS)

    Lu, Xianmao; Rycenga, Matthew; Skrabalak, Sara E.; Wiley, Benjamin; Xia, Younan

    2009-05-01

    Under the irradiation of light, the free electrons in a plasmonic nanoparticle are driven by the alternating electric field to collectively oscillate at a resonant frequency in a phenomenon known as surface plasmon resonance. Both calculations and measurements have shown that the frequency and amplitude of the resonance are sensitive to particle shape, which determines how the free electrons are polarized and distributed on the surface. As a result, controlling the shape of a plasmonic nanoparticle represents the most powerful means of tailoring and fine-tuning its optical resonance properties. In a solution-phase synthesis, the shape displayed by a nanoparticle is determined by the crystalline structure of the initial seed produced and the interaction of different seed facets with capping agents. Using polyol synthesis as a typical example, we illustrate how oxidative etching and kinetic control can be employed to manipulate the shapes and optical responses of plasmonic nanoparticles made of either Ag or Pd. We conclude by highlighting a few fundamental studies and applications enabled by plasmonic nanoparticles having well-defined and controllable shapes.

  14. The effect of N-methylprotoporphyrin IX on the synthesis of photosynthetic pigments in Cyanidium caldarium. Further evidence for the role of haem in the biosynthesis of plant bilins

    PubMed Central

    Brown, Stanley B.; Holroyd, J. Andrew; Vernon, David I.; Troxler, Robert F.; Smith, Kevin M.

    1982-01-01

    N-Methylprotoporphyrin IX strongly inhibits synthesis of phycocyanobilin, but not chlorophyll a, in the dark. In the light, both phycocyanin and chlorophyll a synthesis are inhibited in parallel. These results are consistent with the intermediacy of haem in algal bilin synthesis and suggest a control mechanism for chlorophyll a synthesis, previously unknown. PMID:6760860

  15. Grouping through local, parallel interactions

    NASA Astrophysics Data System (ADS)

    Proesmans, Marc; Van Gool, Luc J.; Oosterlinck, Andre J.

    1995-08-01

    This paper describes a new approach for computer-based visual grouping. A number of computational principles are defined related to results of neurophysiological and psychophysical experiments. The grouping principles have been subdivided into two groups. The 'first-order processes' perform local operations on 'basic' features such as luminance, color, and orientation. 'Second-order processes' consider bilocal interactions (stereo, optical flow, texture, symmetry). The computational scheme developed in this paper relies on the solution of a set of nonlinear differential equations. They are referred to as 'coupled diffusion maps'. Such systems obey the prescribed computational principles. Several maps, corresponding to different features, evolve in parallel, while all computations within and between the maps are localized in a small neighborhood. Moreover, interactions between maps are bidirectional and retinotopically organized, features also underlying processing by the human visual system. Within this framework, new techniques are proposed and developed for, e.g., the segmentation of oriented textures, stereo analysis, optical flow detection, etc. Experiments show that the underlying algorithms prove to be successful for first-order as well as second-order grouping processes and show the promising possibilities such a framework can offer for a large number of low-level vision tasks.
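
    The structure of such coupled diffusion maps -- several maps evolving in parallel by local diffusion plus a pointwise nonlinear coupling between maps -- can be sketched as an explicit update rule. The code below is a generic reaction-diffusion-style iteration written for illustration; it mirrors the local, parallel, bidirectional character described above but is not the authors' specific equation set.

```c
/* Generic sketch of two coupled "diffusion maps": each map diffuses
 * locally while being nudged by a nonlinear function of the other map
 * at the same pixel. The structure (local, parallel, bidirectional
 * coupling) mirrors the description above, but the specific equations
 * are invented for illustration. Compile: cc -fopenmp -c maps.c */
#include <math.h>

#define W 256
#define H 256

static float u[H][W], v[H][W], un[H][W], vn[H][W];

/* Five-point Laplacian with clamped (edge-replicated) boundaries. */
static float lap(float m[H][W], int y, int x) {
    int ym = y > 0 ? y - 1 : y, yp = y < H - 1 ? y + 1 : y;
    int xm = x > 0 ? x - 1 : x, xp = x < W - 1 ? x + 1 : x;
    return m[ym][x] + m[yp][x] + m[y][xm] + m[y][xp] - 4.0f * m[y][x];
}

/* One explicit update step; every pixel of every map is independent,
 * so the whole sweep is data parallel. */
void step(float dt, float Du, float Dv, float k) {
    #pragma omp parallel for collapse(2)
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float coupling = tanhf(v[y][x]) - u[y][x];   /* v "drives" u */
            un[y][x] = u[y][x] + dt * (Du * lap(u, y, x) + k * coupling);
            vn[y][x] = v[y][x] + dt * (Dv * lap(v, y, x) - k * coupling);
        }

    /* Double buffering kept explicit: copy the new state back. */
    #pragma omp parallel for collapse(2)
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            u[y][x] = un[y][x];
            v[y][x] = vn[y][x];
        }
}
```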

  16. Vectoring of parallel synthetic jets

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  17. Parallel processing in immune networks

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Bartolucci, Silvia; Galluzzi, Andrea; Guerra, Francesco; Moauro, Francesco

    2013-04-01

    In this work, we adopt a statistical-mechanics approach to investigate basic, systemic features exhibited by adaptive immune systems. The lymphocyte network made by B cells and T cells is modeled by a bipartite spin glass, where, following biological prescriptions, links connecting B cells and T cells are sparse. Interestingly, the dilution performed on links is shown to make the system able to orchestrate parallel strategies to fight several pathogens at the same time; this multitasking capability constitutes a remarkable, key property of immune systems as multiple antigens are always present within the host. We also define the stochastic process ruling the temporal evolution of lymphocyte activity and show its relaxation toward an equilibrium measure allowing statistical-mechanics investigations. Analytical results are compared with Monte Carlo simulations and signal-to-noise outcomes showing overall excellent agreement. Finally, within our model, a rationale for the experimentally well-evidenced correlation between lymphocytosis and autoimmunity is achieved; this sheds further light on the systemic features exhibited by immune networks.

  18. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  19. Parallels between wind and crowd loading of bridges.

    PubMed

    McRobie, Allan; Morgenthal, Guido; Abrams, Danny; Prendergast, John

    2013-06-28

    Parallels between the dynamic response of flexible bridges under the action of wind and under the forces induced by crowds allow each field to inform the other. Wind-induced behaviour has been traditionally classified into categories such as flutter, galloping, vortex-induced vibration and buffeting. However, computational advances such as the vortex particle method have led to a more general picture where effects may occur simultaneously and interact, such that the simple semantic demarcations break down. Similarly, the modelling of individual pedestrians has progressed the understanding of human-structure interaction, particularly for large-amplitude lateral oscillations under crowd loading. In this paper, guided by the interaction of flutter and vortex-induced vibration in wind engineering, a framework is presented, which allows various human-structure interaction effects to coexist and interact, thereby providing a possible synthesis of previously disparate experimental and theoretical results. PMID:23690640

  20. Solid Phase Synthesis of Helically Folded Aromatic Oligoamides.

    PubMed

    Dawson, S J; Hu, X; Claerhout, S; Huc, I

    2016-01-01

    Aromatic amide foldamers constitute a growing class of oligomers that adopt remarkably stable folded conformations. The folded structures possess largely predictable shapes and open the way toward the design of synthetic mimics of proteins. Important examples of aromatic amide foldamers include oligomers of 7- or 8-amino-2-quinoline carboxylic acid that have been shown to exist predominantly as well-defined helices, including when they are combined with α-amino acids to which they may impose their folding behavior. To rapidly iterate their synthesis, solid phase synthesis (SPS) protocols have been developed and optimized for overcoming synthetic difficulties inherent to these backbones such as low nucleophilicity of amine groups on electron poor aromatic rings and a strong propensity of even short sequences to fold on the solid phase during synthesis. For example, acid chloride activation and the use of microwaves are required to bring coupling at aromatic amines to completion. Here, we report detailed SPS protocols for the rapid production of: (1) oligomers of 8-amino-2-quinolinecarboxylic acid; (2) oligomers containing 7-amino-8-fluoro-2-quinolinecarboxylic acid; and (3) heteromeric oligomers of 8-amino-2-quinolinecarboxylic acid and α-amino acids. SPS brings the advantage to quickly produce sequences having varied main chain or side chain components without having to purify multiple intermediates as in solution phase synthesis. With these protocols, an octamer could easily be synthesized and purified within one to two weeks from Fmoc protected amino acid monomer precursors. PMID:27586338

  1. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  2. Serial Order: A Parallel Distributed Processing Approach.

    ERIC Educational Resources Information Center

    Jordan, Michael I.

    Human behavior shows a variety of serially ordered action sequences. This paper presents a theory of serial order which describes how sequences of actions might be learned and performed. In this theory, parallel interactions across time (coarticulation) and parallel interactions across space (dual-task interference) are viewed as two aspects of a…

  3. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2001-01-01

    This viewgraph presentation provides information on support sources available for the automatic parallelization of computer program. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence, ensuring that the code transformation was accurate.

  4. Parallel processing of numerical transport algorithms

    SciTech Connect

    Wienke, B.R.; Hiromoto, R.E.

    1984-01-01

    The multigroup, discrete ordinates representation for the linear transport equation enjoys widespread computational use and popularity. Serial solution schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, we investigate the parallel structure and extension of a number of standard S/sub n/ approaches. Concurrent inner sweeps, coupled acceleration techniques, synchronized inner-outer loops, and chaotic iteration are described, and results of computations are contrasted. The multigroup representation and serial iteration methods are also detailed. The basic iterative S/sub n/ method lends itself to parallel tasking, portably affording an effective medium for performing transport calculations on future architectures. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. We find basic inner-outer and chaotic iteration strategies both easily support comparably high degrees of parallelism. Both accommodate parallel rebalance and diffusion acceleration and appear as robust and viable parallel techniques for S/sub n/ production work.
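
    The concurrent inner sweeps mentioned above are easy to picture for one-group transport in a 1-D slab: once the scattering source is frozen for an inner iteration, the spatial sweep for each discrete ordinate is independent of the others. The following sketch parallelizes exactly that loop; it is an illustrative diamond-difference toy with made-up cross sections and quadrature, not the authors' HEP implementation.

```c
/* One-group, 1-D slab S_n source iteration in which the per-ordinate
 * sweeps of each inner iteration run concurrently (illustrative only;
 * cross sections and quadrature are invented).
 * Compile: cc -fopenmp sn.c -lm */
#include <math.h>
#include <stdio.h>

#define NCELL 200
#define NANG  8                 /* 8 discrete ordinates */

int main(void) {
    const double dx = 0.1, sig_t = 1.0, sig_s = 0.5, q_ext = 1.0;

    /* Symmetric ordinates with equal weights summing to 2 (illustrative). */
    double mu[NANG], wt[NANG];
    for (int m = 0; m < NANG; ++m) {
        mu[m] = -1.0 + (2.0 * m + 1.0) / NANG;
        wt[m] = 2.0 / NANG;
    }

    double phi[NCELL] = {0}, phi_new[NCELL];

    for (int outer = 0; outer < 200; ++outer) {
        double psi[NANG][NCELL];

        /* Inner sweeps: with the scattering source frozen, each ordinate's
         * spatial sweep is independent -> parallel over angles. */
        #pragma omp parallel for
        for (int m = 0; m < NANG; ++m) {
            double src, inflow = 0.0;                  /* vacuum boundaries */
            if (mu[m] > 0.0) {
                for (int i = 0; i < NCELL; ++i) {       /* left-to-right */
                    src = 0.5 * (sig_s * phi[i] + q_ext);
                    psi[m][i] = (src * dx + 2.0 * mu[m] * inflow)
                                / (2.0 * mu[m] + sig_t * dx);
                    inflow = 2.0 * psi[m][i] - inflow;  /* diamond difference */
                }
            } else {
                for (int i = NCELL - 1; i >= 0; --i) {  /* right-to-left */
                    src = 0.5 * (sig_s * phi[i] + q_ext);
                    psi[m][i] = (src * dx - 2.0 * mu[m] * inflow)
                                / (-2.0 * mu[m] + sig_t * dx);
                    inflow = 2.0 * psi[m][i] - inflow;
                }
            }
        }

        /* Rebuild the scalar flux and check convergence. */
        double err = 0.0;
        for (int i = 0; i < NCELL; ++i) {
            phi_new[i] = 0.0;
            for (int m = 0; m < NANG; ++m) phi_new[i] += wt[m] * psi[m][i];
            err = fmax(err, fabs(phi_new[i] - phi[i]));
            phi[i] = phi_new[i];
        }
        if (err < 1e-8) { printf("converged after %d iterations\n", outer + 1); break; }
    }
    printf("midplane scalar flux: %.4f\n", phi[NCELL / 2]);
    return 0;
}
```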

  5. Software For Diagnosis Of Parallel Processing

    NASA Technical Reports Server (NTRS)

    Hontalas, Philip; Yan, Jerry; Fineman, Charles

    1995-01-01

    The Ames Instrumentation System (AIMS) is a computer program package of software tools for measuring and analyzing the performance of parallel-processing application programs. It helps the programmer to debug and refine, and to monitor and visualize the execution of, parallel-processing application software for the Intel iPSC/860 (or equivalent) multicomputer. Performance data collected are displayed graphically on computer workstations supporting X-Windows.

  6. Parallel unstructured grid generation for computational aerosciences

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1993-01-01

    The objective of this research project is to develop efficient parallel automatic grid generation procedures for use in computational aerosciences. This effort is focused on a parallel version of the Finite Octree grid generator. Progress made during the first six months is reported.

  7. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  8. RAM-Based parallel-output controller

    NASA Technical Reports Server (NTRS)

    Niswander, J. K.; Stattel, R. J.

    1980-01-01

    Selected bit strings in serial-data link are extracted for processing. Controller is programmable interface between serial-data link and peripherals that accept parallel data. It can be used to drive displays, printers, plotters, digital-to-analog converters, and parallel-output ports.

  9. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  10. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  11. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In a photoelectrical tracking system, Bayer images are traditionally decompressed with a CPU-based method. However, this is too slow when the images become large, for example 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallelism part, and a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part improves its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
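
    The reported IDWT speedup comes from separability: a 2D wavelet step is a set of independent 1D transforms along rows and then columns, and those independent 1D passes are what a GPU maps to threads. Below is a minimal NumPy sketch of a single-level Haar analysis/synthesis pair, written to make that row/column independence explicit; it is a generic illustration, not the paper's CUDA kernels.

        import numpy as np

        def haar_1d(x):
            """Single-level 1D Haar transform: returns (approximation, detail)."""
            a = (x[0::2] + x[1::2]) / np.sqrt(2)
            d = (x[0::2] - x[1::2]) / np.sqrt(2)
            return a, d

        def ihaar_1d(a, d):
            """Inverse of haar_1d (perfect reconstruction)."""
            out = np.empty(2 * a.size)
            out[0::2] = (a + d) / np.sqrt(2)
            out[1::2] = (a - d) / np.sqrt(2)
            return out

        def haar_2d(img):
            """Separable 2D Haar: 1D transform of every row, then of every column.
            Each row (and then each column) is independent work -- the parallelism
            a GPU kernel or an OpenMP loop would exploit."""
            rows = np.array([np.concatenate(haar_1d(r)) for r in img])
            return np.array([np.concatenate(haar_1d(c)) for c in rows.T]).T

        def ihaar_2d(coef):
            """Inverse: undo the column pass, then the row pass."""
            h, w = coef.shape
            cols = np.array([ihaar_1d(c[: h // 2], c[h // 2 :]) for c in coef.T]).T
            return np.array([ihaar_1d(r[: w // 2], r[w // 2 :]) for r in cols])

        if __name__ == "__main__":
            img = np.random.default_rng(0).random((8, 8))
            print(np.allclose(ihaar_2d(haar_2d(img)), img))   # True: perfect reconstruction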

  12. Parallel hypergraph partitioning for scientific computing.

    SciTech Connect

    Heaphy, Robert; Devine, Karen Dragon; Catalyurek, Umit; Bisseling, Robert; Hendrickson, Bruce Alan; Boman, Erik Gunnar

    2005-07-01

    Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
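
    The claim that hypergraphs model communication volume exactly can be made concrete for sparse matrix-vector multiplication with a row-wise partition: each column is a hyperedge, and a column whose nonzeros span k parts forces its vector entry to be communicated to k-1 of them (the connectivity-minus-one metric). A small Python sketch of that metric follows, under the common assumption that vector entry j lives with row j; it illustrates the cost model, not the Sandia partitioner itself.

        from collections import defaultdict

        def comm_volume(nonzeros, part):
            """Connectivity-minus-one metric for a row-wise partition of a sparse matrix.

            nonzeros: iterable of (row, col) index pairs
            part:     dict mapping row index -> partition id (vector entry j is
                      assumed to live with row j, a common 1D distribution)
            """
            parts_per_col = defaultdict(set)
            for r, c in nonzeros:
                parts_per_col[c].add(part[r])
            # A column touching k parts forces k-1 sends of its vector entry.
            return sum(len(p) - 1 for p in parts_per_col.values())

        if __name__ == "__main__":
            nz = [(0, 0), (0, 1), (1, 1), (2, 2), (3, 1), (3, 3)]
            part = {0: 0, 1: 0, 2: 1, 3: 1}      # rows 0,1 on part 0; rows 2,3 on part 1
            print(comm_volume(nz, part))         # column 1 spans both parts -> volume 1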

  13. Differences Between Distributed and Parallel Systems

    SciTech Connect

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  14. A paradigm for parallel unstructured grid generation

    SciTech Connect

    Gaither, A.; Marcum, D.; Reese, D.

    1996-12-31

    In this paper, a sequential 2D unstructured grid generator based on iterative point insertion and local reconnection is coupled with a Delaunay tessellation domain decomposition scheme to create a scalable parallel unstructured grid generator. The Message Passing Interface (MPI) is used for distributed communication in the parallel grid generator. This work attempts to provide a generic framework to enable the parallelization of fast sequential unstructured grid generators in order to compute grand-challenge scale grids for Computational Field Simulation (CFS). Motivation for moving from sequential to scalable parallel grid generation is presented. Delaunay tessellation and iterative point insertion and local reconnection (advancing front method only) unstructured grid generation techniques are discussed with emphasis on how these techniques can be utilized for parallel unstructured grid generation. Domain decomposition techniques are discussed for both Delaunay and advancing front unstructured grid generation with emphasis placed on the differences needed for both grid quality and algorithmic efficiency.

  15. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E.; Faraj, Ahmad A.

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
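
    Stripped of the torus-network details, the scheme forwards the root's message hop by hop along a precomputed Hamiltonian path, so every node receives it exactly once. The toy Python simulation below shows that store-and-forward pattern; the node labels and the path itself are hypothetical, and no real message-passing library is involved.

        from queue import Queue

        def broadcast_along_path(path, message):
            """Forward a message hop by hop along a precomputed Hamiltonian path.

            path:    list of node ids, path[0] being the logical root
            returns: dict node id -> received message (simulating each node's buffer)
            """
            inbox = {node: Queue() for node in path}
            inbox[path[0]].put(message)               # the root already holds the message
            received = {}
            for i, node in enumerate(path):
                msg = inbox[node].get()               # receive from the previous hop
                received[node] = msg
                if i + 1 < len(path):
                    inbox[path[i + 1]].put(msg)       # forward to the next node on the path
            return received

        if __name__ == "__main__":
            # A Hamiltonian path over a 2x3 grid of (hypothetical) compute nodes.
            path = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
            print(broadcast_along_path(path, b"payload"))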

  16. Configuration space representation in parallel coordinates

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Inselberg, Alfred

    1989-01-01

    By means of a system of parallel coordinates, a nonprojective mapping from R^N to R^2 is obtained for any positive integer N. In this way multivariate data and relations can be represented in the Euclidean plane (embedded in the projective plane). Basically, R^2 with Cartesian coordinates is augmented by N parallel axes, one for each variable. The N joint variables of a robotic device can be represented graphically by using parallel coordinates. It is pointed out that some properties of the relation are better perceived visually from the parallel coordinate representation, and that new algorithms and data structures can be obtained from this representation. The main features of parallel coordinates are described, and an example is presented of their use for configuration space representation of a mechanical arm (where Cartesian coordinates cannot be used).
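
    The mapping itself is straightforward: a point in R^N becomes a polyline whose vertex on the i-th vertical axis sits at the height of the i-th coordinate. A minimal Python sketch of that construction, with no plotting library assumed:

        def parallel_coordinates(point, axis_spacing=1.0):
            """Map a point in R^N to the vertices of its polyline in the plane.

            Axis i is the vertical line x = i * axis_spacing; the point's i-th
            coordinate gives the height of the vertex on that axis.
            """
            return [(i * axis_spacing, v) for i, v in enumerate(point)]

        if __name__ == "__main__":
            joints = [0.3, 1.2, -0.5, 2.0]            # e.g. four joint variables of an arm
            print(parallel_coordinates(joints))
            # [(0.0, 0.3), (1.0, 1.2), (2.0, -0.5), (3.0, 2.0)]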

  17. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.

  18. Parallel tempering for the traveling salesman problem

    SciTech Connect

    Percus, Allon; Wang, Richard; Hyman, Jeffrey; Caflisch, Russel

    2008-01-01

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
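
    A compact way to see how parallel tempering differs from plain simulated annealing is a toy implementation: several replicas anneal the same TSP instance at fixed temperatures and periodically swap configurations using the standard Metropolis swap test. The Python sketch below runs the replicas sequentially for clarity, whereas in the setting of the paper each replica would occupy its own processor; it is an illustration, not the authors' code.

        import math, random

        def tour_length(tour, dist):
            return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

        def two_opt_step(tour, dist, T, rng):
            """One Metropolis step using a random 2-opt (segment reversal) move."""
            n = len(tour)
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            dE = tour_length(cand, dist) - tour_length(tour, dist)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                return cand
            return tour

        def parallel_tempering(dist, temps, sweeps=200, seed=0):
            """Toy parallel tempering for the TSP (temps sorted from cold to hot)."""
            rng = random.Random(seed)
            n = len(dist)
            replicas = [rng.sample(range(n), n) for _ in temps]
            for _ in range(sweeps):
                for k, T in enumerate(temps):             # each replica: one sweep of moves
                    for _ in range(n):
                        replicas[k] = two_opt_step(replicas[k], dist, T, rng)
                for k in range(len(temps) - 1):           # attempt swaps between neighbours
                    dB = 1.0 / temps[k] - 1.0 / temps[k + 1]
                    dE = tour_length(replicas[k], dist) - tour_length(replicas[k + 1], dist)
                    if rng.random() < math.exp(min(0.0, dB * dE)):
                        replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
            best = min(replicas, key=lambda t: tour_length(t, dist))
            return best, tour_length(best, dist)

        if __name__ == "__main__":
            rng = random.Random(1)
            pts = [(rng.random(), rng.random()) for _ in range(20)]
            dist = [[math.dist(p, q) for q in pts] for p in pts]
            print(parallel_tempering(dist, temps=[0.01, 0.03, 0.1, 0.3])[1])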

  19. Randomized parallel speedups for list ranking

    SciTech Connect

    Vishkin, U.

    1987-06-01

    The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((n log n)/ρ + log n) time parallel algorithm using ρ processors. The authors present a randomized parallel algorithm for the problem. The algorithm is designed for an exclusive-read exclusive-write parallel random access machine (EREW PRAM). It runs almost surely in time O(n/ρ + log n log* n) using ρ processors. Using a recently published parallel prefix sums algorithm the list-ranking algorithm can be adapted to run on a concurrent-read concurrent-write parallel random access machine (CRCW PRAM) almost surely in time O(n/ρ + log n) using ρ processors.
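
    The flavor of the problem is captured by the classic deterministic pointer-jumping formulation, in which every element doubles the span of its pointer each round, so all ranks are known after O(log n) synchronous rounds. The Python sketch below implements that simpler scheme, not the randomized EREW algorithm the abstract describes, with the inner loop standing in for the parallel step.

        def list_rank(succ):
            """Pointer-jumping list ranking.

            succ[i] is the index of the next element, or None for the tail.
            Returns rank[i] = number of links from i to the end of the list.
            Each round doubles the distance every pointer spans, so O(log n) rounds;
            within a round all elements could be updated in parallel (here, a loop).
            """
            n = len(succ)
            nxt = list(succ)
            rank = [0 if s is None else 1 for s in succ]
            changed = True
            while changed:
                changed = False
                new_rank, new_nxt = rank[:], nxt[:]
                for i in range(n):                    # conceptually: for all i in parallel
                    if nxt[i] is not None:
                        new_rank[i] = rank[i] + rank[nxt[i]]
                        new_nxt[i] = nxt[nxt[i]]
                        changed = True
                rank, nxt = new_rank, new_nxt
            return rank

        if __name__ == "__main__":
            # Linked list 3 -> 0 -> 2 -> 4 -> 1 (tail)
            succ = [2, None, 4, 0, 1]
            print(list_rank(succ))                    # [3, 0, 2, 4, 1]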

  20. National Combustion Code: Parallel Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.

    2000-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.

  1. Implementation and performance of parallel Prolog interpreter

    SciTech Connect

    Wei, S.; Kale, L.V.; Balkrishna, R. (Dept. of Computer Science)

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared-memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared-memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  2. Advanced parallel processing with supercomputer architectures

    SciTech Connect

    Hwang, K.

    1987-10-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers.

  3. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
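
    The first phase of each cycle, evaluating the function and its gradient at p trial points simultaneously, is the part that maps most directly onto parallel hardware. Below is a Python sketch of just that phase, using a process pool and the Rosenbrock function as a stand-in objective; the rank-one metric updates and the line search of the algorithm are omitted.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def f_and_grad(x):
            """Example objective (Rosenbrock) and its gradient at one point."""
            f = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
            g = np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                          200.0 * (x[1] - x[0] ** 2)])
            return f, g

        def parallel_evaluations(points, workers=4):
            """Phase 1 of a cycle: p independent function/gradient evaluations,
            farmed out to worker processes (degree of parallelism = len(points))."""
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(f_and_grad, points))

        if __name__ == "__main__":
            trial_points = [np.zeros(2) + 0.1 * k * np.ones(2) for k in range(4)]
            for f, g in parallel_evaluations(trial_points):
                print(f, g)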

  4. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.

  5. Design and implementation of the parallel processing system of multi-channel polarization images

    NASA Astrophysics Data System (ADS)

    Li, Zhi-yong; Huang, Qin-chao

    2013-08-01

    Compared with traditional optical intensity image processing, polarization image processing has two main problems: the amount of data is larger, and the processing tasks are more complex. To resolve these problems, a parallel processing system for multi-channel polarization images is designed using a multi-DSP technique. It contains a communication control unit (CCU) and a data processing array (DPA). The CCU controls communications inside and outside the system; its logic is implemented in an FPGA chip. The DPA is made up of four Digital Signal Processor (DSP) chips, which are interlinked by a loose coupling method. The DPA implements processing tasks, including image registration and image synthesis, by parallel processing methods. The polarization image parallel processing model is designed at multiple levels, including the system task, the algorithm, and the operation. Its program is written in assembly language. In the experiment, the polarization image resolution is 782x582 pixels and the pixel data length is 12 bits. After receiving three channels of polarization images simultaneously, the system executes parallel tasks to acquire the target polarization characteristics. Experimental results show that the system has good real-time performance and reliability: the processing time for image registration is 293.343 ms with a registration accuracy of 0.5 pixel, and the processing time for image synthesis is 3.199 ms.

  6. Freezing of parallel hard cubes with rounded edges.

    PubMed

    Marechal, Matthieu; Zimmermann, Urs; Löwen, Hartmut

    2012-04-14

    The freezing transition in a classical three-dimensional system of rounded hard cubes with fixed, equal orientations is studied by computer simulation and fundamental-measure density functional theory. By switching the rounding parameter s from zero to one, one can smoothly interpolate between cubes with sharp edges and hard spheres. The equilibrium phase diagram of rounded parallel hard cubes is computed as a function of their volume fraction and the rounding parameter s. The second order freezing transition known for oriented cubes at s = 0 is found to be persistent up to s = 0.65. The fluid freezes into a simple-cubic crystal which exhibits a large vacancy concentration. Upon a further increase of s, the continuous freezing is replaced by a first-order transition into either a sheared simple cubic lattice or a deformed face-centered cubic lattice with two possible unit cells: body-centered orthorhombic or base-centered monoclinic. In principle, a system of parallel cubes could be realized in experiments on colloids using advanced synthesis techniques and a combination of external fields.

  7. LALPC: Exploiting Parallelism from FPGAs Using C Language

    NASA Astrophysics Data System (ADS)

    Porto, Lucas F.; Fernandes, Marcio M.; Bonato, Vanderlei; Menotti, Ricardo

    2015-10-01

    This paper presents LALPC, a prototype high-level synthesis tool specialized in hardware generation for loop-intensive code segments. As demonstrated in previous work, the underlying hardware components targeted by LALPC are highly specialized for loop pipeline execution, resulting in efficient implementations, both in terms of performance and resource usage (silicon area). LALPC extends the functionality of a previous tool by using a subset of the C language as input code to describe computations, improving the usability and potential acceptance of the technique among developers. LALPC also enhances parallelism exploitation by applying loop unrolling, and by providing support for automatic generation and scheduling of parallel memory accesses. The combination of using the C language to automate the process of hardware design with an efficient underlying scheme to support loop pipelining constitutes the main goal and contribution of the work described in this paper. Experimental results show the effectiveness of these techniques in enhancing performance, and also exemplify how some of the LALPC compiler features may support performance-resource trade-off analysis tasks.

  8. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.
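
    The overall idea, walking a Python AST and emitting equivalent VHDL, can be shown with a deliberately tiny example. The sketch below handles only a single-expression function with one addition and emits a schematic entity/architecture pair; it is a toy illustrating the translation step, not the compiler described in the paper.

        import ast

        SRC = "def adder(a, b):\n    return a + b\n"

        def python_to_vhdl(src):
            """Translate a one-expression Python function into a schematic VHDL entity
            with a single combinational assignment. Only 'a + b' style bodies are
            handled -- a toy showing the AST-walking idea, not a real compiler."""
            fn = ast.parse(src).body[0]
            args = [a.arg for a in fn.args.args]
            ret = fn.body[0].value                      # the returned expression
            assert isinstance(ret, ast.BinOp) and isinstance(ret.op, ast.Add)
            expr = f"{ret.left.id} + {ret.right.id}"
            ports = "; ".join(f"{a} : in integer" for a in args)
            lines = [
                f"entity {fn.name} is",
                f"  port ( {ports}; result : out integer );",
                "end entity;",
                "",
                f"architecture rtl of {fn.name} is",
                "begin",
                f"  result <= {expr};",
                "end architecture;",
            ]
            return "\n".join(lines)

        if __name__ == "__main__":
            print(python_to_vhdl(SRC))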

  9. Applications of Parallel Processing in Configuration Analyses

    NASA Technical Reports Server (NTRS)

    Sundaram, Ppchuraman; Hager, James O.; Biedron, Robert T.

    1999-01-01

    The paper presents the recent progress made towards developing an efficient and user-friendly parallel environment for routine analysis of large CFD problems. The coarse-grain parallel version of the CFL3D Euler/Navier-Stokes analysis code, CFL3Dhp, has been ported onto most available parallel platforms. The CFL3Dhp solution accuracy on these parallel platforms has been verified against the CFL3D sequential analyses. User-friendly pre- and post-processing tools that enable a seamless transfer from sequential to parallel processing have been written. A static load-balancing tool for CFL3Dhp analysis has also been implemented for achieving good parallel efficiency. For large problems, load-balancing efficiency as high as 95% can be achieved even when a large number of processors is used. Linear scalability of the CFL3Dhp code with an increasing number of processors has also been shown using a large installed transonic nozzle boattail analysis. To highlight the fast turn-around time of parallel processing, the TCA full-configuration Navier-Stokes drag polar in sideslip at supersonic cruise has been obtained in a day. CFL3Dhp is currently being used as a production analysis tool.

  10. Portable parallel programming in a Fortran environment

    SciTech Connect

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs.

  11. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

  12. On the Scalability of Parallel UCT

    NASA Astrophysics Data System (ADS)

    Segal, Richard B.

    The parallelization of MCTS across multiple machines has proven surprisingly difficult. The limitations of existing algorithms were evident in the 2009 Computer Olympiad, where Zen using a single four-core machine defeated both Fuego with ten eight-core machines and Mogo with twenty thirty-two core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvements which can come from additional search. We then analyze the scaling of an idealized N-core shared memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling for time scales beyond what can be empirically evaluated due to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but without virtual loss scaling is limited to just eight threads. We also find that for competition time controls, scaling to thousands of threads is impractical, not necessarily because MCTS fails to scale, but because high levels of parallelism start to bump up against the upper performance bound of Fuego itself.
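
    Virtual loss, the ingredient credited here with near-perfect scaling to 64 threads, simply makes a node look temporarily worse while a simulation launched through it is still in flight, so concurrently selecting threads spread out over different branches. A schematic Python sketch of UCB1 selection with virtual loss follows; it is not Fuego's implementation, and the names and constants are illustrative.

        import math

        class Node:
            def __init__(self):
                self.visits = 0
                self.wins = 0.0
                self.virtual_loss = 0      # pending losses added by in-flight threads

        def ucb_score(child, parent_visits, c=1.4):
            """UCB1 with virtual loss folded in: in-flight simulations count as
            visits that were lost, so concurrently selecting threads diverge."""
            n = child.visits + child.virtual_loss
            if n == 0:
                return float("inf")
            return child.wins / n + c * math.sqrt(math.log(parent_visits + 1) / n)

        def select_child(children, parent_visits):
            best = max(children, key=lambda ch: ucb_score(ch, parent_visits))
            best.virtual_loss += 1          # applied before the rollout starts
            return best

        def backpropagate(child, won):
            child.virtual_loss -= 1         # remove the temporary penalty
            child.visits += 1
            child.wins += 1.0 if won else 0.0

        if __name__ == "__main__":
            children = [Node() for _ in range(3)]
            picked = select_child(children, parent_visits=10)
            backpropagate(picked, won=True)
            print(picked.visits, picked.wins, picked.virtual_loss)   # 1 1.0 0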

  13. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  14. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  15. Xyce parallel electronic simulator : users' guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique

  16. SLAPP: A systolic linear algebra parallel processor

    SciTech Connect

    Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.

    1987-07-01

    Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.

  17. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1997-01-01

    The multiblock reacting Navier-Stokes flow solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  18. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1994-01-01

    The multiblock reacting Navier-Stokes flow-solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  19. Parallel optical memories for very large databases

    NASA Astrophysics Data System (ADS)

    Mitkas, Pericles A.; Berra, P. B.

    1993-02-01

    The steady increase in volume of current and future databases dictates the development of massive secondary storage devices that allow parallel access and exhibit high I/O data rates. Optical memories, such as parallel optical disks and holograms, can satisfy these requirements because they combine high recording density and parallel one- or two-dimensional output. Several configurations for database storage involving different types of optical memory devices are investigated. All these approaches include some level of optical preprocessing in the form of data filtering in an attempt to reduce the amount of data per transaction that reach the electronic front-end.

  20. Time-parallel multiscale/multiphysics framework

    SciTech Connect

    Frantziskonis, G.; Muralidharan, Krishna; Deymier, Pierre; Simunovic, Srdjan; Nukala, Phani K; Pannala, Sreekanth

    2009-01-01

    We introduce the time-parallel compound wavelet matrix method (tpCWM) for modeling the temporal evolution of multiscale and multiphysics systems. The method couples time parallel (TP) and CWM methods operating at different spatial and temporal scales. We demonstrate the efficiency of our approach on two examples: a chemical reaction kinetic system and a non-linear predator-prey system. Our results indicate that the tpCWM technique is capable of accelerating time-to-solution by 2-3 orders of magnitude and is amenable to efficient parallel implementation.

  1. Parallel electric fields from ionospheric winds

    NASA Technical Reports Server (NTRS)

    Nakada, M. P.

    1987-01-01

    The possible production of electric fields parallel to the magnetic field by dynamo winds in the E region is examined, using a jet stream wind model. Current return paths through the F region above the stream are examined as well as return paths through the conjugate ionosphere. The Wulf geometry with horizontal winds moving in opposite directions one above the other is also examined. Parallel electric fields are found to depend strongly on the width of current sheets at the edges of the jet stream. If these are narrow enough, appreciable parallel electric fields are produced.

  2. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
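
    The parallelism in the Pade and Chebyshev variants comes from partial fractions: a rational approximation r(A)v splits into a constant term plus a sum of resolvent solves (A - z_i I)^{-1} v, and each solve is independent of the others. Below is a NumPy sketch of that structure using the diagonal (2,2) Pade approximant of the exponential, a low-order stand-in chosen only so the poles and residues can be computed on the spot; it illustrates the decomposition, not the paper's methods.

        import numpy as np

        # Diagonal (2,2) Pade approximant of exp(z):  p(z)/q(z)
        p = np.poly1d([1 / 12, 1 / 2, 1.0])
        q = np.poly1d([1 / 12, -1 / 2, 1.0])
        poles = q.roots                                  # two simple complex poles
        residues = p(poles) / q.deriv()(poles)           # residue of p/q at each pole
        const = p.coeffs[0] / q.coeffs[0]                # limit of p/q as z -> infinity

        def expm_times_v(A, v):
            """Approximate exp(A) @ v via the partial-fraction form of the approximant.
            Each resolvent solve (A - z_i I) x = v is independent of the others,
            which is the parallelism exploited by partial-fraction methods."""
            n = len(v)
            terms = [r * np.linalg.solve(A - z * np.eye(n), v + 0j)   # independent solves
                     for z, r in zip(poles, residues)]
            return (const * v + sum(terms)).real

        if __name__ == "__main__":
            A = np.array([[-1.0, 0.5], [0.5, -1.0]])
            v = np.array([1.0, 0.0])
            w, V = np.linalg.eig(A)                      # exact exp(A) v for comparison
            exact = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V) @ v).real
            print(expm_times_v(A, v), exact)             # agree to about 1e-3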

  3. Distributed parallel messaging for multiprocessor systems

    SciTech Connect

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  4. Structural mechanics computations on parallel computing platforms

    SciTech Connect

    Kulak, R.F.; Plaskacz, E.J.; Pfeiffer, P.A.

    1995-06-01

    With recent advances in parallel supercomputers and network-connected workstations, the solution to large scale structural engineering problems has now become tractable. High-performance computer architectures, which are usually available at large universities and national laboratories, now can solve large nonlinear problems. At the other end of the spectrum, network connected workstations can be configured to become a distributed-parallel computer. This approach is attractive to small, medium and large engineering firms. This paper describes the development of a parallelized finite element computer program for the solution of static, nonlinear structural mechanics problems.

  5. Parallel Communicating Grammar Systems with Regular Control

    NASA Astrophysics Data System (ADS)

    Pardubská, Dana; Plátek, Martin; Otto, Friedrich

    Parallel communicating grammar systems with regular control (RPCGS, for short) are introduced, which are obtained from returning regular parallel communicating grammar systems by restricting the derivations that are executed in parallel by the various components through a regular control language. For the class of languages that are generated by RPCGSs with constant communication complexity we derive a characterization in terms of a restricted type of freely rewriting restarting automaton. From this characterization we obtain that these languages are semi-linear, and that centralized RPCGSs with constant communication complexity are of the same generative power as non-centralized RPCGSs with constant communication complexity.

  6. Parallel Climate Analysis Toolkit (ParCAT)

    SciTech Connect

    Smith, Brian Edward

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sum/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated in scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.

  7. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  8. Feature Clustering for Accelerating Parallel Coordinate Descent

    SciTech Connect

    Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh; Haglin, David J.

    2012-12-06

    We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.

  9. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (AAPS) method and demonstrate its effectiveness on both simple test problems as well as some engineering optimization problems.
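
    The synchronous scheme that the authors improve on is easy to sketch: poll the 2n coordinate directions around the current iterate, evaluate them in parallel, move to the best improvement, and halve the step when none improves. The Python sketch below shows that synchronous baseline with a process pool and a cheap stand-in objective; the asynchronous, fault-tolerant variant introduced in the paper replaces the blocking wait on all poll points, which is not reproduced here.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def objective(x):
            """Stand-in for an expensive simulation."""
            return float(np.sum((x - 1.0) ** 2))

        def pattern_search(x0, step=1.0, tol=1e-6, workers=4):
            x = np.asarray(x0, dtype=float)
            fx = objective(x)
            n = len(x)
            dirs = np.vstack([np.eye(n), -np.eye(n)])            # the 2n poll directions
            with ProcessPoolExecutor(max_workers=workers) as pool:
                while step > tol:
                    trials = [x + step * d for d in dirs]
                    values = list(pool.map(objective, trials))   # evaluated in parallel
                    best = int(np.argmin(values))
                    if values[best] < fx:                        # success: move, keep the step
                        x, fx = trials[best], values[best]
                    else:                                        # failure: contract the step
                        step *= 0.5
            return x, fx

        if __name__ == "__main__":
            print(pattern_search(np.zeros(3)))                   # converges near (1, 1, 1)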

  10. Improved chopper circuit uses parallel transistors

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Parallel transistor chopper circuit operates with one transistor in the forward mode and the other in the inverse mode. By using this method, it acts as a single, symmetrical, bidirectional transistor, and reduces and stabilizes the offset voltage.

  11. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate -> store -> visualize' post-processing approach.

  12. Social Problems and Deviance: Some Parallel Issues

    ERIC Educational Resources Information Center

    Kitsuse, John I.; Spector, Malcolm

    1975-01-01

    Explores parallel developments in labeling theory and in the value conflict approach to social problems. Similarities in their critiques of functionalism and etiological theory as well as their emphasis on the definitional process are noted. (Author)

  13. Parallel programming with PCN. Revision 1

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  14. Parallel algorithms for dynamically partitioning unstructured grids

    SciTech Connect

    Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.

    1994-10-01

    Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
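
    The three repartitioning algorithms compared in the paper are not reproduced here, but the flavor of geometric partitioning can be shown with a much simpler approach, recursive coordinate bisection: split along the longest axis at the median until the requested number of equally loaded parts is reached. A Python sketch under that simplification, operating on point coordinates (for example, cell centroids):

        import numpy as np

        def recursive_bisection(points, n_parts):
            """Recursive coordinate bisection: split along the longest axis at the
            median until the requested number of (roughly equal) parts is reached.
            Returns an array assigning a part id to every point."""
            part = np.zeros(len(points), dtype=int)

            def split(idx, parts_left, next_id):
                if parts_left == 1:
                    part[idx] = next_id
                    return next_id + 1
                coords = points[idx]
                axis = int(np.argmax(coords.max(axis=0) - coords.min(axis=0)))
                order = idx[np.argsort(coords[:, axis])]
                half = len(order) // 2
                next_id = split(order[:half], parts_left // 2, next_id)
                return split(order[half:], parts_left - parts_left // 2, next_id)

            split(np.arange(len(points)), n_parts, 0)
            return part

        if __name__ == "__main__":
            pts = np.random.default_rng(0).random((1000, 2))     # e.g. cell centroids
            labels = recursive_bisection(pts, 4)
            print(np.bincount(labels))                           # four parts of 250 points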

  15. The Nexus task-parallel runtime system

    SciTech Connect

    Foster, I.; Tuecke, S.; Kesselman, C.

    1994-12-31

    A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.

  16. Modified mesh-connected parallel computers

    SciTech Connect

    Carlson, D.A.

    1988-10-01

    The mesh-connected parallel computer is an important parallel processing organization that has been used in the past for the design of supercomputing systems. In this paper, the authors explore modifications of a mesh-connected parallel computer for the purpose of increasing the efficiency of executing important application programs. These modifications are made by adding one or more global mesh structures to the processing array. The authors show how the modifications allow asymptotic improvements in the efficiency of executing computations having low to medium interprocessor communication requirements (e.g., tree computations, prefix computations, finding the connected components of a graph). For computations with high interprocessor communication requirements such as sorting, the modifications offer no speedup. The authors also compare the modified mesh-connected parallel computer to other similar organizations including the pyramid, the X-tree, and the mesh-of-trees.

  17. Fast and practical parallel polynomial interpolation

    SciTech Connect

    Egecioglu, O.; Gallopoulos, E.; Koc, C.K.

    1987-01-01

    We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs the proposed interpolation algorithm requires 2(log(n + 1)) + 2 parallel arithmetic steps and circuit size O(n^2). The algorithms are numerically stable and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms which are subject to much larger roundoff. We demonstrate that in a distributed memory environment context, a cube connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
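
    The quantity being parallelized is the triangular recurrence for Newton divided differences. The Python sketch below shows the plain serial recurrence and the Newton-form evaluation; the contribution of the paper is to recast this recurrence as parallel prefix operations, which is not reproduced here.

        def divided_differences(xs, ys):
            """Newton divided-difference coefficients (serial triangular recurrence).
            The parallel algorithm recasts this recurrence as prefix computations."""
            n = len(xs)
            coef = list(ys)
            for level in range(1, n):
                for i in range(n - 1, level - 1, -1):
                    coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - level])
            return coef

        def newton_eval(coef, xs, x):
            """Evaluate the Newton-form interpolant at x (Horner-like recurrence)."""
            result = coef[-1]
            for c, xk in zip(reversed(coef[:-1]), reversed(xs[:-1])):
                result = result * (x - xk) + c
            return result

        if __name__ == "__main__":
            xs = [0.0, 1.0, 2.0, 4.0]                 # unequally spaced points are fine
            ys = [1.0, 3.0, 2.0, 5.0]
            c = divided_differences(xs, ys)
            print([newton_eval(c, xs, x) for x in xs])   # reproduces ys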

  18. Massively Parallel Computing: A Sandia Perspective

    SciTech Connect

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  19. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

    This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N-elements on N- or n-processors (N is greater than n) are described, and the speedup factors of their time consumption are determined as a function of the number of available parallel processors.

  20. The PISCES 2 parallel programming environment

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment for scientific and engineering computations on MIMD parallel computers. It is currently implemented on a flexible FLEX/32 at NASA Langley, a 20 processor machine with both shared and local memories. The environment provides an extended Fortran for applications programming, a configuration environment for setting up a run on the parallel machine, and a run-time environment for monitoring and controlling program execution. This paper describes the overall design of the system and its implementation on the FLEX/32. Emphasis is placed on several novel aspects of the design: the use of a carefully defined virtual machine, programmer control of the mapping of virtual machine to actual hardware, forces for medium-granularity parallelism, and windows for parallel distribution of data. Some preliminary measurements of storage use are included.

  1. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  2. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Hao-Qiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  3. Parallel supercomputing today and the cedar approach.

    PubMed

    Kuck, D J; Davidson, E S; Lawrie, D H; Sameh, A H

    1986-02-28

    More and more scientists and engineers are becoming interested in using supercomputers. Earlier barriers to using these machines are disappearing as software for their use improves. Meanwhile, new parallel supercomputer architectures are emerging that may provide rapid growth in performance. These systems may use a large number of processors with an intricate memory system that is both parallel and hierarchical; they will require even more advanced software. Compilers that restructure user programs to exploit the machine organization seem to be essential. A wide range of algorithms and applications is being developed in an effort to provide high parallel processing performance in many fields. The Cedar supercomputer, presently operating with eight processors in parallel, uses advanced system and applications software developed at the University of Illinois during the past 12 years. This software should allow the number of processors in Cedar to be doubled annually, providing rapid performance advances in the next decade. PMID:17740294

  4. Massive parallelism in the future of science

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed. These agents collectively can solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed. The network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations among large groups of scientists around the world, who participate in projects that endure well past the sojourns of individuals within them, will be designed. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters in these three domains during the coming decade.

  5. Parallel processing of a rotating shaft simulation

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    A FORTRAN program describing the vibration modes of a rotor-bearing system is analyzed for parallelism, with the simulation expressed in a Pascal-like structured language. Potential vector operations are also identified. A critical path through the simulation is identified and used in conjunction with somewhat fictitious processor characteristics to determine the time to calculate the problem on a parallel processing system having those characteristics. A parallel processing overhead time is included as a parameter for proper evaluation of the gain over serial calculation. The serial calculation time is determined for the same fictitious system. An improvement of up to 640 percent is possible, depending on the value of the overhead time. Based on the analysis, certain conclusions are drawn pertaining to the development needs of parallel processing technology and to the specification of parallel processing systems to meet computational needs.

  6. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

    SciTech Connect

    Ramkumar, B.; Kale, L.V.

    1992-02-01

    When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile time analysis for an optimized join algorithm for supporting independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto the data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance on benchmark programs is reported.

  7. Design and Synthesis of Analogues of Marine Natural Product Galaxamide, an N-methylated Cyclic Pentapeptide, as Potential Anti-Tumor Agent in Vitro.

    PubMed

    Lunagariya, Jignesh; Zhong, Shenghui; Chen, Jianwei; Bai, Defa; Bhadja, Poonam; Long, Weili; Liao, Xiaojian; Tang, Xiaoli; Xu, Shihai

    2016-01-01

    Herein, we report the design and synthesis of 26 novel galaxamide analogues, N-methylated cyclic pentapeptides, and their in vitro anti-tumor activity towards a panel of human tumor cell lines (A549, A549/DPP, HepG2 and SMMC-7721) using the MTT assay. We have also investigated the effect of galaxamide and its representative analogues on growth, cell-cycle phases, and induction of apoptosis in SMMC-7721 cells in vitro. Considering the significance of the conformational space and the N-methyl amino acid (aa) in this compound template, we designed analogues with changes in the N-Me-aa position, a change in aa configuration from l- to d-aa, and substitution of one Leu residue by a d/l-Phe residue with respect to the parent structure. An efficient solid-phase parallel synthesis approach was employed for the linear pentapeptides containing the N-Me aa, followed by solution-phase macrocyclisation to afford the target cyclic pentapeptide compounds. In the present study, all galaxamide analogues exhibited growth inhibition in the A549, A549/DPP, SMMC-7721 and HepG2 cell lines. Compounds 6, 18, and 22 exhibited interesting activities towards all cell lines tested, while compounds 1, 4, 15, and 22 showed strong activity towards the SMMC-7721 cell line with IC50 values in the range of 1-2 μg/mL. Flow cytometry experiments revealed that the galaxamide analogues 6, 18, and 22 induced concentration-dependent SMMC-7721 cell apoptosis after 48 h. These compounds induced G0/G1 phase cell-cycle arrest and morphological changes indicating induction of apoptosis. Thus, the findings of our study suggest that galaxamide and its analogues 6, 18 and 22 exerted a growth inhibitory effect on SMMC-7721 cells by arresting the cell cycle in the G0/G1 phase and inducing apoptosis. Compound 1 showed promising anti-tumor activity towards the SMMC-7721 cancer cell line, 9- and 10-fold higher than galaxamide and the reference drug DPP (cisplatin), respectively. PMID:27598177

  9. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any parallelization of the algorithms themselves. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than the one without splitting.

  10. Extensions of ADA for SIMD parallel processing

    SciTech Connect

    Cline, C.; Siegel, H.J.

    1983-01-01

    In order to program SIMD (single instruction stream-multiple data stream) parallel machines used for tasks such as speech and image processing, a language with explicit parallel constructs is often desirable. The language ADA, developed by the Department of Defense, is used as a basis for such a language. Extensions of ADA which allow the user to specify such things as interprocessor communications and activation of processors are proposed. 25 references.

  11. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha Anne.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  12. LDV Measurement of Confined Parallel Jet Mixing

    SciTech Connect

    R.F. Kunz; S.W. D'Amico; P.F. Vassallo; M.A. Zaccaria

    2001-01-31

    Laser Doppler Velocimetry (LDV) measurements were taken in a confinement, bounded by two parallel walls, into which issues a row of parallel jets. Two-component measurements were taken of two mean velocity components and three Reynolds stress components. As observed in isolated three dimensional wall bounded jets, the transverse diffusion of the jets is quite large. The data indicate that this rapid mixing process is due to strong secondary flows, transport of large inlet intensities and Reynolds stress anisotropy effects.

  13. Acoustic simulation in architecture with parallel algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Zhang, Xinrong; Li, Dan

    2004-03-01

    To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers, calculated per frequency segment by multiple processes, are then combined into the whole frequency response. Numerical experiments show that the parallel algorithm can improve the efficiency of acoustic simulation for complex scenes.

  14. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with the current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  15. Parallel programming interface for distributed data

    NASA Astrophysics Data System (ADS)

    Wang, Manhui; May, Andrew J.; Knowles, Peter J.

    2009-12-01

    The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform. Program summary: Program title: PPIDD. Catalogue identifier: AEEF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEF_1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 17 698. No. of bytes in distributed program, including test data, etc.: 166 173. Distribution format: tar.gz. Programming language: Fortran, C. Computer: many parallel systems. Operating system: various. Has the code been vectorised or parallelized?: yes; 2-256 processors used. RAM: 50 Mbytes. Classification: 6.5. External routines: Global Arrays or MPI-2. Nature of problem: many scientific applications require management and communication of data that is global, and the standard MPI-2 protocol provides only low-level methods for the required one-sided remote memory access. Solution method: the PPIDD library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management; it can be built either using the Global Arrays (GA) toolkit or a standard MPI-2 library, an abstraction that allows the programmer to write portable parallel codes that can utilise the best, or only, communications library available on a particular computing platform. Running time: problem dependent. The test provided with
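
    The "Nature of problem" entry refers to MPI-2's low-level one-sided remote memory access, which a library such as PPIDD hides behind a higher-level global-data interface. The fragment below is a minimal, generic MPI-2 one-sided example using only standard MPI calls (window creation, fences, and MPI_Put); it is not PPIDD code and implies nothing about PPIDD's actual API. Run it with at least two ranks, e.g. mpirun -np 2.

        /* Minimal, generic MPI-2 one-sided example: rank 0 writes into a memory
         * window exposed by rank 1.  Standard MPI calls only; this is not PPIDD
         * code and says nothing about PPIDD's actual interface. */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double buf = 0.0;                       /* window memory on every rank */
            MPI_Win win;
            MPI_Win_create(&buf, sizeof(double), sizeof(double),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);

            MPI_Win_fence(0, win);                  /* open an access epoch */
            if (rank == 0) {
                double v = 3.14;
                MPI_Put(&v, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
            }
            MPI_Win_fence(0, win);                  /* complete the epoch */

            if (rank == 1)
                printf("rank 1 received %g\n", buf);

            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }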

  16. On mesh rezoning algorithms for parallel platforms

    SciTech Connect

    Plaskacz, E.J.

    1995-07-01

    A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms on the element and subdomain level, the exchange of the individual subdomain norms to form a subdomain distortion vector, the classification of subdomains and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.

  17. Enhancing Scalability of Parallel Structured AMR Calculations

    SciTech Connect

    Wissink, A M; Hysom, D; Hornung, R D

    2003-02-10

    This paper discusses the parallel scaling performance of large-scale parallel structured adaptive mesh refinement (SAMR) calculations in SAMRAI. Previous work revealed that poor scaling qualities in the adaptive gridding operations in SAMR calculations cause them to become dominant for cases run on up to 512 processors. This work describes algorithms we have developed to enhance the efficiency of the adaptive gridding operations. Performance of the algorithms is evaluated for two adaptive benchmarks run on up to 512 processors of an IBM SP system.

  18. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  19. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  20. Efficient communication in massively parallel computers

    SciTech Connect

    Cypher, R.E.

    1989-01-01

    A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
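
    For readers unfamiliar with increment-driven Shellsort, the fragment below is a minimal sequential version parameterized by an explicit (here monotonically decreasing) increment sequence; it only illustrates the role of the increments and is not the sorting-network construction analyzed in the record. The increment sequence {5, 3, 1} is an arbitrary example.

        /* Minimal sequential Shellsort driven by an explicit increment sequence,
         * to make the role of the increments concrete. */
        #include <stdio.h>

        static void shellsort(int *a, int len, const int *incs, int nincs)
        {
            for (int k = 0; k < nincs; k++) {          /* decreasing increments */
                int h = incs[k];
                for (int i = h; i < len; i++) {        /* insertion sort with stride h */
                    int v = a[i], j = i;
                    while (j >= h && a[j - h] > v) { a[j] = a[j - h]; j -= h; }
                    a[j] = v;
                }
            }
        }

        int main(void)
        {
            int a[] = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0};
            int incs[] = {5, 3, 1};                    /* example increment sequence */
            shellsort(a, 10, incs, 3);
            for (int i = 0; i < 10; i++) printf("%d ", a[i]);
            printf("\n");
            return 0;
        }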

  1. Parallel object-oriented adaptive mesh refinement

    SciTech Connect

    Balsara, D.; Quinlan, D.J.

    1997-04-01

    In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.

  2. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage supported by OpenMP on a shared-memory computer, allowed the solver to be converted to a parallel program smoothly, one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, is verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are identical to those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces cost in terms of software maintenance because only a single source PCG solver code needs to be maintained in the MODFLOW source tree. PMID:19563427
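
    As a rough illustration of the kind of incremental, block-at-a-time OpenMP parallelization the record describes, the fragment below parallelizes two typical PCG building blocks, a CSR sparse matrix-vector product and a dot product. It is a generic sketch and is not taken from the MODFLOW source; compile with an OpenMP flag such as -fopenmp.

        /* Generic OpenMP sketch of two PCG kernels (not MODFLOW code):
         * a CSR matrix-vector product and a dot product. */
        #include <omp.h>

        void csr_matvec(int n, const int *rowptr, const int *col,
                        const double *val, const double *x, double *y)
        {
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < n; i++) {
                double s = 0.0;
                for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                    s += val[k] * x[col[k]];
                y[i] = s;
            }
        }

        double dot(int n, const double *a, const double *b)
        {
            double s = 0.0;
            #pragma omp parallel for reduction(+:s)
            for (int i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }

    Parallelizing such kernels one at a time leaves the surrounding serial logic untouched, which is the maintenance advantage the abstract emphasizes.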

  3. Modeling Parallel System Workloads with Temporal Locality

    NASA Astrophysics Data System (ADS)

    Minh, Tran Ngoc; Wolters, Lex

    In parallel systems, similar jobs tend to arrive within bursty periods. This fact leads to the existence of the locality phenomenon, a persistent similarity between nearby jobs, in real parallel computer workloads. This important phenomenon deserves to be taken into account and used as a characteristic of any workload model. Regrettably, this property has received little if any attention from researchers, and synthetic workloads used for performance evaluation to date often do not have locality. In response to this research trend, Feitelson has suggested a general repetition approach to model locality in synthetic workloads [6]. Using this approach, Li et al. recently introduced a new method for modeling temporal locality in workload attributes such as run time and memory [14]. However, with the assumption that each job in the synthetic workload requires a single processor, the parallelism has not been taken into account in their study. In this paper, we propose a new model for parallel computer workloads based on their result. In our research, we first improve their model to better control the locality of the run-time process and then model the parallelism. The key idea for modeling the parallelism is to control the cross-correlation between the run time and the number of processors. Experimental results show that not only is the cross-correlation controlled well by our model, but also the marginal distribution can be fitted nicely. Furthermore, the locality feature is also obtained in our model.

  4. Biomimetic one-pot synthesis of gold nanoclusters/nanoparticles for targeted tumor cellular dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Lin, Jing; Zhou, Zhijun; Li, Zhiming; Zhang, Chunlei; Wang, Xiansong; Wang, Kan; Gao, Guo; Huang, Peng; Cui, Daxiang

    2013-04-01

    Biomimetic synthesis has become a promising green pathway to prepare nanomaterials. In this study, bovine serum albumin (BSA)-conjugated gold nanoclusters/nanoparticles were successfully synthesized in water at room temperature by a protein-directed, solution-phase, green synthetic method. The synthesized BSA-Au nanocomplexes have fluorescence emission (588 nm) of gold nanoclusters and surface plasmon resonance of gold nanoparticles. The BSA-Au nanocomplexes display non-cytotoxicity and excellent biocompatibility on MGC803 gastric cancer cells. After conjugation of folic acid molecules, the obtained BSA-Au nanocomplexes showed highly selective targeting for MGC803 cells and dual-modality dark-field and fluorescence imaging.

  5. Speech Synthesis

    NASA Astrophysics Data System (ADS)

    Dutoit, Thierry; Bozkurt, Baris

    Text-to-speech (TTS) synthesis is the art of designing talking machines. It is often seen by engineers as an easy task, compared to speech recognition. It is true, indeed, that it is easier to create a bad, first trial text-to-speech (TTS) system than to design a rudimentary speech recognizer.

  6. GLUTATHIONE SYNTHESIS

    PubMed Central

    Lu, Shelly C.

    2012-01-01

    BACKGROUND Glutathione (GSH) is present in all mammalian tissues as the most abundant non-protein thiol that defends against oxidative stress. GSH is also a key determinant of redox signaling, vital in detoxification of xenobiotics, regulates cell proliferation, apoptosis, immune function, and fibrogenesis. Biosynthesis of GSH occurs in the cytosol in a tightly regulated manner. Key determinants of GSH synthesis are the availability of the sulfur amino acid precursor, cysteine, and the activity of the rate-limiting enzyme, glutamate cysteine ligase (GCL), which is composed of a catalytic (GCLC) and a modifier (GCLM) subunit. The second enzyme of GSH synthesis is GSH synthetase (GS). SCOPE OF REVIEW This review summarizes key functions of GSH and focuses on factors that regulate the biosynthesis of GSH, including pathological conditions where GSH synthesis is dysregulated. MAJOR CONCLUSIONS GCL subunits and GS are regulated at multiple levels and often in a coordinated manner. Key transcription factors that regulate the expression of these genes include NF-E2 related factor 2 (Nrf2) via the antioxidant response element (ARE), AP-1, and nuclear factor kappa B (NFκB). There is increasing evidence that dysregulation of GSH synthesis contributes to the pathogenesis of many pathological conditions. These include diabetes mellitus, pulmonary and liver fibrosis, alcoholic liver disease, cholestatic liver injury, endotoxemia and drug-resistant tumor cells. GENERAL SIGNIFICANCE GSH is a key antioxidant that also modulates diverse cellular processes. A better understanding of how its synthesis is regulated and dysregulated in disease states may lead to improvement in the treatment of these disorders. PMID:22995213

  7. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  8. Parallel phase model : a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  9. Plasmonics and the parallel programming problem

    NASA Astrophysics Data System (ADS)

    Vishkin, Uzi; Smolyaninov, Igor; Davis, Chris

    2007-02-01

    While many parallel computers have been built, it has generally been too difficult to program them. Now, all computers are effectively becoming parallel machines. Biannual doubling in the number of cores on a single chip, or faster, over the coming decade is planned by most computer vendors. Thus, the parallel programming problem is becoming more critical. The only known solution to the parallel programming problem in the theory of computer science is through a parallel algorithmic theory called PRAM. Unfortunately, some of the PRAM theory assumptions regarding the bandwidth between processors and memories did not properly reflect a parallel computer that could be built in previous decades. Reaching memories, or other processors in a multi-processor organization, required off-chip connections through pins on the boundary of each electric chip. Using the number of transistors that is becoming available on chip, on-chip architectures that adequately support the PRAM are becoming possible. However, the bandwidth of off-chip connections remains insufficient and the latency remains too high. This creates a bottleneck at the boundary of the chip for a PRAM-On-Chip architecture. This also prevents scalability to larger "supercomputing" organizations spanning across many processing chips that can handle massive amounts of data. Instead of connections through pins and wires, power-efficient CMOS-compatible on-chip conversion to plasmonic nanowaveguides is introduced for improved latency and bandwidth. Proper incorporation of our ideas offers exciting avenues to resolving the parallel programming problem, and an alternative way to build faster, more usable and much more compact supercomputers.

  10. Accelerating the performance of a novel meshless method based on collocation with radial basis functions by employing a graphical processing unit as a parallel coprocessor

    NASA Astrophysics Data System (ADS)

    Owusu-Banson, Derek

    In recent times, a variety of industries, applications and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvement over the previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt compared with implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating point processors. There are typically hundreds of processors per graphics card with dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates the collocation of the Gaussian radial basis function by utilizing the GPU as a parallel co-processor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix fill and solution phases have been carried out on the GPU, along with some post processing. This approach resulted in decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.

  11. Application of lean manufacturing concepts to drug discovery: rapid analogue library synthesis.

    PubMed

    Weller, Harold N; Nirschl, David S; Petrillo, Edward W; Poss, Michael A; Andres, Charles J; Cavallaro, Cullen L; Echols, Martin M; Grant-Young, Katherine A; Houston, John G; Miller, Arthur V; Swann, R Thomas

    2006-01-01

    The application of parallel synthesis to lead optimization programs in drug discovery has been an ongoing challenge since the first reports of library synthesis. A number of approaches to the application of parallel array synthesis to lead optimization have been attempted over the years, ranging from widespread deployment by (and support of) individual medicinal chemists to centralization as a service by an expert core team. This manuscript describes our experience with the latter approach, which was undertaken as part of a larger initiative to optimize drug discovery. In particular, we highlight how concepts taken from the manufacturing sector can be applied to drug discovery and parallel synthesis to improve the timeliness and thus the impact of arrays on drug discovery.

  12. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To assure the same features of the sequential random search in the parallel version, we analyze the former spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution for each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
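
    To make the modeling step concrete, the fragment below draws lognormally distributed inter-target distances for a handful of independent walkers using a Box-Muller normal generator. The mean and standard deviation of the log-distance are hypothetical placeholders, not the fitted values from the paper, and no claim is made about the rest of the parallelization procedure.

        /* Sketch: draw lognormal inter-target distances for independent "parallel"
         * walkers; the parameter values are hypothetical, not the paper's fits. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        static double normal01(void)              /* standard normal via Box-Muller */
        {
            const double pi = 3.14159265358979323846;
            double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            return sqrt(-2.0 * log(u1)) * cos(2.0 * pi * u2);
        }

        int main(void)
        {
            double mu = 1.0, sigma = 0.5;         /* hypothetical lognormal parameters */
            int walkers = 4, steps = 5;
            srand(42);
            for (int w = 0; w < walkers; w++) {
                printf("walker %d distances:", w);
                for (int s = 0; s < steps; s++)
                    printf(" %.2f", exp(mu + sigma * normal01()));
                printf("\n");
            }
            return 0;
        }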

  13. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application interface software, that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS based on pthreads [POSIX Thread, (where "POSIX" signifies a portable operating system for UNIX)]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.

  14. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  15. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  16. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  17. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.

  18. Toward a science of parallel computation

    SciTech Connect

    Worlton, W.J.

    1986-01-01

    The evolution of parallel processing over the past several decades can be viewed as the development of a new scientific discipline. Parallel processing has been, and is, undergoing the same evolutionary stages that are common to the development of scientific disciplines in general: exploration, focusing, and maturity. That parallel processing is not yet a science can readily be appreciated by its lack of some of the characteristics typical of mature sciences, such as prescriptive terminology, comprehensive taxonomies, and authoritative fundamental principles. A great deal of outstanding work has been done and the field is experiencing the beginnings of its ''focusing'' phase, i.e., support is being concentrated in a set of the more promising approaches selected from among the larger set of exploratory projects. However, the possible set of parallel-processing concepts is so extensive that exploratory work will probably continue for one or two more decades. In the meantime, the growing maturity of the field will be reflected in the increasing clarity and precision of the terminology, the development of systematic classification of the domain of discourse, the development of basic principles, and the growing number of commercial products that are the outcome of the research and development projects on which support is being focused. In this paper we develop some generalizations of taxonomies and use basic principles to draw conclusions about the extensibility of parallel processor architectures. 7 refs., 5 figs., 2 tabs.

  19. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is therefore simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU-based Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
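
    The record notes that the linear (linearized) Bregman iteration reduces to matrix-vector products plus a soft-thresholding step. The sketch below is a plain CPU reference of that iteration on a tiny hypothetical dense problem, intended only to make the structure explicit; it is not the GPU/CUDA implementation described in the record, and the matrix, step size and threshold are illustrative.

        /* CPU reference sketch of the linearized Bregman iteration
         * (matrix-vector products plus soft thresholding only).
         * Problem data and parameters are hypothetical. */
        #include <stdio.h>

        #define M 3   /* measurements */
        #define N 5   /* unknowns     */

        static double shrink(double v, double mu)      /* soft threshold */
        {
            if (v >  mu) return v - mu;
            if (v < -mu) return v + mu;
            return 0.0;
        }

        int main(void)
        {
            double A[M][N] = {{1,0,2,0,1},{0,1,0,3,0},{2,0,0,0,1}};
            double b[M] = {3,3,1};
            double x[N] = {0}, v[N] = {0}, r[M];
            double mu = 1.0, delta = 0.05;             /* illustrative parameters */

            for (int it = 0; it < 2000; it++) {
                for (int i = 0; i < M; i++) {          /* residual r = b - A x */
                    r[i] = b[i];
                    for (int j = 0; j < N; j++) r[i] -= A[i][j] * x[j];
                }
                for (int j = 0; j < N; j++) {          /* v += A^T r; x = delta*shrink(v,mu) */
                    for (int i = 0; i < M; i++) v[j] += A[i][j] * r[i];
                    x[j] = delta * shrink(v[j], mu);
                }
            }
            for (int j = 0; j < N; j++) printf("x[%d] = %.3f\n", j, x[j]);
            return 0;
        }

    On a GPU, the two inner loops map naturally onto parallel matrix-vector kernels, which is why the method lends itself to the CUDA implementation the record describes.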

  20. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  1. Iteration schemes for parallelizing models of superconductivity

    SciTech Connect

    Gray, P.A.

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  2. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1997-05-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, the authors describe these different parallel algorithms and report on computational experiments that they have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. The authors focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but they also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional fast Fourier transforms (FFTs) and other parallel transforms.

  3. Parallel operation of microhollow cathode discharges

    SciTech Connect

    Shi, W.; Schoenbach, K.H.

    1998-12-31

    The dc current-voltage characteristics of microhollow cathode discharges have, in certain ranges of the discharge current, a positive slope. In these current ranges it should be possible to operate multiple discharges in parallel without individual ballast and to use them as flat panel excimer lamps or large area plasma cathodes. In order to verify this hypothesis they have studied the parallel operation of two microhollow cathode discharges of 100 μm hole diameter in argon at pressures from 100 Torr to 800 Torr. Stable dc operation of the two discharges, without individual ballast, was obtained if the voltage-current characteristics of the individual discharges had a positive slope greater than 10 V/mA over a voltage range of more than 5% of the sustaining voltage. Small variations in the discharge geometry generated during fabrication of cathode holes or caused by thermal effects during discharge operation are detrimental to parallel operation. Varying the distance between the discharges from twice the hole diameter to approximately five times did not affect the parallel operation. The total current was always slightly larger than the sum of the currents measured for the individual discharges, indicating coupling between the two discharges. In order to obtain parallel operation even for microhollow cathode geometries with large variations, they have studied the effect of distributed resistive ballast on the operation of such discharges.

  4. Parallel execution of Lisp programs. Doctoral thesis

    SciTech Connect

    Weening, J.S.

    1989-06-01

    This dissertation considers several issues in the execution of Lisp programs on shared-memory multiprocessors. An overview of constructs for explicit parallelism in Lisp is first presented. The problem of partitioning a program into processes and scheduling these processes is then described, and a number of methods for performing these tasks are proposed. These include cutting off process creation based on properties of the computation tree of the program, and basing partitioning decisions on the state of the system at runtime instead of the program. An experimental study of these methods has been performed using a simulator for parallel Lisp. This is followed by a description of the experiments that were performed and an analysis of the results. Two programs are used as illustrations: a Fast Fourier Transform, which has an abundance of parallelism, and the Cocke-Younger-Kasami parsing algorithm, for which good speedup is not as easy to obtain. The difficulty of using cutoff-based partitioning methods, and the differences between various scheduling methods, are shown. A combination of partitioning and scheduling methods which we call dynamic partitioning is analyzed in more detail. This method is based on examining the machine's runtime state; it requires that the programmer only identify parallelism in the program, without deciding which potential parallelism is actually useful. We conclude that for programs whose computation trees have small height relative to their total size, dynamic partitioning can achieve asymptotically minimal overhead in the cost of process creation.
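
    The cutoff idea, creating parallel work only near the top of the computation tree and running the rest serially, can be illustrated outside Lisp. The sketch below uses OpenMP tasks in C on a toy recursive computation with a depth cutoff; it is an analogy to, not a reproduction of, the dissertation's partitioning methods, and CUTOFF_DEPTH is a hypothetical parameter.

        /* Toy analogy of cutoff-based partitioning: spawn OpenMP tasks only while
         * the recursion depth is above a cutoff, then fall back to serial code.
         * Compile with e.g. gcc -fopenmp. */
        #include <stdio.h>
        #include <omp.h>

        #define CUTOFF_DEPTH 4   /* hypothetical cutoff */

        static long fib(int n, int depth)
        {
            if (n < 2) return n;
            long a, b;
            if (depth < CUTOFF_DEPTH) {
                #pragma omp task shared(a)
                a = fib(n - 1, depth + 1);
                b = fib(n - 2, depth + 1);
                #pragma omp taskwait
            } else {                     /* below the cutoff: plain serial recursion */
                a = fib(n - 1, depth + 1);
                b = fib(n - 2, depth + 1);
            }
            return a + b;
        }

        int main(void)
        {
            long r = 0;
            #pragma omp parallel
            #pragma omp single
            r = fib(30, 0);
            printf("fib(30) = %ld\n", r);
            return 0;
        }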

  5. Regulation of collagen synthesis by ascorbic acid.

    PubMed Central

    Murad, S; Grove, D; Lindberg, K A; Reynolds, G; Sivarajah, A; Pinnell, S R

    1981-01-01

    After prolonged exposure to ascorbate, collagen synthesis in cultured human skin fibroblasts increased approximately 8-fold with no significant change in synthesis of noncollagen protein. This effect of ascorbate appears to be unrelated to its cofactor function in collagen hydroxylation. The collagenous protein secreted in the absence of added ascorbate was normal in hydroxylysine but was mildly deficient in hydroxyproline. In parallel experiments, lysine hydroxylase (peptidyllysine, 2-oxoglutarate:oxygen 5-oxidoreductase, EC 1.14.11.4) activity increased 3-fold in response to ascorbate administration whereas proline hydroxylase (prolyl-glycyl-peptide, 2-oxoglutarate:oxygen oxidoreductase, EC 1.14.11.2) activity decreased considerably. These results suggest that collagen polypeptide synthesis, posttranslational hydroxylations, and activities of the two hydroxylases are independently regulated by ascorbate. PMID:6265920

  6. Metal-Acetylacetonate Synthesis Experiments: Which Is Greener?

    ERIC Educational Resources Information Center

    Ribeiro, M. Gabriela T. C.; Machado, Adélio A. S. C.

    2011-01-01

    A procedure for teaching green chemistry through laboratory experiments is presented in which students are challenged to use the 12 principles of green chemistry to review and modify synthesis protocols to improve greenness. A global metric, green star, is used in parallel with green chemistry mass metrics to evaluate the improvement in greenness.…

  7. Algebraic techniques for automatic detection of parallelism

    SciTech Connect

    Torgersen, T.C.

    1989-01-01

    A restructuring transformation is described which can be used to parallelize recurrence relations. The transformation is based on the hyperplane (or wavefront) method, but extends the applicability of the method to irregularly structured recurrences and introduces an algorithm for solving a restricted class of symbolic linear inequalities which arise from such irregularly-structured recurrences. The algorithm for solving this class of linear inequalities introduces several sub-problems and a solution to each is presented. First, a sub-algorithm is developed for deciding the existence of a valid parallel schedule. Second, various methods for finding a characterization of the iteration space in terms of its extreme points and directions of recession are discussed. Third, a method is given for finding a minimal representation of the set of valid parallel schedules. Finally, some approaches to the problem of choosing an optimal schedule are considered.
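
    The hyperplane (wavefront) idea can be seen on a simple, regular 2-D recurrence: all points on an anti-diagonal i + j = d depend only on the previous anti-diagonal and can be updated in parallel. The OpenMP sketch below shows this for a[i][j] = a[i-1][j] + a[i][j-1]; the record's actual contribution, handling irregularly structured recurrences via symbolic linear inequalities, is not reproduced here.

        /* Wavefront (hyperplane) parallelization of a simple 2-D recurrence:
         * points with the same i + j are independent and updated in parallel. */
        #include <stdio.h>
        #include <omp.h>

        #define NR 6
        #define NC 8

        int main(void)
        {
            static double a[NR][NC];
            for (int i = 0; i < NR; i++) a[i][0] = 1.0;   /* boundary values */
            for (int j = 0; j < NC; j++) a[0][j] = 1.0;

            for (int d = 2; d <= (NR - 1) + (NC - 1); d++) {   /* sweep hyperplanes i+j = d */
                #pragma omp parallel for
                for (int i = 1; i < NR; i++) {
                    int j = d - i;
                    if (j >= 1 && j < NC)
                        a[i][j] = a[i - 1][j] + a[i][j - 1];
                }
            }
            printf("a[%d][%d] = %.0f\n", NR - 1, NC - 1, a[NR - 1][NC - 1]);
            return 0;
        }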

  8. Parallel algorithms for optical digital computers

    SciTech Connect

    Huang, A.

    1983-01-01

    Conventional computers suffer from several communication bottlenecks which fundamentally limit their performance. These bottlenecks are characterised by an address-dependent sequential transfer of information which arises from the need to time-multiplex information over a limited number of interconnections. An optical digital computer based on a classical finite state machine can be shown to be free of these bottlenecks. Such a processor would be unique since it would be capable of modifying its entire state space each cycle while conventional computers can only alter a few bits. New algorithms are needed to manage and use this capability. A technique based on recognising a particular symbol in parallel and replacing it in parallel with another symbol is suggested. Examples using this parallel symbolic substitution to perform binary addition and binary incrementation are presented. Applications involving Boolean logic, functional programming languages, production rule driven artificial intelligence, and molecular chemistry are also discussed. 12 references.
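
    The flavor of this parallel symbolic substitution can be suggested with an ordinary carry-save addition loop in Python: the same local rule is applied at every bit position simultaneously, and the rules are reapplied until no carry pattern remains. This is only an analogy for the principle; the report's substitution rules are recognized and replaced optically and differ in detail.

        # Binary addition as repeated parallel rewriting: at every bit position the
        # "sum" rule (XOR) and the "carry" rule (AND, shifted left) are applied at
        # once, then the rules are reapplied until no carries remain.
        def substitute_add(x: int, y: int) -> int:
            while y != 0:
                partial_sum = x ^ y
                carry = (x & y) << 1
                x, y = partial_sum, carry
            return x

        assert substitute_add(0b1011, 0b0110) == 0b10001     # 11 + 6 = 17
        assert substitute_add(7, 1) == 8                     # incrementation as a special case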

  9. Parallel Harness for Informatic Stream Hashing

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  10. Parallelization of the Lagrangian Particle Dispersion Model

    SciTech Connect

    Buckley, R.L.; O'Steen, B.L.

    1997-08-01

    An advanced stochastic Lagrangian Particle Dispersion Model (LPDM) is used by the Atmospheric Technologies Group (ATG) to simulate contaminant transport. The model uses time-dependent three-dimensional fields of wind and turbulence to determine the location of individual particles released into the atmosphere. This report describes modifications to LPDM using the Message Passing Interface (MPI) which allow for execution in a parallel configuration on the Cray Supercomputer facility at the SRS. Use of a parallel version allows many more particles to be released in a given simulation, with little or no increase in computational time. This lowers the minimum resolvable concentration levels by more than an order of magnitude without ad hoc averaging schemes or reduced spatial resolution. The general changes made to LPDM are discussed, and a series of tests comparing the serial (single-processor) and parallel versions of the code is presented.
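
    A minimal sketch (assuming the mpi4py bindings) of the embarrassingly parallel structure the report exploits: each MPI rank transports its own share of the released particles independently, so adding ranks allows many more particles with little increase in wall-clock time. The random-walk step and all names are illustrative placeholders, not the LPDM physics or its actual decomposition.

        # Run with e.g.:  mpirun -np 8 python particles.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        TOTAL_PARTICLES = 1_000_000
        local_n = TOTAL_PARTICLES // size + (rank < TOTAL_PARTICLES % size)

        rng = np.random.default_rng(seed=rank)        # independent random stream per rank
        positions = np.zeros((local_n, 3))

        for step in range(100):                       # toy stochastic transport steps
            positions += rng.normal(scale=1.0, size=positions.shape)

        # collect a per-rank summary on rank 0 (e.g., mean displacement of the local plume)
        means = comm.gather(positions.mean(axis=0), root=0)
        if rank == 0:
            print("per-rank mean displacements:", means)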

  11. Parallelization of ITOUGH2 using PVM

    SciTech Connect

    Finsterle, Stefan

    1998-10-01

    ITOUGH2 inversions are computationally intensive because the forward problem must be solved many times to evaluate the objective function for different parameter combinations or to numerically calculate sensitivity coefficients. Most of these forward runs are independent from each other and can therefore be performed in parallel. Message passing based on the Parallel Virtual Machine (PVM) system has been implemented in ITOUGH2 to enable parallel processing of ITOUGH2 jobs on a heterogeneous network of Unix workstations. This report describes the PVM system and its implementation in ITOUGH2. Instructions are given for installing PVM, compiling ITOUGH2-PVM for use on a workstation cluster, preparing an ITOUGH2 input file under PVM, and executing an ITOUGH2-PVM application. Examples are discussed, demonstrating the use of ITOUGH2-PVM.
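
    The master/worker pattern described above can be sketched in Python, with multiprocessing standing in for PVM: the forward model is evaluated independently for each perturbed parameter set (here to build finite-difference sensitivity coefficients), and the independent runs are farmed out to worker processes. The forward_model function is a hypothetical placeholder, not the TOUGH2 simulator.

        # Farm independent forward runs out to workers to form a finite-difference Jacobian.
        from multiprocessing import Pool
        import numpy as np

        def forward_model(params):
            # stand-in for one forward simulation returning predicted observations
            return np.array([params[0] ** 2 + params[1], params[0] * params[1]])

        def sensitivities(base_params, eps=1e-4):
            base = np.asarray(base_params, dtype=float)
            perturbed = [base.copy() for _ in base]
            for i, p in enumerate(perturbed):
                p[i] += eps
            with Pool() as pool:                              # independent runs in parallel
                runs = pool.map(forward_model, [base, *perturbed])
            return np.column_stack([(r - runs[0]) / eps for r in runs[1:]])

        if __name__ == "__main__":
            print(sensitivities([2.0, 3.0]))                  # ~[[4, 1], [3, 2]]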

  12. PADRE: a parallel asynchronous data routing environment

    SciTech Connect

    Gunney, B; Quinlan, D

    2001-01-08

    Increasingly in industry, software design and implementation is object-oriented, developed in C++ or Java, and relies heavily on pre-existing software libraries (e.g. the Microsoft Foundation Classes for C++, the Java API for Java). A similar but more tentative trend is developing in high-performance parallel scientific computing. The transition from serial to parallel application development considerably increases the need for library support: task creation and management, data distribution and dynamic redistribution, and inter-process and inter-processor communication and synchronization must be supported. PADRE is a library to support the interoperability of parallel applications. We feel there is a significant need for just such a tool to complement the many domain-specific application frameworks presently available, which are generally not interoperable with one another.

  13. Parallel Harness for Informatic Stream Hashing

    SciTech Connect

    Steve Plimpton, Tim Shead

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  14. New parallel SOR method by domain partitioning

    SciTech Connect

    Xie, D.; Adams, L.

    1999-07-01

    In this paper the authors propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning and interprocessor data communication techniques. They prove that the PSOR method has the same asymptotic rate of convergence as the Red/Black (R/B) SOR method for the five-point stencil on both strip and block partitions, and as the four-color (R/B/G/O) SOR method for the nine-point stencil on strip partitions. They also demonstrate the parallel performance of the PSOR method on four different MIMD multiprocessors (a KSR1, an Intel Delta, a Paragon, and an IBM SP2). Finally, they compare the parallel performance of PSOR, R/B SOR, and R/B/G/O SOR. Numerical results on the Paragon indicate that PSOR is more efficient than R/B SOR and R/B/G/O SOR in both computation and interprocessor data communication.
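
    For reference, a serial Python sketch of the Red/Black SOR baseline mentioned above for the five-point stencil: points of one color depend only on points of the other color, so each half-sweep could be updated fully in parallel. The PSOR method instead partitions the domain into strips or blocks, with each processor sweeping its subdomain and exchanging only boundary values; that decomposition and the communication code are not reproduced here.

        # Red/Black SOR for the five-point Laplacian stencil on a unit square.
        import numpy as np

        def red_black_sor(u, f, h, omega=1.7, sweeps=200):
            n = u.shape[0]
            for _ in range(sweeps):
                for color in (0, 1):                          # red points, then black points
                    for i in range(1, n - 1):
                        for j in range(1, n - 1):
                            if (i + j) % 2 == color:
                                gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                             u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])
                                u[i, j] += omega * (gs - u[i, j])
            return u

        n = 17
        u = np.zeros((n, n)); u[0, :] = 1.0                   # one hot Dirichlet edge
        f = np.zeros((n, n))
        print(red_black_sor(u, f, 1.0 / (n - 1))[n // 2, n // 2])   # roughly 0.25 at the center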

  15. Loop parallelism on Tera MTA using SISAL

    SciTech Connect

    Mitrovic, S.

    1995-11-01

    The difficulty of programming parallel computers has impeded their widespread use. The problems are caused by existing hardware and software tools. The software problems on shared-memory and vector computers can be solved by using deterministic high-performance functional languages like SISAL. Distributed-memory computers have even more obstacles than shared-memory parallel machines. Research indicates that multithreaded architectures can hide the long latency of distributed memories and that they can solve the problems of locality. Tera's MTA multiprocessor is based on the concept of multithreading and provides the programmer with a real shared-memory model. This paper investigates the performance of parallel loops written in SISAL and executed on the Tera MTA using the Livermore Loops benchmarks.

  16. Simulating Billion-Task Parallel Programs

    SciTech Connect

    Perumalla, Kalyan S; Park, Alfred J

    2014-01-01

    In simulating large parallel systems, bottom-up approaches exercise detailed hardware models with effects from simplified software models or traces, whereas top-down approaches evaluate the timing and functionality of detailed software models over coarse hardware models. Here, we focus on the top-down approach and significantly advance the scale of the simulated parallel programs. Via the direct execution technique combined with parallel discrete event simulation, we stretch the limits of the top-down approach by simulating message passing interface (MPI) programs with millions of tasks. Using a timing-validated benchmark application, proof-of-concept scaling to over 0.22 billion virtual MPI processes is achieved on 216,000 cores of a Cray XT5 supercomputer, with a multiplexing ratio of 1024 simulated tasks per real task, representing one of the largest direct execution simulations to date.

  17. Java Parallel Secure Stream for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2001-09-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package will be discussed.
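
    The core idea, sending partitions of the data over several TCP connections concurrently, can be sketched in Python over the loopback interface. JPARSS itself is a Java package and also layers X.509 single sign-on and SSL on top, which is omitted here; all names and sizes are illustrative.

        # Split a payload into partitions and send each over its own TCP stream in parallel.
        import socket, threading

        NUM_STREAMS = 4
        payload = bytes(4 * 1024 * 1024)                      # 4 MiB, divisible by NUM_STREAMS
        received = [b""] * NUM_STREAMS

        def serve(listener, idx):
            conn, _ = listener.accept()
            chunks = []
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                chunks.append(data)
            received[idx] = b"".join(chunks)
            conn.close()

        def send_partition(port, part):
            with socket.create_connection(("127.0.0.1", port)) as s:
                s.sendall(part)

        listeners = []
        for _ in range(NUM_STREAMS):
            l = socket.socket()
            l.bind(("127.0.0.1", 0)); l.listen(1)
            listeners.append(l)
        ports = [l.getsockname()[1] for l in listeners]

        servers = [threading.Thread(target=serve, args=(l, i)) for i, l in enumerate(listeners)]
        size = len(payload) // NUM_STREAMS
        senders = [threading.Thread(target=send_partition,
                                    args=(ports[i], payload[i * size:(i + 1) * size]))
                   for i in range(NUM_STREAMS)]
        for t in servers + senders: t.start()
        for t in servers + senders: t.join()
        for l in listeners: l.close()
        assert b"".join(received) == payload                  # partitions reassembled in order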

  18. National Combustion Code Parallel Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Benyo, Theresa (Technical Monitor)

    2002-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.

  19. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation caused by user interference, with lower implementation complexity than the maximum-likelihood technique. The scheme relies on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel, in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
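
    A small numpy sketch of the one-stage, hard-decision, equal-power case for synchronous CDMA, to show the structure being analyzed: every user's interference estimate is formed from the other users' tentative decisions and subtracted simultaneously from the matched-filter outputs. Code lengths, user count, and noise level are illustrative; the report's null-zone and soft-decision devices and the multistage extension are not reproduced.

        # One-stage parallel interference cancellation with hard tentative decisions.
        import numpy as np

        rng = np.random.default_rng(0)
        K, N = 6, 31                                           # users, chips per bit
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # unit-energy spreading codes
        b = rng.choice([-1.0, 1.0], size=K)                    # transmitted bits
        r = S @ b + 0.1 * rng.standard_normal(N)               # received chip vector (equal powers)

        y = S.T @ r                                            # matched-filter (conventional) outputs
        R = S.T @ S                                            # code cross-correlation matrix
        b_tent = np.sign(y)                                    # tentative decisions, all users at once

        # subtract, for every user in parallel, the interference implied by the
        # tentative decisions of all *other* users (diagonal of R zeroed out)
        interference = (R - np.diag(np.diag(R))) @ b_tent
        b_pic = np.sign(y - interference)

        print("conventional bit errors:", int(np.sum(b_tent != b)))
        print("after 1-stage PIC      :", int(np.sum(b_pic != b)))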

  20. Numerical wind tunnel and parallel FORTRAN

    NASA Astrophysics Data System (ADS)

    Nakamura, Takashi; Yoshida, Masahiro; Fukuda, Masahiro; Takamura, Moriyuki; Okada, Shin

    1992-12-01

    Computational Fluid Dynamics (CFD) requires computers 100 times faster than the Fujitsu VP400 in effective speed. Such a processor can suitably be called a 'Numerical Wind Tunnel'. The Numerical Wind Tunnel (NWT) is a parallel computer system with a distributed-memory architecture, composed of vector processors connected through a crossbar network. In this report, the system configuration, processing element, interconnection network, and communication mechanism of the NWT are described. The fundamental functions that the language-processor system has to provide in order to realize parallel execution on the NWT, namely global data, parallel execution of DO loops, and data decomposition and allocation, are also described. FORTRAN 77 is chosen as the basic programming language for the NWT, and some compiler directives are added to make effective use of the NWT.