Science.gov

Sample records for solution-phase parallel synthesis

  1. Sucrose and KF quenching system for solution phase parallel synthesis.

    PubMed

    Chavan, Sunil; Watpade, Rahul; Toche, Raghunath

    2016-01-01

    A KF/sucrose (table sugar) system was exploited for quenching in solution-phase parallel synthesis. Excess electrophiles were covalently trapped by the hydroxyl groups of sucrose, and the resulting polar sucrose derivatives were soluble in water. Potassium fluoride first converts excess electrophilic reagents such as acid chlorides, sulfonyl chlorides, and isocyanates to the corresponding fluorides, which are less susceptible to hydrolysis; sucrose then traps these fluorides, and the water-soluble adducts are dissolved in water and thereby removed from the reaction mixture. Excess acid chlorides, sulfonyl chlorides, and isocyanates were quenched successfully in this way to give pure products in excellent yields. PMID:27462506

  2. Parallel solution-phase synthesis of 3,6,7-trisubstituted 4(3H)-quinazolinones and evaluation of their antitumor activities against human cancer.

    PubMed

    Wu, Hao; Xie, Xilei; Liu, Gang

    2010-05-10

    Three diversity points of the 4(3H)-quinazolinone scaffold are introduced at the 3-, 6-, and 7-positions using an efficient parallel solution-phase synthetic method. A one-pot synthesis was developed that gave the key intermediate in high yield. Five hit compounds exhibited favorable activities against a panel of human tumor cell lines, from which preliminary structure-activity relationships were outlined. PMID:20196566

  3. Solution-phase microwave assisted parallel synthesis, biological evaluation and in silico docking studies of N,N'-disubstituted thioureas derived from 3-chlorobenzoic acid.

    PubMed

    Rauf, Muhammad Khawar; Zaib, Sumera; Talib, Ammara; Ebihara, Masahiro; Badshah, Amin; Bolte, Michael; Iqbal, Jamshed

    2016-09-15

    A facile and robust microwave-assisted solution-phase parallel synthesis protocol was used to develop a 38-member library of N,N'-disubstituted thiourea analogues (1-38) under an identical set of conditions. The reaction time for the synthesis of the N,N'-disubstituted thiourea analogues was drastically reduced from the 8-12 h reported for conventional methods to only 1.5-2.0 min. All the derivatives (1-38) were characterized by elemental analysis in combination with FT-IR and (1)H and (13)C NMR spectroscopy; single-crystal XRD analysis was also performed. These compounds were screened for their in vitro urease inhibition activities. The majority of the compounds exhibited potent urease inhibition; the most significant activity was found for 16, with an IC50 value of 1.23 ± 0.1 μM. Furthermore, the synthesized compounds were screened for their cytotoxic potential against lung cancer cell lines. Cell culture studies demonstrated significant toxicity of the compounds toward these cell lines, and the levels of toxicity varied with the side groups present. Molecular docking studies of the most potent inhibitors were performed to identify their probable binding modes in the active site of the urease enzyme. These compounds hold considerable potential and significance for further investigation. PMID:27480030

  4. Solution-phase parallel synthesis of a pharmacophore library of HUN-7293 analogues: a general chemical mutagenesis approach to defining structure-function properties of naturally occurring cyclic (depsi)peptides.

    PubMed

    Chen, Yan; Bilban, Melitta; Foster, Carolyn A; Boger, Dale L

    2002-05-15

    HUN-7293 (1), a naturally occurring cyclic heptadepsipeptide, is a potent inhibitor of cell adhesion molecule expression (VCAM-1, ICAM-1, E-selectin), the overexpression of which is characteristic of chronic inflammatory diseases. Representative of a general approach to defining structure-function relationships of such cyclic (depsi)peptides, the parallel synthesis and evaluation of a complete library of key HUN-7293 analogues are detailed enlisting solution-phase techniques and simple acid-base liquid-liquid extractions for isolation and purification of intermediates and final products. Significant to the design of the studies and unique to solution-phase techniques, the library was assembled superimposing a divergent synthetic strategy onto a convergent total synthesis. An alanine scan and N-methyl deletion of each residue of the cyclic heptadepsipeptide identified key sites responsible for or contributing to the biological properties. The simultaneous preparation of a complete set of individual residue analogues further simplifying the structure allowed an assessment of each structural feature of 1, providing a detailed account of the structure-function relationships in a single study. Within this pharmacophore library prepared by systematic chemical mutagenesis of the natural product structure, simplified analogues possessing comparable potency and, in some instances, improved selectivity were identified. One potent member of this library proved to be an additional natural product in its own right, which we have come to refer to as HUN-7293B (8), being isolated from the microbial strain F/94-499709. PMID:11996584

  5. Solution-Phase Synthesis of Dipeptides: A Capstone Project That Employs Key Techniques in an Organic Laboratory Course

    ERIC Educational Resources Information Center

    Marchetti, Louis; DeBoef, Brenton

    2015-01-01

    A contemporary approach to the synthesis and purification of several UV-active dipeptides has been developed for the second-year organic laboratory. This experiment exposes students to the important technique of solution-phase peptide synthesis and allows an instructor to highlight the parallel between what they are accomplishing in the laboratory…

  6. Solution-phase synthesis of nanomaterials at low temperature

    NASA Astrophysics Data System (ADS)

    Zhu, Yongchun; Qian, Yitai

    2009-01-01

    This paper reviews the solution-phase synthesis of nanoparticles via several low-temperature routes, such as room-temperature reactions, wave-assisted synthesis (γ-irradiation and sonochemical routes), direct heating at low temperatures, and hydrothermal/solvothermal methods. A number of strategies were developed to control the shape, size, and dispersion of the nanostructures. Using diethylamine or n-butylamine as the solvent, semiconductor nanorods were obtained. By hydrothermal treatment of amorphous colloids, Bi2S3 nanorods and Se nanowires were obtained. CdS nanowires were prepared in the presence of polyacrylamide. ZnS nanowires were obtained using a liquid crystal. Poly(vinyl acetate) tubules acted as both nanoreactor and template for CdSe nanowire growth. Assisted by the surfactant sodium dodecylbenzenesulfonate (SDBS), nickel nanobelts were synthesized. In addition, Ag nanowires, Te nanotubes, and ZnO nanorod arrays could be prepared without adding any additives or templates.

  7. Aerosol spray pyrolysis & solution phase synthesis of nanostructures

    NASA Astrophysics Data System (ADS)

    Zhang, Hongwang

    This dissertation focuses on the synthesis of nanomaterials by both solution phase and gas phase methods. By the solution phase method, we demonstrate the synthesis of Au/CdS binary hybrid nanoparticles and the Au-induced growth of CdS nanorods. At higher reaction temperature, extremely uniform CdS nanorods were obtained. The size of the Au seed nanoparticles has an important influence on the length and diameter of the nanorods. In addition, preparation of peanut-like FePt-CdS hybrid nanoparticles by spontaneous epitaxial nucleation and growth of CdS onto FePt-seed nanoparticles in high-temperature organic solution is reported. The FePt-CdS hybrid nanoparticles reported here are an example of a bifunctional nanomaterial that combines size-dependent magnetic and optical properties. In the gas phase method, a spray pyrolysis aerosol synthesis method was used to produce tellurium dioxide nanoparticles and zinc sulfide nanoparticles. Tellurite glasses (amorphous TeO2 based materials) have two useful optical properties, high refractive index and high optical nonlinearity, that make them attractive for a range of applications. In the work presented here, TeO2 nanoparticles were prepared by spray pyrolysis of an aqueous solution of telluric acid, Te(OH)6. This laboratory-scale process is capable of producing up to 80 mg/hr of amorphous TeO2-nanoparticles with primary particle diameters from 10 to 40 nm, and allows their synthesis in significant quantities from an inexpensive and environmentally friendly precursor. Furthermore, both Er3+ doped and Er3+ and Yb3+ co-doped tellurium dioxide nanoparticles were synthesized by spray pyrolysis of an aqueous mixture of telluric acid with erbium/ytterbium salts, which exhibit the infrared to green visible upconversion phenomena. ZnS nanoparticles (NPs) were prepared by spray pyrolysis using zinc diethyldithiocarbamate as a single-source precursor. The home-built scanning mobility particle spectrometer (SMPS) is a useful tool for

  8. Solution-phase automated synthesis of tripeptide derivatives.

    PubMed

    Kuroda, N; Hattori, T; Kitada, C; Sugawara, T

    2001-09-01

    An improved general method for the automated synthesis of tripeptides was developed, in which methanesulfonic acid (MSA) was used in place of trifluoroacetic acid (TFA), thus making it possible to avoid: 1) corrosion of the apparatus by strong acid vapor, 2) formation of emulsions, and 3) use of the restricted solvent dichloromethane. As an application of the automated synthesis apparatus, 216 fragment tripeptide derivatives were synthesized systematically using the MSA method, in excellent yield and with increased efficiency. PMID:11558600

  9. 2,6-Diketopiperazines from amino acids, from solution-phase to solid-phase organic synthesis.

    PubMed

    Perrotta, E; Altamura, M; Barani, T; Bindi, S; Giannotti, D; Harmat, N J; Nannicini, R; Maggi, C A

    2001-01-01

    A method to prepare 1,3-disubstituted 2,6-diketopiperazines (2,6-DKP) as useful heterocyclic library scaffolds in the search for new leads for drug discovery is described. The method can be used under both solution-phase and solid-phase conditions. In the key step of the synthesis, the imido portion of the new molecule is formed in solution through intramolecular cyclization, under basic conditions, of a secondary amide nitrogen on a benzyl ester. A Wang resin carboxylic ester is used as the acylating agent under solid-phase conditions, allowing the cyclization to take place with simultaneous cleavage of the product from the resin ("cyclocleavage"). The synthetic method worked well with several pairs of amino acids, independently of their configuration, and was used for the parallel synthesis of a series of fully characterized compounds. The use of iterative conditions in the solid phase (repeated addition of fresh solvent and potassium carbonate to the resin after filtering out the product-containing solution) allowed us to keep the diastereoisomer content below the detection limit of HPLC and (1)H NMR (200 MHz). PMID:11549363

  10. Automated Solution-Phase Synthesis of β-1,4-Mannuronate and β-1,4-Mannan

    PubMed Central

    Tang, Shu-Lun; Pohl, Nicola L B

    2016-01-01

    The first automated solution-phase synthesis of β-1,4-mannuronate and β-1,4-mannan oligomers has been accomplished by using a β-directing C-5 carboxylate strategy. By utilizing fluorous-tag-assisted purification after repeated reaction cycles, β-1,4-mannuronate was synthesized up to a hexasaccharide with limited loading of glycosyl donor (up to 3.5 equiv) for each glycosylation cycle, owing to the homogeneous solution-phase reaction conditions. After a global reduction of the uronates, the β-1,4-mannan hexasaccharide was obtained, thereby demonstrating a new approach to β-mannan synthesis. PMID:25955886

  12. Solution-phase-peptide synthesis via the group-assisted purification (GAP) chemistry without using chromatography and recrystallization.

    PubMed

    Wu, Jianbin; An, Guanghui; Lin, Siqi; Xie, Jianbo; Zhou, Wei; Sun, Hao; Pan, Yi; Li, Guigen

    2014-02-01

    The solution-phase synthesis of N-protected amino acids and peptides has been achieved through Group-Assisted Purification (GAP) chemistry, avoiding disadvantages of other methods such as difficult scale-up and the expense of solid and soluble polymer supports. GAP synthesis reduces the use of solvents, silica gel, energy, and labor. In addition, the GAP auxiliary can be conveniently recovered for reuse, is environmentally benign, and substantially reduces waste production in academic and industrial laboratories. PMID:24336500

  13. A novel solution-phase route for the synthesis of crystalline silver nanowires

    SciTech Connect

    Liu Yang; Chu Ying; Yang Likun; Han Dongxue; Lue Zhongxian

    2005-10-06

    A unique solution-phase route was devised to synthesize crystalline Ag nanowires with high aspect ratio (8-10 nm in diameter and up to 10 μm in length) by the reduction of AgNO3 with vitamin C in SDS/ethanol solution. The resultant nanoproducts were characterized by transmission electron microscopy (TEM), X-ray diffraction (XRD), and electron diffraction (ED). A soft-template mechanism was proposed to explain the formation of the metallic Ag nanowires.

  14. An Efficient Solution-Phase Synthesis of 4,5,7-Trisubstituted Pyrrolo[3,2-d]pyrimidines

    PubMed Central

    Zhang, Weihe; Liu, Jing; Stashko, Michael A.; Wang, Xiaodong

    2013-01-01

    We have developed an efficient and robust route to synthesize 4,5,7-trisubstituted pyrrolo[3,2-d]pyrimidines as potent kinase inhibitors. This solution-phase synthesis features an SNAr substitution reaction, a cross-coupling reaction, a one-pot reduction/reductive amination, and an N-alkylation reaction. These reactions proceed rapidly in high yields and have broad substrate scope. A variety of groups can be selectively introduced into the N5 and C7 positions of the 4,5,7-trisubstituted pyrrolopyrimidines at a late stage of the synthesis, thereby providing a highly efficient approach to explore the structure-activity relationships of pyrrolopyrimidine derivatives. Four synthetic analogs were profiled against a panel of 48 kinases, and a new and selective FLT3 inhibitor, 9, was identified. PMID:23181516

  15. Automated fluorous-assisted solution-phase synthesis of β-1,2-, 1,3-, and 1,6-mannan oligomers.

    PubMed

    Tang, Shu-Lun; Pohl, Nicola L B

    2016-07-22

    Automated solution-phase syntheses of β-1,2-, 1,3-, and 1,6-mannan oligomers have been accomplished by applying a β-directing C-5 carboxylate strategy. Fluorous-tag-assisted purification after each reaction cycle allowed the synthesis of short β-mannan oligomers with limited loading of glycosyl donor (as low as 3.0 equivalents for each glycosylation cycle). This study showed the capability of the automated solution-phase synthesis protocol for synthesizing various challenging glycosides, including use of a C-5 ester as a protecting group that could be converted under reductive conditions to a hydroxymethyl group for chain extension. PMID:27155895

  16. Automated Solution-Phase Synthesis of Insect Glycans to Probe the Binding Affinity of Pea Enation Mosaic Virus.

    PubMed

    Tang, Shu-Lun; Linz, Lucas B; Bonning, Bryony C; Pohl, Nicola L B

    2015-11-01

    Pea enation mosaic virus (PEMV), a plant RNA virus transmitted exclusively by aphids, causes disease in multiple food crops. However, the aphid-virus interactions required for disease transmission are poorly understood. For virus transmission, PEMV binds to a heavily glycosylated receptor aminopeptidase N in the pea aphid gut and is transcytosed across the gut epithelium into the aphid body cavity prior to release in saliva as the aphid feeds. To investigate the role of glycans in PEMV-aphid interactions and explore the possibility of viral control through blocking a glycan interaction, we synthesized insect N-glycan terminal trimannosides by automated solution-phase synthesis. The route features a mannose building block with C-5 ester enforcing a β-linkage, which also provides a site for subsequent chain extension. The resulting insect N-glycan terminal trimannosides with fluorous tags were used in a fluorous microarray to analyze binding with fluorescein isothiocyanate-labeled PEMV; however, no specific binding between the insect glycan and PEMV was detected. To confirm these microarray results, we removed the fluorous tag from the trimannosides for isothermal titration calorimetry studies with unlabeled PEMV. The ITC studies confirmed the microarray results and suggested that this particular glycan-PEMV interaction is not involved in virus uptake and transport through the aphid. PMID:26457763

  17. Solution-Phase Perfluoroalkylation of C60 Leads to Efficient and Selective Synthesis of Bis-Perfluoroalkylated Fullerenes

    PubMed Central

    Kuvychko, Igor V.; Strauss, Steven H.; Boltalina, Olga V.

    2012-01-01

    A solution-phase perfluoroalkylation of C60 with a series of RFI reagents was studied. The effects of molar ratio of the reagents, reaction time, and presence of copper metal promoter on fullerene conversion and product composition were evaluated. Ten aliphatic and aromatic RFI reagents were investigated (CF3I, C2F5I, n-C3F7I, i-C3F7I, n-C4F9I, (CF3)(C2F5)CFI, n-C8F17I, C6F5CF2I, C6F5I, and 1,3-(CF3)2C6F3I) and eight of them (except for C6F5I and 1,3-(CF3)2C6F3I) were found to add the respective RF groups to C60 in solution. Efficient and selective synthesis of C60(RF)2 derivatives was developed. PMID:25843973

  18. Solution-phase synthesis and photoluminescence characterization of quaternary Cu2ZnSnS4 nanocrystals

    SciTech Connect

    Hamanaka, Yasushi; Tsuzuki, Masakazu; Ozawa, Kohei; Kuzuya, Toshihiro

    2013-12-04

    Cu2ZnSnS4 (CZTS) nanocrystals were synthesized via a solution-phase route and their lattice defects were investigated by photoluminescence measurements. Ionization energies of the defect levels were estimated to be 10 and 72 meV from the thermal quenching behavior of the photoluminescence spectra. These values are quite different from those experimentally estimated for vapor-grown CZTS films and crystals and theoretically calculated for bulk CZTS. The results indicate that the defects are characteristic of CZTS nanocrystals synthesized in the solution phase.
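    The abstract does not state the fitting model. Assuming the usual analysis, defect activation energies of this kind are extracted from the temperature dependence of the integrated photoluminescence intensity with an Arrhenius-type thermal-quenching fit, sketched here for two non-radiative channels:

      I(T) = I_0 / [1 + C_1 exp(-E_1 / k_B T) + C_2 exp(-E_2 / k_B T)]

    where the fitted activation energies E_1 and E_2 would correspond to the reported 10 and 72 meV levels and C_1, C_2 are fitting constants.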

  19. Diorganotin(IV) N-acetyl-L-cysteinate complexes: synthesis, solid state, solution phase, DFT and biological investigations.

    PubMed

    Pellerito, Lorenzo; Prinzivalli, Cristina; Casella, Girolamo; Fiore, Tiziana; Pellerito, Ornella; Giuliano, Michela; Scopelliti, Michelangelo; Pellerito, Claudia

    2010-07-01

    Diorganotin(IV) complexes of N-acetyl-L-cysteine (H(2)NAC; (R)-2-acetamido-3-sulfanylpropanoic acid) have been synthesized and their solid- and solution-phase structural configurations investigated by FTIR, Mössbauer, and (1)H, (13)C, and (119)Sn NMR spectroscopy. FTIR results suggested that in the R(2)Sn(IV)NAC (R = Me, Bu, Ph) complexes, NAC(2-) behaves as a dianionic tridentate ligand coordinating the tin(IV) atom through the ester-type carboxylate, the acetyl carbonyl oxygen atom, and the deprotonated thiolate group. From (119)Sn Mössbauer spectroscopy it could be inferred that the tin atom is pentacoordinated, with an equatorial R(2)Sn(IV) trigonal bipyramidal configuration. In DMSO-d(6) solution, NMR spectroscopic data showed the coordination of one solvent molecule to the tin atom, while the coordination mode of the ligand through the ester-type carboxylate and the deprotonated thiolate group was retained. A DFT (density functional theory) study confirmed the proposed solution-phase structures and identified the most probable stable ring conformation. Biological investigations showed that Bu(2)SnCl(2) and NAC2 induce loss of viability in HCC cells and only moderate effects in non-tumor Chang liver cells. NAC2 showed lower cytotoxic activity than Bu(2)SnCl(2), suggesting that binding with NAC(2-) modulates the marked cytotoxic activity exerted by Bu(2)SnCl(2). Therefore, these novel butyl derivatives could represent a new class of anticancer drugs. PMID:20421134

  20. Solution-phase synthesis and electrochemical hydrogen storage of ultra-long single-crystal selenium submicrotubes.

    PubMed

    Zhang, Bin; Dai, Wei; Ye, Xingchen; Hou, Weiyi; Xie, Yi

    2005-12-01

    Ultra-long single-crystalline trigonal selenium submicrotubes were synthesized using a facile one-step solution-phase approach with the assistance of the nonionic surfactant polyoxyethylene(20)sorbitan monolaurate (Tween-20), which turned out to be significant for the formation of the ultra-long Se submicrotubes. XRD, Raman, SEM, and TEM were adopted to characterize the morphology, structure, and phase composition of the as-prepared Se products. It was found that the length of the obtained Se submicrotubes was over 100 μm. By variation of the experimental parameters, t-Se spheres, nanowires, and broken microtubes could also be prepared. A possible growth mechanism for the ultra-long selenium submicrotubes is proposed. In addition, we have demonstrated that the synthesized ultra-long t-Se submicrotubes obtained by the Tween-20-assisted approach can electrochemically charge and discharge with a high capacity of 265 mAh/g (corresponding to 0.97 wt % hydrogen in SWNTs) under normal atmosphere at room temperature. Cyclic voltammetry was adopted to investigate the adsorption-oxidation behavior of the ultra-long selenium submicrotubes. It was observed that the morphology of the synthesized selenium products had a remarkable influence on their electrochemical hydrogen storage capacity. These differences in hydrogen storage capacity are likely due to the size and density of the tubes as well as the microscopic morphology of the different Se samples. The as-obtained ultra-long Se submicrotubes are expected to find wide applications in hydrogen storage, high-energy batteries, and optoelectronic, biological, and catalytic fields, as well as in studies of structure-property relationships. This simple Tween-assisted approach might be extended to the preparation of one-dimensional nanostructures of tellurium and other anisotropic materials. PMID:16853974
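    As a rough consistency check of the quoted capacity (assuming one electron transferred per adsorbed hydrogen atom, an assumption not stated in the abstract), the gravimetric hydrogen content follows from Faraday's law:

      wt% H = 100 × C × 3.6 × M_H / F = 100 × (265 mAh/g) × (3.6 C/mAh) × (1.008 g/mol) / (96485 C/mol) ≈ 1.0 wt%,

    which is in line with the quoted 0.97 wt%.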

  1. Organometallic complexes with biological molecules. XVIII. Alkyltin(IV) cephalexinate complexes: synthesis, solid state and solution phase investigations.

    PubMed

    Di Stefano, R; Scopelliti, M; Pellerito, C; Casella, G; Fiore, T; Stocco, G C; Vitturi, R; Colomba, M; Ronconi, L; Sciacca, I D; Pellerito, L

    2004-03-01

    Dialkyltin(IV) and trialkyltin(IV) complexes of the deacetoxycephalosporin antibiotic cephalexin [7-(D-2-amino-2-phenylacetamido)-3-methyl-3-cephem-4-carboxylic acid] (Hceph) have been synthesized and investigated both in the solid state and in solution phase. Analytical and thermogravimetric data supported the general formulae Alk(2)SnOHceph·H(2)O and Alk(3)Snceph·H(2)O (Alk = Me, n-Bu), while structural information was gained from FT-IR, (119)Sn Mössbauer, and (1)H, (13)C, and (119)Sn NMR data. In particular, IR results suggested polymeric structures for both Alk(2)SnOHceph·H(2)O and Alk(3)Snceph·H(2)O. Moreover, cephalexin appears to behave as a monoanionic tridentate ligand coordinating the tin(IV) atom through the ester-type carboxylate, as well as through the beta-lactam carbonyl oxygen and the amino nitrogen donor atoms, in the Alk(2)SnOHceph·H(2)O complexes. On the basis of (119)Sn Mössbauer spectroscopy it could be inferred that tin(IV) is hexacoordinated in these complexes in the solid state, with a skew-trapezoidal configuration. As far as the Alk(3)Sn(IV)ceph·H(2)O derivatives are concerned, cephalexin coordinates the Alk(3)Sn moiety through the carboxylate, which acts as a bridging bidentate monoanionic group. Again, (119)Sn Mössbauer spectroscopy led us to propose a trigonal bipyramidal configuration around the tin(IV) atom, with an equatorial R(3)Sn disposition and bridging carboxylate oxygen atoms in the axial positions. The nature of the complexes in the solution state was investigated using (1)H, (13)C, and (119)Sn NMR spectroscopy. Finally, the cytotoxic activity of the organotin(IV) cephalexinate derivatives was tested using two different chromosome-staining techniques, Giemsa and CMA(3), towards spermatocyte chromosomes of the mussel Brachidontes pharaonis (Mollusca: Bivalvia). Colchicinized-like mitoses (c-mitoses) on slides obtained from animals exposed to the organotin(IV) cephalexinate compounds demonstrated the high mitotic spindle-inhibiting potential of these chemicals.

  2. Solid- and solution-phase synthesis and application of R6G dual-labeled oligonucleotide probes.

    PubMed

    Skoblov, Aleksander Yu; Vichuzhanin, Maxim V; Farzan, Valentina M; Veselova, Olga A; Konovalova, Tatiana A; Podkolzin, Alexander T; Shipulin, German A; Zatsepin, Timofei S

    2015-10-15

    A novel N-TFA-protected carboxyrhodamine 6G (R6G) phosphoramidite was synthesized for use in automated DNA synthesis to prepare 5'-labeled oligonucleotides. Deprotection and purification conditions were optimized for 5'-labeled and dual-labeled oligonucleotide probes. As an alternative, we synthesized an azide derivative of R6G for CuAAC post-synthetic oligonucleotide labeling. Dual-labeled probes obtained by both methods showed the same efficacy in a quantitative PCR assay. R6G-labeled probes demonstrated superior properties in a qPCR assay in comparison with alternative HEX, JOE and SIMA dyes due to more efficient fluorescence quenching by BHQ-1. We successfully used R6G dual-labeled probes for rotavirus genotyping. PMID:26392371

  3. Solution phase synthesis and intense pulsed light sintering and reduction of a copper oxide ink with an encapsulating nickel oxide barrier

    NASA Astrophysics Data System (ADS)

    Jha, M.; Dharmadasa, R.; Draper, G. L.; Sherehiy, A.; Sumanasekera, G.; Amos, D.; Druffel, T.

    2015-05-01

    Copper oxide nanoparticle inks sintered and reduced by intense pulsed light (IPL) are an inexpensive means to produce conductive patterns on a number of substrates. However, the oxidation and diffusion characteristics of copper are issues that must be resolved before it can be considered as a viable solution. Nickel can provide a degree of oxidation protection and act as a barrier for the diffusion of copper. In the present study we have for the first time synthesized copper oxide with an encapsulating nickel oxide nanostructure using a solution phase synthesis process in the presence of a surfactant at room temperature. The room temperature process enables us to easily prevent the formation of alloys at the copper-nickel interface. The synthesis results in a simple technique (easily commercializable, tested at a 10 g scale) with highly controllable layer thicknesses on a 20 nm copper oxide nanoparticle. These Cu2O@NiO dispersions were then directly deposited onto substrates and sintered/reduced using an IPL source. The sintering technique produces a highly conductive film with very short processing times. Films have been deposited onto silicon, and the copper-nickel structure has shown a lower copper diffusion. The nanostructures and resulting films were characterized using electron and x-ray spectroscopy, and the films’ resistivity was measured.

  4. Controllable synthesis and growth mechanism of α-Co(OH)2 nanorods and nanoplates by a facile solution-phase route

    SciTech Connect

    Wang Wenzhong; Feng Kai; Wang Zhi; Ma Yunyan; Zhang Suyun; Liang Yujie

    2011-12-15

    A facile chemical precipitation route has been developed for the controlled synthesis of α-cobalt hydroxide nanostructures with rod-like and plate-like morphologies. The α-Co(OH)2 nanorods were obtained in large quantity when the experiments were carried out in the presence of a suitable shape-controlling reagent, polyvinylpyrrolidone (PVP), while the α-Co(OH)2 nanoplates were obtained when the experiments were conducted in the absence of PVP, with the other experimental conditions kept constant. The chemical composition and morphologies of the as-prepared α-Co(OH)2 nanoparticles were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The effect of the polymer PVP on the morphologies of the α-Co(OH)2 nanoparticles was discussed in detail; the results indicated that PVP played a key role in the formation of the α-Co(OH)2 nanorods. The growth mechanisms of the as-synthesized nanorods and nanoplates were discussed in detail on the basis of the experimental results, and a possible growth mechanism was proposed to illustrate the growth of the α-Co(OH)2 nanorods. Graphical abstract: A facile solution-phase route has been developed to synthesize α-Co(OH)2 nanorods and nanoplates; a possible growth mechanism of the nanorods and nanoplates was proposed. Highlights: A facile controllable route is described for α-Co(OH)2 nanowires and nanoplates; the α-Co(OH)2 nanowires were achieved in the presence of the shape controller PVP; the α-Co(OH)2 nanoplates were obtained in the absence of the shape controller PVP; the shape controller PVP played a key role in the formation of the α-Co(OH)2 nanowires.

  5. Solution-phase synthesis of single-crystal Cu3Si nanowire arrays on diverse substrates with dual functions as high-performance field emitters and efficient anti-reflective layers

    NASA Astrophysics Data System (ADS)

    Yuan, Fang-Wei; Wang, Chiu-Yen; Li, Guo-An; Chang, Shu-Hao; Chu, Li-Wei; Chen, Lih-Juann; Tuan, Hsing-Yu

    2013-09-01

    There is strong and growing interest in applying metal silicide nanowires as building blocks for a new class of silicide-based applications, including spintronics, nano-scale interconnects, thermoelectronics, and anti-reflective coating materials. Solution-phase environments provide versatile materials chemistry as well as significantly lower production costs compared to gas-phase synthesis. However, solution-phase synthesis of silicide nanowires remains challenging due to the lack of fundamental understanding of silicidation reactions. In this study, single-crystalline Cu3Si nanowire arrays were synthesized in an organic solvent. Self-catalyzed, dense single-crystalline Cu3Si nanowire arrays were synthesized by thermal decomposition of monophenylsilane in the presence of copper films or copper substrates at 420 to 475 °C and 10.3 MPa in supercritical benzene. The solution-grown Cu3Si nanowire arrays serve dual functions as field emitters and anti-reflective layers, which are reported on copper silicide materials for the first time. Cu3Si nanowires exhibit superior field-emission properties, with a turn-on voltage as low as 1.16 V μm-1, an emission current density of 8 mA cm-2 at 4.9 V μm-1, and a field enhancement factor (β) of 1500. Cu3Si nanowire arrays appear black with optical absorption less than 5% between 400 and 800 nm with minimal reflectance, serving as highly efficient anti-reflective layers. Moreover, the Cu3Si nanowires could be grown on either rigid or flexible substrates (PI). This study shows that solution-phase silicide reactions are adaptable for high-quality silicide nanowire growth and demonstrates their promise towards fabrication of metal silicide-based devices.
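    The abstract does not say how the field-enhancement factor was extracted. Conventionally, β for field emitters of this kind is obtained from a Fowler-Nordheim analysis, assuming the standard relation

      J = (A β^2 E^2 / φ) exp(-B φ^(3/2) / (β E)),

    where J is the emission current density, E the applied macroscopic field, φ the emitter work function, and A and B the first and second Fowler-Nordheim constants; a plot of ln(J/E^2) against 1/E is then linear, and β follows from its slope, -B φ^(3/2)/β, for a known φ.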

  6. Parallel synthesis of cell-penetrating peptide conjugates of PMO toward exon skipping enhancement in Duchenne muscular dystrophy.

    PubMed

    O'Donovan, Liz; Okamoto, Itaru; Arzumanov, Andrey A; Williams, Donna L; Deuss, Peter; Gait, Michael J

    2015-02-01

    We describe two new methods of parallel chemical synthesis of libraries of peptide conjugates of phosphorodiamidate morpholino oligonucleotide (PMO) cargoes on a scale suitable for cell screening prior to in vivo analysis for therapeutic development. The methods represent an extension of the SELection of PEPtide CONjugates (SELPEPCON) approach previously developed for parallel peptide-peptide nucleic acid (PNA) synthesis. However, these new methods allow for the utilization of commercial PMO as cargo with both C- and N-termini unfunctionalized. The synthetic methods involve conjugation in solution phase, followed by rapid purification via biotin-streptavidin immobilization and subsequent reductive release into solution, avoiding the need for painstaking high-performance liquid chromatography purifications. The synthesis methods were applied for screening of PMO conjugates of a 16-member library of variants of a 10-residue ApoE peptide, which had been suggested for blood-brain barrier crossing. In this work the conjugate library was tested in an exon skipping assay using mouse mdx skeletal muscle cells, a model of Duchenne muscular dystrophy, in which higher-activity peptide-PMO conjugates were identified compared with the starting peptide-PMO. The results demonstrate the power of the parallel synthesis methods for increasing the speed of optimization of peptide sequences in conjugates of PMO for therapeutic screening. PMID:25412073

  8. Building blocks for the solution phase synthesis of oligonucleotides: regioselective hydrolysis of 3',5'-Di-O-levulinylnucleosides using an enzymatic approach.

    PubMed

    García, Javier; Fernández, Susana; Ferrero, Miguel; Sanghvi, Yogesh S; Gotor, Vicente

    2002-06-28

    A short and convenient synthesis of 3'- and 5'-O-levulinyl-2'-deoxynucleosides has been developed from the corresponding 3',5'-di-O-levulinyl derivatives by regioselective enzymatic hydrolysis, avoiding several tedious chemical protection/deprotection steps. Thus, Candida antarctica lipase B (CAL-B) was found to selectively hydrolyze the 5'-levulinate esters, furnishing 3'-O-levulinyl-2'-deoxynucleosides 3 in >80% isolated yields. On the other hand, immobilized Pseudomonas cepacia lipase (PSL-C) and Candida antarctica lipase A (CAL-A) exhibit the opposite selectivity toward the hydrolysis at the 3'-position, affording 5'-O-levulinyl derivatives 4 in >70% yields. A similar hydrolysis procedure was successfully extended to the synthesis of 3'- and 5'-O-levulinyl-protected 2'-O-alkylribonucleosides 7 and 8. This work demonstrates for the first time application of commercial CAL-B and PSL-C toward regioselective hydrolysis of levulinyl esters with excellent selectivity and yields. It is noteworthy that protected cytidine and adenosine base derivatives were not adequate substrates for the enzymatic hydrolysis with CAL-B, whereas PSL-C was able to accommodate protected bases during selective hydrolysis. In addition, we report an improved synthesis of dilevulinyl esters using a polymer-bound carbodiimide as a replacement for dicyclohexylcarbodiimide (DCC), thus considerably simplifying the workup for esterification reactions. PMID:12076150

  9. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    PubMed

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection-molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high-throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure and were utilized directly for high-throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc. PMID:10099494
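    To make the command-file / sequence-file / reagent-table organization concrete, the following is a minimal illustrative sketch in Python; the file layout, field names, and reagent entries are hypothetical assumptions for illustration, not the instrument's actual software.

      # Hypothetical sketch of the control-file scheme described above: a generic
      # command file lists the steps of the synthetic procedure with reagent
      # placeholders, a sequence file assigns concrete reagents to each well, and
      # a reagent table records bottle position, flow rate, and concentration.
      from dataclasses import dataclass

      @dataclass
      class Reagent:
          bottle: int               # bottle position on the instrument
          flow_rate_ml_min: float   # delivery flow rate
          conc_m: float             # concentration in mol/L (0 for pure solvent)

      reagent_table = {             # illustrative entries only
          "DIC":       Reagent(bottle=3,  flow_rate_ml_min=1.0, conc_m=0.5),
          "amine_A01": Reagent(bottle=12, flow_rate_ml_min=0.5, conc_m=0.25),
          "acid_B07":  Reagent(bottle=21, flow_rate_ml_min=0.5, conc_m=0.25),
          "DMF":       Reagent(bottle=1,  flow_rate_ml_min=2.0, conc_m=0.0),
      }

      command_file = [              # (operation, reagent or placeholder, volume in uL)
          ("add",  "{acid}",  50),
          ("add",  "DIC",     50),
          ("add",  "{amine}", 50),
          ("wash", "DMF",    200),
      ]

      sequence_file = {             # which building blocks go into which well
          "A1": {"acid": "acid_B07", "amine": "amine_A01"},
          # ... one entry per well, up to 96
      }

      def expand(well, assignments):
          """Resolve the generic command file into concrete dispense steps for one well."""
          steps = []
          for op, name, vol in command_file:
              resolved = name.format(**assignments) if "{" in name else name
              steps.append((well, op, resolved, vol, reagent_table[resolved].bottle))
          return steps

      for well, assignments in sequence_file.items():
          for step in expand(well, assignments):
              print(step)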

  10. Solution-phase synthesis and high photocatalytic activity of wurtzite ZnSe ultrathin nanobelts: a general route to 1D semiconductor nanostructured materials.

    PubMed

    Xiong, Shenglin; Xi, Baojuan; Wang, Chengming; Xi, Guangcheng; Liu, Xiaoyan; Qian, Yitai

    2007-01-01

    A general and facile synthetic route has been developed to prepare 1D semiconductor nanomaterials in a binary solution of distilled water and ethanolamine. The influence of the volume ratio of the mixed solvents and the reaction temperature on the yield and final morphology of the products was investigated. Significantly, this is the first time that wurtzite ZnSe ultrathin nanobelts have been synthesized in solution. It has been confirmed that the photocatalytic activity of the ZnSe nanobelts in the photodegradation of fuchsine acid is higher than that of TiO(2) nanoparticles. The present work shows that the solvothermal route is facile, cheap, and versatile; it is thus very easy to realize scaled-up production, and the approach sheds new light on the synthesis and self-assembly of functional materials. PMID:17616961

  11. Gram-scale solution-phase synthesis of selective sodium bicarbonate co-transport inhibitor S0859: in vitro efficacy studies in breast cancer cells.

    PubMed

    Larsen, Ann M; Krogsgaard-Larsen, Niels; Lauritzen, Gitte; Olesen, Christina W; Honoré Hansen, Steen; Boedtkjer, Ebbe; Pedersen, Stine F; Bunch, Lennart

    2012-10-01

    Na(+)-coupled HCO(3)(-) transporters (NBCs) mediate the transport of bicarbonate ions across cell membranes and are thus ubiquitous regulators of intracellular pH. NBC dysregulation is associated with a range of diseases; for instance, NBCn1 is strongly up-regulated in a model of ErbB2-dependent breast cancer, a malignant and widespread cancer with no targeted treatment options, and single-nucleotide polymorphisms in NBCn1 genetically link to breast cancer development and hypertension. The N-cyanosulfonamide S0859 has been shown to selectively inhibit NBCs, and its availability on the gram scale is therefore of significant interest to the scientific community. Herein we describe a short and efficient synthesis of S0859 with an overall yield of 45 % from commercially available starting materials. The inhibitory effect of S0859 on recovery of intracellular pH after an acid load was verified in human and murine cancer cell lines in Ringer solutions. However, S0859 binds very strongly to components in plasma, and accordingly, measurements on isolated murine tissues showed no effect of S0859 at concentrations up to 50 μM. PMID:22927258

  12. A Laboratory Preparation of Aspartame Analogs Using Simultaneous Multiple Parallel Synthesis Methodology

    ERIC Educational Resources Information Center

    Qvit, Nir; Barda, Yaniv; Gilon, Chaim; Shalev, Deborah E.

    2007-01-01

    This laboratory experiment provides a unique opportunity for students to synthesize three analogues of aspartame, a commonly used artificial sweetener. The students are introduced to the powerful and useful method of parallel synthesis while synthesizing three dipeptides in parallel using solid-phase peptide synthesis (SPPS) and simultaneous…

  13. Type synthesis for 4-DOF parallel press mechanism using GF set theory

    NASA Astrophysics Data System (ADS)

    He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong

    2015-07-01

    Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis of specific parallel press mechanisms; the type synthesis and evaluation of parallel press mechanisms is seldom studied, especially for four-degree-of-freedom (DOF) press mechanisms. Here, the type synthesis of 4-DOF parallel press mechanisms is carried out based on generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. A general procedure for the type synthesis of parallel press mechanisms is then obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets, and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented, and different structures of kinematic limbs are then designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is obtained. General methodologies for the type synthesis and evaluation of parallel press mechanisms are thus suggested.

  14. Parallel combinatorial chemical synthesis using single-layer poly(dimethylsiloxane) microfluidic devices

    PubMed Central

    Dexter, Joseph P.; Parker, William

    2009-01-01

    Improving methods for high-throughput combinatorial chemistry has emerged as a major area of research because of the importance of rapidly synthesizing large numbers of chemical compounds for drug discovery and other applications. In this investigation, a novel microfluidic chip for performing parallel combinatorial chemical synthesis was developed. Unlike past microfluidic systems designed for parallel combinatorial chemistry, the chip is a single-layer device made of poly(dimethylsiloxane) that is extremely easy and inexpensive to fabricate. Using the chip, a 2×2 combinatorial series of amide-formation reactions was performed. The results of this combinatorial synthesis indicate that the new device is an effective platform for running parallel organic syntheses at significantly higher throughput than with past methodologies. Additionally, a design algorithm for scaling up the 2×2 combinatorial synthesis chip to address more complex cases was developed. PMID:20216962

  15. Conditions for parallel realizable configurations in synthesis of constraint-based flexure mechanisms

    NASA Astrophysics Data System (ADS)

    Li, Shouzhong; Yu, Jingjun; Zong, Guanghua

    2012-11-01

    In the synthesis of flexure mechanisms, parallel arrangements receive particular attention because of advantages such as compact structure and higher stiffness. Researchers have derived many parallel flexure mechanisms but seldom discuss which kinds of flexure mechanism can be realized via a fully parallel arrangement, and the realizability conditions proposed in existing work are too complicated for engineering applications. To solve two problems, namely how to judge whether a flexure mechanism can be realized via a fully parallel arrangement and how to realize those flexure mechanisms which cannot, an algebraic condition is derived for judging whether a freedom space is parallel realizable, after introducing the definition of parallel realizability and some propositions: the condition is that there exist 6 - n independent line constraints in the constraint space reciprocal to an n-dimensional freedom space. The realizable constraint spaces reciprocal to freedom spaces with 1-3 dimensions are then provided. As a result, not all freedom spaces are parallel realizable. For freedom spaces that are not parallel realizable, a criterion for decomposing the DOF is proposed to achieve all motion patterns via parallel or hybrid arrangement; that is, a high-dimensional freedom space can be realized by combining several low-dimensional freedom spaces which are parallel realizable. Specific decomposition strategies for 4 and 5 DOF are provided, and a complete flowchart is presented to guide the design of flexure mechanisms, particularly those which are not parallel realizable. As case studies, the synthesis processes of two helical and 3T1R motions are provided to illustrate the proposed approach, which offers a feasible route to realizing all motion patterns.
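    As a numerical illustration of the stated algebraic condition (the screw-coordinate formulation and the example below are illustrative assumptions, not taken from the paper), one can check that a candidate set of zero-pitch line constraints is reciprocal to an n-dimensional freedom space of twists and spans at least 6 - n dimensions:

      # Illustrative check of the "6 - n independent line constraints" condition
      # in screw coordinates: a twist is (omega; v), a wrench is (f; m), and a
      # line constraint is a zero-pitch (pure-force) wrench. Reciprocity between
      # a twist and a wrench means omega.m + v.f = 0.
      import numpy as np

      def reciprocal_product(twist, wrench):
          omega, v = twist[:3], twist[3:]
          f, m = wrench[:3], wrench[3:]
          return float(np.dot(omega, m) + np.dot(v, f))

      def parallel_realizable(twists, line_constraints, tol=1e-9):
          """twists: (n, 6) basis of the freedom space; line_constraints: (k, 6) zero-pitch wrenches."""
          twists = np.asarray(twists, dtype=float)
          line_constraints = np.asarray(line_constraints, dtype=float)
          n = np.linalg.matrix_rank(twists)
          # every candidate line constraint must be reciprocal to every freedom-space twist
          for t in twists:
              for c in line_constraints:
                  if abs(reciprocal_product(t, c)) > tol:
                      return False
          # and the line constraints must span a (6 - n)-dimensional constraint space
          return np.linalg.matrix_rank(line_constraints) >= 6 - n

      # Example: 3-DOF spherical rotation about the origin (twists = rotations about x, y, z)
      twists = [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]]
      # Three independent pure forces whose lines of action pass through the origin
      constraints = [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]]
      print(parallel_realizable(twists, constraints))  # True: realizable with 6 - 3 = 3 line constraints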

  16. Synthesis of solution-phase combinatorial library of 4,6-diamino-1,2-dihydro-1,3,5-triazine and identification of new leads against A16V+S108T mutant dihydrofolate reductase of Plasmodium falciparum.

    PubMed

    Vilaivan, Tirayut; Saesaengseerung, Neungruthai; Jarprung, Deanpen; Kamchonwongpaisan, Sumalee; Sirawaraporn, Worachart; Yuthavong, Yongyuth

    2003-01-17

    An efficient method to synthesize a solution-phase combinatorial library of 1-aryl-4,6-diamino-1,2-dihydro-1,3,5-triazines was developed. The strategy involved an acid-catalyzed cyclocondensation between arylbiguanide hydrochlorides and carbonyl compounds in the presence of triethyl orthoacetate as a water scavenger. A 96-membered combinatorial library was constructed from 6 aryl biguanides and 16 carbonyl compounds. Screening of the library by an iterative deconvolution method revealed two candidate leads which are equally active against wild-type Plasmodium falciparum dihydrofolate reductase but are about 100-fold more effective against the A16V+S108T mutant enzyme as compared with cycloguanil. PMID:12470716
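    The library size follows directly from the building-block counts (6 aryl biguanides × 16 carbonyl compounds = 96 members); a trivial enumeration sketch with placeholder building-block labels (the actual reagent identities are not listed in the abstract):

      # Enumerate the 6 x 16 = 96 member combinatorial library from placeholder labels.
      from itertools import product

      biguanides = [f"ArB{i}" for i in range(1, 7)]    # 6 aryl biguanide hydrochlorides
      carbonyls = [f"CO{j}" for j in range(1, 17)]     # 16 carbonyl compounds

      library = [f"{b} + {c}" for b, c in product(biguanides, carbonyls)]
      print(len(library))  # 96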

  17. Parallel Combinatorial Synthesis of Azo Dyes: A Combinatorial Experiment Suitable for Undergraduate Laboratories

    ERIC Educational Resources Information Center

    Gung, Benjamin W.; Taylor, Richard T.

    2004-01-01

    An experiment in the parallel synthesis of azo dyes that demonstrates the concepts of structure-activity relationships and chemical diversity with vivid colors is described. The experiment is suitable for the second-semester organic chemistry laboratory and also for a one-semester organic laboratory.

  18. Solution-Phase Processes of Macromolecular Crystallization

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.; Minamitani, Elizabeth Forsythe

    2004-01-01

    We have proposed, for the tetragonal form of chicken egg lysozyme, that solution phase assembly processes are needed to form the growth units for crystal nucleation and growth. The starting point for the self-association process is the monomeric protein, and the final crystallographic symmetry is defined by the initial dimerization interactions of the monomers and subsequent n-mers formed, which in turn are a function of the crystallization conditions. It has been suggested that multimeric proteins generally incorporate the underlying multimer's symmetry into the final crystallographic symmetry. We posed the question of what happens to a protein that is known to grow as an n-mer when it is placed in solution conditions where it is monomeric. The trypsin-treated, or cut, form of the protein canavalin (CCAN) has been shown to nucleate and grow crystals as a trimer from neutral to slightly acidic solutions. Under these conditions the solution is composed almost wholly of trimers. The insoluble protein can be readily dissolved by weakly basic solution, which results in a solution that is monomeric. There are three possible outcomes to an attempt at crystallization of the protein under monomeric (high pH) conditions: 1) we will obtain the same crystals as under trimer conditions, but at different protein concentrations governed by the self-association equilibria; 2) we will obtain crystals having a different symmetry, based upon a monomeric growth unit; 3) we will not obtain crystals. Obtaining the first result would be indicative that the solution-phase self-association process is critical to the crystal nucleation and growth process. The second result would be less clear, as it may also reflect a pH-dependent shift in the trimer-trimer molecular interactions. The third result, particularly for experiments in the transition pH's between trimeric and monomeric CCAN, would indicate that the monomer does not crystallize, and that solution phase self-association is not part

  19. Model-integrated program synthesis environment for parallel/real-time image processing

    NASA Astrophysics Data System (ADS)

    Moore, Michael S.; Sztipanovitz, Janos; Karsai, Gabor; Nichols, James A.

    1997-09-01

    In this paper, it is shown that, through the use of model-integrated program synthesis (MIPS), parallel real-time implementations of image processing data flows can be synthesized from high-level graphical specifications. The complex details inherent to parallel and real-time software development become transparent to the programmer, enabling the cost-effective exploitation of parallel hardware for building more flexible and powerful real-time imaging systems. The model-integrated real-time image processing system (MIRTIS) is presented as an example. MIRTIS employs the multigraph architecture (MGA), a framework and set of tools for building MIPS systems, to generate parallel real-time image processing software which runs under the control of a parallel run-time kernel on a network of Texas Instruments TMS320C40 DSPs (C40s). The MIRTIS models contain graphical declarations of the image processing computations to be performed, the available hardware resources, and the timing constraints of the application. The MIRTIS model interpreter performs the parallelization, scaling, and mapping of the computations to the resources automatically or determines that the timing constraints cannot be met with the available resources. MIRTIS is a clear example of how parallel real-time image processing systems can be built which are (1) cost-effectively programmable, (2) flexible, (3) scalable, and (4) built from commercial off-the-shelf (COTS) components.

  20. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts have been directed towards asymmetric 3-DOF parallel robots whose end-effector can realize two translational and one rotational (2T1R) motions. In order to develop a manipulator capable of full-circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed, and its modeling approach and kinematic analysis are investigated. Using the method of vector analysis, the inverse kinematic equations are established, followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and two-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of the geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions on pressure angles and the feasible workspace, the dimensional synthesis is conducted with the goal of minimizing a global comprehensive performance index, and the dimension parameters giving the mechanism optimal error mapping and kinematic performance are obtained through an optimization algorithm. These research achievements lay the foundation for prototype building of this kind of parallel robot.
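    In generic form (the symbols here are illustrative, not the paper's notation), the first-order error mapping referred to above comes from linearizing the inverse kinematic constraint f(p, q, e) = 0 about the nominal geometry, with the actuated inputs q held at their nominal values:

      (∂f/∂p) δp + (∂f/∂e) δe = 0   =>   δp = -(∂f/∂p)^(-1) (∂f/∂e) δe = J_e δe,

    where δe collects the geometric-parameter error sources, δp is the resulting end-effector position error, and J_e is the error Jacobian whose entries weight the contribution of each error source.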

  1. Ultrasound-Assisted Solvent-Free Parallel Synthesis of 3-Arylcoumarins Using N-Acylbenzotriazoles.

    PubMed

    Wet-Osot, Sirawit; Duangkamol, Chuthamat; Phakhodee, Wong; Pattarawarapan, Mookda

    2016-06-13

    An ultrasound-assisted one-pot acylation/cyclization reaction between N-acylbenzotriazoles and 2-hydroxybenzaldehydes has been developed for the synthesis of substituted 3-arylcoumarins. Using ultrasound not only allows rapid and clean conversion but also simplifies experimental setup and parallel workup leading to rapid generation of 3-arylcoumarin libraries under mild, solvent-free, and chromatography-free conditions. PMID:27191624

  2. Parallel Chemoenzymatic Synthesis of Sialosides Containing a C5-Diversified Sialic Acid

    PubMed Central

    Cao, Hongzhi; Muthana, Saddam; Li, Yanhong; Cheng, Jiansong; Chen, Xi

    2009-01-01

    A convenient chemoenzymatic strategy for synthesizing sialosides containing a C5-diversified sialic acid was developed. The α2,3- and α2,6-linked sialosides containing a 5-azido neuraminic acid synthesized by a highly efficient one-pot three-enzyme approach were converted to C5″-amino sialosides, which were used as common intermediates for chemical parallel synthesis to quickly generate a series of sialosides containing various sialic acid forms. PMID:19740656

  3. Microfluidic Reactor Array Device for Massively Parallel In-situ Synthesis of Oligonucleotides

    PubMed Central

    Srivannavit, Onnop; Gulari, Mayurachat; Hua, Zhishan.; Gao, Xiaolian; Zhou, Xiaochuan; Hong, Ailing; Zhou, Tiecheng; Gulari, Erdogan

    2009-01-01

    We have designed and fabricated a microfluidic reactor array device for massively parallel in-situ synthesis of oligonucleotides (oDNA). The device is made of glass anodically bonded to silicon and consists of three levels of features: microreactors, microchannels, and through inlet/outlet holes. The main challenges in the design of this device include preventing diffusion of photogenerated reagents upon activation and achieving uniform reagent flow through thousands of parallel reactors. The device embodies a simple and effective dynamic isolation mechanism which prevents the intermixing of active reagents between discrete microreactors. Uniform flow and synthesis reactions in all of the reactors can be achieved by proper design of the microreactors and the microchannels. We demonstrated the use of this device for solution-based, light-directed parallel in-situ oDNA synthesis. We were able to synthesize long oDNA, up to 120-mers, at a stepwise yield of 98%. The quality of our microfluidic oDNA microarray, including sensitivity, signal noise, specificity, spot variation, and accuracy, was characterized. Our microfluidic reactor array devices show great potential for genomics and proteomics research. PMID:20161215
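
    To put the quoted coupling efficiency in perspective, the usual estimate is that the fraction of full-length product scales as the stepwise yield raised to the number of coupling steps (assuming independent, equally efficient couplings; 119 couplings are taken here for a 120-mer):

        Y_{\text{full-length}} \;\approx\; (0.98)^{119} \;\approx\; 0.09,

    i.e. roughly 9% of the chains are expected to be full length, which is why stepwise yields near 98-99% are essential for oligonucleotides of this length.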

  4. Sequence-Defined Oligomers from Hydroxyproline Building Blocks for Parallel Synthesis Applications.

    PubMed

    Kanasty, Rosemary L; Vegas, Arturo J; Ceo, Luke M; Maier, Martin; Charisse, Klaus; Nair, Jayaprakash K; Langer, Robert; Anderson, Daniel G

    2016-08-01

    The functionality of natural biopolymers has inspired significant effort to develop sequence-defined synthetic polymers for applications including molecular recognition, self-assembly, and catalysis. Conjugation of synthetic materials to biomacromolecules has played an increasingly important role in drug delivery and biomaterials. We developed a controlled synthesis of novel oligomers from hydroxyproline-based building blocks and conjugated these materials to siRNA. Hydroxyproline-based monomers enable the incorporation of broad structural diversity into defined polymer chains. Using a perfluorocarbon purification handle, we were able to purify diverse oligomers through a single solid-phase extraction method. The efficiency of synthesis was demonstrated by building 14 unique trimers and 4 hexamers from 6 diverse building blocks. We then adapted this method to the parallel synthesis of hundreds of materials in 96-well plates. This strategy provides a platform for the screening of libraries of modified biomolecules. PMID:27365192

  5. Rapid parallel synthesis of bioactive folded cyclotides using a tea-bag approach

    PubMed Central

    Aboye, Teshome; Kuang, Yuting; Neamati, Nouri

    2015-01-01

    We report here for the first time the rapid parallel production of bioactive folded cyclotides by using Fmoc-based solid-phase peptide synthesis in combination with a tea-bag approach. Using this approach we efficiently synthesized 15 different analogs of the CXCR4 antagonist cyclotide MCo-CVX-5c. Cyclotides were cyclized using a single-pot cyclization/folding reaction in the presence of reduced glutathione. Natively folded cyclotides were quickly purified from the cyclization/folding crude by activated thiol sepharose-based chromatography. The different folded cyclotide analogs were finally tested for their ability to inhibit the CXCR4 receptor in a cell-based assay. These results indicate that this approach can be used for the efficient chemical synthesis of cyclotide-based libraries that can be easily interfaced with solution or cell-based assays for the rapid screening of novel cyclotides with improved biological properties. PMID:25663016

  6. Controlled synthesis of bismuth oxo nanoscale crystals (BiOCl, Bi₁₂O₁₇Cl₂, α-Bi₂O₃, and (BiO)₂CO₃) by solution-phase methods

    SciTech Connect

    Chen Xiangying; Huh, Hyun Sue; Lee, Soon W.

    2007-09-15

    We present the controlled solution-phase synthesis of several sheet- or rod-like bismuth oxides, BiOCl, Bi₁₂O₁₇Cl₂, α-Bi₂O₃ and (BiO)₂CO₃, by adjusting growth parameters such as reaction temperature, mole ratios of reactants, and the base used. BiOCl, Bi₁₂O₁₇Cl₂, and α-Bi₂O₃ could be prepared from BiCl₃ and NaOH, whereas (BiO)₂CO₃ was prepared from BiCl₃ and urea. BiOCl and Bi₁₂O₁₇Cl₂ could also be prepared from BiCl₃ and ammonia. The α-Bi₂O₃ sample exhibited strong emission at room temperature. - Graphical abstract: We prepared bismuth oxo nanomaterials by adjusting growth parameters. BiOCl, Bi₁₂O₁₇Cl₂, and α-Bi₂O₃ could be prepared from BiCl₃ and NaOH, whereas (BiO)₂CO₃ was prepared from BiCl₃ and urea. BiOCl and Bi₁₂O₁₇Cl₂ could also be prepared from BiCl₃ and ammonia. The α-Bi₂O₃ sample exhibited strong emission at room temperature.

  7. A traceless approach for the parallel solid-phase synthesis of 2-(arylamino)quinazolinones.

    PubMed

    Yu, Yongping; Ostresh, John M; Houghten, Richard A

    2002-08-01

    A traceless approach for the parallel solid-phase synthesis of 2-arylamino-substituted quinazolinones is described. Acylation of MBHA resin with o-nitrobenzoic acid derivatives, followed by reduction of the nitro group with tin chloride, generated a resin-bound o-anilino derivative. Reaction of the resin-bound o-anilino derivative with arylisothiocyanates yielded resin-bound thioureas, which reacted with amines in the presence of Mukaiyama's reagent (2-chloro-1-methylpyridinium iodide) to afford resin-bound guanidines. Following intramolecular cyclization of the resin-bound guanidines during cleavage from the resin by HF/anisole (95/5) for 1.5 h at 0 degrees C, the desired products were obtained in good yield and purity. PMID:12153287

  8. Synthesis of branched chains with actuation redundancy for eliminating interior singularities of 3T1R parallel mechanisms

    NASA Astrophysics Data System (ADS)

    Li, Shihua; Liu, Yanmin; Cui, Hongliu; Niu, Yunzhan; Zhao, Yanzhi

    2016-03-01

    Although it is common to eliminate singularities of a parallel mechanism by adding branched chains with actuation redundancy, there has been no systematic theory or method for the configuration synthesis of such branched chains. Here, branched chains with actuation redundancy are synthesized for eliminating interior singularities of 3-translational and 1-rotational (3T1R) parallel mechanisms. Guided by the discriminant method for hybrid screw groups based on Grassmann line geometry, all the possible occurrences of interior singularities in 3T1R parallel mechanisms are listed. Based on the linear dependence of the screw system and the principles of eliminating parallel-mechanism singularities with actuation redundancy, different types of branched chains with actuation redundancy are synthesized systematically, indicating the layout and number of the redundantly actuated branched chains. A general method is proposed for the configuration synthesis of the branched chains with actuation redundancy in redundant parallel mechanisms, and it builds a solid foundation for the subsequent performance optimization of redundantly actuated parallel mechanisms.

  9. Parallel Synthesis of Poly(amino ether)-Templated Plasmonic Nanoparticles for Transgene Delivery

    PubMed Central

    2015-01-01

    Plasmonic nanoparticles have been increasingly investigated for numerous applications in medicine, sensing, and catalysis. In particular, gold nanoparticles have been investigated for separations, sensing, drug/nucleic acid delivery, and bioimaging. In addition, silver nanoparticles demonstrate antibacterial activity, resulting in potential applications in treating microbial infections, burns, and diabetic skin ulcers, and in medical devices. Here, we describe the facile, parallel synthesis of both gold and silver nanoparticles using a small set of poly(amino ethers), or PAEs, derived from linear polyamines, under ambient conditions and in the absence of additional reagents. The kinetics of nanoparticle formation were dependent on PAE concentration and chemical composition. In addition, yields were significantly greater in the case of PAEs when compared to 25 kDa poly(ethylene imine), which was used as a standard cationic polymer. Ultraviolet radiation enhanced the kinetics and the yield of both gold and silver nanoparticles, likely by means of a coreduction effect. PAE-templated gold nanoparticles demonstrated the ability to deliver plasmid DNA, resulting in transgene expression, in 22Rv1 human prostate cancer and MB49 murine bladder cancer cell lines. Taken together, our results indicate that chemically diverse poly(amino ethers) can be employed for rapidly templating the formation of metal nanoparticles under ambient conditions. The simplicity of synthesis and chemical diversity make PAE-templated nanoparticles useful tools for several applications in biotechnology, including nucleic acid delivery. PMID:25084138

  10. Parallel implementation of the genetic algorithm on NVIDIA GPU architecture for synthesis and inversion

    NASA Astrophysics Data System (ADS)

    Karthik, Victor U.; Sivasuthan, Sivamayam; Hoole, Samuel Ratnajeevan H.

    2014-02-01

    The computational algorithms for device synthesis and nondestructive evaluation (NDE) are often the same. In both we have a goal: a particular field configuration that yields the design performance in synthesis or matches exterior measurements in NDE. The geometry of the design or the postulated interior defect is then computed. Several optimization methods are available for this. The most efficient, like conjugate gradients, are complex to program because of the required derivative information. The least efficient, zeroth-order algorithms like the genetic algorithm, take much computational time but little programming effort. This paper reports launching a genetic algorithm kernel on thousands of compute unified device architecture (CUDA) threads, exploiting the NVIDIA graphics processing unit (GPU) architecture. The efficiency of parallelization, although below that on shared-memory supercomputer architectures, is quite effective in cutting the solution time down to the realm of the practicable. We carry this further into multi-physics electro-heat problems where the parameters of description are in the electrical problem and the object function is in the thermal problem. Indeed, this is where the derivative of the object function in the heat problem with respect to the parameters in the electrical problem is the most difficult to compute for gradient methods, and where the genetic algorithm is most easily implemented.
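
    To make the trade-off between programming effort and runtime concrete, the sketch below shows the population-parallel structure that maps naturally onto many GPU threads. It is written in plain NumPy as a CPU-side stand-in with a hypothetical objective function (not the electro-heat problem of the paper); the per-individual fitness evaluation is the embarrassingly parallel part that the reported work launches as one CUDA thread per individual.

        # Minimal generational GA sketch; the fitness loop is the embarrassingly
        # parallel part that the paper maps to thousands of CUDA threads.
        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(pop):
            # Hypothetical objective: match a target "field" vector (stand-in for
            # matching exterior measurements in NDE or a design spec in synthesis).
            target = np.linspace(0.0, 1.0, pop.shape[1])
            return -np.sum((pop - target) ** 2, axis=1)   # one value per individual

        def evolve(pop_size=256, n_params=16, generations=200, mut=0.05):
            pop = rng.random((pop_size, n_params))
            for _ in range(generations):
                fit = fitness(pop)                         # evaluated independently per individual
                parents = pop[np.argsort(fit)[-pop_size // 2:]]
                children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
                children = children + mut * rng.standard_normal(children.shape)
                pop = np.vstack([parents, children])
            return pop[np.argmax(fitness(pop))]

        print(evolve())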

  11. Parallel microfluidic synthesis of size-tunable polymeric nanoparticles using 3D flow focusing towards in vivo study

    PubMed Central

    Lim, Jong-Min; Bertrand, Nicolas; Valencia, Pedro M.; Rhee, Minsoung; Langer, Robert; Jon, Sangyong; Farokhzad, Omid C.; Karnik, Rohit

    2014-01-01

    Microfluidic synthesis of nanoparticles (NPs) can enhance the controllability and reproducibility in physicochemical properties of NPs compared to bulk synthesis methods. However, applications of microfluidic synthesis are typically limited to in vitro studies due to low production rates. Herein, we report the parallelization of NP synthesis by 3D hydrodynamic flow focusing (HFF) using a multilayer microfluidic system to enhance the production rate without losing the advantages of reproducibility, controllability, and robustness. Using parallel 3D HFF, polymeric poly(lactide-co-glycolide)-b-polyethyleneglycol (PLGA-PEG) NPs with sizes tunable in the range of 13–150 nm could be synthesized reproducibly with high production rate. As a proof of concept, we used this system to perform in vivo pharmacokinetic and biodistribution study of small (20 nm diameter) PLGA-PEG NPs that are otherwise difficult to synthesize. Microfluidic parallelization thus enables synthesis of NPs with tunable properties with production rates suitable for both in vitro and in vivo studies. PMID:23969105

  12. Kinematic Analysis and Synthesis of a 3-URU Pure Rotational Parallel Mechanism with Respect to Singularity and Workspace

    NASA Astrophysics Data System (ADS)

    Huda, Syamsul; Takeda, Yukio

    This paper concerns kinematics and dimensional synthesis of a three universal-revolute-universal (3-URU) pure rotational parallel mechanism. The mechanism is composed of a base, a platform and three symmetric limbs consisting of U-R-U joints. This mechanism is a spatial non-overconstrained mechanism with three degrees of freedom. The joints in each limb are so arranged to perform pure rotational motion of the platform around a specific point. Equations for inverse displacement analysis and singularities were derived to investigate the relationship of the kinematic constants to the solution of the inverse kinematics and singularities. Based on the results, a dimensional synthesis procedure for the 3-URU parallel mechanism considering singularities and the workspace was proposed. A numerical example was also presented to illustrate the synthesis method.

  13. Parallel Synthesis of 1,6-Disubstituted-1,2,4-Triazin-3-Ones on Solid-Phase

    PubMed Central

    Hu, Miao; Huang, Wei; Giulianotti, Marc A.; Houghten, Richard A.; Yu, Yongping

    2013-01-01

    A parallel solid-phase synthesis of 1,6-disubstituted-1,2,4-triazin-3-ones from MBHA resin is described. The reduction of resin-bound nitrosamino acids provides hydrazines efficiently without affecting the amide bond. The trityl protected hydrazine is then reduced with borane, and cyclized with 1,1-carbonyldiimidazole. The desired products are cleaved from their solid support and obtained in good yield and purity. This methodology is of value for the rapid parallel preparation of these potentially bioactive molecules. PMID:23750635

  14. Type Synthesis of Two-Degrees-of-Freedom Rotational Parallel Mechanism with Two Continuous Rotational Axes

    NASA Astrophysics Data System (ADS)

    Xu, Yundou; Zhang, Dongsheng; Wang, Min; Yao, Jiantao; Zhao, Yongsheng

    2016-04-01

    The two-rotational-degrees-of-freedom (2R) parallel mechanism (PM) with two continuous rotational axes (CRAs) has a simple kinematic model. It is therefore easy to implement trajectory planning, parameter calibration, and motion control, which allows for a variety of application prospects. However, no systematic analysis of the structural constraints of the 2R-PM with two CRAs has been performed, and there are only a few types of 2R-PM with two CRAs. Thus, a theory regarding the type synthesis of the 2R-PM with two CRAs is systematically established. First, combining the theories of reciprocal screws and space geometry, the spatial arrangement relationships of the constraint forces applied to the moving platform by the branches, which give the 2R-PM two CRAs, are explored. The different distributions of the constraint forces in each branch are also studied. On the basis of the obtained structural constraints of the branches, and considering the geometric relationships of the constraint forces in each branch, appropriate kinematic chains are constructed. Through the reasonable configuration of branch kinematic chains corresponding to every structural constraint, a series of new 2R-PMs with two CRAs are finally obtained.
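
    The constraint-force analysis referred to above rests on the standard reciprocity condition between a twist and a wrench, stated here schematically (textbook form, not quoted from the paper): a constraint force with unit direction s and moment s0 about the origin is reciprocal to a permitted twist with direction t and moment t0 when

        \mathbf{s}\cdot\mathbf{t}_0 \;+\; \mathbf{t}\cdot\mathbf{s}_0 \;=\; 0,

    and the wrenches satisfying this condition for every permitted twist of a branch are exactly the constraint forces that the branch can apply to the moving platform.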

  15. Parallel evolution of Nitric Oxide signaling: Diversity of synthesis & memory pathways

    PubMed Central

    Moroz, Leonid L.; Kohn, Andrea B.

    2014-01-01

    The origin of NO signaling can be traced back to the origin of life, with large-scale parallel evolution of NO synthases (NOSs). Inducible-like NOSs may be the most basal prototype of all NOSs, and neuronal-like NOS might have evolved several times from this prototype. Other enzymatic and non-enzymatic pathways for NO synthesis have been discovered using reduction of nitrites, an alternative source of NO. Diverse synthetic mechanisms can co-exist within the same cell, providing a complex NO-oxygen microenvironment tightly coupled with cellular energetics. The dissection of multiple sources of NO formation is crucial in the analysis of complex biological processes such as neuronal integration and learning mechanisms, where NO can act as a volume transmitter within memory-forming circuits. In particular, the molecular analysis of learning mechanisms (most notably in insects and gastropod molluscs) opens conceptually different perspectives for understanding the logic of recruiting evolutionarily conserved pathways for novel functions. Giant, uniquely identified cells from Aplysia and related species present unique opportunities for integrative analysis of NO signaling at the single-cell level. PMID:21622160

  16. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared-memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.
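
    As an illustration of the inter-partition dynamic load balancing mentioned above, the following Python sketch (hypothetical, not the PTRANS source, which targets a shared-memory Multimax in a different language) hands circuit partitions to a pool of workers so that faster workers automatically pick up more partitions.

        # Hypothetical sketch of dynamic load balancing over circuit partitions.
        from multiprocessing import Pool

        def optimize_partition(partition):
            # Stand-in for pruning / gate substitution / gate merging on one partition.
            gates = partition["gates"]
            return {"name": partition["name"], "gates_after": max(1, int(gates * 0.8))}

        if __name__ == "__main__":
            partitions = [{"name": f"part{i}", "gates": 100 + 35 * i} for i in range(8)]
            with Pool(processes=4) as pool:
                # imap_unordered hands out work as workers become free (dynamic balancing).
                for result in pool.imap_unordered(optimize_partition, partitions):
                    print(result)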

  17. Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.

    PubMed

    Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul

    2009-02-15

    The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models. PMID:19138845

  18. Automated parallel synthesis of 5'-triphosphate oligonucleotides and preparation of chemically modified 5'-triphosphate small interfering RNA.

    PubMed

    Zlatev, Ivan; Lackey, Jeremy G; Zhang, Ligang; Dell, Amy; McRae, Kathy; Shaikh, Sarfraz; Duncan, Richard G; Rajeev, Kallanthottathil G; Manoharan, Muthiah

    2013-02-01

    A fully automated chemical method for the parallel and high-throughput solid-phase synthesis of 5'-triphosphate and 5'-diphosphate oligonucleotides is described. The desired full-length oligonucleotides were first constructed using standard automated DNA/RNA solid-phase synthesis procedures. Then, on the same column and instrument, efficient implementation of an uninterrupted sequential cycle afforded the corresponding unmodified or chemically modified 5'-triphosphates and 5'-diphosphates. The method was readily translated into a scalable and high-throughput synthesis protocol compatible with the current DNA/RNA synthesizers yielding a large variety of unique 5'-polyphosphorylated oligonucleotides. Using this approach, we accomplished the synthesis of chemically modified 5'-triphosphate oligonucleotides that were annealed to form small-interfering RNAs (ppp-siRNAs), a potentially interesting class of novel RNAi therapeutic tools. The attachment of the 5'-triphosphate group to the passenger strand of a siRNA construct did not induce a significant improvement in the in vitro RNAi-mediated gene silencing activity nor a strong specific in vitro RIG-I activation. The reported method will enable the screening of many chemically modified ppp-siRNAs, resulting in a novel bi-functional RNAi therapeutic platform. PMID:23260577

  19. Parallel synthesis of a series of potentially brain penetrant aminoalkyl benzoimidazoles.

    PubMed

    Micco, Iolanda; Nencini, Arianna; Quinn, Joanna; Bothmann, Hendrick; Ghiron, Chiara; Padova, Alessandro; Papini, Silvia

    2008-03-01

    Alpha7 agonists were identified via GOLD (CCDC) docking in the putative agonist binding site of an alpha7 homology model and a series of aminoalkyl benzoimidazoles was synthesised to obtain potentially brain penetrant drugs. The array was prepared starting from the reaction of ortho-fluoronitrobenzenes with a selection of diamines, followed by reduction of the nitro group to obtain a series of monoalkylated phenylene diamines. N,N'-Carbonyldiimidazole (CDI) mediated acylation, followed by a parallel automated work-up procedure, afforded the monoacylated phenylenediamines which were cyclised under acidic conditions. Parallel work-up and purification afforded the array products in good yields and purities with a robust parallel methodology which will be useful for other libraries. Screening for alpha7 activity revealed compounds with agonist activity for the receptor. PMID:18078760

  20. Synthesis and analysis of a new class of six-degree-of-freedom parallel minimanipulators

    NASA Technical Reports Server (NTRS)

    Tsai, Lung-Wen; Tahmasebi, Farhad

    1993-01-01

    A new class of six-degree-of-freedom parallel minimanipulators capable of providing high resolution and high stiffness for fine position and force control in a hybrid serial-parallel manipulator system is presented. Positional resolution and stiffness of the minimanipulators are improved by using two-degree-of-freedom planar linkages as drivers. The minimanipulators are based on only three inextensible limbs, as opposed to most six-limbed parallel manipulators, which makes it possible to reduce their direct kinematics to solving a polynomial in a single variable and to diminish the possibility of mechanical interference between limbs. The base-mounted minimanipulator actuators are characterized by high payload capacity, small actuator size, and low power dissipation.

  1. Parallel synthesis of 1,3-dihydro-1,4-benzodiazepine-2-ones employing catch and release.

    PubMed

    Laustsen, Line S; Sams, Christian K

    2007-01-01

    An efficient solid-phase method has been developed for the parallel synthesis of 1,3-dihydro-1,4-benzodiazepine-2-one derivatives. A key step in this procedure involves catching crude 2-aminobenzoimine products 4 on an amino acid Wang resin 10. Mild acidic conditions then promote a ring closure and in the same step cleavage from the resin to give pure benzodiazepine products 12. The 2-aminobenzoimines 4 can be synthesized from either 2-aminobenzonitriles 1 and Grignard reagents 2 or from iodoanilines 5 and nitriles 7 allowing a range of diversification. Further diversification can be introduced to the benzodiazepine products by N-alkylation promoted by a resin bound base and alkylating agents 13. PMID:17915962

  2. Production of complex nucleic acid libraries using highly parallel in situ oligonucleotide synthesis.

    PubMed

    Cleary, Michele A; Kilian, Kristopher; Wang, Yanqun; Bradshaw, Jeff; Cavet, Guy; Ge, Wei; Kulkarni, Amit; Paddison, Patrick J; Chang, Kenneth; Sheth, Nihar; Leproust, Eric; Coffey, Ernest M; Burchard, Julja; McCombie, W Richard; Linsley, Peter; Hannon, Gregory J

    2004-12-01

    Generation of complex libraries of defined nucleic acid sequences can greatly aid the functional analysis of protein and gene function. Previously, such studies relied either on individually synthesized oligonucleotides or on cellular nucleic acids as the starting material. As each method has disadvantages, we have developed a rapid and cost-effective alternative for construction of small-fragment DNA libraries of defined sequences. This approach uses in situ microarray DNA synthesis for generation of complex oligonucleotide populations. These populations can be recovered and either used directly or immortalized by cloning. From a single microarray, a library containing thousands of unique sequences can be generated. As an example of the potential applications of this technology, we have tested the approach for the production of plasmids encoding short hairpin RNAs (shRNAs) targeting numerous human and mouse genes. We achieved high-fidelity clone retrieval with a uniform representation of intended library sequences. PMID:15782200

  3. Kinematic synthesis and analysis of a novel class of six-DOF parallel minimanipulators

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Farhad

    A new class of six degree of freedom (six-DOF) parallel minimanipulators is introduced. The minimanipulators are designed to provide high resolution and high stiffness in fine-manipulation operations. Two-DOF planar mechanisms (e.g., five-bar linkages, pantographs) and inextensible limbs are used to improve positional resolution and stiffness of the minimanipulators. The two-DOF mechanisms serve as drivers for the minimanipulators. The minimanipulators require only three inextensible limbs and, unlike most of the six-limbed parallel manipulators, their direct kinematics can be reduced to solving a polynomial in a single variable. All of the minimanipulator actuators are base-mounted. As a result, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. Inverse kinematics of the minimanipulators has been reduced to solving three decoupled quadratic equations, each of which contains only one unknown. Kinematic inversion is used to reduce the direct kinematics of the minimanipulator to an eighth-degree polynomial in the square of a single variable. Hence, the maximum number of assembly configurations for the minimanipulator is sixteen. It is proved that the sixteen solutions are eight pairs of reflected configurations with respect to the plane passing through the lower ends of the three limbs. The Jacobian and stiffness matrices of two types of minimanipulators are derived. It is shown that, at a central configuration, the stiffness matrix of the first type minimanipulator (driven by bidirectional linear stepper motors) can be decoupled, if proper design parameters are chosen. It is also shown that the stiffness of the minimanipulators is higher than that of the Stewart platform. Guidelines for obtaining large stiffness values and for designing the drivers of the second type minimanipulator (simplified five-bar linkages) are established. An algorithm is developed to determine the workspace of the minimanipulators. Given any
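
    The count of assembly configurations quoted above follows directly from the structure of the direct-kinematics polynomial; schematically (a restatement of the standard argument, with u introduced here for illustration), the eighth-degree polynomial is in the square of a single variable,

        P(u) = 0,\qquad \deg P = 8,\qquad u = t^{2}\;\Rightarrow\; t = \pm\sqrt{u_k},\quad k = 1,\dots,8,

    so there are at most 8 x 2 = 16 assembly configurations, and the +/- pairs are the eight pairs of configurations reflected through the plane passing through the lower ends of the three limbs.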

  4. Parallel Synthesis and Biological Evaluation of 837 Analogues of Procaspase-Activating Compound 1 (PAC-1)

    PubMed Central

    Hsu, Danny C.; Roth, Howard S.; West, Diana C.; Botham, Rachel C.; Novotny, Chris J.; Schmid, Steven C.; Hergenrother, Paul J.

    2011-01-01

    Procaspase-Activating Compound 1 (PAC-1) is an ortho-hydroxy N-acyl hydrazone that enhances the enzymatic activity of procaspase-3 in vitro and induces apoptosis in cancer cells. An analogue of PAC-1, called S-PAC-1, was evaluated in a veterinary clinical trial in pet dogs with lymphoma and found to have considerable potential as an anticancer agent. With the goal of identifying more potent compounds in this promising class of experimental therapeutics, a combinatorial library based on PAC-1 was created, and the compounds were evaluated for their ability to induce death of cancer cells in culture. For library construction, 31 hydrazides were condensed in parallel with 27 aldehydes to create 837 PAC-1 analogues, with an average purity of 91%. The compounds were evaluated for their ability to induce apoptosis in cancer cells, and through this work, six compounds were discovered to be substantially more potent than PAC-1 and S-PAC-1. These six hits were further evaluated for their ability to relieve zinc-mediated inhibition of procaspase-3 in vitro. In general, the newly identified hit compounds are two- to four-fold more potent than PAC-1 and S-PAC-1 in cell culture, and thus have promise as experimental therapeutics for treatment of the many cancers that have elevated expression levels of procaspase-3. PMID:22007686

  5. Structure-based Design and In-Parallel Synthesis of Inhibitors of AmpC β-lactamase

    SciTech Connect

    Tondi, D.; Powers, R.A.; Negri, M.C.; Caselli, M.C.; Blazquez, J.; Costi, M.P.; Shoichet, B.K.

    2010-03-08

    Group I β-lactamases are a major cause of antibiotic resistance to β-lactams such as penicillins and cephalosporins. These enzymes are only modestly affected by classic β-lactam-based inhibitors, such as clavulanic acid. Conversely, small arylboronic acids inhibit these enzymes at sub-micromolar concentrations. Structural studies suggest these inhibitors bind to a well-defined cleft in the group I β-lactamase AmpC; this cleft binds the ubiquitous R1 side chain of β-lactams. Intriguingly, much of this cleft is left unoccupied by the small arylboronic acids. To investigate if larger boronic acids might take advantage of this cleft, structure-guided in-parallel synthesis was used to explore new inhibitors of AmpC. Twenty-eight derivatives of the lead compound, 3-aminophenylboronic acid, led to an inhibitor with 80-fold better binding (2; K(i) 83 nM). Molecular docking suggested orientations for this compound in the R1 cleft. Based on the docking results, 12 derivatives of 2 were synthesized, leading to inhibitors with K(i) values of 60 nM and with improved solubility. Several of these inhibitors reversed the resistance of nosocomial Gram-positive bacteria, though they showed little activity against Gram-negative bacteria. The X-ray crystal structure of compound 2 in complex with AmpC was subsequently determined to 2.1 Å resolution. The placement of the proximal two-thirds of the inhibitor in the experimental structure corresponds with the docked structure, but a bond rotation leads to a distinctly different placement of the distal part of the inhibitor. In the experimental structure, the inhibitor interacts with conserved residues in the R1 cleft whose role in recognition has not been previously explored. Combining structure-based design with in-parallel synthesis allowed for the rapid exploration of inhibitor functionality in the R1 cleft of AmpC. The resulting inhibitors differ considerably from β-lactams but

  6. Solution-Phase Epitaxial Growth of Quasi-Monocrystalline Cuprous Oxide on Metal Nanowires

    PubMed Central

    2014-01-01

    The epitaxial growth of monocrystalline semiconductors on metal nanostructures is interesting from both fundamental and applied perspectives. The realization of nanostructures with excellent interfaces and material properties that also have controlled optical resonances can be very challenging. Here we report the synthesis and characterization of metal–semiconductor core–shell nanowires. We demonstrate a solution-phase route to obtain stable core–shell metal–Cu2O nanowires with outstanding control over the resulting structure, in which the noble metal nanowire is used as the nucleation site for epitaxial growth of quasi-monocrystalline Cu2O shells at room temperature in aqueous solution. We use X-ray and electron diffraction, high-resolution transmission electron microscopy, energy dispersive X-ray spectroscopy, photoluminescence spectroscopy, and absorption spectroscopy, as well as density functional theory calculations, to characterize the core–shell nanowires and verify their structure. Metal–semiconductor core–shell nanowires offer several potential advantages over thin film and traditional nanowire architectures as building blocks for photovoltaics, including efficient carrier collection in radial nanowire junctions and strong optical resonances that can be tuned to maximize absorption. PMID:25233392

  7. Discovery of a potent inhibitor of the antiapoptotic protein Bcl-xL from NMR and parallel synthesis.

    PubMed

    Petros, Andrew M; Dinges, Jurgen; Augeri, David J; Baumeister, Steven A; Betebenner, David A; Bures, Mark G; Elmore, Steven W; Hajduk, Philip J; Joseph, Mary K; Landis, Shelley K; Nettesheim, David G; Rosenberg, Saul H; Shen, Wang; Thomas, Sheela; Wang, Xilu; Zanze, Irini; Zhang, Haichao; Fesik, Stephen W

    2006-01-26

    The antiapoptotic proteins Bcl-x(L) and Bcl-2 play key roles in the maintenance of normal cellular homeostasis. However, their overexpression can lead to oncogenic transformation and is responsible for drug resistance in certain types of cancer. This makes Bcl-x(L) and Bcl-2 attractive targets for the development of potential anticancer agents. Here we describe the structure-based discovery of a potent Bcl-x(L) inhibitor directed at a hydrophobic groove on the surface of the protein. This groove represents the binding site for BH3 peptides from proapoptotic Bcl-2 family members such as Bak and Bad. Application of NMR-based screening yielded an initial biaryl acid with an affinity (K(d)) of approximately 300 microM for the protein. Following the classical "SAR by NMR" approach, a second-site ligand was identified that bound proximal to the first-site ligand in the hydrophobic groove. From NMR-based structural studies and parallel synthesis, a potent ligand was obtained, which binds to Bcl-x(L) with an inhibition constant (K(i)) of 36 +/- 2 nM. PMID:16420051

  8. Anomalous regioselective four-member multicomponent Biginelli reaction II: one-pot parallel synthesis of spiro heterobicyclic aliphatic rings.

    PubMed

    Byk, Gerardo; Kabha, Eihab

    2004-01-01

    In a previous preliminary study, we found that a cyclic five-member ring beta-keto ester (lactone) reacts with one molecule of urea and two of aldehyde to give a new family of spiro heterobicyclic aliphatic rings in good yields with no traces of the expected dihydropyrimidine (Biginelli) products. The reaction is driven by a regiospecific condensation of two molecules of aldehyde with urea and beta-keto-gamma-lactone to afford only products harboring substitutions exclusively in a syn configuration (Byk, G.; Gottlieb, H. E.; Herscovici, J.; Mirkin, F. J. Comb. Chem. 2000, 2, 732-735). In the present work ((a) Presented in part at ISCT Combitech, October 15, 2002, Israel, and Eurocombi-2, Copenhagen 2003 (oral and poster presentation). (b) Also in American Peptide Society Symposium, Boston, 2003 (poster presentation). (c) Abstract in Biopolymers 2003, 71 (3), 354-355), we report a large and exciting extension of this new reaction utilizing parallel organic synthesis arrays, as demonstrated by the use of chiral beta-keto-gamma-lactams, derived from natural amino acids, instead of tetronic acid (beta-keto-gamma-lactone) and the potential of the spirobicyclic products for generating "libraries from libraries". Interestingly, we note an unusual and important anisotropy effect induced by perpendicular interactions between rigid pi systems and different groups placed at the alpha position of the obtained spirobicyclic system. Stereo/regioselectivity of the aldehyde condensation is driven by the nature of the substitutions on the starting beta-keto-gamma-lactam. Aromatic aldehydes can be used as starting reagents with good yields; however, when aliphatic aldehydes are used, the desired products are obtained in poor yields, as observed in the classical Biginelli reaction. The possible reasons for these poor yields are addressed and clarify, to some extent, the complexity of the Biginelli multicomponent reaction mechanism and, in particular, the mechanism of the present

  9. Vertical Single-Crystalline Organic Nanowires on Graphene: Solution-Phase Epitaxy and Optical Microcavities.

    PubMed

    Zheng, Jian-Yao; Xu, Hongjun; Wang, Jing Jing; Winters, Sinéad; Motta, Carlo; Karademir, Ertuğrul; Zhu, Weigang; Varrla, Eswaraiah; Duesberg, Georg S; Sanvito, Stefano; Hu, Wenping; Donegan, John F

    2016-08-10

    Vertically aligned nanowires (NWs) of single crystal semiconductors have attracted a great deal of interest in the past few years. They have strong potential to be used in device structures with high density and with intriguing optoelectronic properties. However, fabricating such nanowire structures using organic semiconducting materials remains technically challenging. Here we report a simple procedure for the synthesis of crystalline 9,10-bis(phenylethynyl) anthracene (BPEA) NWs on a graphene surface utilizing a solution-phase van der Waals (vdW) epitaxial strategy. The wires are found to grow preferentially in a vertical direction on the surface of graphene. Structural characterization and first-principles ab initio simulations were performed to investigate the epitaxial growth and the molecular orientation of the BPEA molecules on graphene was studied, revealing the role of interactions at the graphene-BPEA interface in determining the molecular orientation. These free-standing NWs showed not only efficient optical waveguiding with low loss along the NW but also confinement of light between the two end facets of the NW forming a microcavity Fabry-Pérot resonator. From an analysis of the optical dispersion within such NW microcavities, we observed strong slowing of the waveguided light with a group velocity reduced to one-tenth the speed of light. Applications of the vertical single-crystalline organic NWs grown on graphene will benefit from a combination of the unique electronic properties and flexibility of graphene and the tunable optical and electronic properties of organic NWs. Therefore, these vertical organic NW arrays on graphene offer the potential for realizing future on-chip light sources. PMID:27438189

  10. Automated liquid-liquid extraction workstation for library synthesis and its use in the parallel and chromatography-free synthesis of 2-alkyl-3-alkyl-4-(3H)-quinazolinones.

    PubMed

    Carpintero, Mercedes; Cifuentes, Marta; Ferritto, Rafael; Haro, Rubén; Toledo, Miguel A

    2007-01-01

    An automated liquid-liquid extraction workstation has been developed. This module processes up to 96 samples in an automated and parallel mode avoiding the time-consuming and intensive sample manipulation during the workup process. To validate the workstation, a highly automated and chromatography-free synthesis of differentially substituted quinazolin-4(3H)-ones with two diversity points has been carried out using isatoic anhydride as starting material. PMID:17645313

  11. Mapping the Catechol Binding Site in Dopamine D1 Receptors: Synthesis and Evaluation of Two Parallel Series of Bicyclic Dopamine Analogues

    PubMed Central

    Bonner, Lisa A.; Laban, Uros; Chemel, Benjamin R.; Juncosa, Jose I.; Lill, Markus A.; Watts, Val J.; Nichols, David E.

    2012-01-01

    A novel class of isochroman dopamine analogues, 1, originally reported by Abbott Laboratories, had greater than 100-fold selectivity for D1-like vs. D2-like receptors. We synthesized a parallel series of chroman compounds, 2, and showed that repositioning the oxygen in the heterocyclic ring reduced potency and conferred D2-like receptor selectivity to these compounds. In silico modeling supported the hypothesis that the altered pharmacology for 2 was due to potential intramolecular hydrogen bonding between the oxygen in the chroman ring and the meta-hydroxyl of the catechol moiety. This interaction realigns the catechol hydroxyl groups and disrupts key interactions between these ligands and critical serine residues in TM5 of the D1-like receptors. This hypothesis was tested by the synthesis and pharmacological evaluation of a parallel series of carbocyclic compounds, 3. Our results suggest that when the potential for intramolecular hydrogen bonding is removed, D1-like receptor potency and selectivity is restored. PMID:21538900

  12. Direct screening of solution phase combinatorial libraries encoded with externally sensitized photolabile tags

    PubMed Central

    Kottani, Rudresha; Valiulin, Roman A.; Kutateladze, Andrei G.

    2006-01-01

    Solution phase combinatorial chemistry holds an enormous promise for modern drug discovery. Much needed are direct methods to assay such libraries for binding of biological targets. An approach to encoding and screening of solution phase libraries has been developed based on the conditional photorelease of externally sensitized photolabile tags. The encoding tags are released into solution only when a sought-for binding event occurs between the ligand and the receptor, outfitted with an electron-transfer sensitizer. The released tags are analyzed in solution revealing the identity of the lead ligand or narrowing the range of potential leads. PMID:16956977

  13. Parametric synthesis of optical systems composed of thin lenses by using the plane-parallel plate aberration properties

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Zverev, Victor; Ezhova, Vasilisa

    2015-09-01

    It is shown that an optical system with aplanatic aberration correction can be constructed, in general as a combination of a thin lens with an aplanatic meniscus and a plane-parallel plate of small thickness.

  14. Doping and Alloying in the Solution-Phase Synthesis of Germanium Nanocrystals

    SciTech Connect

    Ruddy, D. A.; Neale, N. R.

    2012-01-01

    Group IV nanocrystals (NCs) are receiving increased attention as a potentially non-toxic nanomaterial for use in a number of important optoelectronic applications (e.g., solar photoconversion, photodetectors, LEDs, biological imaging). With these goals in mind, doping and alloying with Group III, IV, and V elements may play a major role in tailoring the NC properties, such as developing n-type and p-type conductivity through substitutional doping, as well as affecting the optical absorption, emission, and overall charge transport in a NC film. Here we present an extension of the mixed-valence iodide precursor methodology to incorporate Group III, IV, and V elements to produce E-GeNC materials. All main-group elements (E) that surround Ge on the periodic table (i.e., E = Al, Si, P, Ga, As, In, Sn, and Sb) can be incorporated via this methodology. The extent to which the dopant elements are included will be discussed, along with the optical absorbance, emission, and related properties of the NCs. In addition, the effect of the dopant elements on the NC growth kinetics will be discussed.

  15. Scalable synthesis of quaterrylene: solution-phase 1H NMR spectroscopy of its oxidative dication.

    PubMed

    Thamatam, Rajesh; Skraba, Sarah L; Johnson, Richard P

    2013-10-14

    Quaterrylene is prepared in a single reaction and high yield by Scholl-type coupling of perylene, utilizing trifluoromethanesulfonic acid as catalyst and DDQ or molecular oxygen as oxidant. Dissolution in 1 M triflic acid/dichloroethane with sonication yields the aromatic quaterrylene oxidative dication, which is characterized by its (1)H NMR spectrum. PMID:23999880

  16. A New Application of Parallel Synthesis Strategy for Discovery of Amide-Linked Small Molecules as Potent Chondroprotective Agents in TNF-α-Stimulated Chondrocytes

    PubMed Central

    Lee, Chia-Chung; Lo, Yang; Ho, Ling-Jun; Lai, Jenn-Haung; Lien, Shiu-Bii; Lin, Leou-Chyr; Chen, Chun-Liang; Chen, Tsung-Chih; Liu, Feng-Cheng; Huang, Hsu-Shan

    2016-01-01

    As part of an effort to profile potential therapeutics for the treatment of inflammation-related diseases, a diversity of amide-linked small molecules was synthesized by using a parallel synthesis strategy. Moreover, these new compounds were also evaluated for their inhibitory effects on nitric oxide (NO) by using tumor necrosis factor alpha (TNF-α)-induced inflammatory responses in chondrocytes. Among the tested compounds, N-(3-chloro-4-fluorophenyl)-2-hydroxybenzamide (HS-Ck) was the most potent inhibitor of NO production and inducible nitric oxide synthase (iNOS) expression in TNF-α-stimulated chondrocytes. In addition, our biological results indicated that HS-Ck might suppress the expression levels of iNOS and matrix metalloproteinase-13 (MMP-13) activities through downregulating the activation of nuclear factor kappa B (NF-κB) and signal transducer and activator of transcription 3 (STAT-3) transcription factors. Therefore, the parallel synthesis was successfully used to develop a new class of potential anti-inflammatory agents as chondroprotective candidates for the treatment of osteoarthritis. PMID:26963090

  17. A New Application of Parallel Synthesis Strategy for Discovery of Amide-Linked Small Molecules as Potent Chondroprotective Agents in TNF-α-Stimulated Chondrocytes.

    PubMed

    Lee, Chia-Chung; Lo, Yang; Ho, Ling-Jun; Lai, Jenn-Haung; Lien, Shiu-Bii; Lin, Leou-Chyr; Chen, Chun-Liang; Chen, Tsung-Chih; Liu, Feng-Cheng; Huang, Hsu-Shan

    2016-01-01

    As part of an effort to profile potential therapeutics for the treatment of inflammation-related diseases, a diversity of amide-linked small molecules was synthesized by using a parallel synthesis strategy. Moreover, these new compounds were also evaluated for their inhibitory effects on nitric oxide (NO) by using tumor necrosis factor alpha (TNF-α)-induced inflammatory responses in chondrocytes. Among the tested compounds, N-(3-chloro-4-fluorophenyl)-2-hydroxybenzamide (HS-Ck) was the most potent inhibitor of NO production and inducible nitric oxide synthase (iNOS) expression in TNF-α-stimulated chondrocytes. In addition, our biological results indicated that HS-Ck might suppress the expression levels of iNOS and matrix metalloproteinase-13 (MMP-13) activities through downregulating the activation of nuclear factor kappa B (NF-κB) and signal transducer and activator of transcription 3 (STAT-3) transcription factors. Therefore, the parallel synthesis was successfully used to develop a new class of potential anti-inflammatory agents as chondroprotective candidates for the treatment of osteoarthritis. PMID:26963090

  18. Solution-phase EPR studies of single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Chen, J.; Hu, H.; Hamon, M. A.; Itkis, M. E.; Haddon, R. C.

    1999-01-01

    We report EPR studies on pristine, purified, shortened and soluble SWNTs in various solution phases. Some of these samples give rise to strong, sharp EPR signals, and this technique is useful for monitoring the presence of SWNTs in aqueous and organic solvents. The soluble SWNTs carry about 1 unpaired electron per 10000 carbon atoms and give a free electron g-value.

  19. Tetragonal Lysozyme Nucleation and Crystal Growth: The Role of the Solution Phase

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.; Forsythe, Elizabeth; Sumida, John; Maxwell, Daniel; Gorti, Sridhar; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Experimental evidence indicates a dominant role of solution phase interactions in nucleating and growing tetragonal lysozyme crystals. These interactions are extensive, even at saturation, and may be a primary cause of misoriented regions in crystals grown on Earth. Microgravity, by limiting interfacial concentrations to diffusion-controlled levels, may benefit crystal quality by also reducing the extent of associated species present at the interface.

  20. A New Relationship Among Self- and Impurity Diffusion Coefficients in Binary Solution Phases

    NASA Astrophysics Data System (ADS)

    Xin, Jinghua; Du, Yong; Shang, Shunli; Cui, Senlin; Wang, Jianchuan; Huang, Baiyun; Liu, Zikui

    2016-07-01

    A new relationship among self- and impurity diffusion coefficients has been proposed for binary solution phases and verified via 30 solid solutions. In terms of this model, one impurity diffusion coefficient in a binary phase can be predicted once the other three diffusion coefficients are available. The application of the present model is exemplified in the Al-Mg system.

  1. A New Relationship Among Self- and Impurity Diffusion Coefficients in Binary Solution Phases

    NASA Astrophysics Data System (ADS)

    Xin, Jinghua; Du, Yong; Shang, Shunli; Cui, Senlin; Wang, Jianchuan; Huang, Baiyun; Liu, Zikui

    2016-05-01

    A new relationship among self- and impurity diffusion coefficients has been proposed for binary solution phases and verified via 30 solid solutions. In terms of this model, one impurity diffusion coefficient in a binary phase can be predicted once the other three diffusion coefficients are available. The application of the present model is exemplified in the Al-Mg system.

  2. Synthesis of Azide-Tagged Library of 2,3-Dihydro-4-Quinolones

    PubMed Central

    Lee, Hajoong; Suzuki, Masato; Cui, Jiayue; Kozmin, Sergey A.

    2010-01-01

    We describe the assembly of a 960-member library of tricyclic 2,3-dihydro-4-quinolones using a combination of solution-phase high-throughput organic synthesis and parallel chromatographic purification. The library was produced with high efficiency and complete chemo and diastereoselectivity by diversification of an azide-bearing quinolone via a sequence of [4+2] cycloadditions, N-acylations and reductive aminations. The azide-functionalization of this library is designed to facilitate subsequent preparation of fluorescent or affinity probes, as well as small-molecule/surface conjugation. PMID:20141224

  3. An open-source, massively parallel code for non-LTE synthesis and inversion of spectral lines and Zeeman-induced Stokes profiles

    NASA Astrophysics Data System (ADS)

    Socas-Navarro, H.; de la Cruz Rodríguez, J.; Asensio Ramos, A.; Trujillo Bueno, J.; Ruiz Cobo, B.

    2015-05-01

    With the advent of a new generation of solar telescopes and instrumentation, interpreting chromospheric observations (in particular, spectropolarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is written entirely in Fortran 90/2003 and complies strictly with the ANSI standards to ensure maximum compatibility and portability. It is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities. The source code is currently hosted at the following repository: https://github.com/hsocasnavarro/NICOLE

  4. Comparison of photoluminescence of carbon nanotube/ZnO nanostructures synthesized by gas- and solution-phase transport

    NASA Astrophysics Data System (ADS)

    Jin, Changhyun; Lee, Seawook; Kim, Chang-Wan; Park, Suyoung; Lee, Chongmu; Lee, Dongjin

    2015-02-01

    Multiwalled carbon nanotubes (MWCNTs)/ZnO heterostructures were synthesized by two different processes: (1) gas-phase transport (GPT) and nucleation of Zn powders and (2) solution-phase transport (SPT) chemical reaction of zinc nitrate solution on the MWCNTs. Transmission electron microscopy and X-ray diffraction analysis indicated that the ZnO nanostructures on the MWCNTs from the GPT and SPT processes were poly- and single-crystal hexagonal wurtzite structure, respectively. The major photoluminescence (PL) spectra of our MWCNT/ZnO hybrid, excited at 380 nm and 550 nm, were presented. The PL intensity of the MWCNT/ZnO coaxial nanostructures behaves differently depending on the ZnO synthesis methods on the MWCNTs. The MWCNT/ZnO heterostructures synthesized using the GPT process were more efficient than those synthesized by SPT process in enhancing the PL intensity around the near-band-edge emission region. However, the emission enhancement around defect region was mostly attributed to increase in the O vacancy concentration in the ZnO on the MWCNTs during the SPT process.

  5. Comparison of photoluminescence of carbon nanotube/ZnO nanostructures synthesized by gas- and solution-phase transport

    NASA Astrophysics Data System (ADS)

    Jin, Changhyun; Lee, Seawook; Kim, Chang-Wan; Park, Suyoung; Lee, Chongmu; Lee, Dongjin

    2014-09-01

    Multiwalled carbon nanotubes (MWCNTs)/ZnO heterostructures were synthesized by two different processes: (1) gas-phase transport (GPT) and nucleation of Zn powders and (2) solution-phase transport (SPT) chemical reaction of zinc nitrate solution on the MWCNTs. Transmission electron microscopy and X-ray diffraction analysis indicated that the ZnO nanostructures on the MWCNTs from the GPT and SPT processes were poly- and single-crystal hexagonal wurtzite structure, respectively. The major photoluminescence (PL) spectra of our MWCNT/ZnO hybrid, excited at 380 nm and 550 nm, were presented. The PL intensity of the MWCNT/ZnO coaxial nanostructures behaves differently depending on the ZnO synthesis methods on the MWCNTs. The MWCNT/ZnO heterostructures synthesized using the GPT process were more efficient than those synthesized by SPT process in enhancing the PL intensity around the near-band-edge emission region. However, the emission enhancement around defect region was mostly attributed to increase in the O vacancy concentration in the ZnO on the MWCNTs during the SPT process.

  6. Supramolecular chemistry: from aromatic foldamers to solution-phase supramolecular organic frameworks.

    PubMed

    Li, Zhan-Ting

    2015-01-01

    This mini-review covers the growth, education, career, and research activities of the author. In particular, the developments of various folded, helical and extended secondary structures from aromatic backbones driven by different noncovalent forces (including hydrogen bonding, donor-acceptor, solvophobicity, and dimerization of conjugated radical cations) and solution-phase supramolecular organic frameworks driven by hydrophobically initiated aromatic stacking in the cavity of cucurbit[8]uril (CB[8]) are highlighted. PMID:26664626

  7. Supramolecular chemistry: from aromatic foldamers to solution-phase supramolecular organic frameworks

    PubMed Central

    2015-01-01

    Summary This mini-review covers the growth, education, career, and research activities of the author. In particular, the developments of various folded, helical and extended secondary structures from aromatic backbones driven by different noncovalent forces (including hydrogen bonding, donor–acceptor, solvophobicity, and dimerization of conjugated radical cations) and solution-phase supramolecular organic frameworks driven by hydrophobically initiated aromatic stacking in the cavity of cucurbit[8]uril (CB[8]) are highlighted. PMID:26664626

  8. Solution-phase secondary-ion mass spectrometry of protonated amino acids.

    PubMed

    Pettit, G R; Cragg, G M; Holzapfel, C W; Tuinman, A A; Gieschen, D P

    1987-04-01

    Although sulfolane proved unexpectedly to be a poor solvent for solution-phase secondary-ion mass spectrometry of underivatized amino acids in the presence of thallium(I) salts, glycerol was somewhat more effective. Also, the addition of trifluoromethanesulfonic acid proved more effective than addition of the metal in generating molecular ion complexes. A convenient and reliable method for rapidly determining amino acid molecular ions is based on these observations. PMID:3037939

  9. Solution phase space and conserved charges: A general formulation for charges associated with exact symmetries

    NASA Astrophysics Data System (ADS)

    Hajian, K.; Sheikh-Jabbari, M. M.

    2016-02-01

    We provide a general formulation for calculating conserved charges for solutions to generally covariant gravitational theories, possibly with other internal gauge symmetries, in any dimension and with generic asymptotic behavior. These solutions are generically specified by a number of exact (continuous, global) symmetries and some parameters. We define "parametric variations" as field perturbations generated by variations of the solution parameters. Employing the covariant phase space method, we establish that the set of these solutions (up to pure gauge transformations) forms a phase space, the solution phase space, and that the tangent space of this phase space includes the parametric variations. We then compute conserved charge variations associated with the exact symmetries of the family of solutions, caused by parametric variations. Integrating the charge variations over a path in the solution phase space, we define the conserved charges. In particular, we revisit "black hole entropy as a conserved charge" and the derivation of the first law of black hole thermodynamics. We show that the solution phase space setting enables us to define black hole entropy by an integration over any compact, codimension-2, smooth spacelike surface encircling the hole, and leads to a natural generalization of the Wald and Iyer-Wald analysis to cases involving gauge fields.
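
    The construction summarized above can be written compactly (a schematic of the covariant phase space expressions, with notation chosen here for illustration): for an exact symmetry generator \xi and a parametric variation \delta\Phi of the fields, the charge variation is a surface integral over any codimension-2 surface \Sigma, and the charge is obtained by integrating along a path \gamma in the solution phase space,

        \delta Q_\xi \;=\; \oint_{\Sigma} \boldsymbol{k}_\xi(\delta\Phi,\Phi), \qquad Q_\xi \;=\; \int_{\gamma} \delta Q_\xi \;+\; Q_\xi^{\mathrm{ref}},

    where k_\xi is the symplectic charge (d-2)-form and Q_\xi^{ref} is the charge assigned to a chosen reference solution; the definition is meaningful when \delta Q_\xi is integrable, i.e. independent of the path \gamma.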

  10. Fast and efficient synthesis of Zorro-LNA type 3'-5'-5'-3' oligonucleotide conjugates via parallel in situ stepwise conjugation.

    PubMed

    Gissberg, O I; Jezowska, M; Zaghloul, E M; Bungsu, N I; Strömberg, R; Smith, C I E; Lundin, K E; Honcharenko, M

    2016-04-14

    Zorro-LNA is a new class of therapeutic anti-gene oligonucleotides (ONs) capable of invading supercoiled DNA. The synthesis of single stranded Zorro-LNA is typically complex and laborious, requiring reverse phosphoramidites and a chemical linker connecting the two separate ON arms. Here, a simplified synthesis strategy based on 'click chemistry' is presented with a high potential for screening Zorro-LNA ONs directed against new anti-gene targets. Four different Zorro-type 3'-5'-5'-3' constructs were synthesized via parallel in situ Cu(I)-catalysed [3 + 2] cycloaddition. They were prepared from commercially obtained ONs functionalized on solid support (one ON with the azide and the other with the activated triple-bond linker (N-propynoylamino)-p-toluic acid (PATA)), and after cleavage from the resin they were conjugated in solution. Our report shows the benefit of combining different approaches when developing anti-gene ONs: (1) rapid and robust screening of potential targets and (2) refinement of the hits with more anti-gene-optimized constructs. We also present the first report showing double-strand invasion (DSI) efficiency of two combined Zorro-LNAs. PMID:26975344

  11. Parallel Chemical Protein Synthesis on a Surface Enables the Rapid Analysis of the Phosphoregulation of SH3 Domains.

    PubMed

    Zitterbart, Robert; Seitz, Oliver

    2016-06-13

    Analysis of posttranslationally modified protein domains is complicated by an availability problem, as recombinant methods rarely allow site-specificity at will. Although total synthesis enables full control over posttranslational and other modifications, chemical approaches are limited to shorter peptides. To solve this problem, we herein describe a method that combines a) immobilization of N-terminally thiolated peptide hydrazides by hydrazone ligation, b) on-surface native chemical ligation with self-purified peptide thioesters, c) radical-induced desulfurization, and d) a surface-based fluorescence binding assay for functional characterization. We used the method to rapidly investigate 20 SH3 domains, with a focus on their phosphoregulation. The analysis suggests that tyrosine phosphorylation of SH3 domains found in Abl kinases acts as a switch that can induce both the loss and, unexpectedly, the gain of affinity for proline-rich ligands. PMID:27161995

  12. Analysis and synthesis of harmonic reduction in single-phase inverters using a parallel operation technique with different PWM strategies

    SciTech Connect

    Kamel, A.M.

    1989-01-01

    A new technique for harmonic reduction in single-phase inverters, using sinusoidal pulse width modulation (PWM), is presented. A modified circuit based on parallel operation of MOS inverter sets, using interphase reactors as current balancers and making use of a specific phase shift between the modulating and carrier waveforms, is investigated as a means to economically and efficiently reduce the harmonic content of the output waveform. In this study, extensions of the natural sampling method were considered for the PWM strategy, as this method is popular for single-bridge converters. A variety of PWM techniques were investigated analytically using computer simulation. A number of the more promising techniques were tested in the laboratory using commercial signal generators. Finally, a dedicated signal generator was designed and built which implements the best PWM strategy. Good agreement between analytical and experimental results was observed throughout the project. Detailed analysis shows that the harmonics lower than the 15th (or the 29th) are all less than one percent of the fundamental component when the frequency ratio is relatively low, 9 (or 15). When the frequency ratio is increased to 45, at low output frequency, harmonics lower than the 87th are canceled from the output waveform. This is done without noticeable reduction in the fundamental component. The results show the feasibility of obtaining practically sinusoidal output waveforms.
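
    As an illustration of the kind of computer simulation described above (a generic sketch, not the authors' simulator; the fundamental frequency, frequency ratio and modulation index are assumed values), the following Python snippet generates a naturally sampled sinusoidal PWM waveform for a single inverter leg and reports its low-order harmonics, showing how the dominant distortion clusters around the carrier frequency ratio, which is the baseline that the parallel, phase-shifted technique is designed to improve on.

    ```python
    # Illustrative numerical check (not the authors' simulator): generate a
    # naturally sampled sinusoidal PWM waveform and inspect its low-order
    # harmonics relative to the fundamental. All parameters are assumed.
    import numpy as np

    f0, mf, ma = 50.0, 15, 0.9            # fundamental (Hz), frequency ratio, modulation index
    fs = 200_000                          # sample rate, Hz
    t = np.arange(0, 1.0 / f0, 1.0 / fs)  # exactly one fundamental period

    reference = ma * np.sin(2 * np.pi * f0 * t)
    # Triangular carrier at mf * f0 (natural-sampling comparator).
    carrier = 2.0 / np.pi * np.arcsin(np.sin(2 * np.pi * mf * f0 * t))
    pwm = np.where(reference >= carrier, 1.0, -1.0)

    spectrum = np.abs(np.fft.rfft(pwm)) / len(pwm) * 2.0
    fundamental = spectrum[1]             # bin 1 corresponds to f0 over one period
    for h in range(2, 16):
        print(f"harmonic {h:2d}: {100.0 * spectrum[h] / fundamental:5.2f} % of fundamental")
    ```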

  13. Hardware-software-co-design of parallel and distributed systems using a behavioural programming and multi-process model with high-level synthesis

    NASA Astrophysics Data System (ADS)

    Bosse, Stefan

    2011-05-01

    A new design methodology for parallel and distributed embedded systems is presented using the behavioural hardware compiler ConPro, which provides an imperative programming model based on concurrently communicating sequential processes (CSP) with an extensive set of interprocess-communication primitives and guarded atomic actions. The programming language and the compiler-based synthesis process enable the design of constrained power- and resource-aware embedded systems with pure Register-Transfer-Logic (RTL) efficiently mapped to FPGA and ASIC technologies. Concurrency is modelled explicitly at the control- and data-path levels. Additionally, concurrency at the data-path level can be automatically explored and optimized by different schedulers. The CSP programming model can be synthesized to hardware (SoC) and software (C, ML) models and targets. A common source for both hardware and software implementation with identical functional behaviour is used. Processes and objects of the entire design can be distributed on different hardware and software platforms, for example, several FPGA components and software executed on several microprocessors, providing a parallel and distributed system. Intersystem, interprocess, and object communication is automatically implemented with serial links, not visible at the programming level. The presented design methodology has the benefit of high modularity and freedom of choice of target technologies and system architecture. Algorithms can be well matched to and distributed on different suitable execution platforms and implementation technologies, using a unique programming model and providing a balance of concurrency and resource complexity. An extended case study of a communication protocol used in high-density sensor-actuator networks demonstrates and compares the design of hardware and software targets. The communication protocol is suited for high-density intra- and interchip networks.
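
    The CSP-with-channels programming model described above can be illustrated with a small, hedged sketch: plain Python threads and queues stand in for ConPro processes and its synthesized serial links, and the process names and the doubling filter are invented for illustration only.

    ```python
    # Minimal sketch (not ConPro syntax): the CSP idea of sequential processes
    # that interact only through blocking channel communication, modeled with
    # Python queues standing in for compiler-generated links.
    from queue import Queue
    from threading import Thread

    def sensor_process(chan_out: Queue) -> None:
        # Sequential process: produce samples and hand them off over a channel.
        for sample in range(5):
            chan_out.put(sample)          # blocking send over a bounded channel
        chan_out.put(None)                # end-of-stream marker

    def filter_process(chan_in: Queue, chan_out: Queue) -> None:
        # Sequential process: receive, transform, forward.
        while (v := chan_in.get()) is not None:
            chan_out.put(2 * v)
        chan_out.put(None)

    if __name__ == "__main__":
        a, b = Queue(maxsize=1), Queue(maxsize=1)   # bounded channels
        Thread(target=sensor_process, args=(a,)).start()
        Thread(target=filter_process, args=(a, b)).start()
        while (out := b.get()) is not None:
            print(out)
    ```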

  14. Processing of organic electro-optic materials: solution-phase assisted reorientation of chromophores

    NASA Astrophysics Data System (ADS)

    Olbricht, Benjamin C.; Eng, David L. K.; Kozacik, Stephen T.; Ross, Dylan; Prather, Dennis W.

    2013-03-01

    Organic EO materials, sometimes called EO polymers, offer a variety of very promising properties that have improved at remarkable rates over the last decade, and will continue to improve. However, these materials rely on a "poling" process to afford EO activity, which is commonly cited as the bottleneck for the widespread implementation of organic EO material-containing devices. The Solution Phase-Assisted Reorientation of Chromophores (SPARC) is a process that utilizes the mobility of chromophores in the solution phase to afford acentric molecular order during deposition. The electric field can be generated by a corona discharge in a carefully-controlled gas environment. The absence of a poling director during conventional spin deposition forms centric pairs of chromophores which may compromise the efficacy of thermal poling. Direct spectroscopic evidence of linear dichroism in modern organic EO materials has estimated the poling-induced order of the chromophores to be 10-15% of its theoretical maximum, offering the potential for a manyfold enhancement in EO activity if poling is improved. SPARC is designed to overcome these limitations and also to allow the poling of polymeric hosts with temporal thermal (alignment) stabilities greater than the decomposition temperature of the guest chromophore. In this report evidence supporting the theory motivating the SPARC process and the resulting EO activities will be presented. Additionally, the results of trials towards a device demonstration of the SPARC process will be discussed.

  15. Product energy deposition of CN + alkane H abstraction reactions in gas and solution phases

    NASA Astrophysics Data System (ADS)

    Glowacki, David R.; Orr-Ewing, Andrew J.; Harvey, Jeremy N.

    2011-06-01

    In this work, we report the first theoretical studies of post-transition state dynamics for reaction of CN with polyatomic organic species. Using electronic structure theory, a newly developed analytic reactive PES, a recently implemented rare-event acceleration algorithm, and a normal mode projection scheme, we carried out and analyzed quasi-classical and classical non-equilibrium molecular dynamics simulations of the reactions CN + propane (R1) and CN + cyclohexane (R2). For (R2), we carried out simulations in both the gas phase and in a CH2Cl2 solvent. Analysis of the results suggests that the solvent perturbations to the (R2) reactive free energy surface are small, leading to product energy partitioning in the solvent that is similar to the gas phase. The distribution of molecular geometries at the respective gas and solution phase variational association transition states is very similar, leading to nascent HCN which is vibrationally excited in both its CH stretching and HCN bending coordinates. This study highlights the fact that significant non-equilibrium energy distributions may follow in the wake of solution phase bimolecular reactions, and may persist for hundreds of picoseconds despite frictional damping. Consideration of non-thermal distributions is often neglected in descriptions of condensed-phase reactivity; the extent to which the present intriguing observations are widespread remains an interesting question.

  16. Accelerated exploration of multi-principal element alloys with solid solution phases.

    PubMed

    Senkov, O N; Miller, J D; Miracle, D B; Woodward, C

    2015-01-01

    Recent multi-principal element, high entropy alloy (HEA) development strategies vastly expand the number of candidate alloy systems, but also pose a new challenge--how to rapidly screen thousands of candidate alloy systems for targeted properties. Here we develop a new approach to rapidly assess structural metals by combining calculated phase diagrams with simple rules based on the phases present, their transformation temperatures and useful microstructures. We evaluate over 130,000 alloy systems, identifying promising compositions for more time-intensive experimental studies. We find the surprising result that solid solution alloys become less likely as the number of alloy elements increases. This contradicts the major premise of HEAs--that increased configurational entropy increases the stability of disordered solid solution phases. As the number of elements increases, the configurational entropy rises slowly while the probability of at least one pair of elements favouring formation of intermetallic compounds increases more rapidly, explaining this apparent contradiction. PMID:25739749
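
    The argument in this abstract, that configurational entropy grows slowly while the chance of a compound-forming pair grows quickly, can be made concrete with a back-of-the-envelope calculation. The sketch below is only an illustration under an assumed, uniform per-pair compound-forming probability; it is not the authors' CALPHAD-based screening.

    ```python
    # Back-of-the-envelope illustration (not the authors' CALPHAD screening):
    # ideal configurational entropy of an equimolar alloy grows only as ln(n),
    # while the chance that at least one element pair favours an intermetallic
    # compound grows much faster with n (p_pair is an assumed probability).
    import math

    R = 8.314       # gas constant, J/(mol K)
    p_pair = 0.3    # hypothetical probability that a given binary pair forms a compound

    for n in range(2, 11):
        s_conf = R * math.log(n)                     # ideal mixing entropy, equimolar
        pairs = n * (n - 1) // 2
        p_any_compound = 1 - (1 - p_pair) ** pairs   # at least one compound-forming pair
        print(f"n={n:2d}  S_conf={s_conf:5.2f} J/(mol K)  "
              f"P(>=1 intermetallic pair)={p_any_compound:.2f}")
    ```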

  17. Ultrafast energy flow in the wake of solution-phase bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Glowacki, David R.; Rose, Rebecca A.; Greaves, Stuart J.; Orr-Ewing, Andrew J.; Harvey, Jeremy N.

    2011-11-01

    Vibrational energy flow into reactants, and out of products, plays a key role in chemical reactivity, so understanding the microscopic detail of the pathways and rates associated with this phenomenon is of considerable interest. Here, we use molecular dynamics simulations to model the vibrational relaxation that occurs during the reaction CN + c-C6H12 → HCN + c-C6H11 in CH2Cl2, which produces vibrationally hot HCN. The calculations reproduce the observed energy distribution, and show that HCN relaxation follows multiple timescales. Initial rapid decay occurs through energy transfer to the cyclohexyl co-product within the solvent cage, and slower relaxation follows once the products diffuse apart. Re-analysis of the ultrafast experimental data also provides evidence for the dual timescales. These results, which represent a formal violation of conventional linear response theory, provide a detailed picture of the interplay between fluctuations in organic solvent structure and thermal solution-phase chemistry.

  18. Accelerated exploration of multi-principal element alloys with solid solution phases

    PubMed Central

    Senkov, O.N.; Miller, J.D.; Miracle, D.B.; Woodward, C.

    2015-01-01

    Recent multi-principal element, high entropy alloy (HEA) development strategies vastly expand the number of candidate alloy systems, but also pose a new challenge—how to rapidly screen thousands of candidate alloy systems for targeted properties. Here we develop a new approach to rapidly assess structural metals by combining calculated phase diagrams with simple rules based on the phases present, their transformation temperatures and useful microstructures. We evaluate over 130,000 alloy systems, identifying promising compositions for more time-intensive experimental studies. We find the surprising result that solid solution alloys become less likely as the number of alloy elements increases. This contradicts the major premise of HEAs—that increased configurational entropy increases the stability of disordered solid solution phases. As the number of elements increases, the configurational entropy rises slowly while the probability of at least one pair of elements favouring formation of intermetallic compounds increases more rapidly, explaining this apparent contradiction. PMID:25739749

  19. Solution-phase self-assembly of complementary halogen bonding polymers.

    PubMed

    Vanderkooy, Alan; Taylor, Mark S

    2015-04-22

    Noncovalent halogen bonding interactions are explored as a driving force for solution phase macromolecular self-assembly. Conditions for controlled radical polymerization of an iodoperfluoroarene-bearing methacrylate halogen bond donor were identified. An increase in association constant relative to monomeric species was observed for the interaction between halogen bond donor and acceptor polymers in solution. When the polymeric donor was combined with a block copolymer bearing halogen bond-accepting amine groups, higher-order structures were obtained in both organic solvent and in water. Transmission electron microscopy, dynamic light scattering and nuclear magnetic resonance spectroscopic data are consistent with structures having cores composed of the interacting halogen bond donor and acceptor segments. PMID:25867188

  20. Solution-phase photochemistry of a [FeFe]hydrogenase model compound: Evidence of photoinduced isomerisation

    SciTech Connect

    Kania, Rafal; Hunt, Neil T.; Frederix, Pim W. J. M.; Wright, Joseph A.; Pickett, Christopher J.; Ulijn, Rein V.

    2012-01-28

    The solution-phase photochemistry of the [FeFe] hydrogenase subsite model (μ-S(CH2)3S)Fe2(CO)4(PMe3)2 has been studied using ultrafast time-resolved infrared spectroscopy supported by density functional theory calculations. In three different solvents, n-heptane, methanol, and acetonitrile, relaxation of the tricarbonyl intermediate formed by UV photolysis of a carbonyl ligand leads to geminate recombination with a bias towards a thermodynamically less stable isomeric form, suggesting that facile interconversion of the ligand groups at the Fe center is possible in the unsaturated species. In a polar or hydrogen bonding solvent, this process competes with solvent substitution leading to the formation of stable solvent adduct species. The data provide further insight into the effect of incorporating non-carbonyl ligands on the dynamics and photochemistry of hydrogenase-derived biomimetic compounds.

  1. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
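
    To make the data-decomposition and image-assembly ideas above concrete, here is a minimal, generic sketch (not code from the article) of an image-space decomposition in which each worker renders an interleaved set of scanlines and the master reassembles the frame; the shading function, image size and worker count are placeholders.

    ```python
    # Generic sketch of image-space decomposition: each worker renders an
    # interleaved subset of scanlines, and the master reassembles the image.
    # Interleaving is a simple static load-balancing strategy for scenes whose
    # cost varies smoothly across the image.
    from multiprocessing import Pool

    WIDTH, HEIGHT, WORKERS = 64, 48, 4

    def shade(x: int, y: int) -> float:
        # Stand-in for a per-pixel rendering computation.
        return ((x * 31 + y * 17) % 255) / 255.0

    def render_rows(rows: list[int]) -> list[tuple[int, list[float]]]:
        return [(y, [shade(x, y) for x in range(WIDTH)]) for y in rows]

    if __name__ == "__main__":
        # Interleaved (cyclic) assignment of scanlines to workers.
        assignments = [list(range(w, HEIGHT, WORKERS)) for w in range(WORKERS)]
        with Pool(WORKERS) as pool:
            partials = pool.map(render_rows, assignments)
        image = [None] * HEIGHT
        for part in partials:
            for y, row in part:
                image[y] = row            # image assembly / compositing step
        print(f"assembled {sum(r is not None for r in image)} of {HEIGHT} rows")
    ```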

  2. Solution-phase laser processing of π-conjugated polymers: Switching between different molecular states

    NASA Astrophysics Data System (ADS)

    Takada, K.; Tomioka, A.

    2012-04-01

    Liquid-phase laser processing, in which the laser-irradiated target material is immersed in water for cooling, has been reported as a promising processing technique for thermally fragile organic materials. Although nanometer-sized particles have been obtained with liquid-phase laser processing, their physical properties did not change, because the quantum-mechanical size effect does not manifest itself in zero-radius Frenkel excitons. In the present study, we go a step further and use solution droplets as the target material: the organic molecules are molecularly dispersed in an organic solvent and are therefore expected to readily alter their conformation and energy state upon laser irradiation. The small volume of organic solvent evaporates quickly upon laser irradiation, leaving the bare organic molecules in water, where they are rapidly cooled. To prevent chemical decomposition of the target π-conjugated molecule, the specimen was resonantly irradiated with a ns-pulse green laser rather than a conventional UV laser. When a solid-state spin-coat film made from MEH-PPV chloroform solution was used as the irradiation target immersed in water, the resulting MEH-PPV particles showed photoluminescence (PL) similar to that of the spin-coat film and of the chloroform solution, including the 0→1 and 0→2 vibrational transitions; this indicates that the energy levels were not modified from those of the spin-coat film. In comparison, when tiny droplets of MEH-PPV chloroform solution (orange in color) were suspended in water, laser irradiation gave rise to yellow MEH-PPV particles showing 550 nm and 530 nm PL (type B), blue-shifted from the spin-coat film PL at 580 nm (type A), suggesting a successful phase transition of the MEH-PPV polymer to type B. Further solution-phase laser processing left the type B state unchanged. The irreversible phase transition from type A to type B suggests that the type B ground state has lower energy than type A, which is consistent with the blue-shifted PL of

  3. Dynamics of organic and inorganic arsenic in the solution phase of an acidic fen in Germany

    NASA Astrophysics Data System (ADS)

    Huang, J.-H.; Matzner, E.

    2006-04-01

    Wetland soils play a key role in the transformation of heavy metals in forested watersheds, influencing their mobility and ecotoxicity. Our goal was to investigate the mechanisms of release from the solid to the solution phase, the mobility, and the transformation of arsenic species in a fen soil. In methanol-water extracts, monomethylarsonic acid, dimethylarsinic acid, trimethylarsine oxide, arsenobetaine, and two unknown organic arsenic species were found, with concentrations up to 14 ng As g⁻¹ at the surface horizon. Arsenate was the dominant species at 0-30 cm depth, whereas arsenite predominated at 30-70 cm depth. Only up to 2.2% of the total arsenic in the fen was extractable with methanol-water. In porewaters, a depth-dependent variation of arsenic species, pH, redox potential, and the other chemical parameters along the profile was observed in June, together with a high proportion of organic arsenic species (up to 1.2 μg As L⁻¹, 70% of total arsenic). The tetramethylarsonium ion and an unknown organic arsenic species were additionally detected in porewaters at deeper horizons. In comparison, the arsenic speciation in porewaters in April was homogeneous with depth and no organic arsenic species were found. Thus, the occurrence of microbial methylation of arsenic in the fen was demonstrated for the first time. The roughly ten-fold higher total arsenic concentrations in porewaters in June compared to April were accompanied by elevated concentrations of total iron, lower concentrations of sulfate, and the presence of ammonium and phosphate. The low proportion of methanol-water extractable total arsenic suggests a generally low mobility of arsenic in fen soils. The release of arsenic from the solid to the solution phase in the fen is predominantly controlled by dissolution of iron oxides, redox transformation, and methylation of arsenic, driven by microbial activity in the growing season. As a result, increased concentrations of total arsenic and potentially toxic arsenic species in fen

  4. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  5. Magnesium-solution phase catholyte semi-fuel cell for undersea vehicles

    NASA Astrophysics Data System (ADS)

    Medeiros, Maria G.; Bessette, Russell R.; Deschenes, Craig M.; Patrissi, Charles J.; Carreiro, Louis G.; Tucker, Steven P.; Atwater, Delmas W.

    A magnesium-solution phase catholyte semi-fuel cell (SFC) is under development at the Naval Undersea Warfare Center (NUWC) as an energetic electrochemical system for low rate, long endurance undersea vehicle applications. This electrochemical system consists of a magnesium anode, a sodium chloride anolyte, a conductive membrane, a catalyzed carbon current collector, and a catholyte of sodium chloride, sulfuric acid and hydrogen peroxide. Bipolar electrode fabrication to minimize cell stack volume, long duration testing, and scale-up of electrodes from 77 to 1000 cm² have been the objectives of this project. Single cell and multi-cell testing in the 77 cm² configuration have been utilized to optimize all testing parameters including start-up conditions, flow rates, temperatures, and electrolyte concentrations while maintaining high voltages and efficiencies. The fabrication and testing of bipolar electrodes and operating parameter optimization for large electrode area cells will be presented. Designs for 1000 cm² electrodes, electrolyte flow patterns and current/voltage distribution across these large area cells will also be discussed.

  6. Promoting solution phase discharge in Li-O2 batteries containing weakly solvating electrolyte solutions

    NASA Astrophysics Data System (ADS)

    Gao, Xiangwen; Chen, Yuhui; Johnson, Lee; Bruce, Peter G.

    2016-08-01

    On discharge, the Li-O2 battery can form a Li2O2 film on the cathode surface, leading to low capacities, low rates and early cell death, or it can form Li2O2 particles in solution, leading to high capacities at relatively high rates and avoiding early cell death. Achieving discharge in solution is important and may be encouraged by the use of high donor or acceptor number solvents or salts that dissolve the LiO2 intermediate involved in the formation of Li2O2. However, the characteristics that make high donor or acceptor number solvents good (for example, high polarity) result in them being unstable towards LiO2 or Li2O2. Here we demonstrate that introduction of the additive 2,5-di-tert-butyl-1,4-benzoquinone (DBBQ) promotes solution phase formation of Li2O2 in low-polarity and weakly solvating electrolyte solutions. Importantly, it does so while simultaneously suppressing direct reduction to Li2O2 on the cathode surface, which would otherwise lead to Li2O2 film growth and premature cell death. It also halves the overpotential during discharge, increases the capacity 80- to 100-fold and enables rates >1 mA cm⁻² (areal) for cathodes with capacities of >4 mAh cm⁻² (areal). The DBBQ additive operates by a new mechanism that avoids the reactive LiO2 intermediate in solution.

  7. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate-language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  8. Nanomaterial-based biosensors using dual transducing elements for solution phase detection.

    PubMed

    Li, Ning; Su, Xiaodi; Lu, Yi

    2015-05-01

    Biosensors incorporating nanomaterials have demonstrated superior performance compared to their conventional counterparts. Most reported sensors use nanomaterials as a single transducer of signals, while biosensor designs using dual transducing elements have emerged as new approaches to further improve overall sensing performance. This review focuses on recent developments in nanomaterial-based biosensors using dual transducing elements for solution phase detection. The review begins with a brief introduction of the commonly used nanomaterial transducers suitable for designing dual element sensors, including quantum dots, metal nanoparticles, upconversion nanoparticles, graphene, graphene oxide, carbon nanotubes, and carbon nanodots. This is followed by the presentation of the four basic design principles, namely Förster Resonance Energy Transfer (FRET), Amplified Fluorescence Polarization (AFP), Bio-barcode Assay (BCA) and Chemiluminescence (CL), involving either two kinds of nanomaterials, or one nanomaterial and an organic luminescent agent (e.g. organic dyes, luminescent polymers) as dual transducers. Biomolecular and chemical analytes or biological interactions are detected by their control of the assembly and disassembly of the two transducing elements that change the distance between them, the size of the fluorophore-containing composite, or the catalytic properties of the nanomaterial transducers, among other property changes. Comparative discussions on their respective design rules and overall performances are presented afterwards. Compared with the single transducer biosensor design, such a dual-transducer configuration exhibits much enhanced flexibility and design versatility, allowing biosensors to be more specifically devised for various purposes. The review ends by highlighting some of the further development opportunities in this field. PMID:25763412
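
    Of the four design principles listed, FRET is the most directly quantifiable: the standard Förster relation E = 1/(1 + (r/R0)^6) explains why assembly or disassembly of the two transducing elements switches the signal so sharply with distance. Below is a minimal numerical illustration; the Förster radius and distances are assumed placeholder values, not data from the review.

    ```python
    # Generic illustration of why FRET-based dual-transducer sensors are so
    # distance sensitive: Förster efficiency E = 1 / (1 + (r/R0)^6).
    R0 = 5.0  # nm, assumed Förster radius for a hypothetical donor-acceptor pair

    def fret_efficiency(r_nm: float, r0_nm: float = R0) -> float:
        return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

    for r in (2.0, 4.0, 5.0, 6.0, 8.0, 10.0):
        print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
    # A binding event that pulls the two transducers from ~8 nm to ~4 nm apart
    # switches the pair from nearly no transfer to nearly complete transfer.
    ```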

  9. A liquid flatjet system for solution phase soft-x-ray spectroscopy

    PubMed Central

    Ekimova, Maria; Quevedo, Wilson; Faubel, Manfred; Wernet, Philippe; Nibbering, Erik T. J.

    2015-01-01

    We present a liquid flatjet system for solution phase soft-x-ray spectroscopy. The flatjet set-up utilises the phenomenon of formation of stable liquid sheets upon collision of two identical laminar jets. Colliding the two single water jets, coming out of the nozzles with 50 μm orifices, under an impact angle of 48° leads to double sheet formation, of which the first sheet is 4.6 mm long and 1.0 mm wide. The liquid flatjet remains fully functional under vacuum conditions (<10−3 mbar), allowing soft-x-ray spectroscopy of aqueous solutions in transmission mode. We analyse the liquid water flatjet thickness under atmospheric pressure using interferometric or mid-infrared transmission measurements and under vacuum conditions by measuring the absorbance of the O K-edge of water in transmission, and comparing our results with previously published data obtained with standing cells with Si3N4 membrane windows. The thickness of the first liquid sheet is found to vary between 1.4–3 μm, depending on the transverse and longitudinal position in the liquid sheet. We observe that the derived thickness is of similar magnitude under 1 bar and under vacuum conditions. A catcher unit facilitates the recycling of the solutions, allowing measurements on small sample volumes (∼10 ml). We demonstrate the applicability of this approach by presenting measurements on the N K-edge of aqueous NH4+. Our results suggest the high potential of using liquid flatjets in steady-state and time-resolved studies in the soft-x-ray regime. PMID:26798824

  10. A liquid flatjet system for solution phase soft-x-ray spectroscopy.

    PubMed

    Ekimova, Maria; Quevedo, Wilson; Faubel, Manfred; Wernet, Philippe; Nibbering, Erik T J

    2015-09-01

    We present a liquid flatjet system for solution phase soft-x-ray spectroscopy. The flatjet set-up utilises the phenomenon of formation of stable liquid sheets upon collision of two identical laminar jets. Colliding the two single water jets, coming out of the nozzles with 50 μm orifices, under an impact angle of 48° leads to double sheet formation, of which the first sheet is 4.6 mm long and 1.0 mm wide. The liquid flatjet remains fully functional under vacuum conditions (<10(-3) mbar), allowing soft-x-ray spectroscopy of aqueous solutions in transmission mode. We analyse the liquid water flatjet thickness under atmospheric pressure using interferometric or mid-infrared transmission measurements and under vacuum conditions by measuring the absorbance of the O K-edge of water in transmission, and comparing our results with previously published data obtained with standing cells with Si3N4 membrane windows. The thickness of the first liquid sheet is found to vary between 1.4-3 μm, depending on the transverse and longitudinal position in the liquid sheet. We observe that the derived thickness is of similar magnitude under 1 bar and under vacuum conditions. A catcher unit facilitates the recycling of the solutions, allowing measurements on small sample volumes (∼10 ml). We demonstrate the applicability of this approach by presenting measurements on the N K-edge of aqueous NH4 (+). Our results suggest the high potential of using liquid flatjets in steady-state and time-resolved studies in the soft-x-ray regime. PMID:26798824
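
    Deriving a sheet thickness from a transmission measurement, as done here at the O K-edge, follows the Beer-Lambert relation T = exp(-t/L). A minimal illustration with placeholder numbers; the attenuation length and transmission below are assumptions for the sake of the example, not values from the paper.

    ```python
    # Generic Beer-Lambert illustration of extracting a liquid-sheet thickness
    # from a measured soft-x-ray transmission: T = exp(-t / L), so t = -L ln(T).
    # Both numbers below are assumed placeholders, not values from the paper.
    import math

    L_att_um = 0.5       # assumed soft-x-ray attenuation length of water, micrometres
    T_measured = 0.02    # assumed measured transmission through the first sheet

    thickness_um = -L_att_um * math.log(T_measured)
    print(f"inferred sheet thickness ≈ {thickness_um:.2f} µm")
    ```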

  11. Total Synthesis of Teixobactin.

    PubMed

    Giltrap, Andrew M; Dowman, Luke J; Nagalingam, Gayathri; Ochoa, Jessica L; Linington, Roger G; Britton, Warwick J; Payne, Richard J

    2016-06-01

    The first total synthesis of the cyclic depsipeptide natural product teixobactin is described. Synthesis was achieved by solid-phase peptide synthesis, incorporating the unusual l-allo-enduracididine as a suitably protected synthetic cassette and employing a key on-resin esterification and solution-phase macrolactamization. The synthetic natural product was shown to possess potent antibacterial activity against a range of Gram-positive pathogenic bacteria, including a virulent strain of Mycobacterium tuberculosis and methicillin-resistant Staphylococcus aureus (MRSA). PMID:27191730

  12. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
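
    The scaling estimate quoted above can be evaluated directly. The sketch below uses illustrative radii; the pipe dimensions are assumptions, not values from the paper.

    ```python
    # Worked example of N = (R/r)**alpha, the estimated number of small pipes
    # needed to carry the same oil flux as one large pipe (alpha = 4 for laminar
    # lubricating water flow, 19/7 for turbulent). Radii are illustrative only.
    R_large = 0.30   # m, radius of the single large pipe (assumed)
    r_small = 0.05   # m, radius of each small pipe (assumed)

    for label, alpha in (("laminar", 4.0), ("turbulent", 19.0 / 7.0)):
        n_pipes = (R_large / r_small) ** alpha
        print(f"{label:9s}: N = (R/r)^{alpha:.3g} = {n_pipes:,.0f} small pipes")
    ```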

  13. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.
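
    A minimal, hedged sketch of the data-parallel, owner-computes style described above (generic Python, not tied to any particular SIMD or MIMD machine): the data is partitioned once and each worker applies the same update to the block it owns.

    ```python
    # Minimal sketch of the data-parallel "owner computes" style: the array is
    # partitioned statically, and each worker applies the same operation to the
    # block it owns (simulated here with a process pool).
    from multiprocessing import Pool

    def local_update(block: list[float]) -> list[float]:
        # Same instruction stream applied to locally owned data.
        return [x * x + 1.0 for x in block]

    if __name__ == "__main__":
        data = [float(i) for i in range(16)]
        nworkers = 4
        blocks = [data[i::nworkers] for i in range(nworkers)]   # static decomposition
        with Pool(nworkers) as pool:
            results = pool.map(local_update, blocks)
        print(results)
    ```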

  15. Protein oxidative modifications during electrospray ionization: solution phase electrochemistry or corona discharge-induced radical attack?

    PubMed

    Boys, Brian L; Kuprowski, Mark C; Noël, James J; Konermann, Lars

    2009-05-15

    The exposure of solution-phase proteins to reactive oxygen species (ROS) causes oxidative modifications, giving rise to the formation of covalent +16 Da adducts. Electrospray ionization (ESI) mass spectrometry (MS) is the most widely used method for monitoring the extent of these modifications. Unfortunately, protein oxidation can also take place as an experimental artifact during ESI, such that it may be difficult to assess the actual level of oxidation in bulk solution. Previous work has demonstrated that ESI-induced oxidation is highly prevalent when operating at strongly elevated capillary voltage V(0) (e.g., +8 kV) and with oxygen nebulizer gas in the presence of a clearly visible corona discharge. Protein oxidation under these conditions is commonly attributed to OH radicals generated in the plasma of the discharge. On the other hand, charge balancing oxidation reactions are known to take place at the metal/liquid interface of the emitter. Previous studies have not systematically explored whether such electrochemical processes could be responsible for the formation of oxidative +16 Da adducts instead of (or in combination with) plasma-generated ROS. Using hemoglobin as a model system, this work illustrates the occurrence of extensive protein oxidation even under typical operating conditions (e.g., V(0) = 3.5 kV, N(2) nebulizer gas). Surprisingly, measurements of the current flowing in the ESI circuit demonstrate that a weak corona discharge persists for these relatively gentle settings. On the basis of comparative experiments with nebulizer gases of different dielectric strength, it is concluded that ROS generated under discharge conditions are solely responsible for ESI-induced protein oxidation. This result is corroborated through off-line electrolysis experiments designed to mimic the electrochemical processes taking place during ESI. Our findings highlight the necessity of using easily oxidizable internal standards in biophysical or biomedical ESI

  16. Polarization Sensitive THz TDS and Fabrication of Alignment Cells for Solution Phase THz Spectroscopy

    NASA Astrophysics Data System (ADS)

    George, Deepu Koshy

    sense that it makes use of the polarization state of the THz pulse, which is also the case for the alignment spectroscopy. The PMOTS technique detects the rotation and change in ellipticity of the incident polarization, from which the Hall coefficients of the sample can be calculated. The final section deals with the fabrication of Dynamical Alignment Terahertz Spectroscopy cells for solution-phase measurements. Design, fabrication, and process optimization are detailed. Micro-fabrication based on optical lithography and SU-8 negative photoresist has been explored.

  17. Synthesis of Chemiluminescent Esters: A Combinatorial Synthesis Experiment for Organic Chemistry Students

    ERIC Educational Resources Information Center

    Duarte, Robert; Nielson, Janne T.; Dragojlovic, Veljko

    2004-01-01

    Combinatorial synthesis refers to a group of techniques aimed at synthesizing a large number of structurally diverse compounds. In this experiment, students synthesize chemiluminescent esters using both parallel combinatorial synthesis and mix-and-split combinatorial synthesis.

  18. Use of cyclohexylisocyanide and methyl 2-isocyanoacetate as convertible isocyanides for microwave-assisted fluorous synthesis of 1,4-benzodiazepine-2,5-dione library.

    PubMed

    Zhou, Hongyu; Zhang, Wei; Yan, Bing

    2010-01-01

    A new protocol in which cyclohexylisocyanide and methyl 2-isocyanoacetate are used as convertible isocyanides for Ugi/de-Boc/cyclization/Suzuki synthesis of biaryl-substituted 1,4-benzodiazepine-2,5-diones has been developed. Ugi reactions of Boc-protected anthranilic acids, fluorous benzaldehydes, amines, and cyclohexylisocyanide or methyl 2-isocyanoacetate were carried out at room temperature. Microwave-promoted de-Boc/cyclization reactions afforded 1,4-benzodiazepine-2,5-diones (BZDs). Suzuki coupling reactions further derivatized the BZD ring by removing the fluorous tag and introducing the biaryl group. A thirty-three-member biaryl-substituted BZD library containing four points of diversity was prepared by microwave-assisted solution-phase fluorous parallel synthesis. PMID:19947585

  19. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  20. 2'-O-Methyl- and 2'-O-propargyl-5-methylisocytidine: synthesis, properties and impact on the isoCd-dG and the isoCd-isoGd base pairing in nucleic acids with parallel and antiparallel strand orientation.

    PubMed

    Jana, Sunit K; Leonard, Peter; Ingale, Sachin A; Seela, Frank

    2016-06-01

    Oligonucleotides containing 2'-O-methylated 5-methylisocytidine (3) and 2'-O-propargyl-5-methylisocytidine (4) as well as the non-functionalized 5-methyl-2'-deoxyisocytidine (1b) were synthesized. In MALDI-TOF mass spectrometry, oligonucleotides containing 1b are susceptible to stepwise depyrimidination. In contrast, oligonucleotides incorporating 2'-O-alkylated nucleosides 3 and 4 are stable. This is supported by acid-catalyzed hydrolysis experiments performed on nucleosides in solution. 2'-O-Alkylated nucleoside 3 was synthesized from 2'-O-5-dimethyluridine via tosylation, anhydro nucleoside formation and ring opening. The corresponding 4 was obtained by direct regioselective alkylation of 5-methylisocytidine (1d) with propargyl bromide under phase-transfer conditions. Both compounds were converted to phosphoramidites and employed in solid-phase oligonucleotide synthesis. Hybridization experiments resulted in duplexes with antiparallel or parallel chains. In parallel duplexes, methylation or propargylation of the 2'-hydroxyl group of isocytidine leads to destabilization, while in antiparallel DNA this effect is less pronounced. 2'-O-Propargylated 4 was used to cross-link nucleosides and oligonucleotides to homodimers by a stepwise click ligation with a bifunctional azide. PMID:27221215

  1. Understanding the solution phase chemistry and solid state thermodynamic behavior of pharmaceutical cocrystals

    NASA Astrophysics Data System (ADS)

    Maheshwari, Chinmay

    Cocrystals have drawn a lot of research interest in the last decade due to their potential to favorably alter the physicochemical and biopharmaceutical properties of active pharmaceutical ingredients. This dissertation focuses on the thermodynamic stability and solubility of pharmaceutical cocrystals. Specifically, the objectives are to: (i) investigate the influence of coformer properties such as solubility and ionization characteristics on cocrystal solubility and stability as a function of pH, (ii) measure the thermodynamic solubility of metastable cocrystals, and study the solubility differences measured by kinetic and equilibrium methods, (iii) investigate the role of surfactants in the solubility and synthesis of cocrystals, (iv) investigate the solid-state phase transformation of reactants to cocrystals and the factors that influence the reaction kinetics, and (v) provide models that enable the prediction of cocrystal formation by calculating the free energy of formation for a solid-to-solid transformation of reactants to cocrystals. Cocrystal solubilities were measured directly when cocrystals were thermodynamically stable, while solubilities were calculated from eutectic concentration measurements when cocrystals were more soluble than their components. Cocrystal solubility was highly dependent on coformer solubilities for gabapentin-lactam and lamotrigine cocrystals. It was found that melting point is not a good indicator of cocrystal solubility, as solute-solvent interactions quantified by the activity coefficient play a huge role in the observed solubility. Similar to salts, cocrystals also exhibit a pHmax; however, salts and cocrystals have different dependencies on the parameters that govern the value of pHmax. It is also shown that cocrystals can provide a solubility advantage over salts, as lamotrigine-nicotinamide cocrystal hydrate has about 6-fold higher solubility relative to the lamotrigine-saccharin salt. In the case of mixtures of solid

  2. Anti-parallel triplexes: Synthesis of 8-aza-7-deazaadenine nucleosides with a 3-aminopropynyl side-chain and its corresponding LNA analog.

    PubMed

    Kosbar, Tamer R; Sofan, Mamdouh A; Waly, Mohamed A; Pedersen, Erik B

    2015-05-15

    The phosphoramidites of the DNA monomers 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased by 1.2°C and 3°C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability by 1.4°C compared to the wild-type oligonucleotide. In order to be sure that the 3-aminopropyn-1-yl chain was contributing to the stability of the triplex, the nucleobase X without the aminopropynyl group was inserted in the same positions. In all cases the thermal stability was lower than for the corresponding oligonucleotides carrying the 3-aminopropyn-1-yl chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1°C). The thermal stability also decreased by about 6.1°C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (A(L)). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. It was also shown that the nucleobase Y exhibited good stacking with the other nucleobases in the TFO and good binding to the Watson-Crick duplex. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, weakening the stacking interactions with the TFO nucleobases and the binding with the duplex part. PMID:25868748

  3. Design, synthesis and biological evaluation of paralleled Aza resveratrol-chalcone compounds as potential anti-inflammatory agents for the treatment of acute lung injury.

    PubMed

    Chen, Wenbo; Ge, Xiangting; Xu, Fengli; Zhang, Yali; Liu, Zhiguo; Pan, Jialing; Song, Jiao; Dai, Yuanrong; Zhou, Jianmin; Feng, Jianpeng; Liang, Guang

    2015-08-01

    Acute lung injury (ALI) is a major cause of acute respiratory failure in critically ill patients. It has been reported that both resveratrol and chalcone derivatives could ameliorate lung injury induced by inflammation. A series of paralleled Aza resveratrol-chalcone compounds (5a-5m, 6a-6i) were designed, synthesized and screened for anti-inflammatory activity. The majority showed potent inhibition of LPS-stimulated IL-6 and TNF-α expression in macrophages; compound 6b was the most potent analog, inhibiting LPS-induced IL-6 release in a dose-dependent manner. Moreover, 6b exhibited protection against LPS-induced acute lung injury in vivo. These results offer further insight into the use of Aza resveratrol-chalcone compounds for the treatment of inflammatory diseases, and into the use of compound 6b as a lead compound for the development of anti-ALI agents. PMID:26048788

  4. High resolution ion mobility measurements for gas phase proteins: correlation between solution phase and gas phase conformations

    NASA Astrophysics Data System (ADS)

    Hudgins, Robert R.; Woenckhaus, Jürgen; Jarrold, Martin F.

    1997-11-01

    Our high resolution ion mobility apparatus has been modified by attaching an electrospray source to perform measurements for biological molecules. While the greater resolving power permits the resolution of more conformations for BPTI and cytochrome c, the resolved features are generally much broader than expected for a single rigid conformation. A major advantage of the new experimental configuration is the much gentler introduction of ions into the drift tube, so that the observed gas phase conformations appear to more closely reflect those present in solution. For example, it is possible to distinguish between the native state of cytochrome c and the methanol-denatured form on the basis of the ion mobility measurements; the mass spectra alone are not sensitive enough to detect this change. Thus this approach may provide a quick and sensitive tool for probing the solution phase conformations of biological molecules.

  5. Stability and spinodal decomposition of the solid-solution phase in the ruthenium-cerium-oxide electro-catalyst.

    PubMed

    Li, Yanmei; Wang, Xin; Shao, Yanqun; Tang, Dian; Wu, Bo; Tang, Zhongzhi; Lin, Wei

    2015-01-14

    The phase diagram of Ru-Ce-O was calculated by a combination of ab initio density functional theory and thermodynamic calculations. The phase diagram indicates that the solubility between ruthenium oxide and cerium oxide is very low at temperatures below 1100 K. Solid solution phases, if existing under normal experimental conditions, are metastable and subject to a quasi-spinodal decomposition to form a mixture of a Ru-rich rutile oxide phase and a Ce-rich fluorite oxide phase. To study the spinodal decomposition of Ru-Ce-O, Ru0.6Ce0.4O2 samples were prepared at 280 °C and 450 °C. XRD and in situ TEM characterization provide proof of the quasi-spinodal decomposition of Ru0.6Ce0.4O2. The present study provides a fundamental reference for the phase design of the Ru-Ce-O electro-catalyst. PMID:25418197

  6. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts relating to parallel processing.

  7. Jeffamine Derivatized TentaGel Beads and PDMS Microbead Cassettes for Ultra-high Throughput in situ Releasable Solution-Phase Cell-based Screening of OBOC Combinatorial Small Molecule Libraries

    PubMed Central

    Townsend, Jared B.; Shaheen, Farzana; Liu, Ruiwu; Lam, Kit S.

    2011-01-01

    A method to efficiently immobilize and partition large quantities of microbeads in an array format in a microfabricated polydimethylsiloxane (PDMS) cassette for high-throughput in situ releasable solution-phase cell-based screening of one-bead-one-compound (OBOC) combinatorial libraries is described. Commercially available Jeffamine triamine T-403 (∼440 Da) was derivatized such that two of its amino groups were protected by Fmoc and the remaining amino group was capped with succinic anhydride to generate a carboxyl group. The resulting tri-functional hydrophilic polymer was then sequentially coupled twice to the outer layer of topologically segregated bilayer TentaGel (TG) beads with solid-phase peptide synthesis chemistry, resulting in beads with increased loading capacity, hydrophilicity and porosity at the outer layer. We have found that such a bead configuration can facilitate ultra-high-throughput in situ releasable solution-phase screening of OBOC libraries. An encoded releasable OBOC small molecule library was constructed on Jeffamine-derivatized TG beads with library compounds tethered to the outer layer via a disulfide linker and coding tags in the interior of the beads. Compound-beads could be efficiently loaded (5-10 minutes) into a 5 cm diameter Petri dish containing a 10,000-well PDMS microbead cassette, such that over 90% of the microwells were each filled with only one compound-bead. Jurkat T-lymphoid cancer cells suspended in Matrigel® were then layered over the microbead cassette to immobilize the compound-beads. After 24 hours of incubation at 37°C, dithiothreitol was added to trigger the release of library compounds. Forty-eight hours later, an MTT reporter assay was used to identify regions of reduced cell viability surrounding each positive bead. From a total of about 20,000 beads screened, three positive beads were detected and physically isolated for decoding. A strong consensus motif was identified for these three positive compounds. These

  8. On the study and development of aqueous inorganic hydroxoaquo tridecamers: Structural observations in the solid and solution phases

    NASA Astrophysics Data System (ADS)

    Kamunde-Devonish, Maisha Kanyua

    Group 13 metals play a pivotal role in many areas of research ranging from materials to environmental chemistry. An important facet of these disciplines is the design of discrete molecules that can serve as functional materials for electronics applications and modeling studies. A solution-based synthetic strategy for the preparation of discrete Group 13 hydroxo-aquo tridecamers with utility as single-source precursors for amorphous functional thin film oxides is introduced in this dissertation. Several techniques including 1H-Nuclear Magnetic Resonance (NMR) spectroscopy, 1H-Diffusion Ordered spectroscopy, Solid-state NMR, Dynamic Light Scattering, and Raman spectroscopy are used to acquire structural information necessary for understanding the nature of these precursors in both the solid and solution phases. The dynamic behavior of these compounds has encouraged additional experiments that will pave the way for new studies with significant importance as the environmental ramifications of these compounds become relevant for future technologies. This dissertation includes previously published and unpublished co-authored material.

  9. Discovery of a thermally persistent h.c.p. solid-solution phase in the Ni-W system

    SciTech Connect

    Kurz, S. J. B. Leineweber, A.; Maisel, S. B.; Höfler, M.; Müller, S.; Mittemeijer, E. J.

    2014-08-28

    Although the accepted Ni-W phase diagram does not reveal the existence of h.c.p.-based phases, h.c.p.-like stacking sequences were observed in magnetron-co-sputtered Ni-W thin films at W contents of 20 to 25 at. %, by using transmission electron microscopy and X-ray diffraction. The occurrence of this h.c.p.-like solid-solution phase could be rationalized by first-principles calculations, showing that the vicinity of the system's ground-state line is populated with metastable h.c.p.-based superstructures in the intermediate concentration range from 20 to 50 at. % W. The h.c.p.-like stacking in Ni-W films was observed to be thermally persistent up to at least 850 K, as evidenced by extensive X-ray diffraction analyses on specimens before and after annealing treatments. The tendency of Ni-W for excessive planar faulting is discussed in the light of these new findings.

  10. Promoting solution phase discharge in Li-O2 batteries containing weakly solvating electrolyte solutions.

    PubMed

    Gao, Xiangwen; Chen, Yuhui; Johnson, Lee; Bruce, Peter G

    2016-08-01

    On discharge, the Li-O2 battery can form a Li2O2 film on the cathode surface, leading to low capacities, low rates and early cell death, or it can form Li2O2 particles in solution, leading to high capacities at relatively high rates and avoiding early cell death. Achieving discharge in solution is important and may be encouraged by the use of high donor or acceptor number solvents or salts that dissolve the LiO2 intermediate involved in the formation of Li2O2. However, the characteristics that make high donor or acceptor number solvents good (for example, high polarity) result in them being unstable towards LiO2 or Li2O2. Here we demonstrate that introduction of the additive 2,5-di-tert-butyl-1,4-benzoquinone (DBBQ) promotes solution phase formation of Li2O2 in low-polarity and weakly solvating electrolyte solutions. Importantly, it does so while simultaneously suppressing direct reduction to Li2O2 on the cathode surface, which would otherwise lead to Li2O2 film growth and premature cell death. It also halves the overpotential during discharge, increases the capacity 80- to 100-fold and enables rates >1 mA cm⁻² (areal) for cathodes with capacities of >4 mAh cm⁻² (areal). The DBBQ additive operates by a new mechanism that avoids the reactive LiO2 intermediate in solution. PMID:27111413

  11. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  12. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  13. Side-chain effects on the solution-phase conformations and charge photogeneration dynamics of low-bandgap copolymers

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Ming; Liang, Ran; Xing, Ya-Dong; Hu, Rong; Zhao, Ning-Jiu; Zhang, Wei; Fu, Li-Min; Ai, Xi-Cheng; Zhang, Jian-Ping; Hou, Jian-Hui

    2013-09-01

    Solution-phase conformations and charge photogeneration dynamics of a pair of low-bandgap copolymers based on benzo[1,2-b:4,5-b']dithiophene (BDT) and thieno[3,4-b]thiophene (TT), differed by the respective carbonyl (-C) and ester (-E) substituents at the TT units, were comparatively investigated by using near-infrared time-resolved absorption (TA) spectroscopy at 25 °C and 120 °C. Steady-state and TA spectroscopic results corroborated by quantum chemical analyses prove that both PBDTTT-C and PBDTTT-E in chlorobenzene solutions are self-aggregated; however, the former bears a relatively higher packing order. Specifically, PBDTTT-C aggregates with more π-π stacked domains, whereas PBDTTT-E does with more random coils interacting strongly at the chain intersections. At 25 °C, the copolymers exhibit comparable exciton lifetimes (˜1 ns) and fluorescence quantum yields (˜2%), but distinctly different charge photogeneration dynamics: PBDTTT-C on photoexcitation gives rise to a branching ratio of charge separated (CS) over charge transfer (CT) states more than 20% higher than PBDTTT-E does, correlating with their photovoltaic performance. Temperature and excitation-wavelength dependent exciton/charge dynamics suggest that the CT states localize at the chain intersections that are survivable up to 120 °C, and that the excitons and the CS states inhabit the stretched strands and the also thermally robust orderly stacked domains. The stable self-aggregation structures and the associated primary charge dynamics of the PBDTTT copolymers in solutions are suggested to impact intimately on the morphologies and the charge photogeneration efficiency of the solid-state photoactive layers.

  14. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how parallel FORTH is implemented on the MPP.

  15. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using Field Programmable Gate Arrays (FPGAs).
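
    To make the analysis/synthesis idea concrete, here is a minimal software sketch (not the FPGA implementation described above) of a two-channel Haar quadrature-mirror filterbank in NumPy: the analysis bank splits a signal into half-rate low and high subbands, and the synthesis bank recombines them with perfect reconstruction. Function names and the choice of the Haar pair are illustrative assumptions.

    ```python
    import numpy as np

    def analysis_bank(x):
        """Split a signal into low- and high-band subbands at half rate (Haar QMF)."""
        x = np.asarray(x, dtype=float)
        if len(x) % 2:                      # pad to even length
            x = np.append(x, 0.0)
        pairs = x.reshape(-1, 2)
        low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        return low, high

    def synthesis_bank(low, high):
        """Recombine the subbands into the full-rate signal (perfect reconstruction)."""
        even = (low + high) / np.sqrt(2.0)
        odd = (low - high) / np.sqrt(2.0)
        y = np.empty(2 * len(low))
        y[0::2], y[1::2] = even, odd
        return y

    if __name__ == "__main__":
        x = np.sin(2 * np.pi * 0.05 * np.arange(64))
        lo, hi = analysis_bank(x)           # the two subbands could be processed in parallel
        y = synthesis_bank(lo, hi)
        print("max reconstruction error:", np.max(np.abs(y[:len(x)] - x)))
    ```

    Because the subbands are independent between analysis and synthesis, each can be processed by its own hardware or software channel, which is the parallelism the abstract exploits.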

  16. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  17. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  18. Amorphous and nanocrystalline titanium nitride and carbonitride materials obtained by solution phase ammonolysis of Ti(NMe 2) 4

    NASA Astrophysics Data System (ADS)

    Jackson, Andrew W.; Shebanova, Olga; Hector, Andrew L.; McMillan, Paul F.

    2006-05-01

    Solution phase reactions between tetrakisdimethylamidotitanium (Ti(NMe2)4) and ammonia yield precipitates with composition TiC0.5N1.1H2.3. Thermogravimetric analysis (TGA) indicates that decomposition of these precursor materials proceeds in two steps to yield rocksalt-structured TiN or Ti(C,N), depending upon the gas atmosphere. Heating to above 700 °C in NH3 yields nearly stoichiometric TiN. However, heating in N2 atmosphere leads to isostructural carbonitrides, approximately TiC0.2N0.8 in composition. The particle sizes of these materials range between 4 and 12 nm. Heating to a temperature that corresponds to the intermediate plateau in the TGA curve (450 °C) results in a black powder that is X-ray amorphous and is electrically conducting. The bulk chemical composition of this material is found to be TiC0.22N1.01H0.07, or Ti3(C0.17N0.78H0.05)3.96, close to Ti3(C,N)4. Previous workers have suggested that the intermediate compound was an amorphous form of Ti3N4. TEM investigation of the material indicates the presence of nanocrystalline regions <5 nm in dimension embedded in an amorphous matrix. Raman and IR reflectance data indicate some structural similarity with the rocksalt-structured TiN and Ti(C,N) phases, but with disorder and substantial vacancies or other defects. XAS indicates that the local structure of the amorphous solid is based on the rocksalt structure, but with a large proportion of vacancies on both the cation (Ti) and anion (C,N) sites. The first shell Ti coordination is approximately 4.5 and the second-shell coordination ˜5.5 compared with expected values of 6 and 12, respectively, for the ideal rocksalt structure. The material is thus approximately 50% less dense than known Tix(C,N)y crystalline phases.

  19. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  20. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  1. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  2. UV-visible spectral identification of the solution-phase and solid-phase permanganate oxidation reactions of thymine acetic acid.

    PubMed

    Bui, Chinh T; Sam, Lien A; Cotton, Richard G H

    2004-03-01

    Solution-phase and solid-phase permanganate oxidation reactions of thymine acetic acid were investigated by spectroscopy. The spectral data showed the formation of a stable organomanganese intermediate, which was responsible for the rise in the absorbance at 420 nm. This result enables unambiguous interpretation of the absorbance change at 420 nm, as the intermediate permanganate ions could be isolated on the solid supports. PMID:14980689

  3. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  4. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
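
    As an illustration of the SENSE-type unfolding described above, the following is a toy NumPy sketch for an acceleration factor of 2: each aliased pixel is a coil-weighted superposition of two full-FOV pixels, so a small least-squares solve per location recovers both. The sensitivity maps, array sizes, and the way aliasing is simulated are all illustrative assumptions, not the clinical reconstruction.

    ```python
    import numpy as np

    def sense_unfold_r2(aliased, sens):
        """Unfold R=2 SENSE-aliased coil images.

        aliased : (ncoils, ny//2, nx) aliased (reduced-FOV) coil images
        sens    : (ncoils, ny, nx) coil sensitivity maps of the full FOV
        returns : (ny, nx) reconstructed image
        """
        ncoils, ny, nx = sens.shape
        half = ny // 2
        recon = np.zeros((ny, nx), dtype=complex)
        for y in range(half):
            for x in range(nx):
                # Each aliased pixel mixes the two full-FOV pixels separated by ny//2.
                S = sens[:, [y, y + half], x]          # (ncoils, 2) sensitivity matrix
                b = aliased[:, y, x]                   # (ncoils,) measured values
                v, *_ = np.linalg.lstsq(S, b, rcond=None)
                recon[y, x], recon[y + half, x] = v
        return recon

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ny, nx, ncoils = 8, 8, 4
        truth = rng.standard_normal((ny, nx))
        sens = rng.standard_normal((ncoils, ny, nx)) + 1j * rng.standard_normal((ncoils, ny, nx))
        coil_imgs = sens * truth
        aliased = coil_imgs[:, : ny // 2, :] + coil_imgs[:, ny // 2 :, :]   # simulate R=2 folding
        recon = sense_unfold_r2(aliased, sens)
        print("max error:", np.max(np.abs(recon - truth)))
    ```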

  5. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  6. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  7. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
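
    To make the first of the listed problems concrete, here is a minimal serial sketch of the classic Moore-Hodgson rule for minimizing the number of tardy jobs on a single machine. It only illustrates the problem being solved; the paper's contribution, fast shared-memory parallel algorithms for such problems, is not reproduced here.

    ```python
    import heapq

    def min_tardy_jobs(jobs):
        """Moore-Hodgson rule: maximize on-time jobs on one machine.

        jobs : list of (processing_time, due_date)
        returns the number of tardy (late) jobs.
        """
        scheduled = []      # max-heap (by processing time) of currently on-time jobs
        total_time = 0
        for p, d in sorted(jobs, key=lambda j: j[1]):   # earliest-due-date order
            heapq.heappush(scheduled, -p)
            total_time += p
            if total_time > d:                          # deadline missed: drop the longest job
                total_time += heapq.heappop(scheduled)  # popped value is -p_max
        return len(jobs) - len(scheduled)

    if __name__ == "__main__":
        jobs = [(2, 3), (4, 5), (3, 6), (5, 8), (2, 9)]
        print("tardy jobs:", min_tardy_jobs(jobs))      # -> 2
    ```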

  8. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
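
    The sketch below shows one simple way to parallelize a segmented sieve in Python with multiprocessing: each worker marks composites in its own block of integers using a shared set of base primes. It uses a blocked rather than scattered decomposition and a worker pool rather than the hypercube of the abstract, so it is only an illustration of the general idea; block sizes and worker counts are arbitrary choices.

    ```python
    import math
    from multiprocessing import Pool

    def base_primes(limit):
        """Serial sieve for the base primes up to sqrt(N)."""
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[:2] = b"\x00\x00"
        for p in range(2, int(math.isqrt(limit)) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
        return [i for i, f in enumerate(is_prime) if f]

    def sieve_segment(args):
        """Each worker sieves one block [lo, hi) using the shared base primes."""
        lo, hi, primes = args
        mark = bytearray([1]) * (hi - lo)
        for p in primes:
            start = max(p * p, ((lo + p - 1) // p) * p)
            mark[start - lo :: p] = bytearray(len(mark[start - lo :: p]))
        return [lo + i for i, f in enumerate(mark) if f]

    def parallel_sieve(n, nworkers=4):
        primes = base_primes(int(math.isqrt(n)))
        block = (n + nworkers) // nworkers
        tasks = [(lo, min(lo + block, n + 1), primes) for lo in range(2, n + 1, block)]
        with Pool(nworkers) as pool:
            return sorted(p for seg in pool.map(sieve_segment, tasks) for p in seg)

    if __name__ == "__main__":
        print(parallel_sieve(100))
    ```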

  9. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  10. Parallel tetrahedral mesh adaptation with dynamic load balancing

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    2000-06-28

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  11. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  12. Multifunctional nanocomposites constructed from Fe3O4-Au nanoparticle cores and a porous silica shell in the solution phase.

    PubMed

    Chen, Fenghua; Chen, Qingtao; Fang, Shaoming; Sun, Yu'an; Chen, Zhijun; Xie, Gang; Du, Yaping

    2011-11-01

    This work is directed towards the synthesis of multifunctional nanoparticles composed of Fe(3)O(4)-Au nanocomposite cores and a porous silica shell (Fe(3)O(4)-Au/pSiO(2)), aimed at simultaneously ensuring the stability and the magnetic and optical properties of the magnetic-gold nanocomposite. The prepared Fe(3)O(4)-Au/pSiO(2) core/shell nanoparticles are characterized by means of TEM, N(2) adsorption-desorption isotherms, FTIR, XRD, UV-vis, and VSM. Meanwhile, as an example of the applications, the catalytic activity of the porous silica shell-encapsulated Fe(3)O(4)-Au nanoparticles is investigated by choosing a model reaction, the reduction of o-nitroaniline to benzenediamine by NaBH(4). Due to the existence of porous silica shells, the reaction with Fe(3)O(4)-Au/pSiO(2) core/shell nanoparticles as a catalyst follows second-order kinetics with a rate constant (k) of about 0.0165 l mol(-1) s(-1), remarkably different from the first-order kinetics with a k of about 0.002 s(-1) for the reduction reaction with the core Fe(3)O(4)-Au nanoparticles as a catalyst. PMID:21637876

  13. Synthesis and library construction of privileged tetra-substituted Δ5-2-oxopiperazine as β-turn structure mimetics.

    PubMed

    Kim, Jonghoon; Lee, Won Seok; Koo, Jaeyoung; Lee, Jeongae; Park, Seung Bum

    2014-01-13

    In this study, we developed an efficient and practical procedure for the synthesis of tetra-substituted Δ5-2-oxopiperazine that mimics the bioactive β-turn structural motif of proteins. This synthetic route is robust and modular enough to accommodate four different substituents to obtain a high level of molecular diversity without any deterioration in stereochemical enrichment of the natural and unnatural amino acids. Through the in silico studies, including a distance calculation of side chains and a conformational overlapping of our model compound with a native β-turn structure, we successfully demonstrated the conformational similarity of tetra-substituted Δ5-2-oxopiperazine to the β-turn motif. For the library construction in a high-throughput manner, the fluorous tag technology was adopted with the use of a solution-phase parallel synthesis platform. A 140-membered pilot library of tetra-substituted Δ5-2-oxopiperazines was achieved with an average purity of 90% without further purification. PMID:24215277

  14. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.
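
    The reconnection and tessellation algorithms discussed above (Voronoi, Delaunay, DAM) are too involved for a short sketch; below is only a minimal illustration of how nearest-neighbor queries themselves can be distributed across processes, which is the kind of data-parallel decomposition the abstract is concerned with. The chunking strategy and worker count are illustrative assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool

    POINTS = None   # set in each worker via the initializer

    def _init(points):
        global POINTS
        POINTS = points

    def _nearest(chunk):
        """For each query point in the chunk, return the index of its nearest neighbor."""
        d = np.linalg.norm(POINTS[:, None, :] - chunk[None, :, :], axis=2)
        return np.argmin(d, axis=0)

    def parallel_nearest(points, queries, nworkers=4):
        chunks = np.array_split(queries, nworkers)
        with Pool(nworkers, initializer=_init, initargs=(points,)) as pool:
            return np.concatenate(pool.map(_nearest, chunks))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        pts = rng.random((1000, 3))
        qry = rng.random((200, 3))
        print(parallel_nearest(pts, qry)[:10])
    ```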

  15. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  16. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  17. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  18. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  19. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  20. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  1. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO{sub 2} and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  2. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Moreover, three different implementation approaches, taking advantage of partitioning algorithms and a global shared memory architecture, are presented.

  3. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  4. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
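
    The following is a minimal serial sketch of only the vector-quantization half of the scheme described above: a small codebook of image patches is learned with a few Lloyd (k-means) iterations, and the image is represented by codebook indices. The MPP data-parallel mapping and the dynamic lossless component are not reproduced; block size, codebook size, and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    def vq_compress(image, block=4, ncodes=16, iters=10, seed=0):
        """Toy vector quantization: learn a codebook of block x block patches via k-means."""
        h, w = image.shape
        h, w = h - h % block, w - w % block
        patches = (image[:h, :w]
                   .reshape(h // block, block, w // block, block)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block)).astype(float)
        rng = np.random.default_rng(seed)
        codebook = patches[rng.choice(len(patches), ncodes, replace=False)]
        for _ in range(iters):                                    # Lloyd iterations
            d = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=2)
            assign = np.argmin(d, axis=1)
            for k in range(ncodes):
                members = patches[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return assign, codebook          # indices plus codebook form the compressed representation

    if __name__ == "__main__":
        img = np.random.default_rng(2).integers(0, 256, (64, 64))
        idx, cb = vq_compress(img)
        print("codes per image:", len(idx), "codebook shape:", cb.shape)
    ```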

  5. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
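
    The continuous density model itself is beyond a short sketch, but the point-line duality it builds on is easy to show: in ordinary parallel coordinates each d-dimensional point becomes a polyline across d vertical axes. The sketch below computes that mapping with NumPy; the min-max normalization and axis placement are illustrative choices.

    ```python
    import numpy as np

    def to_parallel_coords(data):
        """Map each d-dimensional point to its parallel-coordinates polyline.

        data : (n, d) array; each axis is min-max normalized to [0, 1].
        returns xs (d,) axis positions and ys (n, d) polyline heights.
        """
        data = np.asarray(data, dtype=float)
        lo, hi = data.min(axis=0), data.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)      # avoid division by zero on constant axes
        ys = (data - lo) / span
        xs = np.arange(data.shape[1])               # one vertical axis per dimension
        return xs, ys

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        xs, ys = to_parallel_coords(rng.random((5, 4)))
        for line in ys:                              # each row is one point's polyline
            print(np.round(line, 2))
    ```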

  6. Ceramic powder synthesis in supercritical fluids

    SciTech Connect

    Adkins, C.L.J.; Russick, E.M.; Cesarano, J; Tadros, M.E.; Voigt, J.A.

    1996-04-01

    Gas-phase processing plays an important role in the commercial production of a number of ceramic powders. These include titanium dioxide, carbon black, zinc oxide, and silicon dioxide. The total annual output of these materials is on the order of 2 million tons. The physical processes involved in gas-phase synthesis are typical of those involved in solution-phase synthesis: chemical reaction kinetics, mass transfer, nucleation, coagulation, and condensation. This report focuses on the work done under a Laboratory-Directed Research and Development (LDRD) project that explored the use of various high pressure techniques for ceramic powder synthesis. Under this project, two approaches were taken. First, a continuous flow, high pressure water reactor was built and studied for powder synthesis. And second, a supercritical carbon dioxide static reactor, which was used in conjunction with surfactants, was built and used to generate oxide powders.

  7. Parallel Stitching of Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Ling, Xi; Lin, Yuxuan; Dresselhaus, Mildred; Palacios, Tomás; Kong, Jing; Department of Electrical Engineering; Computer Science, Massachusetts Institute of Technology Team

    Large scale integration of atomically thin metals (e.g. graphene), semiconductors (e.g. transition metal dichalcogenides (TMDs)), and insulators (e.g. hexagonal boron nitride) is critical for constructing the building blocks for future nanoelectronics and nanophotonics. However, the construction of in-plane heterostructures, especially between two atomic layers with large lattice mismatch, could be extremely difficult due to the strict requirement of spatial precision and the lack of a selective etching method. Here, we developed a general synthesis methodology to achieve both vertical and in-plane ``parallel stitched'' heterostructures between a two-dimensional (2D) and TMD materials, which enables both multifunctional electronic/optoelectronic devices and their large scale integration. This is achieved via selective ``sowing'' of aromatic molecule seeds during the chemical vapor deposition growth. MoS2 is used as a model system to form heterostructures with diverse other 2D materials. Direct and controllable synthesis of large-scale parallel stitched graphene-MoS2 heterostructures was further investigated. Unique nanometer overlapped junctions were obtained at the parallel stitched interface, which are highly desirable both as metal-semiconductor contact and functional devices/systems, such as for use in logical integrated circuits (ICs) and broadband photodetectors.

  8. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  9. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
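
    The full MGRIT algorithm is not reproduced here, but its two-level special case coincides with the parareal iteration, which is short enough to sketch for a scalar ODE: a cheap coarse propagator sweeps sequentially while expensive fine propagators run independently (in parallel) over the time slices. Both propagators below are simple explicit-Euler steppers, and the parallel fine sweeps are emulated serially; all names, step counts, and the test equation are illustrative assumptions.

    ```python
    import numpy as np

    def propagate(u, t0, t1, nsteps, f):
        """Explicit-Euler propagator from t0 to t1 (stands in for the fine/coarse solvers)."""
        dt = (t1 - t0) / nsteps
        t = t0
        for _ in range(nsteps):
            u = u + dt * f(t, u)
            t += dt
        return u

    def parareal(f, u0, T, nslices=10, iters=5, nfine=100, ncoarse=1):
        ts = np.linspace(0.0, T, nslices + 1)
        U = np.empty(nslices + 1); U[0] = u0
        for n in range(nslices):                 # initial guess: one sequential coarse sweep
            U[n + 1] = propagate(U[n], ts[n], ts[n + 1], ncoarse, f)
        for _ in range(iters):
            # Fine sweeps over all slices are independent -> this loop is the parallel part.
            F = np.array([propagate(U[n], ts[n], ts[n + 1], nfine, f) for n in range(nslices)])
            G_old = np.array([propagate(U[n], ts[n], ts[n + 1], ncoarse, f) for n in range(nslices)])
            Unew = np.empty_like(U); Unew[0] = u0
            for n in range(nslices):             # sequential coarse correction
                G_new = propagate(Unew[n], ts[n], ts[n + 1], ncoarse, f)
                Unew[n + 1] = G_new + F[n] - G_old[n]
            U = Unew
        return ts, U

    if __name__ == "__main__":
        f = lambda t, u: -u                      # du/dt = -u, exact solution exp(-t)
        ts, U = parareal(f, u0=1.0, T=2.0)
        print("max error vs exp(-t):", np.max(np.abs(U - np.exp(-ts))))
    ```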

  10. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  11. Rapid screening for potential epitopes reactive with a polyclonal antibody by solution-phase H/D exchange monitored by FT-ICR mass spectrometry.

    PubMed

    Zhang, Qian; Noble, Kyle A; Mao, Yuan; Young, Nicolas L; Sathe, Shridhar K; Roux, Kenneth H; Marshall, Alan G

    2013-07-01

    The potential epitopes of a recombinant food allergen protein, cashew Ana o 2, reactive to polyclonal antibodies, were mapped by solution-phase amide backbone H/D exchange (HDX) coupled with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). Ana o 2 polyclonal antibodies were purified in the serum from a goat immunized with cashew nut extract. Antibodies were incubated with recombinant Ana o 2 (rAna o 2) to form antigen:polyclonal antibody (Ag:pAb) complexes. Complexed and uncomplexed (free) rAna o 2 were then subjected to HDX-MS analysis. Four regions protected from H/D exchange upon pAb binding are identified as potential epitopes and mapped onto a homologous model. PMID:23681851

  12. Rapid Screening for Potential Epitopes Reactive with a Polyclonal Antibody by Solution-Phase H/D Exchange Monitored by FT-ICR Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Noble, Kyle A.; Mao, Yuan; Young, Nicolas L.; Sathe, Shridhar K.; Roux, Kenneth H.; Marshall, Alan G.

    2013-07-01

    The potential epitopes of a recombinant food allergen protein, cashew Ana o 2, reactive to polyclonal antibodies, were mapped by solution-phase amide backbone H/D exchange (HDX) coupled with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). Ana o 2 polyclonal antibodies were purified in the serum from a goat immunized with cashew nut extract. Antibodies were incubated with recombinant Ana o 2 (rAna o 2) to form antigen:polyclonal antibody (Ag:pAb) complexes. Complexed and uncomplexed (free) rAna o 2 were then subjected to HDX-MS analysis. Four regions protected from H/D exchange upon pAb binding are identified as potential epitopes and mapped onto a homologous model.

  13. The Influence of the Linker Geometry in Bis(3-hydroxy-N-methyl-pyridin-2-one) Ligands on Solution-Phase Uranyl Affinity

    SciTech Connect

    Szigethy, Geza; Raymond, Kenneth

    2010-08-12

    Seven water-soluble, tetradentate bis(3-hydroxy-N-methyl-pyridin-2-one) (bis-Me-3,2-HOPO) ligands were synthesized that vary only in linker geometry and rigidity. Solution phase thermodynamic measurements were conducted between pH 1.6 and pH 9.0 to determine the effects of these variations on proton and uranyl cation affinity. Proton affinity decreases by introduction of the solubilizing triethylene glycol group as compared to un-substituted reference ligands. Uranyl affinity was found to follow no discernable trends with incremental geometric modification. The butyl-linked 4Li-Me-3,2-HOPO ligand exhibited the highest uranyl affinity, consistent with prior in vivo decorporation results. Of the rigidly-linked ligands, the o-phenylene linker imparted the best uranyl affinity to the bis-Me-3,2-HOPO ligand platform.

  14. Epitope mapping of 7S cashew antigen in complex with antibody by solution-phase H/D exchange monitored by FT-ICR mass spectrometry.

    PubMed

    Guan, Xiaoyan; Noble, Kyle A; Tao, Yeqing; Roux, Kenneth H; Sathe, Shridhar K; Young, Nicolas L; Marshall, Alan G

    2015-06-01

    The potential epitope of a recombinant food allergen protein, cashew Ana o 1, reactive to monoclonal antibody, mAb 2G4, has been mapped by solution-phase amide backbone H/D exchange (HDX) monitored by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). Purified mAb 2G4 was incubated with recombinant Ana o 1 (rAna o 1) to form antigen:monoclonal antibody (Ag:mAb) complexes. Complexed and uncomplexed (free) rAna o 1 were then subjected to HDX-MS analysis. Five regions protected from H/D exchange upon mAb binding are identified as potential conformational epitope-contributing segments. PMID:26169135

  15. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, LeoDagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  16. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  17. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.

  18. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  19. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status of these architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  20. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  1. Structure Revision of Similanamide to PF1171C by Total Synthesis.

    PubMed

    Masuda, Yuichi; Tanaka, Ren; Ganesan, A; Doi, Takayuki

    2015-09-25

    The total synthesis of the proposed structure of similanamide, a cyclic hexapeptide recently isolated from the marine sponge-associated fungus Aspergillus similanensis KUFA 0013, was achieved by solid-phase synthesis of a linear precursor and solution-phase macrolactamization. The NMR spectra of our synthetic final product were not identical to those of the isolated material and led us to conclude that similanamide is identical to PF1171C, a previously reported diastereomeric hexapeptide. PMID:26348363

  2. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  3. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks. We also mention NAS's future plans for the NPB.

  4. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  6. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (ESTSC)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  7. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  8. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using the methods described in this paper.

  9. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
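
    For orientation, below is a minimal NumPy sketch of the direct sum that these fast parallel algorithms are designed to avoid; the function name and the kernel convention G(y_j) = sum_i w_i exp(-|y_j - x_i|^2 / h^2) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def direct_gauss_transform(sources, targets, weights, h):
    """Brute-force Gauss transform: G(y_j) = sum_i w_i * exp(-|y_j - x_i|^2 / h^2).

    For N sources and N targets this costs O(N^2) time and memory for the
    distance matrix, which is exactly the scaling the fast/parallel
    algorithms above replace with near-linear work per CPU.
    sources, targets: (N, d) arrays; weights: (N,); h: kernel bandwidth.
    """
    d2 = np.sum((targets[:, None, :] - sources[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / h ** 2) @ weights
```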

  10. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  11. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  12. Serial multiplier arrays for parallel computation

    NASA Technical Reports Server (NTRS)

    Winters, Kel

    1990-01-01

    Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase, uncomplemented clock signal is introduced for this application.
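
    As a behavioral illustration of the multiplier element (not the CMOS circuitry described above), the sketch below models a serial/parallel shift-and-add multiplier: the multiplicand is held in parallel while the multiplier bits arrive serially, one per clock.

```python
def serial_parallel_multiply(a: int, b: int, width: int = 8) -> int:
    """Behavioral model of a serial/parallel (shift-and-add) multiplier.

    The multiplicand `a` sits in parallel registers; the multiplier `b` is
    fed in one bit per clock, LSB first, and each cycle conditionally adds
    the appropriately shifted multiplicand into an accumulator.
    """
    acc = 0
    for cycle in range(width):
        bit = (b >> cycle) & 1        # serial multiplier bit for this clock
        if bit:
            acc += a << cycle         # add the shifted multiplicand
    return acc

assert serial_parallel_multiply(13, 11) == 143
```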

  13. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
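
    A minimal NumPy sketch of the DFT-IDFT overlap-and-save step that underlies these architectures is shown below; the block length and function name are assumptions for illustration, and the decomposition into parallel frequency-domain subfilters is not shown.

```python
import numpy as np

def overlap_save_filter(x, h, block_len=4096):
    """FIR filtering with the DFT-IDFT overlap-and-save method.

    The input is processed in fixed-size blocks, each filtered independently
    in the frequency domain; that per-block independence is what lets the
    subconvolution architectures run blocks (and subfilters) in parallel at
    a reduced per-branch data rate.
    """
    m = len(h)
    n = block_len
    assert n > m - 1, "DFT size must exceed the filter memory"
    step = n - (m - 1)                        # new output samples per block
    H = np.fft.rfft(h, n)
    pad = (-len(x)) % step
    x_ext = np.concatenate([np.zeros(m - 1), x, np.zeros(pad)])
    y = np.empty(len(x) + pad)
    for start in range(0, len(y), step):
        block = x_ext[start:start + n]
        yb = np.fft.irfft(np.fft.rfft(block) * H, n)
        y[start:start + step] = yb[m - 1:]    # keep only the valid samples
    return y[:len(x)]
```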

  14. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described along with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  15. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  16. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
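
    A minimal Python sketch of the two-phase idea is given below, assuming 1-D interval objects and a thread pool standing in for the processors; a real implementation would use separate processors or processes, but the phase structure is the same: first decide object-to-cell overlaps in parallel over object sets, then populate disjoint grid portions in parallel so no two workers write the same cell.

```python
from concurrent.futures import ThreadPoolExecutor

def populate_grid_parallel(objects, n_cells, n_workers):
    """Two-phase parallel grid population over the unit interval [0, 1).

    objects: list of (object_id, lo, hi) intervals; cells are equal slices.
    Phase 1: each worker takes a distinct set of objects and records which
    cells each object at least partially overlaps.
    Phase 2: each worker takes a distinct set of cells and fills only those,
    so the populated grid portions never conflict.
    """
    width = 1.0 / n_cells

    def overlaps(obj_set):
        pairs = []
        for oid, lo, hi in obj_set:
            first = int(lo / width)
            last = min(int(hi / width), n_cells - 1)
            pairs.extend((cell, oid) for cell in range(first, last + 1))
        return pairs

    def fill(cells, pairs):
        portion = {c: [] for c in cells}
        for cell, oid in pairs:
            if cell in portion:
                portion[cell].append(oid)
        return portion

    obj_sets = [objects[i::n_workers] for i in range(n_workers)]
    cell_sets = [range(i, n_cells, n_workers) for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pairs = [p for chunk in pool.map(overlaps, obj_sets) for p in chunk]
        grid = {}
        for portion in pool.map(fill, cell_sets, [pairs] * n_workers):
            grid.update(portion)
    return grid
```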

  17. Seeing in parallel

    SciTech Connect

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  18. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  19. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  20. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  1. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
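
    The final composite step can be illustrated with a short NumPy sketch, assuming each worker returns an RGB image plus a depth buffer; this is a generic nearest-depth composite, not the CM-5 implementation.

```python
import numpy as np

def depth_composite(partials):
    """Combine partial renderings from independent workers into one image.

    `partials` is a list of (rgb, depth) pairs, one per worker, where rgb is
    an (H, W, 3) array and depth an (H, W) z-buffer; the nearest surface at
    each pixel wins, so each worker only ever renders its own share of the
    spheres.
    """
    rgb = np.stack([p[0] for p in partials])      # (k, H, W, 3)
    depth = np.stack([p[1] for p in partials])    # (k, H, W)
    winner = np.argmin(depth, axis=0)             # nearest worker per pixel
    rows, cols = np.indices(winner.shape)
    return rgb[winner, rows, cols]                # (H, W, 3) composited image
```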

  2. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  3. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  4. Combinatorial Synthesis and Discovery of an Antibiotic Compound. An Experiment Suitable for High School and Undergraduate Laboratories

    NASA Astrophysics Data System (ADS)

    Wolkenberg, Scott E.; Su, Andrew I.

    2001-06-01

    An exercise demonstrating solution-phase combinatorial chemistry and its application to drug discovery is described. The experiment involves the synthesis of six libraries of three hydrazones, screening the libraries for antibiotic activity, and deconvolution to determine the active individual compound. The laboratory was designed for a high school classroom, though it can easily be expanded to suit a college introductory organic laboratory course.

  5. Adaptive parallel logic networks

    SciTech Connect

    Martinez, T.R.; Vidal, J.J.

    1988-02-01

    This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems). Intended applications include adaptive logic devices, robotics, process control, system malfunction management, and in general, applications of logic reasoning. ASOCS combines massive parallelism with self-organization to attain a distributed mechanism for adaptation. The ASOCS approach is based on an adaptive network composed of many simple computing elements (nodes) which operate in a combinational and asynchronous fashion. Problem specification (programming) is obtained by presenting to the system if-then rules expressed as Boolean conjunctions. New rules are added incrementally. In the current model, when conflicts occur, precedence is given to the most recent inputs. With each rule, desired network response is simply presented to the system, following which the network adjusts itself to maintain consistency and parsimony of representation. Data processing and adaptation form two separate phases of operation. During processing, the network acts as a parallel hardware circuit. Control of the adaptive process is distributed among the network nodes and efficiently exploits parallelism.

  6. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  7. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  8. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
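
    The underlying relation is 1/R_total = 1/R_1 + 1/R_2 + ...; the short sketch below (with illustrative resistor values) enumerates pairs whose parallel total is a whole number, in the spirit of the tables the article describes.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def parallel_resistance(resistors):
    """Total resistance of resistors in parallel: 1/R_total = sum(1/R_i)."""
    return 1 / sum(Fraction(1, r) for r in resistors)

values = [10, 15, 20, 30, 40, 60, 120]          # illustrative values, in ohms
for r1, r2 in combinations_with_replacement(values, 2):
    total = parallel_resistance((r1, r2))
    if total.denominator == 1:                  # whole-number total resistance
        print(f"{r1} ohm || {r2} ohm = {total} ohm")
```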

  9. Effect of separation dimensions on resolution and throughput using very narrow-range IEF for 2-DE after solution phase isoelectric fractionation of a complex proteome.

    PubMed

    Lee, KiBeom; Pi, KyungBae

    2009-04-01

    This paper describes how the lengths of the first and second dimensions in narrow pH-range 2-DE affect the number of detected protein spots, by analysis of human breast carcinoma cell line lysates prefractionated by solution phase IEF. The aim is to maximize throughput while minimizing experimental costs. In this study, systematic evaluation of narrow-range IPG strip lengths showed that separation distances were very important, with dramatic increases in resolution when longer gels were used. Compared with 7 cm minigels, maximal resolution was obtained using 18 and 24 cm IPG strips. Systematic evaluation of SDS-PAGE gel length showed a far weaker influence of separation length on resolution in the second dimension compared with that observed for the IEF dimension. There was little benefit in using separation distances greater than 12-15 cm, at least with currently available electrophoresis units. The work shows that regions of the IPG strip not containing any proteins can be excised to fit a smaller gel if prefractionation using IEF in solution has been performed. As expected, larger 2-DE gel volumes resulting from the use of longer IPG strips and second dimension gels decreased detection sensitivity when equal protein loads were used. However, this effect could be readily eliminated by increasing the loads applied to IPG strips. PMID:19301322

  10. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  11. Synthesis and biological evaluation of a beauveriolide analogue library.

    PubMed

    Nagai, Kenichiro; Doi, Takayuki; Sekiguchi, Takafumi; Namatame, Ichiji; Sunazuka, Toshiaki; Tomoda, Hiroshi; Omura, Satoshi; Takahashi, Takashi

    2006-01-01

    Synthesis of beauveriolide III (1b), which is an inhibitor of lipid droplet accumulation in macrophages, was achieved by solid-phase assembly of the linear depsipeptide using a 2-chlorotrityl linker followed by solution-phase cyclization. On the basis of this strategy, a combinatorial library of beauveriolide analogues was constructed by radio frequency-encoded combinatorial chemistry. After automated purification using preparative reversed-phase HPLC, the library was tested for inhibition of cholesteryl ester (CE) synthesis in macrophages to determine structure-activity relationships of beauveriolides. Among these analogues, we found that diphenyl derivative 7{9,1} is 10 times more potent than 1b. PMID:16398560

  12. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  13. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request for each plug-in in the feature is issued. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
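
    A minimal Python sketch of the same pattern is shown below: parse a feature file and hand one checkout task per plug-in to a thread pool so the transfers overlap. The svn command and repository layout are assumptions made for this sketch, not PEPC's actual code.

```python
import subprocess
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def checkout_feature(feature_xml, repo_url, n_threads=8):
    """Check out every plug-in listed in an Eclipse feature file in parallel."""
    root = ET.parse(feature_xml).getroot()
    plugin_ids = [p.get("id") for p in root.iter("plugin")]

    def checkout(plugin_id):
        # One network-bound task per plug-in; the thread pool overlaps them.
        subprocess.run(["svn", "checkout", f"{repo_url}/{plugin_id}", plugin_id],
                       check=True)
        return plugin_id

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for plugin_id in pool.map(checkout, plugin_ids):
            print("checked out", plugin_id)
```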

  14. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computing. Machines designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  15. Fastpath Speculative Parallelization

    NASA Astrophysics Data System (ADS)

    Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

    We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

  16. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  17. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
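
    For context, the sketch below shows one event of the standard serial (BKL) kinetic Monte Carlo algorithm that the synchronous parallel method generalizes; it is the serial baseline only, not the parallel algorithm itself.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One serial (BKL) kinetic Monte Carlo step.

    Select an event with probability proportional to its rate and draw an
    exponentially distributed waiting time; returns (event_index, dt).
    """
    total = sum(rates)
    r = rng.random() * total
    running, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        running += rate
        if r < running:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return chosen, dt
```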

  18. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  19. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  20. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  1. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  2. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  3. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, which only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
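
    The backend idea can be sketched with a few lines of pymongo; the field names, database name, and query below are hypothetical, since the real tool imports GPFS metadata under its own schema with one collection per user.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
files = client["archive_metadata"]["alice"]       # one per-user collection

# Index each searchable attribute so queries stay fast at ~170 million records.
for field in ("path", "size", "mtime"):
    files.create_index([(field, ASCENDING)])

# A metadata search that would otherwise mean walking the file system by hand.
query = {"size": {"$gt": 1 << 30}, "path": {"$regex": "^/projects/"}}
for doc in files.find(query):
    print(doc["path"], doc["size"])
```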

  4. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  5. A novel solution-phase approach to nanocrystalline niobates: selective syntheses of Sr0.4H1.2Nb2O6.H2O nanopolyhedrons and SrNb2O6 nanorods photocatalysts.

    PubMed

    Liang, Shijing; Wu, Ling; Bi, Jinhong; Wang, Wanjun; Gao, Jian; Li, Zhaohui; Fu, Xianzhi

    2010-03-01

    A novel solution-phase route using Nb2O5·nH2O as precursor was developed to selectively synthesize single-crystalline Sr0.4H1.2Nb2O6·H2O nanopolyhedrons and SrNb2O6 nanorods photocatalysts via simply adjusting pH values of the reactive solutions. PMID:20162143

  6. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilation. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler able to generate efficient parallel code for complete LINPACK routines. This book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  7. Adaptive synthesis of distance fields.

    PubMed

    Lee, Sung-Ho; Park, Taejung; Kim, Jong-Hyeon; Kim, Chang-Hun

    2012-07-01

    We address the computational resource requirements of 3D example-based synthesis with an adaptive synthesis technique that uses a tree-based synthesis map. A signed-distance field (SDF) is determined for the 3D exemplars, and then new models can be synthesized as SDFs by neighborhood matching. Unlike voxel-based synthesis approaches, our input is posed in the real domain to preserve maximum detail. In comparison to straightforward extensions of the existing volume texture synthesis approach, we made several improvements in terms of memory requirements, computation times, and synthesis quality. The inherent parallelism in this method makes it suitable for a multicore CPU. Results show that computation times and memory requirements are very much reduced, and large synthesized scenes exhibit fine details which mimic the exemplars. PMID:21808095

  8. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  9. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of:
    o libups.a - C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF).
    o libuserd-HDF.so - EnSight user-defined reader for visualizing data files written with UPS File IO.
    o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl - Executables/scripts to get information from data files and to simplify the use of EnSight on those data files.
    o ups_io_rm/ups_io_cp - Manipulate data files written with UPS File IO.
    These tools are portable to a wide variety of Unix platforms.

  10. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  11. Unified Parallel Software

    Energy Science and Technology Software Center (ESTSC)

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of:
    o libups.a - C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF).
    o libuserd-HDF.so - EnSight user-defined reader for visualizing data files written with UPS File IO.
    o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl - Executables/scripts to get information from data files and to simplify the use of EnSight on those data files.
    o ups_io_rm/ups_io_cp - Manipulate data files written with UPS File IO.
    These tools are portable to a wide variety of Unix platforms.

  12. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  13. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  14. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
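
    For reference, below is a serial Python sketch of cyclic odd-even reduction; within each level the eliminations are mutually independent, which is the property the array and pipeline machines exploit. The restriction to n = 2^k - 1 unknowns is an assumption to keep the sketch short.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic odd-even reduction.

    a, b, c, d are the sub-, main-, super-diagonal and right-hand side
    (a[0] = c[-1] = 0); this toy version assumes n = 2**k - 1 unknowns.
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    stride = 1
    while stride < n:                       # forward elimination levels
        for i in range(2 * stride - 1, n, 2 * stride):   # independent updates
            lo, hi = i - stride, i + stride
            alpha = -a[i] / b[lo]
            gamma = -c[i] / b[hi]
            b[i] += alpha * c[lo] + gamma * a[hi]
            d[i] += alpha * d[lo] + gamma * d[hi]
            a[i] = alpha * a[lo]
            c[i] = gamma * c[hi]
        stride *= 2
    x = np.zeros(n)
    stride = max(stride // 2, 1)
    while stride >= 1:                      # back substitution levels
        for i in range(stride - 1, n, 2 * stride):        # independent solves
            xlo = x[i - stride] if i - stride >= 0 else 0.0
            xhi = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xlo - c[i] * xhi) / b[i]
        stride //= 2
    return x
```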

  15. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  16. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.
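
    To illustrate the SPMD style (this is not OVERFLOW code), the mpi4py sketch below has every rank run the same program on its own block of a 1-D field and exchange halo values with its neighbors each iteration; the periodic decomposition and the trivial edge update are assumptions for brevity.

```python
# Run with, e.g.: mpiexec -n 4 python spmd_halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(100, float(rank))            # this rank's block of the field
left, right = (rank - 1) % size, (rank + 1) % size

for _ in range(10):
    # Halo exchange: shift right, then shift left (deadlock-free pairing).
    halo_from_left = comm.sendrecv(local[-1], dest=right, source=left)
    halo_from_right = comm.sendrecv(local[0], dest=left, source=right)
    # Stand-in for the real per-block solver update using the halo values.
    local[0] = 0.5 * (halo_from_left + local[1])
    local[-1] = 0.5 * (local[-2] + halo_from_right)
```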

  17. Proline Editing: A General and Practical Approach to the Synthesis of Functionally and Structurally Diverse Peptides. Analysis of Steric versus Stereoelectronic Effects of 4-Substituted Prolines on Conformation within Peptides

    PubMed Central

    Pandey, Anil K.; Naduthambi, Devan; Thomas, Krista M.; Zondlo, Neal J.

    2013-01-01

    Functionalized proline residues have diverse applications. Herein we describe a practical approach, proline editing, for the synthesis of peptides with stereospecifically modified proline residues. Peptides are synthesized by standard solid-phase peptide synthesis to incorporate Fmoc-hydroxyproline (4R-Hyp). In an automated manner, the Hyp hydroxyl is protected and the remainder of the peptide synthesized. After peptide synthesis, the Hyp protecting group is orthogonally removed and Hyp selectively modified to generate substituted proline amino acids, with the peptide main chain functioning to "protect" the proline amino and carboxyl groups. In a model tetrapeptide (Ac-TYPN-NH2), 4R-Hyp was stereospecifically converted to 122 different 4-substituted prolyl amino acids, with 4R or 4S stereochemistry, via Mitsunobu, oxidation, reduction, acylation, and substitution reactions. 4-Substituted prolines synthesized via proline editing include structured amino acid mimetics (Cys, Asp/Glu, Phe, Lys, Arg, pSer/pThr), recognition motifs (biotin, RGD), electron-withdrawing groups to induce stereoelectronic effects (fluoro, nitrobenzoate), handles for heteronuclear NMR (19F: fluoro, pentafluorophenyl or perfluoro-tert-butyl ether, 4,4-difluoro; 77Se: SePh) and other spectroscopies (fluorescence, IR: cyanophenyl ether), leaving groups (sulfonate, halide, NHS, bromoacetate), and other reactive handles (amine, thiol, thioester, ketone, hydroxylamine, maleimide, acrylate, azide, alkene, alkyne, aryl halide, tetrazine, 1,2-aminothiol). Proline editing provides access to these proline derivatives with no solution-phase synthesis. All peptides were analyzed by NMR to identify stereoelectronic and steric effects on conformation. Proline derivatives were synthesized to permit bioorthogonal conjugation reactions, including azide-alkyne, tetrazine-trans-cyclooctene, oxime, reductive amination, native chemical ligation, Suzuki, Sonogashira, cross-metathesis, and Diels-Alder reactions.

  18. Synthesis of epoxybenzo[d]isothiazole 1,1-dioxides via a reductive-Heck, metathesis-sequestration protocol†‡

    PubMed Central

    Asad, Naeem; Hanson, Paul R.; Long, Toby R.; Rayabarapu, Dinesh K.; Rolfe, Alan

    2011-01-01

    An atom-economical purification protocol, using solution-phase processing via ring-opening metathesis polymerization (ROMP), has been developed for the synthesis of tricyclic sultams. This chromatography-free method allows for convenient isolation of reductive-Heck products and reclamation of excess starting material via sequestration involving metathesis catalysts and a catalyst-armed Si-surface. PMID:21727956

  19. Recent Trends in Nucleotide Synthesis.

    PubMed

    Roy, Béatrice; Depaix, Anaïs; Périgaud, Christian; Peyrottes, Suzanne

    2016-07-27

    Focusing on the recent literature (since 2000), this review outlines the main synthetic approaches for the preparation of 5'-mono-, 5'-di-, and 5'-triphosphorylated nucleosides, also known as nucleotides, as well as several derivatives, namely, cyclic nucleotides and dinucleotides, dinucleoside 5',5'-polyphosphates, sugar nucleotides, and nucleolipids. Endogenous nucleotides and their analogues can be obtained enzymatically, which is often restricted to natural substrates, or chemically. In chemical synthesis, protected or unprotected nucleosides can be used as the starting material, depending on the nature of the reagents selected from P(III) or P(V) species. Both solution-phase and solid-support syntheses have been developed and are reported here. Although a considerable amount of research has been conducted in this field, further work is required because chemists are still faced with the challenge of developing a universal methodology that is compatible with a large variety of nucleoside analogues. PMID:27319940

  20. Chalcogen nanowires: synthesis and properties

    NASA Astrophysics Data System (ADS)

    Mayers, Brian T.; Gates, Byron D.; Xia, Younan

    2002-11-01

    We have demonstrated a solution-phase approach based on homogeneous nucleation and controlled growth for the synthesis of 1-dimensional nanostructures from chalcogens such as Se, Te, and Se/Te alloys. These nanostructures include monodispersed nanowires, nanorods, and nanotubes with good dimensional control (lateral dimensions from 10 to 1000 nm, and lengths ranging from 0.25 to >20 μm). These nanomaterials are ideal components for fabricating devices or composites for photoconductive and piezoelectric applications. In this presentation, we will discuss the mechanisms (as revealed by our SEM and TEM studies) for the formation of these 1-dimensional nanostructures, as well as some preliminary measurements on their properties.

  1. PMESH: A parallel mesh generator

    SciTech Connect

    Hardin, D.D.

    1994-10-21

    The Parallel Mesh Generation (PMESH) Project is a joint LDRD effort by A Division and Engineering to develop a unique mesh generation system that can construct large calculational meshes (of up to 10^9 elements) on massively parallel computers. Such a capability will remove a critical roadblock to unleashing the power of massively parallel processors (MPPs) for physical analysis. PMESH will support a variety of LLNL 3-D physics codes in the areas of electromagnetics, structural mechanics, thermal analysis, and hydrodynamics.

  2. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  3. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.

  4. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  5. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs

  6. Plasmonic nanoshell synthesis in microfluidic composite foams.

    PubMed

    Duraiswamy, Suhanya; Khan, Saif A

    2010-09-01

    The availability of robust, scalable, and automated nanoparticle manufacturing processes is crucial for the viability of emerging nanotechnologies. Metallic nanoparticles of diverse shape and composition are commonly manufactured by solution-phase colloidal chemistry methods, where rapid reaction kinetics and physical processes such as mixing are inextricably coupled, and scale-up often poses insurmountable problems. Here we present the first continuous flow process to synthesize thin gold "nanoshells" and "nanoislands" on colloidal silica surfaces, which are nanoparticle motifs of considerable interest in plasmonics-based applications. We assemble an ordered, flowing composite foam lattice in a simple microfluidic device, where the lattice cells are alternately aqueous drops containing reagents for nanoparticle synthesis or gas bubbles. Microfluidic foam generation enables precisely controlled reagent dispensing and mixing, and the ordered foam structure facilitates compartmentalized nanoparticle growth. This is a general method for aqueous colloidal synthesis, enabling continuous, inherently digital, scalable, and automated production processes for plasmonic nanomaterials. PMID:20731386

  7. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].

  8. Parallel execution model for Prolog

    SciTech Connect

    Fagin, B.S.

    1987-01-01

    One candidate language for parallel symbolic computing is Prolog. Numerous ways for executing Prolog in parallel have been proposed, but current efforts suffer from several deficiencies. Many cannot support fundamental types of concurrency in Prolog. Other models are of purely theoretical interest, ignoring implementation costs. Detailed simulation studies of execution models are scarce; at present little is known about the costs and benefits of executing Prolog in parallel. In this thesis, a new parallel execution model for Prolog is presented: the PPP model or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

  9. Reordering computations for parallel execution

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1985-01-01

    The computations in the SOR algorithm are reordered so as to maintain the same asymptotic rate of convergence as the rowwise ordering while obtaining parallelism at different levels. A parallel program is written to illustrate these ideas, and actual machines for implementing this program are discussed.
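
    A classic reordering of the kind this abstract alludes to is the red/black (checkerboard) ordering, in which all points of one colour depend only on points of the other colour, so each half-sweep can be updated in parallel. The NumPy sketch below is only an illustration of that general idea for a 2-D Poisson problem, under assumed parameter choices; it is not the program described in the report.

        import numpy as np

        def sor_red_black(u, f, h, omega=1.5, sweeps=100):
            # Illustrative red/black SOR for -laplace(u) = f on a square grid with
            # fixed boundary values.  Points of one colour depend only on the other
            # colour, so each half-sweep is an independent, fully parallel update
            # (vectorised here as a stand-in for true parallel execution).
            idx_i, idx_j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                                       np.arange(1, u.shape[1] - 1), indexing="ij")
            for _ in range(sweeps):
                for colour in (0, 1):                      # 0 = "red", 1 = "black"
                    mask = ((idx_i + idx_j) % 2) == colour
                    gs = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                 u[1:-1, 2:] + u[1:-1, :-2] +
                                 h * h * f[1:-1, 1:-1])
                    interior = u[1:-1, 1:-1]
                    interior[mask] = (1 - omega) * interior[mask] + omega * gs[mask]
            return u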

  10. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  11. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a data-intensive and computation-intensive task. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high speed prototype correlator, using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm, is realized on a small Beowulf cluster platform. Both correlators have a flexible, scalable structure and the ability to correlate data from 10 stations.

  12. Theory and practice of parallel direct optimization.

    PubMed

    Janies, Daniel A; Wheeler, Ward C

    2002-01-01

    Our ability to collect and distribute genomic and other biological data is growing at a staggering rate (Pagel, 1999). However, the synthesis of these data into knowledge of evolution is incomplete. Phylogenetic systematics provides a unifying intellectual approach to understanding evolution but presents formidable computational challenges. A fundamental goal of systematics, the generation of evolutionary trees, is typically approached as two distinct NP-complete problems: multiple sequence alignment and phylogenetic tree search. The number of cells in a multiple alignment matrix is exponentially related to sequence length. In addition, the number of evolutionary trees expands combinatorially with respect to the number of organisms or sequences to be examined. Biologically interesting datasets are currently composed of hundreds of taxa and thousands of nucleotides and morphological characters. This standard will continue to grow with the advent of highly automated sequencing and development of character databases. Three areas of innovation are changing how evolutionary computation can be addressed: (1) novel concepts for determination of sequence homology, (2) heuristics and shortcuts in tree-search algorithms, and (3) parallel computing. In this paper and the online software documentation we describe the basic usage of parallel direct optimization as implemented in the software POY (ftp://ftp.amnh.org/pub/molecular/poy). PMID:11924490

  13. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  14. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.

  15. Magnetic nanoparticles: synthesis, functionalization, and applications in bioimaging and magnetic energy storage

    PubMed Central

    Frey, Natalie A.; Peng, Sheng; Cheng, Kai; Sun, Shouheng

    2009-01-01

    This tutorial review summarizes the recent advances in the chemical synthesis and potential applications of monodisperse magnetic nanoparticles. After a brief introduction to nanomagnetism, the review focuses on recent developments in solution phase syntheses of monodisperse MFe2O4, Co, Fe, CoFe, FePt and SmCo5 nanoparticles. The review further outlines the surface, structural, and magnetic properties of these nanoparticles for biomedicine and magnetic energy storage applications. PMID:19690734

  16. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  17. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.

  18. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.

  19. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  20. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
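
    To make the range-partitioning idea concrete, here is a small serial emulation of a barrel-sort-style algorithm: keys are first binned into per-processor key ranges ("barrels") and each barrel is then sorted locally. The function names and the uniform-key assumption are illustrative only; this is not the CM-2 or iPSC/860 implementation analyzed above.

        def barrel_sort(keys, num_procs, key_max):
            # Serial emulation of a range-partitioned parallel integer sort: each
            # "processor" owns one contiguous key range, receives the keys falling
            # into it (the all-to-all step), and sorts them locally; concatenating
            # the barrels in order yields the globally sorted sequence.
            width = (key_max + num_procs) // num_procs
            barrels = [[] for _ in range(num_procs)]
            for k in keys:
                barrels[min(k // width, num_procs - 1)].append(k)
            result = []
            for b in barrels:
                result.extend(sorted(b))
            return result

        print(barrel_sort([7, 2, 9, 4, 4, 0, 8], num_procs=3, key_max=9))
        # [0, 2, 4, 4, 7, 8, 9]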

  1. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
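
    The rsync-like comparison at the heart of the claim can be pictured with a simplified, serial sketch: each node's checkpoint is split into fixed-size blocks, each block's checksum is compared against the stored template, and only the differing blocks are transmitted. The block size, function names, and use of SHA-1 below are assumptions for illustration; the patent itself additionally covers broadcast of the template, compression, and the parallel machinery.

        import hashlib

        BLOCK = 64 * 1024  # assumed fixed block size

        def split_blocks(data, block=BLOCK):
            return [data[i:i + block] for i in range(0, len(data), block)]

        def checkpoint_delta(template, current):
            # Return (index, block) pairs for blocks whose checksum differs from
            # the template; only these blocks need to be transmitted and stored.
            template_sums = [hashlib.sha1(b).digest() for b in split_blocks(template)]
            delta = []
            for i, b in enumerate(split_blocks(current)):
                if i >= len(template_sums) or hashlib.sha1(b).digest() != template_sums[i]:
                    delta.append((i, b))
            return delta

        def restore(template, delta):
            # Rebuild a checkpoint from the template plus the transmitted delta
            # (sketch only: assumes the checkpoint is at least as long as the template).
            blocks = split_blocks(template)
            for i, b in delta:
                if i < len(blocks):
                    blocks[i] = b
                else:
                    blocks.append(b)
            return b"".join(blocks)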

  2. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  3. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.

  4. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

    A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
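
    For orientation, the serial numerical kernel behind the method can be sketched as follows: one inverse-iteration step per (slightly inexact) computed eigenvalue of the cluster, followed by Modified Gram-Schmidt reorthogonalization of the still-unconverged iterates against one another. Dense solves and a fixed iteration count are used purely for clarity; this is not the parallel implementation described above.

        import numpy as np

        def inverse_iteration_cluster(T, cluster_eigs, iters=4, seed=0):
            # Sketch: eigenvectors for a cluster of close eigenvalues of a symmetric
            # (tridiagonal) matrix T, with the unconverged iterates reorthogonalized
            # against each other by Modified Gram-Schmidt after every iteration.
            n = T.shape[0]
            rng = np.random.default_rng(seed)
            V = rng.standard_normal((n, len(cluster_eigs)))
            for _ in range(iters):
                for j, lam in enumerate(cluster_eigs):
                    # one inverse-iteration step; lam is a computed (inexact)
                    # eigenvalue, so T - lam*I is numerically nonsingular
                    V[:, j] = np.linalg.solve(T - lam * np.eye(n), V[:, j])
                for j in range(V.shape[1]):        # MGS across the whole cluster
                    for k in range(j):
                        V[:, j] -= (V[:, k] @ V[:, j]) * V[:, k]
                    V[:, j] /= np.linalg.norm(V[:, j])
            return V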

  6. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small-sized arrays on the conventional computer before attempting to run on a parallel system.

  7. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  8. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  9. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
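
    The series-parallel part of the model is simple to state: the predicted time of a series composition is the sum of its parts, and the predicted time of a parallel composition is their maximum (the queuing-network model then accounts for contention). The tiny evaluator below is only a schematic reading of that structure, with a data layout chosen here for illustration.

        def predicted_time(node):
            # node is ("task", t), ("series", [children]) or ("parallel", [children]);
            # series composition adds expected times, parallel composition takes the
            # maximum (assuming enough processors and no resource contention).
            kind, payload = node
            if kind == "task":
                return payload
            times = [predicted_time(child) for child in payload]
            return sum(times) if kind == "series" else max(times)

        graph = ("series", [("task", 2.0),
                            ("parallel", [("task", 3.0), ("task", 5.0)]),
                            ("task", 1.0)])
        print(predicted_time(graph))   # 2.0 + max(3.0, 5.0) + 1.0 = 8.0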

  10. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    The method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated as a model of a neuron-like data-processing scheme [1-5]. The approach has a number of advantages compared with other methods of forming neuron-like media (for example, the known methods of constructing artificial neural networks). Its main advantage is the use of the multilevel parallel interaction dynamics of information signals at different hierarchy levels of the computer network, which makes it possible to exploit known natural features of the organization of computation: the topographic nature of mapping, the simultaneity (parallelism) of signal operation, the inlaid structure of the cortex, the rough hierarchy of the cortex, and a spatially correlated, time-dependent mechanism of perception and training [5].

  11. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
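
    For reference, the standard combination rules the demonstration conveys are (in LaTeX notation):

        R_{series} = R_1 + R_2 + \cdots + R_n, \qquad
        \frac{1}{R_{parallel}} = \frac{1}{R_1} + \frac{1}{R_2} + \cdots + \frac{1}{R_n}

    so adding resistors in series always increases the total resistance, while adding them in parallel always decreases it.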

  12. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  13. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.

  14. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
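
    The final reduction, decoding via finite-state-automaton simulation and prefix sums, rests on the standard parallel-prefix primitive. The sketch below shows a work-efficient (up-sweep/down-sweep) exclusive prefix sum; it is included only to illustrate that primitive, not the authors' decoding algorithm, and the serial loops stand in for the independent per-level operations a PRAM would execute concurrently.

        def exclusive_prefix_sum(a):
            # Work-efficient exclusive prefix sum.  Within each level of the
            # up-sweep and down-sweep the loop iterations are independent, which
            # is what permits an O(n/P + log n) schedule on a PRAM.
            n = 1
            while n < len(a):
                n *= 2
            x = list(a) + [0] * (n - len(a))       # pad to a power of two
            d = 1
            while d < n:                           # up-sweep (reduction tree)
                for i in range(0, n, 2 * d):
                    x[i + 2 * d - 1] += x[i + d - 1]
                d *= 2
            x[n - 1] = 0
            d = n // 2
            while d >= 1:                          # down-sweep
                for i in range(0, n, 2 * d):
                    t = x[i + d - 1]
                    x[i + d - 1] = x[i + 2 * d - 1]
                    x[i + 2 * d - 1] += t
                d //= 2
            return x[:len(a)]

        print(exclusive_prefix_sum([3, 1, 7, 0, 4]))   # [0, 3, 4, 11, 11]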

  15. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  16. Linear quadratic optimal controller for cable-driven parallel robots

    NASA Astrophysics Data System (ADS)

    Abdolshah, Saeed; Shojaei Barjuei, Erfan

    2015-12-01

    In recent years, various cable-driven parallel robots have been investigated for their advantages, such as low structural weight, high acceleration, and large work-space, over serial and conventional parallel systems. However, the use of cables lowers the stiffness of these robots, which in turn may decrease motion accuracy. A linear quadratic (LQ) optimal controller can provide all the states of a system for the feedback, such as position and velocity. Thus, the application of such an optimal controller in cable-driven parallel robots can result in more efficient and accurate motion compared to the performance of classical controllers such as the proportional-integral-derivative controller. This paper presents an approach to apply the LQ optimal controller on cable-driven parallel robots. To employ the optimal control theory, the static and dynamic modeling of a 3-DOF planar cable-driven parallel robot (Feriba-3) is developed. The synthesis of the LQ optimal control is described, and the significant experimental results are presented and discussed.
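
    As background for readers unfamiliar with LQ optimal control, the sketch below computes a discrete-time LQR state-feedback gain by iterating the Riccati recursion for a generic linear model x_{k+1} = A x_k + B u_k. It is a textbook illustration only; the model, weights, and all numerical values are assumptions and do not correspond to the Feriba-3 robot or the controller of the paper.

        import numpy as np

        def dlqr_gain(A, B, Q, R, iters=500):
            # Discrete-time LQR: iterate the Riccati recursion
            #   P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA
            # and return the feedback gain K so that u = -K x minimises
            # the cost sum_k (x'Qx + u'Ru).
            P = Q.copy()
            for _ in range(iters):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        # Toy double-integrator example (assumed values, for illustration only)
        dt = 0.01
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.0], [dt]])
        Q = np.diag([10.0, 1.0])
        R = np.array([[0.1]])
        print(dlqr_gain(A, B, Q, R))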

  17. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future function to a function causes the function to become a task parallel to the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.

  18. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.

  19. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. PMID:12240679

  20. The NICMOS Parallel Observing Program

    NASA Astrophysics Data System (ADS)

    McCarthy, Patrick

    2002-07-01

    We propose to manage the default set of pure parallels with NICMOS. Our experience with both our GO NICMOS parallel program and the public parallel NICMOS programs in cycle 7 prepared us to make optimal use of the parallel opportunities. The NICMOS G141 grism remains the most powerful survey tool for Hα emission-line galaxies at cosmologically interesting redshifts. It is particularly well suited to addressing two key uncertainties regarding the global history of star formation: the peak rate of star formation in the relatively unexplored but critical 1 ≤ z ≤ 2 epoch, and the amount of star formation missing from UV continuum-based estimates due to high extinction. Our proposed deep G141 exposures will increase the sample of known Hα emission-line objects at z ~ 1.3 by roughly an order of magnitude. We will also obtain a mix of F110W and F160W images along random sight-lines to examine the space density and morphologies of the reddest galaxies. The nature of the extremely red galaxies remains unclear and our program of imaging and grism spectroscopy provides unique information regarding both the incidence of obscured starbursts and the build up of stellar mass at intermediate redshifts. In addition to carrying out the parallel program we will populate a public database with calibrated spectra and images, and provide limited ground-based optical and near-IR data for the deepest parallel fields.

  1. Evolution of an adenine-copper cluster to a highly porous cuboidal framework: solution-phase ripening and gas-adsorption properties.

    PubMed

    Venkatesh, V; Pachfule, Pradip; Banerjee, Rahul; Verma, Sandeep

    2014-09-15

    The synthesis and directed evolution of a tetranuclear copper cluster, supported by an 8-mercapto-N9-propyladenine ligand, to a highly porous three-dimensional cubic framework in the solid state is reported. The structure of this porous framework was unambiguously characterized by X-ray crystallography. The framework contains about 62% solvent-accessible void space; the presence of a free exocyclic amino group in the porous framework facilitates reversible adsorption of gas and solvent molecules. Oriented growth of the framework in solution was also tracked by force and scanning electron microscopy studies, leading to identification of an intriguing ripening process, over a period of 30 days, which also revealed formation of cuboidal aggregates in solution. The elemental composition of these cuboidal aggregates was ascertained by EDAX analysis. PMID:25112608

  2. Synthesis and biological evaluation of destruxin A and related analogs.

    PubMed

    Ast, T; Barron, E; Kinne, L; Schmidt, M; Germeroth, L; Simmons, K; Wenschuh, H

    2001-07-01

    This report describes the development of an efficient solid-phase synthesis protocol and adaptation of reported solution phase procedures for the synthesis of the cyclic depsihexapeptide destruxin A and related analogs. The solid-phase method described is based on standard Fmoc peptide chemistry, including a new synthetic method for the assembly of the depsi bond-containing unit. In order to select analogs of destruxin A for synthesis and evaluation of insecticidal activity, the work of Hellberg et al., describing a set of Z-descriptors for amino acid side-chains comparing their physicochemical properties, was utilized. Destruxin A and 27 different analogs with structural variations in four residues were synthesized and insecticidal activity was evaluated via injections into tobacco budworm (Heliothis virescens) larvae. Several destruxin A analogs were found to be at least as potent as the native compound. PMID:11454164

  3. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  4. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  5. The economics of parallel trade.

    PubMed

    Danzon, P M

    1998-03-01

    The potential for parallel trade in the European Union (EU) has grown with the accession of low price countries and the harmonisation of registration requirements. Parallel trade implies a conflict between the principle of autonomy of member states to set their own pharmaceutical prices, the principle of free trade and the industrial policy goal of promoting innovative research and development (R&D). Parallel trade in pharmaceuticals does not yield the normal efficiency gains from trade because countries achieve low pharmaceutical prices by aggressive regulation, not through superior efficiency. In fact, parallel trade reduces economic welfare by undermining price differentials between markets. Pharmaceutical R&D is a global joint cost of serving all consumers worldwide; it accounts for roughly 30% of total costs. Optimal (welfare maximising) pricing to cover joint costs (Ramsey pricing) requires setting different prices in different markets, based on inverse demand elasticities. By contrast, parallel trade and regulation based on international price comparisons tend to force price convergence across markets. In response, manufacturers attempt to set a uniform 'euro' price. The primary losers from 'euro' pricing will be consumers in low income countries who will face higher prices or loss of access to new drugs. In the long run, even higher income countries are likely to be worse off with uniform prices, because fewer drugs will be developed. One policy option to preserve price differentials is to exempt on-patent products from parallel trade. An alternative is confidential contracting between individual manufacturers and governments to provide country-specific ex post discounts from the single 'euro' wholesale price, similar to rebates used by managed care in the US. This would preserve differentials in transactions prices even if parallel trade forces convergence of wholesale prices. PMID:10178655
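
    The inverse-elasticity (Ramsey) rule referred to here can be stated compactly. With common marginal cost c, price p_i and demand elasticity ε_i in market i, and λ a constant chosen so that total revenue just covers the joint R&D cost, the welfare-maximising markups satisfy (in LaTeX notation):

        \frac{p_i - c}{p_i} = \frac{\lambda}{1 + \lambda} \cdot \frac{1}{\varepsilon_i}

    so the optimal markup is highest where demand is least elastic, which is why forcing a single uniform price across markets tends to reduce welfare.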

  6. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  7. Bounded Parallel-Batch Scheduling on Unrelated Parallel Machines

    NASA Astrophysics Data System (ADS)

    Miao, Cuixia; Zhang, Yuzhong; Wang, Chengfei

    In this paper, we consider the bounded parallel-batch scheduling problem on unrelated parallel machines. Problems R_m|B|F are NP-hard for any objective function F. For this reason, we discuss the special case with p_ij = p_i for i = 1, 2, ..., m and j = 1, 2, ..., n. We give optimal algorithms for the general scheduling problem to minimize total weighted completion time, makespan, and the number of tardy jobs. We also design pseudo-polynomial time algorithms for the case with rejection penalty to minimize the makespan and the total weighted completion time plus the total penalty of the rejected jobs, respectively.

  8. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  9. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

    We present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical purposes, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, Voronoi densities and Voronoi cell volumes for each particle, and can compute the density on a regular grid.
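
    To illustrate the per-particle quantities mentioned (neighbor lists and Voronoi densities), here is a small serial sketch built on SciPy's Qhull-based Voronoi wrapper. It is a conceptual illustration only, without domain decomposition, periodic boundaries, or MPI, and is not PARAVT itself; cells that touch the unbounded hull are simply skipped.

        import numpy as np
        from scipy.spatial import ConvexHull, Voronoi

        def voronoi_neighbors_and_density(points):
            # For each particle, collect the particles sharing a Voronoi face
            # (its natural neighbors) and estimate a density as 1 / cell volume.
            # Unbounded cells on the outer hull are left as NaN.
            vor = Voronoi(points)
            neighbors = [[] for _ in range(len(points))]
            for p, q in vor.ridge_points:          # pairs of sites sharing a ridge
                neighbors[p].append(q)
                neighbors[q].append(p)
            density = np.full(len(points), np.nan)
            for i, region_index in enumerate(vor.point_region):
                region = vor.regions[region_index]
                if len(region) == 0 or -1 in region:   # unbounded cell
                    continue
                density[i] = 1.0 / ConvexHull(vor.vertices[region]).volume
            return neighbors, density

        rng = np.random.default_rng(1)
        nbr_lists, rho = voronoi_neighbors_and_density(rng.random((200, 3)))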

  10. Massively parallel MRI detector arrays

    NASA Astrophysics Data System (ADS)

    Keil, Boris; Wald, Lawrence L.

    2013-04-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays.
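
    The array-combination step the review discusses can be illustrated with a small NumPy sketch (not taken from the article): root-sum-of-squares combination, which needs no calibration data, and a matched-filter combination that assumes known coil sensitivities and noise covariance. The sensitivity estimate and noise model below are simplifying assumptions.

        # Illustrative sketch only: combining multi-channel coil images.
        import numpy as np

        n_coils, ny, nx = 8, 64, 64
        coil_images = (np.random.randn(n_coils, ny, nx)
                       + 1j * np.random.randn(n_coils, ny, nx))   # synthetic data

        # 1) Root-sum-of-squares combination.
        rss = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

        # 2) Matched-filter combination m = S^H Psi^-1 d / (S^H Psi^-1 S), assuming
        #    crude sensitivities S and uncorrelated, unit-variance noise (Psi = I).
        S = coil_images / (rss + 1e-12)
        Psi_inv = np.eye(n_coils)
        num = np.einsum('cyx,cd,dyx->yx', S.conj(), Psi_inv, coil_images)
        den = np.einsum('cyx,cd,dyx->yx', S.conj(), Psi_inv, S)
        combined = num / (den + 1e-12)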

  11. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  12. Massively parallel MRI detector arrays.

    PubMed

    Keil, Boris; Wald, Lawrence L

    2013-04-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  13. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  14. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  15. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
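
    A toy Python sketch of the block-tree idea described above (a 2-D quad-tree whose nodes each carry a small logically Cartesian mesh). The Block class and its methods are hypothetical illustrations, not part of the PARAMESH Fortran 90 interface.

        # Toy quad-tree of mesh blocks: refining a block creates four children
        # covering its quadrants, as in PARAMESH's 2-D block hierarchy.
        import numpy as np

        class Block:
            def __init__(self, x0, y0, size, nxb=8):
                self.x0, self.y0, self.size = x0, y0, size   # lower corner, edge length
                self.data = np.zeros((nxb, nxb))             # the block's local mesh
                self.children = []                           # quad-tree children

            def refine(self):
                half = self.size / 2
                self.children = [Block(self.x0 + i * half, self.y0 + j * half, half)
                                 for i in (0, 1) for j in (0, 1)]

            def leaves(self):
                if not self.children:
                    return [self]
                return [leaf for child in self.children for leaf in child.leaves()]

        root = Block(0.0, 0.0, 1.0)
        root.refine()                     # refine the whole domain once
        root.children[0].refine()         # refine one quadrant again
        print(len(root.leaves()))         # 7 leaf blocks cover the domain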

  16. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  17. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.
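
    UPC's shared address space has no direct Python analog, but the grouping idea above — static groups of ranks connected over MPI — can be sketched with mpi4py sub-communicators. The group size and script name below are illustrative assumptions, not part of the paper's implementation.

        # Sketch of the grouping idea only: split MPI ranks into fixed-size groups
        # (standing in for the static UPC groups of the hybrid model) and exchange
        # results between groups over MPI. Run with: mpiexec -n 8 python hybrid_groups.py
        from mpi4py import MPI

        world = MPI.COMM_WORLD
        rank = world.Get_rank()

        GROUP_SIZE = 4                       # assumption: ranks per "UPC-like" group
        color = rank // GROUP_SIZE           # which group this rank belongs to
        group_comm = world.Split(color, key=rank)

        # Work shared inside a group (where UPC shared arrays would live).
        local_sum = group_comm.allreduce(rank, op=MPI.SUM)

        # Group leaders exchange results across groups over the parent communicator.
        leaders = world.Split(0 if group_comm.Get_rank() == 0 else MPI.UNDEFINED,
                              key=rank)
        if leaders != MPI.COMM_NULL:
            totals = leaders.gather(local_sum, root=0)
            if leaders.Get_rank() == 0:
                print("per-group sums:", totals)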

  18. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
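
    As a point of reference, the underlying chain-mapping problem (partition m modules with given weights into n contiguous groups so that the heaviest group, the pipeline bottleneck, is as light as possible) can be solved with a straightforward dynamic program. The sketch below is an illustrative serial O(n·m^2) version, not the paper's improved O(nm log m) algorithm.

        # Plain DP for contiguous chain partitioning onto n processors.
        def map_chain(weights, n_procs):
            m = len(weights)
            prefix = [0]
            for w in weights:
                prefix.append(prefix[-1] + w)
            INF = float('inf')
            # best[k][j] = minimal bottleneck when the first j modules use k processors
            best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
            best[0][0] = 0
            for k in range(1, n_procs + 1):
                for j in range(1, m + 1):
                    for i in range(k - 1, j):            # last group is modules i..j-1
                        load = prefix[j] - prefix[i]
                        best[k][j] = min(best[k][j], max(best[k - 1][i], load))
            return best[n_procs][m]

        print(map_chain([4, 1, 3, 2, 5, 2], 3))   # -> 7, e.g. [4,1] [3,2] [5,2]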

  19. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  20. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-12-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

  1. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  2. ITER LHe Plants Parallel Operation

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

    The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we will describe the present status of the ITER LHe plants with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  3. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  4. Parallelization of the SIR code

    NASA Astrophysics Data System (ADS)

    Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčak, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

    A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code with the capability to handle large data sets directly and to invert the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly to extensive data sets. In addition, we included the possibility of using different initial model atmospheres for the inversion, which enhances the quality of the results.
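
    SIR itself is a Fortran code, but the per-pixel parallelism described here is embarrassingly parallel and can be sketched with Python's multiprocessing. In the sketch below, invert_pixel is only a placeholder for the real Stokes inversion, and the synthetic data cube is an assumption.

        # Sketch of the per-pixel parallel pattern only: each pixel's Stokes
        # profiles are inverted independently, so a process pool maps an
        # inversion routine over the pixels of the map.
        from multiprocessing import Pool
        import numpy as np

        def invert_pixel(stokes_profiles):
            # Stand-in for the real inversion: return a fake (B, T, v_LOS) triple.
            i, q, u, v = stokes_profiles
            return float(np.sum(np.abs(v))), float(np.mean(i)), float(np.sum(q - u))

        if __name__ == "__main__":
            ny, nx, n_wl = 32, 32, 50
            cube = np.random.rand(ny * nx, 4, n_wl)       # flattened map of Stokes I,Q,U,V
            with Pool() as pool:
                results = pool.map(invert_pixel, list(cube))   # one task per pixel
            model_maps = np.array(results).reshape(ny, nx, 3)
            print(model_maps.shape)                            # (32, 32, 3)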

  5. Synthesis and structural characterization of monomeric and dimeric peptide nucleic acids prepared by using microwave-promoted multicomponent reactions.

    PubMed

    Ovadia, Reuben; Lebrun, Aurélien; Barvik, Ivan; Vasseur, Jean-Jacques; Baraguey, Carine; Alvarez, Karine

    2015-12-01

    A solution phase synthesis of peptide nucleic acid monomers and dimers was developed by using microwave-promoted Ugi multicomponent reactions. A mixture of a functionalized amine, a carboxymethyl nucleobase, paraformaldehyde and an isocyanide as building blocks generates PNA monomers which are then partially deprotected and used in a second Ugi 4CC reaction, leading to PNA dimers. Conformational rotamers were identified by using NMR and MD simulations. PMID:26394794

  6. Lead inhibition of enzyme synthesis in soil.

    PubMed Central

    Cole, M A

    1977-01-01

    Addition of 2 mg of Pb2+/g of soil coincident with or after amendment with starch or maltose resulted in 75 and 50% decreases in net synthesis of amylase and alpha-glucosidase, respectively. Invertase synthesis in sucrose-amended soil was transiently reduced after Pb2+ addition. Amylase activity was several times less sensitive to Pb2+ inhibition than was enzyme synthesis. In most cases, the rate of enzyme synthesis returned to control (Pb2+) values 24 to 48 h after the addition of Pb. The decrease in amylase synthesis was paralleled by a decrease in the number of Pb-sensitive, amylase-producing bacteria, whereas recovery of synthesis was associated with an increase in the number of amylase-producing bacteria. The degree of inhibition of enzyme synthesis was related to the quantity of Pb added and to the specific form of lead. PbSO4 decreased amylase synthesis at concentrations of 10.2 mg of Pb2+/g of soil or more, whereas PbO did not inhibit amylase synthesis at 13 mg of Pb2+/g of soil. Lead acetate, PbCl2, and PbS reduced amylase synthesis at total Pb2+ concentrations of 0.45 mg of Pb2+/g of soil or higher. The results indicated that lead is a potent but somewhat selective inhibitor of enzyme synthesis in soil, and that highly insoluble lead compounds, such as PbS, may be potent modifiers of soil biological activity. PMID:848950

  7. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.
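
    A minimal sketch of the password-checker example, using only the Python standard library. SHA-256 and the generated word list stand in for whatever encryption and dictionary file the original used, and a local process pool stands in for the message-passing distribution discussed in the report.

        # Parallel dictionary check: split the word list into chunks, hash each
        # candidate, and compare against the target hash in worker processes.
        import hashlib
        from multiprocessing import Pool

        def check_chunk(args):
            target_hash, words = args
            for w in words:
                if hashlib.sha256(w.encode()).hexdigest() == target_hash:
                    return w
            return None

        if __name__ == "__main__":
            dictionary = [f"word{i}" for i in range(25000)]          # stand-in word list
            target = hashlib.sha256(b"word17351").hexdigest()        # "encrypted" password

            n_workers = 4
            chunk = (len(dictionary) + n_workers - 1) // n_workers
            tasks = [(target, dictionary[i:i + chunk])
                     for i in range(0, len(dictionary), chunk)]

            with Pool(n_workers) as pool:
                hits = [hit for hit in pool.map(check_chunk, tasks) if hit]
            print(hits)                                              # ['word17351']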

  8. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  9. Coupled parallel waveguide semiconductor laser

    NASA Technical Reports Server (NTRS)

    Katz, J.; Kapon, E.; Lindsey, C.; Rav-Noy, Z.; Margalit, S.; Yariv, A.; Mukai, S.

    1984-01-01

    The operation of a new type of tunable laser, where the two separately controlled individual lasers are placed vertically in parallel, has been demonstrated. One of the cavities ('control' cavity) is operated below threshold and assists the longitudinal mode selection and tuning of the other laser. With a minor modification, the same device can operate as an independent two-wavelength laser source.

  10. [Total Synthesis of Biologically Active Natural Products toward Elucidation of the Mode of Action].

    PubMed

    Yoshida, Masahito

    2015-01-01

    Total synthesis of biologically active cyclodepsipeptide destruxin E using solid- and solution-phase synthesis is described. The solid-phase synthesis of destruxin E was initially investigated for the efficient synthesis of destruxin analogues. Peptide elongation from polymer-supported β-alanine was efficiently performed using DIC/HOBt or PyBroP/DIEA, and subsequent cleavage from the polymer-support under weakly acidic conditions furnished a cyclization precursor in moderate yield. Macrolactonization of the cyclization precursor was smoothly performed using 2-methyl-6-nitrobenzoic anhydride (MNBA)/4-(dimethylamino)pyridine N-oxide (DMAPO) to afford macrolactone in moderate yield. Finally, formation of the epoxide in the side chain via three steps provided destruxin E, and the stereochemistry of the epoxide was determined to be S. Its diastereomer, epi-destruxin E, was also synthesized in the same manner used to synthesize the natural product. The stereochemistry of the epoxide was critical for the V-ATPase inhibition; natural product destruxin E exhibited 10-fold more potent V-ATPase inhibition than epi-destruxin E. Next, the scalable synthesis of destruxin E for in vivo study was also performed via solution-phase synthesis. The scalable synthesis of a key component, (S)-HA-Pro-OH, was achieved using osmium-catalyzed diastereoselective dihydroxylation with (DHQD)2PHAL as a chiral ligand; peptide synthesis using Cbz-protected amino acid derivatives furnished the cyclization precursor on a gram-scale. Macrolactonization smoothly provided the macrolactone without forming a dimerized product, even at 6 mM, and the synthesis of destruxin E was achieved via three steps on a gram scale in high purity (>98%). PMID:26423864

  11. Flow invariant droplet formation for stable parallel microreactors

    PubMed Central

    Riche, Carson T.; Roberts, Emily J.; Gupta, Malancha; Brutchey, Richard L.; Malmstadt, Noah

    2016-01-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis. PMID:26902825

  12. Flow invariant droplet formation for stable parallel microreactors

    NASA Astrophysics Data System (ADS)

    Riche, Carson T.; Roberts, Emily J.; Gupta, Malancha; Brutchey, Richard L.; Malmstadt, Noah

    2016-02-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis.

  13. Flow invariant droplet formation for stable parallel microreactors.

    PubMed

    Riche, Carson T; Roberts, Emily J; Gupta, Malancha; Brutchey, Richard L; Malmstadt, Noah

    2016-01-01

    The translation of batch chemistries onto continuous flow platforms requires addressing the issues of consistent fluidic behaviour, channel fouling and high-throughput processing. Droplet microfluidic technologies reduce channel fouling and provide an improved level of control over heat and mass transfer to control reaction kinetics. However, in conventional geometries, the droplet size is sensitive to changes in flow rates. Here we report a three-dimensional droplet generating device that exhibits flow invariant behaviour and is robust to fluctuations in flow rate. In addition, the droplet generator is capable of producing droplet volumes spanning four orders of magnitude. We apply this device in a parallel network to synthesize platinum nanoparticles using an ionic liquid solvent, demonstrate reproducible synthesis after recycling the ionic liquid, and double the reaction yield compared with an analogous batch synthesis. PMID:26902825

  14. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations of these organizations using multiple storage devices are suggested. Problem areas are also identified and discussed.
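
    The parallel-file idea — concurrent access to one file by multiple processes — later became standard practice through MPI-IO. The mpi4py sketch below post-dates the paper and is shown only to make the concept concrete: each rank writes its own block of a shared file at an offset computed from its rank. The file name is illustrative.

        # Concurrent, non-overlapping writes to one shared file with MPI-IO.
        # Run with: mpiexec -n 4 python pario.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        block = np.full(1024, rank, dtype=np.int32)      # this rank's data block
        offset = rank * block.nbytes                     # byte offset of this block

        fh = MPI.File.Open(comm, "shared.dat",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        fh.Write_at(offset, block)                       # each rank writes its block
        fh.Close()

        # The same file can afterwards be read back sequentially or in parallel.
        if rank == 0:
            data = np.fromfile("shared.dat", dtype=np.int32)
            print(data[::1024])                          # first element of each block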

  15. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

    Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
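
    The message-passing, data-decomposition pattern described above can be sketched with mpi4py. This is not the Toolkit's actual API; the row-strip decomposition and the smoothing filter are illustrative stand-ins for its image operators.

        # Scatter row strips of an image, filter each strip locally, gather results.
        # Run with: mpiexec -n 4 python pipt_sketch.py
        from mpi4py import MPI
        import numpy as np

        def smooth(strip):
            # Simple 3-point moving average along rows as a stand-in image operator.
            return (np.roll(strip, 1, axis=1) + strip + np.roll(strip, -1, axis=1)) / 3.0

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        strips = None
        if rank == 0:
            image = np.random.rand(512, 512)
            strips = np.array_split(image, size, axis=0)   # one row strip per rank

        my_strip = comm.scatter(strips, root=0)            # distribute the strips
        my_result = smooth(my_strip)                       # local, independent work
        pieces = comm.gather(my_result, root=0)            # collect the filtered strips

        if rank == 0:
            filtered = np.vstack(pieces)
            print(filtered.shape)                          # (512, 512)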

  16. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  17. Static stability of parallel operation of asynchronized generators in an electrical system

    NASA Astrophysics Data System (ADS)

    Plotnikova, T. V.; Sokur, P. V.; Tuzov, P. Yu.; Shakaryan, Yu. G.

    2014-12-01

    The static stability of single and parallel operations of an asynchronized generator (ASG) in a long-distance power transmission line is investigated. The synthesis of the ASG excitation control law for which the set of the machine's stable operating conditions G_s will comprise a sufficiently conservative set of permissible operating conditions G_p is considered.

  18. Mirror versus parallel bimanual reaching

    PubMed Central

    2013-01-01

    Background In spite of their importance to everyday function, tasks that require both hands to work together, such as lifting and carrying large objects, have not been well studied, and the full potential of how new technology might facilitate recovery remains unknown. Methods To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared veridical display to one that transforms the right hand’s cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and attending to one visual area would reduce the task difficulty. Results The two-way comparison revealed that mirror movements took an average of 1.24 s longer to complete than parallel movements. Surprisingly, movement times for subjects moving to one target (attending to one visual area) were also an average of 1.66 s longer than for subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements moving to two targets (p < 0.001). This was the only group that began and maintained low errors throughout training. Conclusion Combined with other evidence, these results suggest that the most intuitive reaching performance can be observed with parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

  19. Alternative fuels and chemicals from synthesis gas

    SciTech Connect

    Unknown

    1998-08-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  20. Alternative Fuels and Chemicals From Synthesis Gas

    SciTech Connect

    1998-07-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  1. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    Unknown

    1998-01-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  2. ALTERNATIVE FUELS AND CHEMICALS FROM SYNTHESIS GAS

    SciTech Connect

    Unknown

    1999-01-01

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  3. Alternative Fuels and Chemicals from Synthesis Gas

    SciTech Connect

    Peter Tijrn

    2003-01-02

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  4. Low Mach number parallel and quasi-parallel shocks

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Quest, K. B.; Winske, D.

    1990-01-01

    The properties of low-Mach-number parallel and quasi-parallel shocks are studied using the results of one-dimensional hybrid simulations. It is shown that both the structure and ion dissipation at the shocks differ considerably. In the parallel limit, the shock remains coupled to the piston and consists of large-amplitude magnetosonic-whistler waves in the upstream, through the shock and into the downstream region, where the waves eventually damp out. These waves are generated by an ion beam instability due to the interaction between the incident and piston-reflected ions. The excited waves decelerate the plasma sufficiently that it becomes stable far into the downstream. The increase in ion temperature along the shock normal in the downstream region is due to superposition of incident and piston-reflected ions. These two populations of ions remain distinct through the downstream region. While they are both gyrophase-bunched, their counterstreaming nature results in a 180-deg phase shift in their perpendicular velocities.

  5. Synthesis and antitubercular activity of 1,2,4-trisubstituted piperazines.

    PubMed

    Rohde, Kyle H; Michaels, Heather A; Nefzi, Adel

    2016-05-01

    Parallel solid phase synthesis offers a unique opportunity for the synthesis and screening of large numbers of compounds and significantly enhances the prospect of finding new leads. We report the synthesis and antitubercular activity of chiral 1,2,4-trisubstituted piperazines derived from resin bound acylated dipeptides against Mycobacterium tuberculosis strain H37Rv. PMID:27020522

  6. Solid-Phase Organic Synthesis and Combinatorial Chemistry: A Laboratory Preparation of Oligopeptides

    NASA Astrophysics Data System (ADS)

    Truran, George A.; Aiken, Karelle S.; Fleming, Thomas R.; Webb, Peter J.; Hodge Markgraf, J.

    2002-01-01

    The principles and practice of solid-phase organic synthesis and combinatorial chemistry are utilized in a laboratory preparation of oligopeptides. A parallel synthesis scheme is used to generate a series of tripeptides. A divergent synthesis scheme is used to prepare two pentapeptides, one of which is leucine enkephalin, a neurotransmitter known to be an analgesic agent.

  7. Synthesis and characterization of hybrid nanostructures

    PubMed Central

    Mokari, Taleb

    2011-01-01

    There has been significant interest in the development of multicomponent nanocrystals formed by the assembly of two or more different materials with control over size, shape, composition, and spatial orientation. In particular, the selective growth of metals on the tips of semiconductor nanorods and wires can act to couple the electrical and optical properties of semiconductors with the unique properties of various metals. Here, we outline our progress on the solution-phase synthesis of metal-semiconductor heterojunctions formed by the growth of Au, Pt, or other binary catalytic metal systems on metal (Cd, Pb, Cu)-chalcogenide nanostructures. We show the ability to grow the metal on various shapes (spherical, rods, hexagonal prisms, and wires). Furthermore, manipulating the composition of the metal nanoparticles is also shown, where PtNi and PtCo alloys are our main focus. The magnetic and electrical properties of the developed hybrid nanostructures are shown. PMID:22110873

  8. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Congzhe; Fang, Yuefa; Guo, Sheng

    2015-07-01

    Dimensional synthesis is one of the most difficult issues in the field of parallel robots with actuation redundancy. To deal with the optimal design of a redundantly actuated parallel robot used for ankle rehabilitation, a methodology of dimensional synthesis based on multi-objective optimization is presented. First, the dimensional synthesis of the redundant parallel robot is formulated as a nonlinear constrained multi-objective optimization problem. Then four objective functions, separately reflecting occupied space, input/output transmission and torque performances, and multi-criteria constraints, such as dimension, interference and kinematics, are defined. In consideration of the passive exercise of plantar/dorsiflexion requiring large output moment, a torque index is proposed. To cope with the actuation redundancy of the parallel robot, a new output transmission index is defined as well. The multi-objective optimization problem is solved by using a modified Differential Evolution (DE) algorithm, which is characterized by new selection and mutation strategies. Meanwhile, a special penalty method is presented to tackle the multi-criteria constraints. Finally, numerical experiments for different optimization algorithms are implemented. The computation results show that the proposed indices of output transmission and torque, and constraint handling are effective for the redundant parallel robot; the modified DE algorithm is superior to the other tested algorithms, in terms of the ability of global search and the number of non-dominated solutions. The proposed methodology of multi-objective optimization can also be applied to the dimensional synthesis of other redundantly actuated parallel robots only with rotational movements.
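
    For readers unfamiliar with DE, a minimal standard DE/rand/1/bin loop on a single unconstrained objective is sketched below; the paper's modified selection and mutation strategies, penalty handling, and multi-objective setting are not reproduced. Function and parameter names are illustrative.

        # Minimal DE/rand/1/bin on the sphere function (illustration only).
        import numpy as np

        def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                                   iters=200, seed=0):
            rng = np.random.default_rng(seed)
            dim = len(bounds)
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(iters):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)         # mutation
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True                   # force one gene
                    trial = np.where(cross, mutant, pop[i])           # binomial crossover
                    if f(trial) <= fit[i]:                            # greedy selection
                        pop[i], fit[i] = trial, f(trial)
            best = np.argmin(fit)
            return pop[best], fit[best]

        sphere = lambda x: float(np.sum(x ** 2))
        x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 4)
        print(x_best, f_best)    # should approach the origin and 0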

  9. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  10. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable length linear genome to govern the mapping of a Backus Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE) the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and the comparison with standard coding of GEs is presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. An application is also discussed, and results on a real-world application are described.

  11. Template matching on parallel architectures

    SciTech Connect

    Sher

    1985-07-01

    Many important problems in computer vision can be characterized as template-matching problems on edge images. Some examples are circle detection and line detection. Two techniques for template matching are the Hough transform and correlation. There are two algorithms for correlation: a shift-and-add-based technique and a Fourier-transform-based technique. The most efficient algorithm of these three varies depending on the size of the template and the structure of the image. On different parallel architectures, the choice of algorithms for a specific problem is different. This paper describes two parallel architectures, the WARP and the Butterfly, and describes why and how the criterion for making the choice of algorithms differs between the two machines.
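
    The two correlation routes being compared can be made concrete with SciPy, which exposes both a shift-and-add (direct) and an FFT-based implementation of the same cross-correlation; the relative cost of the two depends on template size. The random image and template below are stand-ins for a real edge image and mask.

        # Direct versus FFT-based template correlation on a stand-in edge image.
        import numpy as np
        from scipy.signal import correlate

        rng = np.random.default_rng(1)
        image = rng.random((256, 256))              # stand-in edge image
        template = rng.random((15, 15))             # stand-in template (e.g. a circle mask)

        direct = correlate(image, template, mode='same', method='direct')
        fourier = correlate(image, template, mode='same', method='fft')

        peak = np.unravel_index(np.argmax(fourier), fourier.shape)
        print(np.allclose(direct, fourier), peak)   # same surface, location of best match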

  12. Parallel supercomputing with commodity components

    SciTech Connect

    Warren, M.S.; Goda, M.P.; Becker, D.J.

    1997-09-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  13. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used; and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data further handled using special double buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  14. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing, which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two close parallel laser beams, whose frequencies are shifted by two acousto-optic modulators by 2Ω simultaneously. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.

  15. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
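
    The core kernel of such a state-vector simulator — applying a single-qubit gate to an n-qubit state — can be sketched in a few lines of NumPy. The distribution of the state vector across thousands of processors, which is the paper's focus, is omitted, and the function name is illustrative.

        # Apply a single-qubit gate to an n-qubit state vector by reshaping so the
        # target qubit becomes its own axis, contracting, and restoring the shape.
        import numpy as np

        def apply_single_qubit_gate(state, gate, target, n_qubits):
            psi = state.reshape([2] * n_qubits)
            psi = np.moveaxis(psi, target, 0)
            psi = np.tensordot(gate, psi, axes=([1], [0]))
            psi = np.moveaxis(psi, 0, target)
            return psi.reshape(-1)

        n = 10                                            # 10 qubits -> 1024 amplitudes
        state = np.zeros(2 ** n, dtype=complex)
        state[0] = 1.0                                    # |00...0>
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
        for q in range(n):
            state = apply_single_qubit_gate(state, H, q, n)
        print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))   # uniform superposition: True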

  16. A parallel graph coloring heuristic

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1993-05-01

    The problem of computing good graph colorings arises in many diverse applications, such as in the estimation of sparse Jacobians and in the development of efficient, parallel iterative methods for solving sparse linear systems. This paper presents an asynchronous graph coloring heuristic well suited to distributed memory parallel computers. Experimental results obtained on an Intel iPSC/860 are presented, which demonstrate that, for graphs arising from finite element applications, the heuristic exhibits scalable performance and generates colorings usually within three or four colors of the best-known linear time sequential heuristics. For bounded degree graphs, it is shown that the expected running time of the heuristic under the PRAM computation model is bounded by O(log(n)/log log(n)). This bound is an improvement over the previously known best upper bound for the expected running time of a random heuristic for the graph coloring problem.
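
    A serial simulation of the round structure of a Jones-Plassmann-style heuristic is sketched below; the real code runs these rounds asynchronously across distributed-memory processors, whereas here each round's independent set is simply colored in a loop.

        # Each vertex gets a random weight; in every round, the uncolored vertices
        # whose weights exceed those of all still-uncolored neighbors form an
        # independent set and pick the smallest color unused by colored neighbors.
        import random

        def jp_coloring(adj, seed=0):
            random.seed(seed)
            weight = {v: random.random() for v in adj}
            color = {}
            while len(color) < len(adj):
                ready = [v for v in adj if v not in color and
                         all(weight[v] > weight[u] for u in adj[v] if u not in color)]
                for v in ready:                      # these could be colored in parallel
                    used = {color[u] for u in adj[v] if u in color}
                    color[v] = min(c for c in range(len(adj)) if c not in used)
            return color

        # Small test graph: a 5-cycle.
        adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
        coloring = jp_coloring(adj)
        print(coloring, all(coloring[u] != coloring[v] for u in adj for v in adj[u]))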

  17. Instruction-level parallel processing.

    PubMed

    Fisher, J A; Rau, R

    1991-09-13

    The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest level computer operations (adds, multiplies, loads, and so on) to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures (superscalar, VLIW, and dataflow processors) and the compiler techniques necessary to make ILP work well. PMID:17831442

  18. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  19. A new parallel simulation technique

    NASA Astrophysics Data System (ADS)

    Blanco-Pillado, Jose J.; Olum, Ken D.; Shlaer, Benjamin

    2012-01-01

    We develop a "semi-parallel" simulation technique suggested by Pretorius and Lehner, in which the simulation spacetime volume is divided into a large number of small 4-volumes that have only initial and final surfaces. Thus there is no two-way communication between processors, and the 4-volumes can be simulated independently and potentially at different times. This technique allows us to simulate much larger volumes than we otherwise could, because we are not limited by total memory size. No processor time is lost waiting for other processors. We compare a cosmic string simulation we developed using the semi-parallel technique with our previous MPI-based code for several test cases and find a factor of 2.6 improvement in the total amount of processor time required to accomplish the same job for strings evolving in the matter-dominated era.

  20. Scans as primitive parallel operations

    SciTech Connect

    Blelloch, G.E. (Dept. of Computer Science)

    1989-11-01

    In most parallel random access machine (PRAM) models, memory references are assumed to take unit time. In practice, and in theory, certain scan operations, also known as prefix computations, can execute in no more time than these parallel memory references. This paper outlines an extensive study of the effect of including, in the PRAM models, such scan operations as unit-time primitives. The study concludes that the primitives improve the asymptotic running time of many algorithms by an O(log n) factor, greatly simplify the description of many algorithms, and are significantly easier to implement than memory references. The authors argue that the algorithm designer should feel free to use these operations as if they were as cheap as a memory reference. This paper describes five algorithms that clearly illustrate how the scan primitives can be used in algorithm design. These all run on an EREW PRAM with the addition of two scan primitives.
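
    A small NumPy illustration of the scan primitive and one of its classic uses: an exclusive plus-scan over flags gives each selected element its output slot, which is the pack/split building block used throughout algorithms of this kind. The array values are arbitrary examples.

        # Exclusive plus-scan and the "pack" operation built from it.
        import numpy as np

        def exclusive_scan(x):
            out = np.cumsum(x)
            return np.concatenate(([0], out[:-1]))

        values = np.array([7, 2, 9, 4, 11, 5, 3, 8])
        flags = (values > 4).astype(int)            # which elements to keep
        slots = exclusive_scan(flags)               # destination index of each kept element

        packed = np.empty(flags.sum(), dtype=values.dtype)
        packed[slots[flags == 1]] = values[flags == 1]
        print(packed)                               # [ 7  9 11  5  8]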

  1. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  2. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.

  3. Massively parallel femtosecond laser processing.

    PubMed

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM, N_SLM, that is imaged at the pupil plane of an objective lens, and a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at N_SLM of 250 and p_d of 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  4. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  5. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  6. NWChem: scalable parallel computational chemistry

    SciTech Connect

    van Dam, Hubertus JJ; De Jong, Wibe A.; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; Valiev, Marat

    2011-11-01

    NWChem is a general purpose computational chemistry code specifically designed to run on distributed memory parallel computers. The core functionality of the code focuses on molecular dynamics, Hartree-Fock and density functional theory methods for both plane-wave and Gaussian basis sets, tensor contraction engine based coupled cluster capabilities, and combined quantum mechanics/molecular mechanics descriptions. It was realized from the beginning that scalable implementations of these methods required a programming paradigm inherently different from what message passing approaches could offer. In response, a global address space library, the Global Array Toolkit, was developed. The programming model it offers is based on using predominantly one-sided communication. This model underpins most of the functionality in NWChem, and its power is exemplified by the fact that the code scales to tens of thousands of processors. In this paper the core capabilities of NWChem are described as well as their implementation to achieve an efficient computational chemistry code with high parallel scalability. NWChem is a modern, open source, computational chemistry code specifically designed for large scale parallel applications. To meet the challenges of developing efficient, scalable and portable programs of this nature a particular code design was adopted. This code design involved two main features. First of all, the code is built up in a modular fashion so that a large variety of functionality can be integrated easily. Secondly, to facilitate writing complex parallel algorithms the Global Array toolkit was developed. This toolkit allows one to write parallel applications in a shared-memory-like approach, but offers additional mechanisms to exploit data locality to lower communication overheads. This framework has proven to be very successful in computational chemistry but is applicable to any engineering domain. Within the context created by the features

  7. Parallel micromanipulation method for microassembly

    NASA Astrophysics Data System (ADS)

    Sin, Jeongsik; Stephanou, Harry E.

    2001-09-01

    Microassembly deals with micron or millimeter scale objects where the tolerance requirements are in the micron range. Typical applications include electronics components (silicon fabricated circuits), optoelectronics components (photo detectors, emitters, amplifiers, optical fibers, microlenses, etc.), and MEMS (Micro-Electro-Mechanical-System) dies. The assembly processes generally require not only high precision but also high throughput at low manufacturing cost. While conventional macroscale assembly methods have been utilized in scaled down versions for microassembly applications, they exhibit limitations on throughput and cost due to the inherently serialized process. Since the assembly process depends heavily on the manipulation performance, an efficient manipulation method for small parts will have a significant impact on the manufacturing of miniaturized products. The objective of this study on 'parallel micromanipulation' is to achieve these three requirements through the handling of multiple small parts simultaneously (in parallel) with high precision (micromanipulation). As a step toward this objective, a new manipulation method is introduced. The method uses a distributed actuation array for gripper free and parallel manipulation, and a centralized, shared actuator for simplified controls. The method has been implemented on a testbed 'Piezo Active Surface (PAS)' in which an actively generated friction force field is the driving force for part manipulation. Basic motion primitives, such as translation and rotation of objects, are made possible with the proposed method. This study discusses the design of the proposed manipulation method PAS, and the corresponding manipulation mechanism. The PAS consists of two piezoelectric actuators for X and Y motion, two linear motion guides, two sets of nozzle arrays, and solenoid valves to switch the pneumatic suction force on and off in individual nozzles. One array of nozzles is fixed relative to the surface on

  8. Bioinspired synthesis of magnetic nanoparticles

    SciTech Connect

    David, Anand

    2009-01-01

    The goal of this project is to understand the mechanism of magnetite particle synthesis in the presence of the biomineralization proteins, mms6 and C25. Previous work has hypothesized that the mms6 protein helps to template magnetite and cobalt ferrite particle synthesis and that the C25 protein templates cobalt ferrite formation. However, the effect of parameters such as the protein concentration on the particle formation is still unknown. It is expected that the protein concentration significantly affects the nucleation and growth of magnetite. Since the protein provides iron-binding sites, it is expected that magnetite crystals would nucleate at those sites. In addition, in the previous work, the reaction medium after completion of the reaction was in the solution phase, and magnetic particles had a tendency to fall to the bottom of the medium and aggregate. The research presented in this thesis involves solid Pluronic gel phase reactions, which can be studied readily using small-angle x-ray scattering, which is not possible for the solution phase experiments. In addition, the concentration effect of both of the proteins on magnetite crystal formation was studied.

  9. GPU-based skin texture synthesis for digital human model.

    PubMed

    Shen, Zhe; Wang, Lili; Zhao, Yaqian; Zhao, Qinping; Zhao, Meng

    2014-01-01

    Skin synthesis is important for the actual appearance of digital human models. However, it is difficult to design a general algorithm to efficiently produce high quality results. This paper proposes a parallel texture synthesis method for large scale skin of digital human models. Two major procedures are included in this method, a parallel matching procedure and a multi-pass optimizing procedure. Compared with other methods, this algorithm is easy to use, requires only a small size of skin image as input, and generates an arbitrary size of skin texture with high quality. As demonstrated by experiments, the effectiveness of this skin texture synthesis method is confirmed. PMID:25226921

  10. Design and implementation of highly parallel pipelined VLSI systems

    NASA Astrophysics Data System (ADS)

    Delange, Alphonsus Anthonius Jozef

    A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems is presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower-level components, etc. A model for the representation and analysis of multiprocessor systems at several levels of abstraction and an implementation of a CAD system based on this model are described. A high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools such as a high-level simulator and a graphics design interface, all of which are integrated into the prototype system, are described. Procedures for the synthesis of semiregular processor arrays and for computing the switching of input/output signals, the memory management and control of the processor array, and the sequencing and segmentation of input/output data streams due to partitioning and clustering of the processor array during the subsequent synthesis steps are described. The architecture and control of a parallel system are designed and each component is mapped to a module or module generator in a symbolic layout library, compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains is given.

  11. Bottom-up synthesis of chemically precise graphene nanoribbons.

    PubMed

    Narita, Akimitsu; Feng, Xinliang; Müllen, Klaus

    2015-02-01

    In this article, we describe our chemical approach, developed over the course of a decade, towards the bottom-up synthesis of structurally well-defined graphene nanoribbons (GNRs). GNR synthesis can be achieved through two different methods, one being a solution-phase process based on conventional organic chemistry and the other invoking surface-assisted fabrication, employing modern physics methodologies. In both methods, rationally designed monomers are polymerized to form non-planar polyphenylene precursors, which are "graphitized" and "planarized" by solution-mediated or surface-assisted cyclodehydrogenation. Through these methods, a variety of GNRs have been synthesized with different widths, lengths, edge structures, and degrees of heteroatom doping, featuring varying (opto)electronic properties. The ability to chemically tailor GNRs with tuned properties in a well-defined manner will contribute to the elucidation of the fundamental physics of GNRs, as well as pave the way for the development of GNR-based nanoelectronics and optoelectronics. PMID:25414146

  12. CdS and Cd-Free Buffer Layers on Solution Phase Grown Cu2ZnSn(SxSe1-x)4: Band Alignments and Electronic Structure Determined with Femtosecond Ultraviolet Photoemission Spectroscopy

    SciTech Connect

    Haight, Richard; Barkhouse, Aaron; Wang, Wei; Yu, Luo; Shao, Xiaoyan; Mitzi, David; Hiroi, Homare; Sugimoto, Hiroki

    2013-12-02

    The heterojunctions formed between solution phase grown Cu2ZnSn(SxSe1-x)4 (CZTS,Se) and a number of important buffer materials including CdS, ZnS, ZnO, and In2S3, were studied using femtosecond ultraviolet photoemission spectroscopy (fs-UPS) and photovoltage spectroscopy. With this approach we extract the magnitude and direction of the CZTS,Se band bending, locate the Fermi level within the band gaps of absorber and buffer and measure the absorber/buffer band offsets under flatband conditions. We will also discuss two-color pump/probe experiments in which the band bending in the buffer layer can be independently determined. Finally, studies of the bare CZTS,Se surface will be discussed including our observation of mid-gap Fermi level pinning and its relation to Voc limitations and bulk defects.

  13. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C language integrated production system (CLIPS) is a forward chaining rule based language to provide training and delivery for expert systems. Conceptually, rule based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism also can be employed for use with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.

  14. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...

  15. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

    The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message-passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message-passing or one-sided communication, offer performance and scalability but they are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be specified by the programmer and hence managed. GA is related to the global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. However, the Global Array programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving
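
    The programming model described above moves data between a logically shared array and local storage with explicit calls. The toy, single-process sketch below illustrates only that get/put access pattern; it is not the Global Arrays API, and the class and method names are invented for illustration.

      # Toy, single-process illustration of the get/put access style described
      # above.  This is NOT the Global Arrays API; it only sketches the idea that
      # a logically shared array is read and written in explicit local patches.
      import numpy as np

      class ToyGlobalArray:
          def __init__(self, shape):
              self._data = np.zeros(shape)       # stands in for distributed storage

          def get(self, lo, hi):
              """Copy a patch [lo, hi) of the global array into local storage."""
              return self._data[lo[0]:hi[0], lo[1]:hi[1]].copy()

          def put(self, lo, hi, patch):
              """Write a locally computed patch back to the global array."""
              self._data[lo[0]:hi[0], lo[1]:hi[1]] = patch

      ga = ToyGlobalArray((8, 8))
      local = ga.get((0, 0), (4, 4))    # fetch my patch
      local += 1.0                      # compute locally
      ga.put((0, 0), (4, 4), local)     # publish the result
      print(ga.get((0, 0), (8, 8)).sum())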

  16. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target different machine architectures) is summarized. Included are the surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  17. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, Alliant computers on which it is installed.

  18. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  19. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. In as much as it seems clear that the application of such methods in nanotechnology will require powerful, highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  20. Scalable Parallel Algebraic Multigrid Solvers

    SciTech Connect

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  1. Fault-tolerant parallel processor

    SciTech Connect

    Harper, R.E.; Lala, J.H. )

    1991-06-01

    This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.

  2. Parallel Assembly of LIGA Components

    SciTech Connect

    Christenson, T.R.; Feddema, J.T.

    1999-03-04

    In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

  3. True Shear Parallel Plate Viscometer

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin; Kaukler, William

    2010-01-01

    This viscometer (which can also be used as a rheometer) is designed for use with liquids over a large temperature range. The device consists of horizontally disposed, similarly sized, parallel plates with a precisely known gap. The lower plate is driven laterally with a motor to apply shear to the liquid in the gap. The upper plate is freely suspended from a double-arm pendulum with a sufficiently long radius to reduce height variations during the swing to negligible levels. A sensitive load cell measures the shear force applied by the liquid to the upper plate. Viscosity is measured by taking the ratio of shear stress to shear rate.
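
    The measurement principle above reduces to a single ratio: viscosity is the shear stress (load-cell force over plate area) divided by the shear rate (drive velocity over gap). The sketch below computes it with illustrative numbers that are not taken from the report.

      # Viscosity from the parallel-plate geometry described above:
      #   shear stress  tau   = F / A    (load-cell force over wetted plate area)
      #   shear rate    gamma = v / h    (drive velocity over plate gap)
      #   viscosity     eta   = tau / gamma
      # Numbers below are illustrative, not from the report.
      F = 0.02      # N, shear force on the upper plate
      A = 1e-3      # m^2, wetted plate area
      v = 0.01      # m/s, lateral velocity of the lower plate
      h = 1e-3      # m, plate gap
      eta = (F / A) / (v / h)
      print(eta, "Pa·s")   # 2.0 Pa·s for these numbers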

  4. Exploring Parallel Concordancing in English and Chinese.

    ERIC Educational Resources Information Center

    Lixun, Wang

    2001-01-01

    Investigates the value of computer technology as a medium for the delivery of parallel texts in English and Chinese for language learning. An English-Chinese parallel corpus was created for use in parallel concordancing--a technique that has been developed to respond to the desire to study language in its natural contexts of use. (Author/VWL)

  5. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  6. Parallel Computation Of Forward Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1993-01-01

    Report presents parallel algorithms and special parallel architecture for computation of forward dynamics of robotics manipulators. Products of effort to find best method of parallel computation to achieve required computational efficiency. Significant speedup of computation anticipated as well as cost reduction.

  7. A parallel version of FORM 3

    NASA Astrophysics Data System (ADS)

    Fliegner, D.; Rétey, A.; Vermaseren, J. A. M.

    2001-08-01

    The parallel version of the symbolic manipulation program FORM for clusters of workstations and massive parallel systems is presented. We discuss various cluster architectures and the implementation of the parallel program using message passing (MPI). Performance results for real physics applications are shown.

  8. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  9. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  10. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  11. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  12. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  13. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  14. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  15. Xyce parallel electronic simulator design.

    SciTech Connect

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has been circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, on long-term projects a certain amount of staff turnover is to be expected, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  16. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024 node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods are used for confirming the arrival of pending messages: one counts the number of messages and the other uses network `bulldozing.` Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, Fujitsu AP1000, reveals various favorable properties of the algorithm.
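
    For readers unfamiliar with weight-based schemes, the sketch below illustrates classic weighted reference counting, one well-known way to track inter-node references with few messages: copying a reference splits its weight locally, and deleting one returns its weight to the owner. This illustrates the general idea only; it is not the paper's specific algorithm, and the class names are invented.

      # Minimal sketch of weighted reference counting: copying a reference splits
      # its weight locally (no message to the owner); deleting a reference sends
      # its weight back to the owner, which reclaims the object at weight zero.
      # Illustration of the general idea only, not the paper's scheme.

      class OwnedObject:
          def __init__(self):
              self.total_weight = 0            # sum of weights held by remote refs

          def add_weight(self, w):
              self.total_weight += w

          def return_weight(self, w):          # "delete reference" message
              self.total_weight -= w
              if self.total_weight == 0:
                  print("object can be reclaimed")

      class RemoteRef:
          def __init__(self, owner, weight=64):
              self.owner, self.weight = owner, weight
              owner.add_weight(weight)

          def copy(self):                      # no message to the owner needed
              self.weight //= 2
              other = RemoteRef.__new__(RemoteRef)
              other.owner, other.weight = self.owner, self.weight
              return other

          def delete(self):
              self.owner.return_weight(self.weight)

      obj = OwnedObject()
      r1 = RemoteRef(obj)       # weight 64 registered with the owner
      r2 = r1.copy()            # weights split 32 + 32, owner untouched
      r1.delete(); r2.delete()  # weights returned; owner sees zero and reclaims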

  17. A parallel execution model for Prolog

    SciTech Connect

    Fagin, B.

    1987-01-01

    In this thesis a new parallel execution model for Prolog is presented: the PPP model, or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

  18. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being built on a general-purpose platform for portable parallel programming called CARM, and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  19. Information hiding in parallel programs

    SciTech Connect

    Foster, I.

    1992-01-30

    A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

  20. Nanocapillary Adhesion between Parallel Plates.

    PubMed

    Cheng, Shengfeng; Robbins, Mark O

    2016-08-01

    Molecular dynamics simulations are used to study capillary adhesion from a nanometer scale liquid bridge between two parallel flat solid surfaces. The capillary force, Fcap, and the meniscus shape of the bridge are computed as the separation between the solid surfaces, h, is varied. Macroscopic theory predicts the meniscus shape and the contribution of liquid/vapor interfacial tension to Fcap quite accurately for separations as small as two or three molecular diameters (1-2 nm). However, the total capillary force differs in sign and magnitude from macroscopic theory for h ≲ 5 nm (8-10 diameters) because of molecular layering that is not included in macroscopic theory. For these small separations, the pressure tensor in the fluid becomes anisotropic. The components in the plane of the surface vary smoothly and are consistent with theory based on the macroscopic surface tension. Capillary adhesion is affected by only the perpendicular component, which has strong oscillations as the molecular layering changes. PMID:27413872
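
    As background for the comparison above, one common macroscopic estimate of the capillary force from a liquid bridge of radius a between parallel plates separated by h combines a Laplace-pressure term with a line-tension term. The sketch below evaluates that textbook-style estimate with illustrative numbers; it is not the simulation model of the paper, and the chosen parameter values are assumptions.

      # One common macroscopic estimate of the capillary force from a liquid
      # bridge of radius a between parallel plates at separation h, with contact
      # angle theta and surface tension gamma.  Numbers are illustrative only.
      import math

      def capillary_force(a, h, gamma, theta):
          laplace = math.pi * a**2 * gamma * (2 * math.cos(theta) / h - 1 / a)
          line    = 2 * math.pi * a * gamma * math.sin(theta)
          return laplace + line      # positive = attractive

      gamma = 0.072     # N/m, roughly water at room temperature
      a     = 10e-9     # m, assumed bridge radius
      theta = 0.0       # rad, perfectly wetting
      for h in (2e-9, 5e-9, 10e-9):
          print(h, capillary_force(a, h, gamma, theta), "N")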

  1. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities who are musicians playing traditional instruments or actors performing theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things. PMID:27534347

  2. Parallel discovery of Alzheimer's therapeutics.

    PubMed

    Lo, Andrew W; Ho, Carole; Cummings, Jayna; Kosik, Kenneth S

    2014-06-18

    As the prevalence of Alzheimer's disease (AD) grows, so do the costs it imposes on society. Scientific, clinical, and financial interests have focused current drug discovery efforts largely on the single biological pathway that leads to amyloid deposition. This effort has resulted in slow progress and disappointing outcomes. Here, we describe a "portfolio approach" in which multiple distinct drug development projects are undertaken simultaneously. Although a greater upfront investment is required, the probability of at least one success should be higher with "multiple shots on goal," increasing the efficiency of this undertaking. However, our portfolio simulations show that the risk-adjusted return on investment of parallel discovery is insufficient to attract private-sector funding. Nevertheless, the future cost savings of an effective AD therapy to Medicare and Medicaid far exceed this investment, suggesting that government funding is both essential and financially beneficial. PMID:24944190
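
    The "multiple shots on goal" argument above is the complement rule from elementary probability. The sketch below shows how the chance of at least one success grows with the number of independent projects; the per-project success probability is illustrative, not the paper's calibrated portfolio parameter.

      # "Multiple shots on goal" in one line: with n independent projects, each
      # succeeding with probability p, the chance of at least one success is
      # 1 - (1 - p)**n.  The value of p below is illustrative only.
      p = 0.05                      # assumed per-project probability of success
      for n in (1, 5, 10, 20):
          print(n, 1 - (1 - p) ** n)
      # 1 -> 0.05, 5 -> ~0.23, 10 -> ~0.40, 20 -> ~0.64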

  3. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W.W; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488
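
    The scheduling idea in the abstract is that each processor integrates its subnet for an interval equal to the minimum inter-processor connection delay and then exchanges spikes. The toy sketch below illustrates only that loop structure, using a serial stand-in for the MPI exchange; it is not NEURON's implementation, and all names and numbers are invented for illustration.

      # Toy sketch of the scheduling idea described above: advance every subnet by
      # the minimum inter-processor synaptic delay, then exchange the spikes
      # generated in that window.  Serial stand-in for MPI; not NEURON's code.
      import random

      MIN_DELAY = 1.0     # ms, minimum inter-processor connection delay (assumed)
      T_STOP    = 10.0    # ms

      def integrate_subnet(rank, t0, t1):
          """Pretend to integrate rank's cells over [t0, t1); return spike times."""
          return [(rank, t0 + random.random() * (t1 - t0))
                  for _ in range(random.randint(0, 2))]

      def exchange(all_spikes):
          """Stand-in for the all-to-all spike exchange between ranks."""
          return [s for per_rank in all_spikes for s in per_rank]

      t, total = 0.0, 0
      while t < T_STOP:
          window = [integrate_subnet(rank, t, t + MIN_DELAY) for rank in range(4)]
          total += len(exchange(window))   # spikes delivered before targets need them
          t += MIN_DELAY
      print("simulated", t, "ms;", total, "spikes exchanged")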

  4. A highly parallel signal processor

    NASA Astrophysics Data System (ADS)

    Bigham, Jackson D., Jr.

    There is an increasing need for signal processors functional across a broad range of problems, from radar systems to E-O and ESM applications. To meet this challenge, a signal processing system capable of efficiently meeting the processing requirements over a broad range of avionics sensor systems has been developed. The CDC Parallel Modular Signal Processor (PMSP) is a complete MIL/E-5400-qualified digital signal processing system capable of computation rates greater than 600 MOPS (million operations per second). The signal processing element of the PMSP is the Micro-AFP. It is an all-VLSI processor capable of executing multiple simultaneous operations. Up to five Micro-AFPs and 12 MB of main store memory (MSM), along with associated control and I/O functions, are contained in the PMSP's standard ATR enclosure.

  5. Self-testing in parallel

    NASA Astrophysics Data System (ADS)

    McKague, Matthew

    2016-04-01

    Self-testing allows us to determine, through classical interaction only, whether some players in a non-local game share particular quantum states. Most work on self-testing has concentrated on developing tests for small states like one pair of maximally entangled qubits, or on tests where there is a separate player for each qubit, as in a graph state. Here we consider the case of testing many maximally entangled pairs of qubits shared between two players. Previously such a test was shown where testing is sequential, i.e., one pair is tested at a time. Here we consider the parallel case where all pairs are tested simultaneously, giving considerably more power to dishonest players. We derive sufficient conditions for a self-test for many maximally entangled pairs of qubits shared between two players and also two constructions for self-tests where all pairs are tested simultaneously.

  6. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After doing an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications on a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  7. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  8. Hybrid Optimization Parallel Search PACKage

    Energy Science and Technology Software Center (ESTSC)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
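
    To make the GSS idea concrete, the sketch below shows one step of a generating set search: poll the current point along coordinate directions (these evaluations are the part a framework would farm out in parallel), accept any improvement, otherwise shrink the step. It is an illustrative sketch only and does not use the HOPSPACK API; all names are invented.

      # Compact sketch of a Generating Set Search of the kind HOPSPACK ships with:
      # poll along +/- coordinate directions (the parallelizable evaluations),
      # move to any improving point, otherwise contract the step.
      # Illustrative only; this is not the HOPSPACK API.
      import numpy as np

      def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=200):
          x, fx = np.asarray(x0, dtype=float), f(x0)
          n = len(x)
          directions = np.vstack([np.eye(n), -np.eye(n)])
          for _ in range(max_iter):
              trial_points = [x + step * d for d in directions]
              trial_values = [f(p) for p in trial_points]   # parallelizable poll
              best = int(np.argmin(trial_values))
              if trial_values[best] < fx:
                  x, fx = trial_points[best], trial_values[best]
              else:
                  step *= 0.5                               # contract on failure
                  if step < tol:
                      break
          return x, fx

      print(gss_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))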

  9. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  10. Extended cooperative control synthesis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1994-01-01

    This paper reports on research for extending the Cooperative Control Synthesis methodology to include a more accurate modeling of the pilot's controller dynamics. Cooperative Control Synthesis (CCS) is a methodology that addresses the problem of how to design control laws for piloted, high-order, multivariate systems and/or non-conventional dynamic configurations in the absence of flying qualities specifications. This is accomplished by emphasizing the parallel structure inherent in any pilot-controlled, augmented vehicle. The original CCS methodology is extended to include the Modified Optimal Control Model (MOCM), which is based upon the optimal control model of the human operator developed by Kleinman, Baron, and Levison in 1970. This model provides a modeling of the pilot's compensation dynamics that is more accurate than the simplified pilot dynamic representation currently in the CCS methodology. Inclusion of the MOCM into the CCS also enables the modeling of pilot-observation perception thresholds and pilot-observation attention allocation effects. This Extended Cooperative Control Synthesis (ECCS) allows for the direct calculation of pilot and system open- and closed-loop transfer functions in pole/zero form and is readily implemented in current software capable of analysis and design for dynamic systems. Example results based upon synthesizing an augmentation control law for an acceleration command system in a compensatory tracking task using the ECCS are compared with a similar synthesis performed by using the original CCS methodology. The ECCS is shown to provide augmentation control laws that yield more favorable, predicted closed-loop flying qualities and tracking performance than those synthesized using the original CCS methodology.

  11. Parallel Combinatorial Esterification: A Simple Experiment for Use in the Second-Semester Organic Chemistry Laboratory

    NASA Astrophysics Data System (ADS)

    Birney, David M.; Starnes, Stephen D.

    1999-11-01

    Combinatorial chemistry has revolutionized the way potential new drugs are discovered. This simple experiment utilizes the Fischer esterification, a common reaction in second-semester organic laboratories, to demonstrate the fundamentals of combinatorial methods. These include simultaneous synthesis of numerous compounds, a selective assay for a desired activity, and an algorithm for identifying the active structure. Using a parallel synthesis combinatorial method, each student in a lab section prepares a different ester. The targeted activity (the characteristic odor of wintergreen) is easily detected by smell. The student's enjoyment of the lab is enhanced by the preparation of several other characteristic odors as well.

  12. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated

  13. Aqueous – Phase Synthesis of PAA in PVDF Membrane Pores for Nanoparticle Synthesis and Dichlorobiphenyl Degradation

    PubMed Central

    Smuleac, V.; Bachas, L.; Bhattacharyya, D.

    2009-01-01

    This paper deals with bimetallic (Fe/Pd) nanoparticle synthesis inside the membrane pores and its application to the catalytic dechlorination of toxic organic compounds from aqueous streams. Membranes have been used as platforms for nanoparticle synthesis in order to reduce the agglomeration encountered in solution-phase synthesis, which leads to a dramatic loss of reactivity. The membrane support, polyvinylidene fluoride (PVDF), was modified by in situ polymerization of acrylic acid in aqueous phase. Subsequent steps included ion exchange with Fe2+, reduction to Fe0 with sodium borohydride and Pd deposition. Various techniques, such as STEM, EDX, FTIR and permeability measurements, were used for membrane characterization and showed that bimetallic (Fe/Pd) nanoparticles with an average size of 20-30 nm have been incorporated inside the PAA-coated membrane pores. The Fe/Pd-modified membranes showed a high reactivity toward a model compound, 2,2′-dichlorobiphenyl, and a strong dependence of degradation on Pd (hydrogenation catalyst) content. The use of convective flow substantially reduces the degradation time: 43% conversion of dichlorobiphenyl to biphenyl can be achieved in less than 40 s residence time. Another important aspect is the ability to regenerate and reuse the Fe/Pd bimetallic systems by washing with a solution of sodium borohydride, because the iron becomes inactivated (corroded) as the dechlorination reaction proceeds. PMID:20161475

  14. Polymer solution phase separation: Microgravity simulation

    NASA Technical Reports Server (NTRS)

    Cerny, Lawrence C.; Sutter, James K.

    1989-01-01

    In many multicomponent systems, a transition from a single phase of uniform composition to a multiphase state with separated regions of different composition can be induced by changes in temperature and shear. The density difference between the phases, together with thermal and/or shear gradients within the system, results in buoyancy-driven convection. These differences affect the kinetics of the phase separation if the system has a sufficiently low viscosity. This investigation presents preliminary developments of a theoretical model describing the effects of buoyancy-driven convection on phase-separation kinetics. Polymer solutions were employed as model systems because of the ease with which density differences can be systematically varied and because of the importance of phase separation in the processing and properties of polymeric materials. The results indicate that the kinetics of the phase separation can be followed viscometrically, using laser light scattering as a principal means of following the process quantitatively. Isopycnic polymer solutions were used to determine the viscosity and density difference limits for polymer phase separation.

  15. The ParaScope parallel programming environment

    NASA Technical Reports Server (NTRS)

    Cooper, Keith D.; Hall, Mary W.; Hood, Robert T.; Kennedy, Ken; Mckinley, Kathryn S.; Mellor-Crummey, John M.; Torczon, Linda; Warren, Scott K.

    1993-01-01

    The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.

  16. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.
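
    As context for the parallelization steps described above, the sketch below shows the serial inverse-iteration kernel for a shifted eigenvalue problem, the kind of solve MARS repeats and distributes across processors. It is an illustrative numpy sketch only, not MARS code, and the small test matrix is invented.

      # Serial sketch of the inverse-iteration kernel whose parallelization is
      # described above: repeatedly solve (A - sigma*I) x_{k+1} = x_k to converge
      # on the eigenvector nearest the shift sigma.  Illustrative only; MARS
      # distributes the matrix construction and these solves over processors.
      import numpy as np

      def inverse_iteration(A, sigma, iters=50):
          n = A.shape[0]
          x = np.random.default_rng(0).standard_normal(n)
          M = A - sigma * np.eye(n)
          for _ in range(iters):
              x = np.linalg.solve(M, x)
              x /= np.linalg.norm(x)
          return x @ A @ x, x            # Rayleigh-quotient eigenvalue estimate

      A = np.diag([1.0, 2.5, 4.0]) + 0.01 * np.ones((3, 3))   # toy test matrix
      print(inverse_iteration(A, sigma=2.4)[0])               # converges near 2.5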

  17. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2013-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for the simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating steps of the present MARS algorithm using parallel libraries and procedures. Preliminary results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  18. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for the simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  19. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
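
    A hedged illustration of the kind of directive such a tool inserts (shown in C/OpenMP; CAPO itself operates on FORTRAN programs): once the temporary is privatized and the accumulator is declared a reduction, the loop iterations are independent and can run in parallel.

        #include <omp.h>

        double scaled_sum(const double *x, int n, double c) {
            double sum = 0.0;
            double tmp;
            /* Directive of the kind an automatic parallelizer would insert. */
            #pragma omp parallel for private(tmp) reduction(+:sum)
            for (int i = 0; i < n; ++i) {
                tmp = c * x[i];   /* tmp must be private to each thread */
                sum += tmp;       /* sum is combined via a reduction    */
            }
            return sum;
        }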

  20. A generic fine-grained parallel C

    NASA Technical Reports Server (NTRS)

    Hamet, L.; Dorband, John E.

    1988-01-01

    With the present availability of parallel processors of vastly different architectures, there is a need for a common language interface to multiple types of machines. The parallel C compiler, currently under development, is intended to be such a language. This language is based on the belief that an algorithm designed around fine-grained parallelism can be mapped relatively easily to different parallel architectures, since a large percentage of the parallelism has been identified. The compiler generates a FORTH-like machine-independent intermediate code. A machine-dependent translator will reside on each machine to generate the appropriate executable code, taking advantage of the particular architectures. The goal of this project is to allow a user to run the same program on such machines as the Massively Parallel Processor, the CRAY, the Connection Machine, and the CYBER 205 as well as serial machines such as VAXes, Macintoshes and Sun workstations.

  1. Parallel automated adaptive procedures for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Flaherty, J. E.; Decougny, H. L.; Ozturan, C.; Bottasso, C. L.; Beall, M. W.

    1995-01-01

    Consideration is given to the techniques required to support adaptive analysis of automatically generated unstructured meshes on distributed memory MIMD parallel computers. The key areas of new development are focused on the support of effective parallel computations when the structure of the numerical discretization, the mesh, is evolving, and in fact constructed, during the computation. All the procedures presented operate in parallel on already distributed mesh information. Starting from a mesh definition in terms of a topological hierarchy, techniques to support the distribution, redistribution and communication of the mesh entities over the processors are given, and algorithms to dynamically balance processor workload based on the migration of mesh entities are given. A procedure to automatically generate meshes in parallel, starting from CAD geometric models, is given. Parallel procedures to enrich the mesh through local mesh modifications are also given. Finally, the combination of these techniques to produce a parallel automated finite element analysis procedure for rotorcraft aerodynamics calculations is discussed and demonstrated.

  2. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  3. Linearly exact parallel closures for slab geometry

    SciTech Connect

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-15

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients)

  4. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism to keep a large number of processors busy and to treat problems with the large memory requirements encountered in practice. We also conclude that a distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  5. Runtime volume visualization for parallel CFD

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  6. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  7. Algorithmically Specialized Parallel Architecture For Robotics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Computing system called Robot Mathematics Processor (RMP) contains large number of processor elements (PE's) connected in various parallel and serial combinations reconfigurable via software. Special-purpose architecture designed for solving diverse computational problems in robot control, simulation, trajectory generation, workspace analysis, and the like. System an MIMD-SIMD parallel architecture capable of exploiting parallelism in different forms and at several computational levels. Major advantage lies in design of cells, which provides flexibility and reconfigurability superior to previous SIMD processors.

  8. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.
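
    A minimal sketch of two-level OpenMP parallelism of the kind described above (plain nested parallel regions are used here for illustration; the NanosCompiler thread-group extensions are not shown):

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            omp_set_nested(1);                       /* enable nested regions */
            #pragma omp parallel num_threads(2)      /* outer level           */
            {
                int outer = omp_get_thread_num();
                #pragma omp parallel for num_threads(4)   /* inner level      */
                for (int i = 0; i < 8; ++i)
                    printf("outer %d, inner %d, i=%d\n",
                           outer, omp_get_thread_num(), i);
            }
            return 0;
        }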

  9. Six-Degree-Of-Freedom Parallel Minimanipulator

    NASA Technical Reports Server (NTRS)

    Tahmasebi, Farhad; Tsai, Lung-Wen

    1994-01-01

    Six-degree-of-freedom parallel minimanipulator stiffer and simpler than earlier six-degree-of-freedom manipulators. Includes only three inextensible limbs with universal joints at ends. Limbs have equal lengths and act in parallel as they share load on manipulated platform. Designed to provide high resolution and high stiffness for fine control of position and force in hybrid serial/parallel-manipulator system.

  10. Running Geant on T. Node parallel computer

    SciTech Connect

    Jejcic, A.; Maillard, J.; Silva, J.; Mignot, B.

    1990-08-01

    An Inmos transputer-based computer has been utilized to overcome the difficulties due to the limitations on the processing abilities of event parallelism and multiprocessor farms (i.e., the so-called bus crisis) and the concern regarding the growing sizes of databases typical in High Energy Physics. This study was done on the T.Node parallel computer manufactured by TELMAT. Detailed figures are reported concerning the event parallelization. (AIP)

  11. Racing in parallel: Quantum versus Classical

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Troyer, Matthias

    In a fair comparison of the performance of a quantum algorithm to a classical one it is important to treat them on equal footing, both regarding resource usage and parallelism. We show how one may otherwise mistakenly attribute speedup due to parallelism as quantum speedup. We apply such an analysis both to analog quantum devices (quantum annealers) and gate model algorithms and give several examples where a careful analysis of parallelism makes a significant difference in the comparison between classical and quantum algorithms.

  12. Total Synthesis and Stereochemical Revision of the Anti-Tuberculosis Peptaibol Trichoderin A.

    PubMed

    Kavianinia, Iman; Kunalingam, Lavanya; Harris, Paul W R; Cook, Gregory M; Brimble, Margaret A

    2016-08-01

    The first total synthesis of the postulated structure of the aminolipopeptide trichoderin A and its epimer are reported. A late-stage solution phase C-terminal coupling was employed to introduce the C-terminal aminoalcohol moiety. This methodology provides a foundation to prepare analogues of trichoderin A to establish a structure-activity relationship. NMR spectroscopic analysis established that the C-6 position of the 2-amino-6-hydroxy-4-methyl-8-oxodecanoic acid (AHMOD) residue in trichoderin A possesses an (R)-configuration as opposed to the originally proposed (S)-configuration. PMID:27467118

  13. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  14. Parallel execution and scriptability in micromagnetic simulations

    NASA Astrophysics Data System (ADS)

    Fischbacher, Thomas; Franchin, Matteo; Bordignon, Giuliano; Knittel, Andreas; Fangohr, Hans

    2009-04-01

    We demonstrate the feasibility of an "encapsulated parallelism" approach toward micromagnetic simulations that combines offering a high degree of flexibility to the user with the efficient utilization of parallel computing resources. While parallelization is obviously desirable to address the high numerical effort required for realistic micromagnetic simulations through utilizing now widely available multiprocessor systems (including desktop multicore CPUs and computing clusters), conventional approaches toward parallelization impose strong restrictions on the structure of programs: numerical operations have to be executed across all processors in a synchronized fashion. This means that from the user's perspective, either the structure of the entire simulation is rigidly defined from the beginning and cannot be adjusted easily, or making modifications to the computation sequence requires advanced knowledge in parallel programming. We explain how this dilemma is resolved in the NMAG simulation package in such a way that the user can utilize without any additional effort on his side both the computational power of multiple CPUs and the flexibility to tailor execution sequences for specific problems: simulation scripts written for single-processor machines can just as well be executed on parallel machines and behave in precisely the same way, up to increased speed. We provide a simple instructive magnetic resonance simulation example that demonstrates utilizing both custom execution sequences and parallelism at the same time. Furthermore, we show that this strategy of encapsulating parallelism even allows the user to benefit from speed gains through parallel execution in simulations controlled by interactive commands given at a command-line interface.

  15. MMS Observations of Parallel Electric Fields

    NASA Astrophysics Data System (ADS)

    Ergun, R.; Goodrich, K.; Wilder, F. D.; Sturner, A. P.; Holmes, J.; Stawarz, J. E.; Malaspina, D.; Usanova, M.; Torbert, R. B.; Lindqvist, P. A.; Khotyaintsev, Y. V.; Burch, J. L.; Strangeway, R. J.; Russell, C. T.; Pollock, C. J.; Giles, B. L.; Hesse, M.; Goldman, M. V.; Drake, J. F.; Phan, T.; Nakamura, R.

    2015-12-01

    Parallel electric fields are a necessary condition for magnetic reconnection with non-zero guide field and are ultimately responsible for topological reconfiguration of a magnetic field. Parallel electric fields also play a strong role in charged particle acceleration and turbulence. The Magnetospheric Multiscale (MMS) mission targets these three universal plasma processes. The MMS satellites have an accurate three-dimensional electric field measurement, which can identify parallel electric fields as low as 1 mV/m at four adjacent locations. We present preliminary observations of parallel electric fields from MMS and provide an early interpretation of their impact on magnetic reconnection, in particular, where the topological change occurs. We also examine the role of parallel electric fields in particle acceleration. Direct particle acceleration by parallel electric fields is well established in the auroral region. Observations of double layers by the Van Allen Probes suggest that acceleration by parallel electric fields may be significant in energizing some populations of the radiation belts. THEMIS observations also indicate that some of the largest parallel electric fields are found in regions of strong field-aligned currents associated with turbulence, suggesting a highly non-linear dissipation mechanism. We discuss how the MMS observations extend our understanding of the role of parallel electric fields in some of the most critical processes in the magnetosphere.

  16. A Parallel Multigrid Method for Neutronics Applications

    SciTech Connect

    Alcouffe, Raymond E.

    2001-01-01

    The multigrid method has been shown to be the most effective general method for solving the multi-dimensional diffusion equation encountered in neutronics. This being the method of choice, we develop a strategy for implementing the multigrid method on computers of massively parallel architecture. This leads us to strategies for parallelizing the relaxation, contraction (interpolation), and prolongation operators involved in the method. We then compare the efficiency of our parallel multigrid with other parallel methods for solving the diffusion equation on selected problems encountered in reactor physics.
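
    As a hedged sketch of why the relaxation operator parallelizes well, the weighted-Jacobi sweep below (for a 1D model diffusion problem, shown in C/OpenMP; not code from this report) updates every point from the previous iterate only, so all points can be relaxed concurrently:

        /* One weighted-Jacobi sweep for -u'' = f on a uniform grid of spacing h. */
        void jacobi_sweep(const double *u_old, double *u_new, const double *f,
                          int n, double h, double omega)
        {
            #pragma omp parallel for
            for (int i = 1; i < n - 1; ++i) {
                double u_star = 0.5 * (u_old[i - 1] + u_old[i + 1] + h * h * f[i]);
                u_new[i] = (1.0 - omega) * u_old[i] + omega * u_star;
            }
        }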

  17. Conformal pure radiation with parallel rays

    NASA Astrophysics Data System (ADS)

    Leistner, Thomas; Nurowski, Paweł

    2012-03-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves.
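
    The two defining conditions can be transcribed compactly (a hedged LaTeX rendering of the prose definition above, with k a local section spanning the parallel null line bundle K):

        % Pure radiation metric with parallel rays (transcription of the definition):
        \begin{align*}
          g(k,k) &= 0, \qquad \nabla_X k \in \Gamma(K) \ \text{for all vector fields } X,\\
          \operatorname{Ric}(X,Y) &= 0 \quad \text{whenever } g(k,X) = g(k,Y) = 0 .
        \end{align*}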

  18. Parallel auto-correlative statistics with VTK.

    SciTech Connect

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.

  19. Remarks on parallel computations in MATLAB environment

    NASA Astrophysics Data System (ADS)

    Opalska, Katarzyna; Opalski, Leszek

    2013-10-01

    The paper attempts to summarize the author's investigation of the parallel computation capability of the MATLAB environment in solving large systems of ordinary differential equations (ODEs). Two MATLAB versions and two parallelization techniques were tested: one using multiple processor cores, the other CUDA-compatible Graphics Processing Units (GPUs). A set of parameterized test problems was specially designed to expose different capabilities/limitations of the different variants of the parallel computation environment tested. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations over the multiple processor cores for large-dimensionality problems (with the speed-up factor strongly dependent on the problem structure).

  20. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or fewer, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed-memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256-processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
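
    A hedged sketch of a 2-D scattered (cyclic) decomposition of the kind referred to above: entry (i, j) of the matrix is owned by the process at position (i mod Pr, j mod Pc) of a Pr x Pc process grid, so both rows and columns stay evenly spread as elimination shrinks the active submatrix.

        typedef struct { int prow; int pcol; } owner_t;

        /* Process-grid coordinates owning matrix entry (i, j). */
        owner_t owner_of(int i, int j, int Pr, int Pc) {
            owner_t o;
            o.prow = i % Pr;   /* process-grid row    */
            o.pcol = j % Pc;   /* process-grid column */
            return o;
        }

        /* Linear rank of the owner, numbering the grid in row-major order. */
        int owner_rank(int i, int j, int Pr, int Pc) {
            owner_t o = owner_of(i, j, Pr, Pc);
            return o.prow * Pc + o.pcol;
        }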

  1. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach for applications in disciplines other than the aerospace industry is assessed.

  2. Applications of Parallel Processing to Astrodynamics

    NASA Astrophysics Data System (ADS)

    Coffey, S.; Healy, L.; Neal, H.

    1996-03-01

    Parallel processing is being used to improve the catalog of earth orbiting satellites and for problems associated with the catalog. Initial efforts centered around using SIMD parallel processors to perform debris conjunction analysis and satellite dynamics studies. More recently, the availability of cheap supercomputing processors and parallel processing software such as PVM has enabled the reutilization of existing astrodynamics software in distributed parallel processing environments. Computations once taking many days with traditional mainframes are now being performed in only a few hours. Efforts underway for the US Naval Space Command include conjunction prediction, uncorrelated target processing and a new space object catalog based on orbit determination and prediction with special perturbations methods.

  3. Photoelectrochemical synthesis of DNA microarrays

    PubMed Central

    Chow, Brian Y.; Emig, Christopher J.; Jacobson, Joseph M.

    2009-01-01

    Optical addressing of semiconductor electrodes represents a powerful technology that enables the independent and parallel control of a very large number of electrical phenomena at the solid-electrolyte interface. To date, it has been used in a wide range of applications including electrophoretic manipulation, biomolecule sensing, and stimulating networks of neurons. Here, we have adapted this approach for the parallel addressing of redox reactions, and report the construction of a DNA microarray synthesis platform based on semiconductor photoelectrochemistry (PEC). An amorphous silicon photoconductor is activated by an optical projection system to create virtual electrodes capable of electrochemically generating protons; these PEC-generated protons then cleave the acid-labile dimethoxytrityl protecting groups of DNA phosphoramidite synthesis reagents with the requisite spatial selectivity to generate DNA microarrays. Furthermore, a thin-film porous glass dramatically increases the amount of DNA synthesized per chip by over an order of magnitude versus uncoated glass. This platform demonstrates that PEC can be used toward combinatorial bio-polymer and small molecule synthesis. PMID:19706433

  4. Proteomic profiling combining solution-phase isoelectric fractionation with two-dimensional gel electrophoresis using narrow-pH-range immobilized pH gradient gels with slightly overlapping pH ranges.

    PubMed

    Lee, KiBeom; Pi, KyungBae

    2010-01-01

    This paper describes a simple new approach toward improving resolution of two-dimensional (2-D) protein gels used to explore the mammalian proteome. The method employs sample prefractionation using solution-phase isoelectric focusing (IEF) to split the mammalian proteome into well-resolved pools. As crude samples are thus prefractionated by pI range, very-narrow-pH-range 2-D gels can be subsequently employed for protein separation. Using custom pH partition membranes and commercially available immobilized pH gradient (IPG) strips, we maximized the total separation distance and throughput of seven samples obtained by prefractionation. Both protein loading capacity and separation quality were higher than the values obtained by separation of fractionated samples on narrow-pH-range 2-D gels; the total effective IEF separation distance was ~82 cm over the pH range pH 3-10. This improved method for analyzing prefractionated samples on narrow-pH-range 2-D gels allows high protein resolution without the use of large gels, resulting in decreased costs and run times. PMID:19813004

  5. Parallel pivoting combined with parallel reduction and fill-in control

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1989-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generation of fill-ins and check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel pivoting technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
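
    The Markowitz criterion mentioned above can be sketched as follows (a hedged illustration, not the paper's code): a candidate pivot in row i and column j is scored by (r_i - 1)(c_j - 1), where r_i and c_j are the nonzero counts of its row and column, and candidates with the smallest score are preferred because the score bounds the fill-in the pivot can create.

        /* Markowitz cost of a candidate pivot given its row/column nonzero counts. */
        long markowitz_cost(int row_nnz, int col_nnz) {
            return (long)(row_nnz - 1) * (long)(col_nnz - 1);
        }

        /* Among compatible candidates (given as parallel arrays of row and column
         * nonzero counts), return the index of the one with the smallest cost. */
        int best_candidate(const int *row_nnz, const int *col_nnz, int ncand) {
            int best = 0;
            for (int k = 1; k < ncand; ++k)
                if (markowitz_cost(row_nnz[k], col_nnz[k]) <
                    markowitz_cost(row_nnz[best], col_nnz[best]))
                    best = k;
            return best;
        }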

  6. Super-parallel MR microscope.

    PubMed

    Matsuda, Yoshimasa; Utsuzawa, Shin; Kurimoto, Takeaki; Haishi, Tomoyuki; Yamazaki, Yukako; Kose, Katsumi; Anno, Izumi; Marutani, Mitsuhiro

    2003-07-01

    A super-parallel MR microscope in which multiple (up to 100) samples can be imaged simultaneously at high spatial resolution is described. The system consists of a multichannel transmitter-receiver system and a gradient probe array housed in a large-bore magnet. An eight-channel MR microscope was constructed for verification of the system concept, and a four-channel MR microscope was constructed for a practical application. Eight chemically fixed mouse fetuses were simultaneously imaged at 200 μm³ voxel resolution in a 1.5 T superconducting magnet of a whole-body MRI, and four chemically fixed human embryos were simultaneously imaged at 120 μm³ voxel resolution in a 2.35 T superconducting magnet. Although the spatial resolutions achieved were not strictly those of MR microscopy, the system design proposed here can be used to attain much higher spatial resolution imaging of multiple samples, because higher magnetic field gradients can be generated at multiple positions in a homogeneous magnetic field. PMID:12815693

  7. Parallel processing in immune networks

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Bartolucci, Silvia; Galluzzi, Andrea; Guerra, Francesco; Moauro, Francesco

    2013-04-01

    In this work, we adopt a statistical-mechanics approach to investigate basic, systemic features exhibited by adaptive immune systems. The lymphocyte network made by B cells and T cells is modeled by a bipartite spin glass, where, following biological prescriptions, links connecting B cells and T cells are sparse. Interestingly, the dilution performed on links is shown to make the system able to orchestrate parallel strategies to fight several pathogens at the same time; this multitasking capability constitutes a remarkable, key property of immune systems as multiple antigens are always present within the host. We also define the stochastic process ruling the temporal evolution of lymphocyte activity and show its relaxation toward an equilibrium measure allowing statistical-mechanics investigations. Analytical results are compared with Monte Carlo simulations and signal-to-noise outcomes showing overall excellent agreement. Finally, within our model, a rationale for the experimentally well-evidenced correlation between lymphocytosis and autoimmunity is achieved; this sheds further light on the systemic features exhibited by immune networks.

  8. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  9. Vectoring of parallel synthetic jets

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  10. Parallels between wind and crowd loading of bridges.

    PubMed

    McRobie, Allan; Morgenthal, Guido; Abrams, Danny; Prendergast, John

    2013-06-28

    Parallels between the dynamic response of flexible bridges under the action of wind and under the forces induced by crowds allow each field to inform the other. Wind-induced behaviour has been traditionally classified into categories such as flutter, galloping, vortex-induced vibration and buffeting. However, computational advances such as the vortex particle method have led to a more general picture where effects may occur simultaneously and interact, such that the simple semantic demarcations break down. Similarly, the modelling of individual pedestrians has progressed the understanding of human-structure interaction, particularly for large-amplitude lateral oscillations under crowd loading. In this paper, guided by the interaction of flutter and vortex-induced vibration in wind engineering, a framework is presented, which allows various human-structure interaction effects to coexist and interact, thereby providing a possible synthesis of previously disparate experimental and theoretical results. PMID:23690640

  11. Parallel genotypic adaptation: when evolution repeats itself

    PubMed Central

    Wood, Troy E.; Burke, John M.; Rieseberg, Loren H.

    2008-01-01

    Until recently, parallel genotypic adaptation was considered unlikely because phenotypic differences were thought to be controlled by many genes. There is increasing evidence, however, that phenotypic variation sometimes has a simple genetic basis and that parallel adaptation at the genotypic level may be more frequent than previously believed. Here, we review evidence for parallel genotypic adaptation derived from a survey of the experimental evolution, phylogenetic, and quantitative genetic literature. The most convincing evidence of parallel genotypic adaptation comes from artificial selection experiments involving microbial populations. In some experiments, up to half of the nucleotide substitutions found in independent lineages under uniform selection are the same. Phylogenetic studies provide a means for studying parallel genotypic adaptation in non-experimental systems, but conclusive evidence may be difficult to obtain because homoplasy can arise for other reasons. Nonetheless, phylogenetic approaches have provided evidence of parallel genotypic adaptation across all taxonomic levels, not just microbes. Quantitative genetic approaches also suggest parallel genotypic evolution across both closely and distantly related taxa, but it is important to note that this approach cannot distinguish between parallel changes at homologous loci versus convergent changes at closely linked non-homologous loci. The finding that parallel genotypic adaptation appears to be frequent and occurs at all taxonomic levels has important implications for phylogenetic and evolutionary studies. With respect to phylogenetic analyses, parallel genotypic changes, if common, may result in faulty estimates of phylogenetic relationships. From an evolutionary perspective, the occurrence of parallel genotypic adaptation provides increasing support for determinism in evolution and may provide a partial explanation for how species with low levels of gene flow are held together. PMID:15881688

  12. Solid Phase Synthesis of Helically Folded Aromatic Oligoamides.

    PubMed

    Dawson, S J; Hu, X; Claerhout, S; Huc, I

    2016-01-01

    Aromatic amide foldamers constitute a growing class of oligomers that adopt remarkably stable folded conformations. The folded structures possess largely predictable shapes and open the way toward the design of synthetic mimics of proteins. Important examples of aromatic amide foldamers include oligomers of 7- or 8-amino-2-quinoline carboxylic acid that have been shown to exist predominantly as well-defined helices, including when they are combined with α-amino acids to which they may impose their folding behavior. To rapidly iterate their synthesis, solid phase synthesis (SPS) protocols have been developed and optimized for overcoming synthetic difficulties inherent to these backbones such as low nucleophilicity of amine groups on electron poor aromatic rings and a strong propensity of even short sequences to fold on the solid phase during synthesis. For example, acid chloride activation and the use of microwaves are required to bring coupling at aromatic amines to completion. Here, we report detailed SPS protocols for the rapid production of: (1) oligomers of 8-amino-2-quinolinecarboxylic acid; (2) oligomers containing 7-amino-8-fluoro-2-quinolinecarboxylic acid; and (3) heteromeric oligomers of 8-amino-2-quinolinecarboxylic acid and α-amino acids. SPS brings the advantage to quickly produce sequences having varied main chain or side chain components without having to purify multiple intermediates as in solution phase synthesis. With these protocols, an octamer could easily be synthesized and purified within one to two weeks from Fmoc protected amino acid monomer precursors. PMID:27586338

  13. Infrared colloidal lead chalcogenide nanocrystals: synthesis, properties, and photovoltaic applications.

    PubMed

    Fu, Huiying; Tsang, Sai-Wing

    2012-04-01

    Simple solution phase, catalyst-free synthetic approaches that offer monodispersed, well passivated, and non-aggregated colloidal semiconductor nanocrystals have presented many research opportunities not only for fundamental science but also for technological applications. The ability to tune the electrical and optical properties of semiconductor nanocrystals by manipulating the size and shape of the crystals during the colloidal synthesis provides potential benefits to a variety of applications including photovoltaic devices, light-emitting diodes, field effect transistors, biological imaging/labeling, and more. Recent advances in the synthesis and characterization of colloidal lead chalcogenide nanocrystals and the achievements in colloidal PbS or PbSe nanocrystals solar cells have demonstrated the promising application of infrared-emitting colloidal lead chalcogenide nanocrystals in photovoltaic devices. Here, we review recent progress in the synthesis and optical properties of colloidal lead chalcogenide nanocrystals. We focus in particular upon the size- and shape-controlled synthesis of PbS, PbSe, and PbTe nanocrystals by using different precursors and various stabilizing surfactants for the growth of the colloidal nanocrystals. We also summarize recent advancements in the field of colloidal nanocrystals solar cells based on colloidal PbS and PbSe nanocrystals. PMID:22382898

  14. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  15. Parallel-End-Point Drafting Compass

    NASA Technical Reports Server (NTRS)

    Cronander, J.

    1986-01-01

    Parallelogram linkage ensures greater accuracy in drafting and scribing. Two members of arm of compass remain parallel for all angles pair makes with hub axis. They maintain opposing end members in parallelism. Parallelogram-linkage principle used on dividers as well as on compasses.

  16. EPIC: E-field Parallel Imaging Correlator

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2015-11-01

    E-field Parallel Imaging Correlator (EPIC), a highly parallelized Object Oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.

  17. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  18. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  19. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectric tracking systems, Bayer images are traditionally decompressed on the CPU. However, this becomes too slow when the images grow large, for example to 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly accelerated by rewriting the two-dimensional (2D) serial IDWT as a set of parallel one-dimensional (1D) IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
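
    As a hedged sketch of recasting the 2D IDWT into independent 1D transforms (shown here in C/OpenMP with a single-level Haar reconstruction for simplicity; the actual decoder uses a different wavelet, multiple levels, and CUDA):

        #include <omp.h>
        #include <stdlib.h>

        /* One level of a row-wise inverse Haar transform: each row holds
         * averages in its first half and details in its second half. Rows are
         * independent, so the loop over rows parallelizes directly. */
        void inverse_haar_rows(float *img, int width, int height)
        {
            #pragma omp parallel for schedule(static)
            for (int r = 0; r < height; ++r) {
                float *row = img + (size_t)r * width;
                float *tmp = malloc(sizeof(float) * (size_t)width);
                int half = width / 2;
                for (int i = 0; i < half; ++i) {
                    float avg  = row[i];          /* approximation coefficient */
                    float diff = row[half + i];   /* detail coefficient        */
                    tmp[2 * i]     = avg + diff;  /* reconstruct even sample   */
                    tmp[2 * i + 1] = avg - diff;  /* reconstruct odd sample    */
                }
                for (int i = 0; i < width; ++i)
                    row[i] = tmp[i];
                free(tmp);
            }
        }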

  20. Design and implementation of the parallel processing system of multi-channel polarization images

    NASA Astrophysics Data System (ADS)

    Li, Zhi-yong; Huang, Qin-chao

    2013-08-01

    Compared with traditional optical intensity image processing, polarization image processing has two main problems: the amount of data is larger and the processing tasks are more complex. To address these problems, a parallel processing system for multi-channel polarization images is designed using a multi-DSP technique. It contains a communication control unit (CCU) and a data processing array (DPA). The CCU controls communications inside and outside the system, and its logic is implemented in an FPGA. The DPA consists of four Digital Signal Processor (DSP) chips interconnected in a loosely coupled arrangement; it carries out processing tasks, including image registration and image synthesis, with parallel processing methods. The parallel processing model for polarization images is designed at multiple levels, including the system-task, algorithm, and operation levels, and the program is written in assembly language. In the experiment, the polarization image resolution is 782x582 pixels with a pixel depth of 12 bits. After receiving three channels of polarization images simultaneously, the system executes the parallel tasks needed to acquire the target's polarization characteristics. Experimental results show that the system offers good real-time performance and reliability: image registration takes 293.343 ms with a registration accuracy of 0.5 pixel, and image synthesis takes 3.199 ms.

  1. LALPC: Exploiting Parallelism from FPGAs Using C Language

    NASA Astrophysics Data System (ADS)

    Porto, Lucas F.; Fernandes, Marcio M.; Bonato, Vanderlei; Menotti, Ricardo

    2015-10-01

    This paper presents LALPC, a prototype high-level synthesis tool specialized in hardware generation for loop-intensive code segments. As demonstrated in a previous work, the underlying hardware components targeted by LALPC are highly specialized for loop pipeline execution, resulting in efficient implementations both in terms of performance and resource usage (silicon area). LALPC extends the functionality of a previous tool by using a subset of the C language as input code to describe computations, improving the usability and potential acceptance of the technique among developers. LALPC also enhances parallelism exploitation by applying loop unrolling and providing support for automatic generation and scheduling of parallel memory accesses. The combination of using the C language to automate the process of hardware design with an efficient underlying scheme to support loop pipelining constitutes the main goal and contribution of the work described in this paper. Experimental results have shown the effectiveness of those techniques in enhancing performance, and also exemplify how some of the LALPC compiler features may support performance-resources trade-off analysis tasks.

  2. National Combustion Code: Parallel Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.

    2000-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.

  3. Parallel language support on shared memory multiprocessors

    SciTech Connect

    Sah, A.

    1991-01-01

    The study of general purpose parallel computing requires efficient and inexpensive platforms for parallel program execution. This helps in ascertaining tradeoff choices between hardware complexity and software solutions for massively parallel systems design. In this paper, the authors present an implementation of an efficient parallel execution model on shared memory multiprocessors based on a Threaded Abstract Machine. The authors discuss a k-way generalized locking strategy suitable for this model. The authors study the performance gains obtained by a queuing strategy which uses multiple queues with reduced access contention. The authors also present performance models in shared memory machines, related to lock contention and serialization in shared memory allocation. A bin-based memory management technique which reduces the serialization is presented. These issues are critical for obtaining an efficient parallel execution environment.
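
    A hedged sketch of the multiple-queue idea (not the paper's implementation): striping a shared work queue across several independently locked queues, with each thread hashing to a stripe, reduces contention on any single lock.

        #include <pthread.h>
        #include <stdlib.h>

        #define NQUEUES 8

        typedef struct node { void *work; struct node *next; } node_t;

        typedef struct { pthread_mutex_t lock; node_t *head; } stripe_t;

        static stripe_t queues[NQUEUES];

        void queues_init(void) {
            for (int i = 0; i < NQUEUES; ++i) {
                pthread_mutex_init(&queues[i].lock, NULL);
                queues[i].head = NULL;
            }
        }

        /* Push work onto the stripe selected by the caller's thread id. */
        void stripe_push(unsigned long tid, void *work) {
            stripe_t *q = &queues[tid % NQUEUES];
            node_t *n = malloc(sizeof *n);
            n->work = work;
            pthread_mutex_lock(&q->lock);
            n->next = q->head;
            q->head = n;
            pthread_mutex_unlock(&q->lock);
        }

        /* Pop work from the caller's stripe; returns NULL if it is empty. */
        void *stripe_pop(unsigned long tid) {
            stripe_t *q = &queues[tid % NQUEUES];
            void *work = NULL;
            pthread_mutex_lock(&q->lock);
            if (q->head) {
                node_t *n = q->head;
                q->head = n->next;
                work = n->work;
                free(n);
            }
            pthread_mutex_unlock(&q->lock);
            return work;
        }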

  4. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
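
    For reference, one standard form of rank-one metric correction is the symmetric rank-one update (shown here in LaTeX for illustration; the paper's exact correction may differ):

        % Symmetric rank-one update of the metric H_k, with
        % s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
        \[
          H_{k+1} \;=\; H_k \;+\;
          \frac{(s_k - H_k y_k)(s_k - H_k y_k)^{\mathsf T}}
               {(s_k - H_k y_k)^{\mathsf T} y_k} .
        \]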

  5. Differences Between Distributed and Parallel Systems

    SciTech Connect

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  6. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.

  7. Configuration space representation in parallel coordinates

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Inselberg, Alfred

    1989-01-01

    By means of a system of parallel coordinates, a nonprojective mapping from R^N to R^2 is obtained for any positive integer N. In this way multivariate data and relations can be represented in the Euclidean plane (embedded in the projective plane). Basically, R^2 with Cartesian coordinates is augmented by N parallel axes, one for each variable. The N joint variables of a robotic device can be represented graphically by using parallel coordinates. It is pointed out that some properties of the relation are better perceived visually from the parallel coordinate representation, and that new algorithms and data structures can be obtained from this representation. The main features of parallel coordinates are described, and an example is presented of their use for configuration space representation of a mechanical arm (where Cartesian coordinates cannot be used).
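
    The mapping itself is easy to state in code. The sketch below turns a point of R^N into the vertices of its polyline across N parallel axes; the axis spacing and normalization are illustrative choices, not taken from the paper.

        # Minimal sketch of the parallel-coordinates mapping: a point in R^N becomes a
        # polyline whose i-th vertex lies on the i-th vertical axis at the normalized
        # value of the i-th coordinate.
        def to_polyline(point, lows, highs):
            # Return polyline vertices (axis index, height in [0, 1]) for one point.
            vertices = []
            for i, (v, lo, hi) in enumerate(zip(point, lows, highs)):
                t = (v - lo) / (hi - lo) if hi > lo else 0.5
                vertices.append((i, t))               # x = axis index, y = normalized value
            return vertices

        # a 4-dimensional point (e.g. four robot joint variables) and its axis ranges
        print(to_polyline([0.2, 1.5, -0.3, 0.9], lows=[-1, 0, -1, 0], highs=[1, 2, 1, 1]))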

  8. A paradigm for parallel unstructured grid generation

    SciTech Connect

    Gaither, A.; Marcum, D.; Reese, D.

    1996-12-31

    In this paper, a sequential 2D unstructured grid generator based on iterative point insertion and local reconnection is coupled with a Delaunay tessellation domain decomposition scheme to create a scalable parallel unstructured grid generator. The Message Passing Interface (MPI) is used for distributed communication in the parallel grid generator. This work attempts to provide a generic framework to enable the parallelization of fast sequential unstructured grid generators in order to compute grand-challenge scale grids for Computational Field Simulation (CFS). Motivation for moving from sequential to scalable parallel grid generation is presented. Delaunay tessellation and iterative point insertion and local reconnection (advancing front method only) unstructured grid generation techniques are discussed with emphasis on how these techniques can be utilized for parallel unstructured grid generation. Domain decomposition techniques are discussed for both Delaunay and advancing front unstructured grid generation with emphasis placed on the differences needed for both grid quality and algorithmic efficiency.

  9. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E.; Faraj, Ahmad A.

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network optimized for point to point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
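
    As a rough sketch of the idea (not the patented construction), a serpentine row-by-row ordering is one simple Hamiltonian path through a rectangular mesh plane, and the broadcast amounts to forwarding the root's message hop by hop along that ordering.

        # Illustrative sketch only: a serpentine (boustrophedon) ordering is one simple
        # Hamiltonian path through an X-by-Y mesh plane; the logical root's message is
        # forwarded hop by hop along that path.  The patented construction may differ.
        def serpentine_path(width, height):
            # Return mesh coordinates (x, y) visiting every node exactly once.
            path = []
            for y in range(height):
                xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
                path.extend((x, y) for x in xs)
            return path

        def broadcast(message, width, height):
            # Simulate forwarding the message node to node along the path.
            received = {}
            for node in serpentine_path(width, height):
                received[node] = message              # each node stores, then forwards
            return received

        print(len(broadcast("hello", 4, 3)))          # 12 nodes reached, one hop each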

  10. Randomized parallel speedups for list ranking

    SciTech Connect

    Vishkin, U.

    1987-06-01

    The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((n log n)/rho + log n) time parallel algorithm using rho processors. The authors present a randomized parallel algorithm for the problem. The algorithm is designed for an exclusive-read exclusive-write parallel random access machine (EREW PRAM). It runs almost surely in time O(n/rho + log n log* n) using rho processors. Using a recently published parallel prefix sums algorithm, the list-ranking algorithm can be adapted to run on a concurrent-read concurrent-write parallel random access machine (CRCW PRAM) almost surely in time O(n/rho + log n) using rho processors.
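
    The randomized algorithm itself is not reproduced here; the sketch below shows the classic deterministic pointer-jumping scheme behind the O((n log n)/rho + log n) bound, simulated sequentially. Every element within a round can be processed independently, which is what makes each round a single parallel step.

        def list_rank(succ):
            # succ[i] is the next element after i, or i itself at the tail of the list.
            # Returns each element's distance from the end of the list.
            n = len(succ)
            dist = [0 if succ[i] == i else 1 for i in range(n)]
            succ = list(succ)
            rounds = max(1, (n - 1).bit_length())     # about log2(n) pointer-jumping rounds
            for _ in range(rounds):
                # every i below is independent of the others, so one round is one parallel step
                dist, succ = ([dist[i] + dist[succ[i]] for i in range(n)],
                              [succ[succ[i]] for i in range(n)])
            return dist

        # linked list 3 -> 1 -> 0 -> 2 (element 2 is the tail)
        print(list_rank([2, 0, 2, 1]))                # prints [1, 2, 0, 3]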

  11. Implementation and performance of parallel Prolog interpreter

    SciTech Connect

    Wei, S.; Kale, L.V.; Balkrishna, R. . Dept. of Computer Science)

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare-kernel--a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  12. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python to VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.
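
    A minimal sketch of the front-end idea, using Python's standard ast module: a function made of straight-line assignments is parsed and emitted as VHDL-like concurrent signal assignments. The supported subset, signal types, and output format below are illustrative assumptions, not the compiler described in the paper.

        import ast

        OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

        def emit_expr(node):
            # Recursively render a small arithmetic expression in VHDL-ish syntax.
            if isinstance(node, ast.BinOp):
                return f"({emit_expr(node.left)} {OPS[type(node.op)]} {emit_expr(node.right)})"
            if isinstance(node, ast.Name):
                return node.id
            if isinstance(node, ast.Constant):
                return str(node.value)
            raise NotImplementedError(type(node).__name__)

        def to_vhdl(source):
            func = ast.parse(source).body[0]          # assume the source holds one function definition
            lines = [f"-- architecture sketch generated from {func.name}"]
            for stmt in func.body:
                if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                    lines.append(f"{stmt.targets[0].id} <= {emit_expr(stmt.value)};")
            return "\n".join(lines)

        print(to_vhdl("def mac(a, b, c):\n    p = a * b\n    y = p + c\n"))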

  13. Alternative Fuels and Chemicals from Synthesis Gas

    SciTech Connect

    1998-12-02

    The overall objectives of this program are to investigate potential technologies for the conversion of synthesis gas to oxygenated and hydrocarbon fuels and industrial chemicals, and to demonstrate the most promising technologies at DOE's LaPorte, Texas, Slurry Phase Alternative Fuels Development Unit (AFDU). The program will involve a continuation of the work performed under the Alternative Fuels from Coal-Derived Synthesis Gas Program and will draw upon information and technologies generated in parallel current and future DOE-funded contracts.

  14. Parallelism extraction and program restructuring for parallel simulation of digital systems

    SciTech Connect

    Vellandi, B.L.

    1990-01-01

    Two topics currently of interest to the computer aided design (CAD) for the very-large-scale integrated circuit (VLSI) community are using the VHSIC Hardware Description Language (VHDL) effectively and decreasing simulation times of VLSI designs through parallel execution of the simulator. The goal of this research is to increase the degree of parallelism obtainable in VHDL simulation, and consequently to decrease simulation times. The research targets simulation on massively parallel architectures. Experimentation and instrumentation were done on the SIMD Connection Machine. The author discusses her method used to extract parallelism and restructure a VHDL program, experimental results using this method, and requirements for a parallel architecture for fast simulation.

  15. Parallel computation of manipulator inverse dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    In this article, parallel computation of manipulator inverse dynamics is investigated. A hierarchical graph-based mapping approach is devised to analyze the inherent parallelism in the Newton-Euler formulation at several computational levels, and to derive the features of an abstract architecture for exploitation of parallelism. At each level, a parallel algorithm represents the application of a parallel model of computation that transforms the computation into a graph whose structure defines the features of an abstract architecture, i.e., number of processors, communication structure, etc. Data-flow analysis is employed to derive the time lower bound in the computation as well as the sequencing of the abstract architecture. The features of the target architecture are defined by optimization of the abstract architecture to exploit maximum parallelism while minimizing architectural complexity. An architecture is designed and implemented that is capable of efficient exploitation of parallelism at several computational levels. The computation time of the Newton-Euler formulation for a 6-degree-of-freedom (dof) general manipulator is measured as 187 microsec. The increase in computation time for each additional dof is 23 microsec, which leads to a computation time of less than 500 microsec, even for a 12-dof redundant arm.

  16. On the Scalability of Parallel UCT

    NASA Astrophysics Data System (ADS)

    Segal, Richard B.

    The parallelization of MCTS across multiple machines has proven surprisingly difficult. The limitations of existing algorithms were evident in the 2009 Computer Olympiad where Zen using a single four-core machine defeated both Fuego with ten eight-core machines and Mogo with twenty thirty-two-core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvements which can come from additional search. We then analyze the scaling of an idealized N-core shared memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling for time scales beyond what can be empirically evaluated due to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but without virtual loss scaling is limited to just eight threads. We also find that for competition time controls scaling to thousands of threads is impossible, not necessarily due to MCTS not scaling, but because high levels of parallelism can start to bump up against the upper performance bound of Fuego itself.

  17. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  18. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

  19. Run-time recognition of task parallelism within the P++ parallel array class library

    SciTech Connect

    Parsons, R.; Quinlan, D.

    1993-11-01

    This paper explores the use of a run-time system to recognize task parallelism within a C++ array class library. Run-time systems currently support data parallelism in P++, FORTRAN 90 D, and High Performance FORTRAN. But data parallelism is insufficient for many applications, including adaptive mesh refinement. Without access to both data and task parallelism such applications exhibit several orders of magnitude more message passing and poor performance. In this work, a C++ array class library is used to implement deferred evaluation and run-time dependence analysis for task parallelism recognition, to obtain task parallelism through a data flow interpretation of data parallel array statements. Performance results show that the analysis and optimizations are both efficient and practical, allowing us to consider more substantial optimizations.

  20. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  1. Xyce parallel electronic simulator : users' guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique

  2. SLAPP: A systolic linear algebra parallel processor

    SciTech Connect

    Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.

    1987-07-01

    Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.

  3. Language constructs for modular parallel programs

    SciTech Connect

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  4. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1997-01-01

    The multiblock reacting Navier-Stokes flow solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  5. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1994-01-01

    The multiblock reacting Navier-Stokes flow-solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  6. Massively parallel neurocomputing for aerospace applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barhen, Jacob; Toomarian, Nikzad

    1993-01-01

    An innovative hybrid, analog-digital charge-domain technology, for the massively parallel VLSI implementation of certain large scale matrix-vector operations, has recently been introduced. It employs arrays of Charge Coupled/Charge Injection Device cells holding an analog matrix of charge, which process digital vectors in parallel by means of binary, non-destructive charge transfer operations. The impact of this technology on massively parallel processing is discussed. Fundamentally new classes of algorithms, specifically designed for this emerging technology, as applied to signal processing, are derived.

  7. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
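
    A sketch of the partial-fraction idea, using the diagonal (2,2) Pade approximant of the exponential, whose single conjugate pole pair can be written down explicitly: exp(A)b is approximated as b plus a weighted resolvent solve. With higher-order approximants each such solve is independent, which is where the additional parallelism comes from. The constants below belong to the (2,2) approximant only; the paper's methods use higher-order Pade and Chebyshev approximations.

        import numpy as np

        def expm_times_vector(A, b):
            # Approximate exp(A) @ b via the partial fraction of the (2,2) Pade approximant:
            # R(z) = 1 + w/(z - z1) + conj(w)/(z - conj(z1)), z1 = 3 + i*sqrt(3), w = 6 - 6*sqrt(3)*i.
            z1 = 3.0 + 1j * np.sqrt(3.0)
            w = 6.0 - 6.0j * np.sqrt(3.0)
            n = A.shape[0]
            # For real A and b the two conjugate solves collapse into one complex solve;
            # with higher-order approximants the independent solves can run in parallel.
            x = np.linalg.solve(A - z1 * np.eye(n), b.astype(complex))
            return b + 2.0 * np.real(w * x)

        A = np.array([[-2.0, 1.0], [1.0, -2.0]])      # a small symmetric, diffusion-like matrix
        b = np.array([1.0, 0.0])
        print(expm_times_vector(A, b))                # close to the exact exp(A) @ b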

  8. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  9. Parallel Climate Analysis Toolkit (ParCAT)

    Energy Science and Technology Software Center (ESTSC)

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sum/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated in scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.

  10. Distributed parallel messaging for multiprocessor systems

    DOEpatents

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  11. Parallel optical memories for very large databases

    NASA Astrophysics Data System (ADS)

    Mitkas, Pericles A.; Berra, P. B.

    1993-02-01

    The steady increase in volume of current and future databases dictates the development of massive secondary storage devices that allow parallel access and exhibit high I/O data rates. Optical memories, such as parallel optical disks and holograms, can satisfy these requirements because they combine high recording density and parallel one- or two-dimensional output. Several configurations for database storage involving different types of optical memory devices are investigated. All these approaches include some level of optical preprocessing in the form of data filtering in an attempt to reduce the amount of data per transaction that reach the electronic front-end.

  12. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
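
    A sketch of the "full replication" technique on a reduction-object style interface: each worker accumulates into a private copy of the reduction object and the copies are merged afterwards, so no locking is required. The class and method names are illustrative, not the interface proposed in the paper.

        from concurrent.futures import ThreadPoolExecutor
        from collections import Counter

        class HistogramReduction:
            # A reduction object: local accumulation plus a merge operation.
            def __init__(self):
                self.counts = Counter()
            def accumulate(self, record):
                self.counts[record % 10] += 1         # e.g. bin records by their last digit
            def merge(self, other):
                self.counts.update(other.counts)

        def process_chunk(chunk):
            local = HistogramReduction()              # private replica, so no contention
            for record in chunk:
                local.accumulate(record)
            return local

        data = list(range(1000))
        chunks = [data[i::4] for i in range(4)]       # one chunk per worker
        with ThreadPoolExecutor(max_workers=4) as pool:
            replicas = list(pool.map(process_chunk, chunks))

        result = HistogramReduction()
        for replica in replicas:
            result.merge(replica)
        print(result.counts[3])                       # 100 records end in the digit 3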

  13. Parallel Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Baggag, Abdalkader; Atkins, Harold; Keyes, David

    1999-01-01

    This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.

  14. Improved chopper circuit uses parallel transistors

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Parallel transistor chopper circuit operates with one transistor in the forward mode and the other in the inverse mode. By using this method, it acts as a single, symmetrical, bidirectional transistor, and reduces and stabilizes the offset voltage.

  15. Parallel processor programs in the Federal Government

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  16. Parallelism In Rule-Based Systems

    NASA Astrophysics Data System (ADS)

    Sabharwal, Arvind; Iyengar, S. Sitharama; de Saussure, G.; Weisbin, C. R.

    1988-03-01

    Rule-based systems, which have proven to be extremely useful for several Artificial Intelligence and Expert Systems applications, currently face severe limitations due to the slow speed of their execution. To achieve the desired speed-up, this paper addresses the problem of parallelization of production systems and explores the various architectural and algorithmic possibilities. The inherent sources of parallelism in the production system structure are analyzed and the trade-offs, limitations and feasibility of exploitation of these sources of parallelism are presented. Based on this analysis, we propose a dedicated, coarse-grained, n-ary tree multiprocessor architecture for the parallel implementation of rule-based systems and then present algorithms for partitioning of rules in this architecture.

  17. Parallel programming with PCN. Revision 1

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  18. The PISCES 2 parallel programming environment

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment for scientific and engineering computations on MIMD parallel computers. It is currently implemented on a flexible FLEX/32 at NASA Langley, a 20 processor machine with both shared and local memories. The environment provides an extended Fortran for applications programming, a configuration environment for setting up a run on the parallel machine, and a run-time environment for monitoring and controlling program execution. This paper describes the overall design of the system and its implementation on the FLEX/32. Emphasis is placed on several novel aspects of the design: the use of a carefully defined virtual machine, programmer control of the mapping of virtual machine to actual hardware, forces for medium-granularity parallelism, and windows for parallel distribution of data. Some preliminary measurements of storage use are included.

  19. Parallel line scanning ophthalmoscope for retinal imaging.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2015-11-15

    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of parallelism in illuminating the retina compared to traditional scanning laser ophthalmoscope systems utilizing scanning mirrors. The system operated at the shot-noise limit with a signal-to-noise ratio of 28 for an optical power measured at the cornea of 100 μW. To demonstrate the imaging capabilities of the system, the macula and the optic nerve head of a healthy volunteer were imaged. Confocal images show good contrast and lateral resolution with a 10°×10° field of view. PMID:26565868

  20. Parallel algorithms for dynamically partitioning unstructured grids

    SciTech Connect

    Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.

    1994-10-01

    Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.

  1. Massively Parallel Computing: A Sandia Perspective

    SciTech Connect

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  2. Social Problems and Deviance: Some Parallel Issues

    ERIC Educational Resources Information Center

    Kitsuse, John I.; Spector, Malcolm

    1975-01-01

    Explores parallel developments in labeling theory and in the value conflict approach to social problems. Similarities in their critiques of functionalism and etiological theory as well as their emphasis on the definitional process are noted. (Author)

  3. Parallel supercomputing today and the cedar approach.

    PubMed

    Kuck, D J; Davidson, E S; Lawrie, D H; Sameh, A H

    1986-02-28

    More and more scientists and engineers are becoming interested in using supercomputers. Earlier barriers to using these machines are disappearing as software for their use improves. Meanwhile, new parallel supercomputer architectures are emerging that may provide rapid growth in performance. These systems may use a large number of processors with an intricate memory system that is both parallel and hierarchical; they will require even more advanced software. Compilers that restructure user programs to exploit the machine organization seem to be essential. A wide range of algorithms and applications is being developed in an effort to provide high parallel processing performance in many fields. The Cedar supercomputer, presently operating with eight processors in parallel, uses advanced system and applications software developed at the University of Illinois during the past 12 years. This software should allow the number of processors in Cedar to be doubled annually, providing rapid performance advances in the next decade. PMID:17740294

  4. Feature Clustering for Accelerating Parallel Coordinate Descent

    SciTech Connect

    Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh; Haglin, David J.

    2012-12-06

    We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.

  5. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

    This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with, parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N-elements on N- or n-processors (N is greater than n) are described, and the speedup factors of their time consumption are determined as a function of the number of available parallel processors.

  6. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Haopiang, Jin

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  7. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between the CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate (yields) store (yields) visualize' post-processing approach.

  8. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
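
    The following sketch shows the kind of O(N) integer-key sort the abstract alludes to, expressed with the data-parallel primitives involved (a histogram plus a prefix sum, i.e. a counting sort by cell index). It is a serial illustration, not the Connection Machine merge-based algorithm itself.

        import numpy as np

        def sort_particles_by_cell(cell_ids):
            # Return a permutation that groups particle indices by their cell id.
            counts = np.bincount(cell_ids)                            # data-parallel histogram
            starts = np.concatenate(([0], np.cumsum(counts)[:-1]))    # prefix sum of the counts
            order = np.empty(cell_ids.size, dtype=np.int64)
            offsets = starts.copy()
            for i, c in enumerate(cell_ids):                          # scatter each particle to its slot
                order[offsets[c]] = i
                offsets[c] += 1
            return order

        cells = np.array([2, 0, 1, 2, 0, 1, 1])
        order = sort_particles_by_cell(cells)
        print(cells[order])                                           # [0 0 1 1 1 2 2]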

  9. The Nexus task-parallel runtime system

    SciTech Connect

    Foster, I.; Tuecke, S.; Kesselman, C.

    1994-12-31

    A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.

  10. Parallel processing of a rotating shaft simulation

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

    A FORTRAN program describing the vibration modes of a rotor-bearing system is analyzed for parallelism using a Pascal-like structured language. Potential vector operations are also identified. A critical path through the simulation is identified and used in conjunction with somewhat fictitious processor characteristics to determine the time to calculate the problem on a parallel processing system having those characteristics. A parallel processing overhead time is included as a parameter for proper evaluation of the gain over serial calculation. The serial calculation time is determined for the same fictitious system. An improvement of up to 640 percent is possible depending on the value of the overhead time. Based on the analysis, certain conclusions are drawn pertaining to the development needs of parallel processing technology, and to the specification of parallel processing systems to meet computational needs.
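
    A sketch of the kind of estimate described: the predicted speedup is the serial calculation time divided by the critical-path time plus a parallel-processing overhead term. The numbers below are illustrative only, and the report's actual model may include further terms.

        def predicted_speedup(serial_time, critical_path_time, overhead):
            # Predicted gain of the parallel calculation over the serial one.
            return serial_time / (critical_path_time + overhead)

        # Illustrative numbers only: the predicted speedup shrinks as the assumed overhead grows.
        for overhead in (0.0, 0.2, 1.0):
            print(overhead, round(predicted_speedup(6.4, 1.0, overhead), 2))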

  11. Modified mesh-connected parallel computers

    SciTech Connect

    Carlson, D.A. )

    1988-10-01

    The mesh-connected parallel computer is an important parallel processing organization that has been used in the past for the design of supercomputing systems. In this paper, the authors explore modifications of a mesh-connected parallel computer for the purpose of increasing the efficiency of executing important application programs. These modifications are made by adding one or more global mesh structures to the processing array. The authors show how these modifications allow asymptotic improvements in the efficiency of executing computations having low to medium interprocessor communication requirements (e.g., tree computations, prefix computations, finding the connected components of a graph). For computations with high interprocessor communication requirements such as sorting, the modifications are shown to offer no speedup. The authors also compare the modified mesh-connected parallel computer to other similar organizations including the pyramid, the X-tree, and the mesh-of-trees.

  12. Fast and practical parallel polynomial interpolation

    SciTech Connect

    Egecioglu, O.; Gallopoulos, E.; Koc, C.K.

    1987-01-01

    We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs the proposed interpolation algorithm requires 2 log(n + 1) + 2 parallel arithmetic steps and circuit size O(n^2). The algorithms are numerically stable and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms which are subject to much larger roundoff. We demonstrate that in a distributed memory environment context, a cube connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
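
    For reference, the serial computation of the quantities involved is sketched below: the Newton divided-difference coefficients and the nested evaluation of the interpolant. The paper's contribution, computing the divided differences with parallel prefix operations, is not reproduced here.

        def divided_differences(xs, ys):
            # Return Newton coefficients c[k] = f[x0, ..., xk], computed in place.
            coeffs = list(ys)
            n = len(xs)
            for level in range(1, n):
                for k in range(n - 1, level - 1, -1):
                    coeffs[k] = (coeffs[k] - coeffs[k - 1]) / (xs[k] - xs[k - level])
            return coeffs

        def newton_eval(xs, coeffs, t):
            # Evaluate the Newton-form interpolant at t by nested multiplication.
            result = coeffs[-1]
            for k in range(len(coeffs) - 2, -1, -1):
                result = result * (t - xs[k]) + coeffs[k]
            return result

        xs, ys = [0.0, 1.0, 2.0, 4.0], [1.0, 3.0, 2.0, 5.0]
        cs = divided_differences(xs, ys)
        print([newton_eval(xs, cs, x) for x in xs])   # reproduces ys at the nodes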

  13. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems as well as some engineering optimization problems.

  14. Massive parallelism in the future of science

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed. These agents collectively can solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed. The network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations among large groups of scientists around the world, who participate in projects that endure well past the sojourns of individuals within them, will be designed. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters in these three domains during the coming decade.

  15. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

    SciTech Connect

    Ramkumar, B. ); Kale, L.V. )

    1992-02-01

    When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile time analysis for an optimized join algorithm for supporting independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance results on benchmark programs are presented.
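
    The underlying join problem can be sketched independently of the paper's optimizations: bindings produced by two independently solved AND branches are combined, keeping only pairs that agree on their shared variables. The compile-time analysis and the optimized join algorithm themselves are not reproduced here.

        def join_solutions(left, right):
            # left/right are lists of {variable: value} bindings from two AND branches;
            # keep the combinations that agree on any shared variables.
            joined = []
            for lb in left:
                for rb in right:
                    shared = set(lb) & set(rb)
                    if all(lb[v] == rb[v] for v in shared):
                        joined.append({**lb, **rb})
            return joined

        # p(X, Y) solved in parallel with q(Y, Z); Y is the shared variable
        p_solutions = [{"X": 1, "Y": "a"}, {"X": 2, "Y": "b"}]
        q_solutions = [{"Y": "a", "Z": 10}, {"Y": "c", "Z": 30}]
        print(join_solutions(p_solutions, q_solutions))   # [{'X': 1, 'Y': 'a', 'Z': 10}]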

  16. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  17. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  18. LDV Measurement of Confined Parallel Jet Mixing

    SciTech Connect

    R.F. Kunz; S.W. D'Amico; P.F. Vassallo; M.A. Zaccaria

    2001-01-31

    Laser Doppler Velocimetry (LDV) measurements were taken in a confinement, bounded by two parallel walls, into which issues a row of parallel jets. Two-component measurements were taken of two mean velocity components and three Reynolds stress components. As observed in isolated three dimensional wall bounded jets, the transverse diffusion of the jets is quite large. The data indicate that this rapid mixing process is due to strong secondary flows, transport of large inlet intensities and Reynolds stress anisotropy effects.

  19. Enhancing Scalability of Parallel Structured AMR Calculations

    SciTech Connect

    Wissink, A M; Hysom, D; Hornung, R D

    2003-02-10

    This paper discusses parallel scaling performance of large scale parallel structured adaptive mesh refinement (SAMR) calculations in SAMRAI. Previous work revealed that poor scaling qualities in the adaptive gridding operations in SAMR calculations cause them to become dominant for cases run on up to 512 processors. This work describes algorithms we have developed to enhance the efficiency of the adaptive gridding operations. Performance of the algorithms is evaluated for two adaptive benchmarks run on up to 512 processors of an IBM SP system.

  20. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.