Science.gov

Sample records for lines boost parallel

  1. Learning and Parallelization Boost Constraint Search

    ERIC Educational Resources Information Center

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  2. Development of a high speed parallel hybrid boost bearing

    NASA Technical Reports Server (NTRS)

    Winn, L. W.; Eusepi, M. W.

    1973-01-01

    The analysis, design, and testing of the hybrid boost bearing are discussed. The hybrid boost bearing consists of a fluid film bearing coupled in parallel with a rolling element bearing. This coupling arrangement makes use of the inherent advantages of both the fluid film and rolling element bearings and at the same time minimizes their disadvantages and limitations. The analytical optimization studies that led to the final fluid film bearing design are reported. The bearing consisted of a centrifugally-pressurized planar fluid film thrust bearing with oil feed through the shaft center. An analysis of the test ball bearing is also presented. The experimental determination of the hybrid bearing characteristics, obtained from individual bearing component tests and a combined hybrid bearing assembly, is discussed and compared to the analytically determined performance characteristics.

  3. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  4. Boosting the bottom line of physician networks.

    PubMed

    Mertz, Greg

    2013-06-01

    To improve the bottom line of owned physician practices, hospitals should: Identify disparities between physician pay and performance, and understand the factors that are creating these disparities. Review fees to make sure they are aligned with insurer and Medicare fee schedules. Analyze the workload and job responsibilities of office staff and modify staffing levels and job descriptions, if needed. PMID:23795381

  5. A parallel dipole line system

    NASA Astrophysics Data System (ADS)

    Gunawan, Oki; Virgus, Yudistira; Tai, Kong Fai

    2015-02-01

    We present a study of a parallel dipole line system, which can be realized using a pair of cylindrical diametric magnets and yields several interesting properties and applications. The system serves as a trap for cylindrical diamagnetic objects, produces a fascinating one-dimensional camelback potential profile at its center plane, yields a technique for measuring the magnetic susceptibility of the trapped object, and serves as an ideal system for implementing highly sensitive Hall measurements utilizing a rotating magnetic field and lock-in detection. The latter application enables extraction of low carrier mobility in several materials of high interest, such as the world-record-quality, earth-abundant kesterite solar cell, and helps elucidate its fundamental performance limitation.

  6. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in daily life, and reliable information cannot be obtained without camera calibration. Traditional camera calibration is often impractical because accurate coordinate information for the reference control points is unavailable. In this article, we present a camera calibration algorithm that determines the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are commonly found in everyday photographs, so both sets of parameters can be recovered from ordinary photos. In more detail, we use two pairs of parallel lines to compute the vanishing points; in particular, if the pairs are mutually perpendicular, the two vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters then follow from a Cholesky factorization of the IAC matrix. Since the line connecting a vanishing point with the camera's optical center is parallel to the original lines in the scene plane, the extrinsic parameters R and T can be recovered as well. Both the simulation and the experimental results meet our expectations.
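
    The first step of the method above, finding a vanishing point as the intersection of two image lines, can be sketched in homogeneous coordinates. This is a generic illustration of the geometry, not the authors' code; the function names and sample coordinates are hypothetical.

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (cross product of the points).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    # Two image projections of world-parallel lines meet at the vanishing
    # point, which is the intersection (cross product) of the two lines.
    v = np.cross(line_through(*seg1), line_through(*seg2))
    return v / v[2]  # normalize so the point reads as (x, y, 1)

# Two segments that converge toward a common point in the image plane.
v = vanishing_point(((0.0, 0.0), (4.0, 1.0)), ((0.0, 2.0), (4.0, 2.5)))
```

    With five or more views, the conjugate vanishing-point pairs constrain the image of the absolute conic, whose Cholesky factor yields the intrinsic matrix.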

  7. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Loads parallel to hinge line. 23.393... Control Surface and System Loads § 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line....

  8. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Loads parallel to hinge line. 23.393... Control Surface and System Loads § 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line....

  9. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Loads parallel to hinge line. 25.393... § 25.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data,...

  10. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Loads parallel to hinge line. 25.393... § 25.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data,...

  11. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing pixel value calculations, balancing the load across the processors, and applying the results to the Z buffer efficiently in parallel require special virtual routing (sort computation) techniques developed by the author specifically for use on single-instruction multiple-data (SIMD) architectures.

  12. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Structure Control Surface and System Loads... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data,...

  13. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  14. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  15. VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN QUARRY AREA, FACING NORTHEAST - Granite Hill Plantation, Quarry No. 2, South side of State Route 16, 1.3 miles northeast east of Sparta, Sparta, Hancock County, GA

  16. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc to dc power converter configurations: (1) the series switching buck-regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  17. Parallel line analysis: multifunctional software for the biomedical sciences

    NASA Technical Reports Server (NTRS)

    Swank, P. R.; Lewis, M. L.; Damron, K. L.; Morrison, D. R.

    1990-01-01

    An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time.
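
    The core of a parallel-line analysis, fitting a common slope to the standard and unknown preparations and reading the potency ratio off the horizontal shift between the two lines, can be sketched as below. This is a minimal generic illustration, not the FORTRAN program described above; the names and data are hypothetical, and the parallelism/linearity F tests are omitted.

```python
import numpy as np

def parallel_line_potency(x_std, y_std, x_unk, y_unk):
    # Fit y = a_g + b*x with a common slope b for both preparations.
    n_s, n_u = len(x_std), len(x_unk)
    X = np.zeros((n_s + n_u, 3))          # columns: [a_std, a_unk, b]
    X[:n_s, 0] = 1.0
    X[n_s:, 1] = 1.0
    X[:, 2] = np.concatenate([x_std, x_unk])
    y = np.concatenate([y_std, y_unk])
    (a_s, a_u, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    # Horizontal shift between the parallel lines = log potency ratio.
    M = (a_u - a_s) / b
    return 10.0 ** M                       # potency ratio on the dose scale

# Synthetic example: the unknown acts like the standard at half potency.
x = np.array([0.0, 0.5, 1.0, 1.5])        # log10 dose
ratio = parallel_line_potency(x, 2.0 + 3.0 * x,
                              x, 2.0 + 3.0 * (x - np.log10(2.0)))
```

    Confidence limits on the ratio would follow from the covariance of the fitted coefficients (Fieller's theorem in the classical treatment).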

  18. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    SciTech Connect

    Guo Zehua; Tang Xianzhu

    2012-06-15

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components, one associated with the parallel thermal energy and the other with the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species' parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from the steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from the low to the high parallel temperature region.

  19. Parallel algorithm of generating set points for a manipulator with straight line and circular motions

    NASA Astrophysics Data System (ADS)

    Lai, Jim Z. C.; Chao, Ming

    1992-06-01

    A parallel algorithm of generating set points in Cartesian space for a manipulator with straight-line and circular motions is described. This algorithm is developed for parallel computation and does not have the problem of the wobbling approach vector that affects many techniques. When the scheme is executed serially, the computing time is about two-thirds that of the conventional technique.

  20. Parallel line raster eliminates ambiguities in reading timing of pulses less than 500 microseconds apart

    NASA Technical Reports Server (NTRS)

    Horne, A. P.

    1966-01-01

    Parallel horizontal line raster is used for precision timing of events occurring less than 500 microseconds apart for observation of hypervelocity phenomena. The raster uses a staircase vertical deflection and eliminates ambiguities in reading timing of pulses close to the end of each line.

  1. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids locally optimal solutions. This article considers these two decision problems jointly for mass production lines with serial-parallel workplaces. The study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  2. Study of electric fields parallel to the magnetic lines of force using artificially injected energetic electrons

    NASA Technical Reports Server (NTRS)

    Wilhelm, K.; Bernstein, W.; Whalen, B. A.

    1980-01-01

    Electron beam experiments using rocket-borne instrumentation will be discussed. The observations indicate that reflections of energetic electrons may occur at possible electric field configurations parallel to the direction of the magnetic lines of force in an altitude range of several thousand kilometers above the ionosphere.

  3. Parallelization of a Transient Method of Lines Navier-Stokes Code

    NASA Astrophysics Data System (ADS)

    Erşahin, Cem; Tarhan, Tanil; Tuncer, Ismail H.; Selçuk, Nevin

    2004-01-01

    Parallel implementation of a serial code, namely method of lines (MOL) solution for momentum equations (MOLS4ME), previously developed for the solution of transient Navier-Stokes equations for incompressible separated internal flows in regular and complex geometries, is described.

  4. Designing linings of mutually influencing parallel shallow circular tunnels under seismic effects of earthquake

    NASA Astrophysics Data System (ADS)

    Sammal, A. S.; Antsiferov, S. V.; Deev, P. V.

    2016-09-01

    The paper deals with the seismic design of parallel shallow tunnel linings, which is based on identifying the most unfavorable lining stress states under the effects of long longitudinal and shear seismic waves propagating through the cross section of the tunnel in different directions and combinations. For this purpose, the sums and differences of the normal tangential stresses on the lining's internal outline caused by waves of different types are examined for extrema with respect to the angle of incidence. The method allows analytic plotting of a curve illustrating the structure's stresses. The paper gives an example of a design calculation.

  5. Parallel-plate transmission line type of EMP simulators: Systematic review and recommendations

    NASA Astrophysics Data System (ADS)

    Giri, D. V.; Liu, T. K.; Tesche, F. M.; King, R. W. P.

    1980-05-01

    This report presents various aspects of the two-parallel-plate transmission line type of EMP simulator. Much of the work is the result of research efforts conducted during the last two decades at the Air Force Weapons Laboratory, as well as in industry and universities. The principal features of individual simulator components are discussed. The report also emphasizes that it is imperative to combine our understanding of the individual components so that meaningful conclusions can be drawn about simulator performance as a whole.

  6. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and high resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration with substantial computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an associated FPGA architecture framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most false edges and obtain more accurate candidate edge pixels for the subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture, optimized by spatial parallelism, ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. The framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it calculates straight line parameters in 15.59 ms per frame on average. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
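
    The angle-level parallelism the paper exploits refers to the Hough voting loop: each edge pixel casts one vote per discretized angle, and the angles are mutually independent. A plain-Python sketch of that voting scheme (not the paper's FPGA pipeline; the names and parameters are hypothetical) might look like:

```python
import numpy as np

def hough_lines(edge_points, rho_res=1.0, n_theta=180):
    # Vote in (rho, theta) space; each column of the accumulator is one
    # angle, the axis along which the votes can be computed in parallel.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = max(np.hypot(x, y) for x, y in edge_points)
    acc = np.zeros((int(2 * max_rho / rho_res) + 1, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rhos = x * cos_t + y * sin_t            # one vote per angle at once
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak[0] * rho_res - max_rho, thetas[peak[1]]

# Points on the horizontal line y = 5 vote most strongly near
# (rho = 5, theta = pi/2).
rho, theta = hough_lines([(x, 5) for x in range(20)])
```

    In hardware, each angle column of the accumulator can be assigned its own pipeline stage, which is roughly the spatial parallelism the paper describes.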

  7. Global gene expression in pseudomyxoma peritonei, with parallel development of two immortalized cell lines.

    PubMed

    Roberts, Darren L; O'Dwyer, Sarah T; Stern, Peter L; Renehan, Andrew G

    2015-05-10

    Pseudomyxoma peritonei (PMP) is a rare tumor of appendiceal origin. Treatment is major cytoreductive surgery, but morbidity is high. PMP is considered chemo-resistant; its molecular biology is understudied; and presently, there is no platform for pre-clinical drug testing. Here, we performed exon array analysis from laser micro-dissected PMP tissue and normal colonic epithelia. The array analysis identified 27 up-regulated and 34 down-regulated genes: candidate up-regulated genes included SLC16A4, DSC3, Aldolase B, EPHX4, and ARHGAP24; candidate down-regulated genes were MS4A12, TMIGD1 and Caspase-5. We confirmed differential expression of the candidate genes and their protein products using in-situ hybridization and immuno-histochemistry. In parallel, we established two primary PMP cell lines, N14A and N15A, and immortalized them with an SV40 T-antigen lentiviral vector. We cross-checked for expression of the candidate genes (from the array analyses) using qPCR in the cell lines and demonstrated that the gene profiles were distinct from those of colorectal tumor libraries and commonly used colon cell lines. N14A and N15A were responsive to mitomycin and oxaliplatin. This study characterizes global gene expression in PMP, and the parallel development of the first immortalized PMP cell lines, fit for pre-clinical testing and PMP oncogene discovery.

  8. Global gene expression in pseudomyxoma peritonei, with parallel development of two immortalized cell lines

    PubMed Central

    Roberts, Darren L.; O'Dwyer, Sarah T.; Stern, Peter L.; Renehan, Andrew G.

    2015-01-01

    Pseudomyxoma peritonei (PMP) is a rare tumor of appendiceal origin. Treatment is major cytoreductive surgery, but morbidity is high. PMP is considered chemo-resistant; its molecular biology is understudied; and presently, there is no platform for pre-clinical drug testing. Here, we performed exon array analysis from laser micro-dissected PMP tissue and normal colonic epithelia. The array analysis identified 27 up-regulated and 34 down-regulated genes: candidate up-regulated genes included SLC16A4, DSC3, Aldolase B, EPHX4, and ARHGAP24; candidate down-regulated genes were MS4A12, TMIGD1 and Caspase-5. We confirmed differential expression of the candidate genes and their protein products using in-situ hybridization and immuno-histochemistry. In parallel, we established two primary PMP cell lines, N14A and N15A, and immortalized them with an SV40 T-antigen lentiviral vector. We cross-checked for expression of the candidate genes (from the array analyses) using qPCR in the cell lines and demonstrated that the gene profiles were distinct from those of colorectal tumor libraries and commonly used colon cell lines. N14A and N15A were responsive to mitomycin and oxaliplatin. This study characterizes global gene expression in PMP, and the parallel development of the first immortalized PMP cell lines, fit for pre-clinical testing and PMP oncogene discovery. PMID:25929336

  9. Parlin, a general microcomputer program for parallel-line analysis of bioassays.

    PubMed

    Jesty, J; Godfrey, H P

    1986-04-01

    Commonly used manual and calculator methods for analysis of clinically important parallel-line bioassays are subject to operator bias and provide neither confidence limits for the results nor any indication of their validity. To remedy this, the authors have written a general program for statistical analysis of these bioassays for the IBM Personal Computer and its compatibles. The program has been used for analysis of bioassays for specific coagulation factors and inflammatory lymphokines and for radioimmunoassays for prostaglandins. The program offers a choice of no transform, logarithmic, or logit transformation of data, which are fitted to parallel lines for standard and unknown. It analyzes the fit for parallelism and linearity with an F test, and calculates the best estimate of the result and its 95% confidence limits. Comparison of results calculated by PARLIN with those previously obtained manually shows excellent correlation (r greater than 0.99). Results obtained using PARLIN are quickly available with current assay techniques and provide a complete evaluation of the bioassay at no increase in cost. PMID:3456698

  10. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  11. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
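
    The three-phase pattern in the claim, root to its first-dimension line, then each reached node to its second-dimension line, then to the third, can be simulated with plain Python sets. This is an illustrative sketch of the patent's broadcast order, not an MPI implementation; the dimensions and root are arbitrary.

```python
def line_plane_broadcast(dims, root):
    # Simulate the three-phase broadcast on a 3-D mesh:
    # root -> its X line -> each node's Y line -> each node's Z line.
    reached = {root}
    # Phase 1: send along the axis of the first dimension.
    reached |= {(x, root[1], root[2]) for x in range(dims[0])}
    # Phase 2: every reached node forwards along the second dimension.
    reached |= {(n[0], y, n[2]) for n in list(reached) for y in range(dims[1])}
    # Phase 3: every reached node forwards along the third dimension.
    reached |= {(n[0], n[1], z) for n in list(reached) for z in range(dims[2])}
    return reached

nodes = line_plane_broadcast((4, 3, 2), root=(1, 0, 0))
```

    Each phase extends coverage by one mesh dimension (a line, then a plane, then the volume), so the message reaches the full mesh in three line-broadcast steps.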

  12. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  13. Acceleration on stretched meshes with line-implicit LU-SGS in parallel implementation

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-02-01

    The implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver is combined with the line-implicit technique to improve convergence on the very anisotropic grids necessary for resolving boundary layers. The computational fluid dynamics code used is Edge, a Navier-Stokes flow solver for unstructured grids based on a dual grid and edge-based formulation. Multigrid acceleration is applied with the intention of accelerating convergence to steady state. LU-SGS works in parallel and gives better linear scaling with the number of processors than the explicit scheme. The ordering techniques investigated have shown that node numbering does influence convergence and that the orderings from Delaunay and advancing-front generation were among the best tested. 2D Reynolds-averaged Navier-Stokes computations have clearly shown the strong efficiency of our novel line-implicit LU-SGS approach, which is four times faster than implicit LU-SGS and line-implicit Runge-Kutta. Implicit LU-SGS for Euler and line-implicit LU-SGS for Reynolds-averaged Navier-Stokes are at least twice as fast as explicit and line-implicit Runge-Kutta, respectively, for 2D and 3D cases. For 3D Reynolds-averaged Navier-Stokes, multigrid did not accelerate the convergence and therefore may not be needed.

  14. Line-field parallel swept source MHz OCT for structural and functional retinal imaging

    PubMed Central

    Fechtig, Daniel J.; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-01-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation. PMID:25798298

  15. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for a parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from that for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an
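
    Generalization (1) can be illustrated with a Buffon-needle-style Monte Carlo: drop a line-segment target with a uniformly random offset and orientation and count crossings of the parallel search lines. For a target of length l shorter than the line spacing s, the classical crossing probability is 2l/(πs), i.e. proportional to the dimension-to-spacing ratio as stated above. The sketch below is illustrative and not from the publication.

```python
import math
import random

def intersect_prob(length, spacing, trials=200_000, seed=1):
    # Monte Carlo estimate: a segment crosses the nearest search line
    # when half its projected extent exceeds the center-to-line distance.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = rng.uniform(0.0, spacing / 2.0)   # center to nearest line
        theta = rng.uniform(0.0, math.pi)     # orientation of the target
        if (length / 2.0) * math.sin(theta) >= d:
            hits += 1
    return hits / trials

# Target dimension 1 with line spacing 4: expect about 2/(4*pi) = 0.159.
p = intersect_prob(length=1.0, spacing=4.0)
```

    Doubling the spacing (or halving the target's greatest dimension) roughly halves the estimate, matching the first-order proportionality.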

  16. Transmit and receive transmission line arrays for 7 Tesla parallel imaging.

    PubMed

    Adriany, Gregor; Van de Moortele, Pierre-Francois; Wiesinger, Florian; Moeller, Steen; Strupp, John P; Andersen, Peter; Snyder, Carl; Zhang, Xiaoliang; Chen, Wei; Pruessmann, Klaas P; Boesiger, Peter; Vaughan, Tommy; Uğurbil, Kāmil

    2005-02-01

Transceive array coils, capable of RF transmission and independent signal reception, were developed for parallel 1H imaging applications in the human head at 7 T (300 MHz). The coils combine the advantageous high-frequency properties of transmission lines with classic MR coil design. Because of the short wavelength at the 1H frequency of 300 MHz, these coils were straightforward to build and decouple. The sensitivity profiles of the individual coils were highly asymmetric, as expected at this high frequency; however, the summed images from all coils were relatively uniform over the whole brain. Data were obtained with four- and eight-channel transceive arrays built using a loop configuration and compared to arrays built from straight stripline transmission lines. With both the four- and the eight-channel arrays, parallel imaging with sensitivity encoding at high reduction factors was feasible at 7 T in the human head. A one-dimensional reduction factor of 4 was robustly achieved, with an average g value of 1.25, with the eight-channel transmit/receive coils.

  17. An on-line learning tracking of non-rigid target combining multiple-instance boosting and level set

    NASA Astrophysics Data System (ADS)

    Chen, Mingming; Cai, Jingju

    2013-10-01

Visual tracking algorithms based on online boosting generally use a rectangular bounding box to represent the position of the target, while the actual shape of the target is usually irregular. This causes the classifier to learn features of the non-target parts of the rectangular region, which degrades the classifier's performance and can lead to drift. To avoid the limitations of the bounding box, we propose a novel tracking-by-detection algorithm incorporating level set segmentation, which ensures the classifier learns only the features of the true target area within the tracking box. Because the shape of the target changes little between two adjacent frames, and because current level set algorithms can avoid re-initialization of the signed distance function, only a few iterations are needed to converge to the target contour in the next frame. We also improve the level set energy function so that the zero level set is less likely to converge to a false contour. In addition, we use gradient boosting to improve the original multiple-instance learning (MIL) algorithm, as in the WMILtracker, which greatly speeds up the tracker. Our algorithm outperforms the original MILtracker in both speed and precision. Compared with the WMILtracker, our algorithm runs at almost the same speed but avoids the drift caused by background learning, so its precision is better.

  18. Parametric analysis of hollow conductor parallel and coaxial transmission lines for high frequency space power distribution

    NASA Technical Reports Server (NTRS)

    Jeffries, K. S.; Renz, D. D.

    1984-01-01

A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high-power (100 kW or more) space missions. Large-diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating-current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
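The paper's hollow-conductor formulas are not reproduced in the abstract; as a rough companion, the textbook thin-coax per-length expressions L' = (μ0/2π)·ln(b/a) and C' = 2πε/ln(b/a) can be sketched as follows. The geometry values below are hypothetical, chosen only to be in the 5 mm class mentioned above, and the sketch ignores conductor thickness and skin effect, so it is not expected to reproduce the paper's numbers:

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def coax_per_length(inner_radius, outer_radius, eps_r=1.0):
    """Standard coaxial-line external inductance and capacitance per metre
    (thin conductors; skin effect and internal inductance ignored)."""
    ratio = math.log(outer_radius / inner_radius)
    L = MU0 / (2 * math.pi) * ratio            # H/m
    C = 2 * math.pi * EPS0 * eps_r / ratio     # F/m
    return L, C

# Hypothetical closely spaced geometry: 2.5 mm inner, 3.0 mm outer radius.
L, C = coax_per_length(2.5e-3, 3.0e-3)
length = 50.0  # m, as in the example cable above
print(f"L = {L * length * 1e6:.2f} uH, C = {C * length * 1e9:.2f} nF over {length} m")
```

Closely spaced conductors (small b/a) drive the logarithm down, lowering inductance at the cost of higher capacitance, which is the trade-off the parametric curves in the report explore.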

  19. In-line print defect inspection system based on parallelized algorithms

    NASA Astrophysics Data System (ADS)

    Lv, Chao; Zhou, Hongjun

    2015-03-01

The core algorithm of an in-line print defect detection system is template matching. In this paper, we introduce a form of edge-based template matching that uses Canny's edge detection method to extract edge information for the matching step. Among all aspects of the detection algorithm, the most difficult problem is execution time. To reduce execution time and improve efficiency, we introduce and compare four approaches: a pyramidal algorithm, a multicore multi-threading algorithm based on OpenMP, a parallel algorithm based on the Intel AVX instruction set, and GPU computing based on the OpenCL model. The results reveal the different characteristics of each approach, so that the most suitable one can be chosen for a given system.
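As a toy illustration of the multicore strategy (hypothetical data; a sum-of-absolute-differences score standing in for the paper's Canny-edge matching, and Python threads standing in for OpenMP), candidate match positions can be partitioned across workers and the best score kept:

```python
from concurrent.futures import ThreadPoolExecutor

def sad(image, tmpl, top, left):
    """Sum of absolute differences between the template and one image window."""
    th, tw = len(tmpl), len(tmpl[0])
    return sum(abs(image[top + r][left + c] - tmpl[r][c])
               for r in range(th) for c in range(tw))

def best_match_rows(image, tmpl, rows):
    """Exhaustive search over the given row offsets; returns (score, top, left)."""
    tw = len(tmpl[0])
    w = len(image[0])
    best = None
    for top in rows:
        for left in range(w - tw + 1):
            score = sad(image, tmpl, top, left)
            if best is None or score < best[0]:
                best = (score, top, left)
    return best

def parallel_match(image, tmpl, workers=4):
    """Split candidate row offsets across worker threads and keep the
    lowest-SAD position."""
    th = len(tmpl)
    rows = list(range(len(image) - th + 1))
    chunks = [rows[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda ch: best_match_rows(image, tmpl, ch), chunks)
        return min(r for r in results if r is not None)

# Tiny synthetic example: plant a 2x2 pattern at (3, 5) in an 8x10 image.
image = [[0] * 10 for _ in range(8)]
tmpl = [[9, 7], [7, 9]]
for r in range(2):
    for c in range(2):
        image[3 + r][5 + c] = tmpl[r][c]
score, top, left = parallel_match(image, tmpl)
print(score, top, left)  # -> 0 3 5
```

The same partitioning idea carries over to SIMD lanes (AVX) or GPU work-items (OpenCL), differing only in the granularity of the split.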

  20. A new cascaded control strategy for paralleled line-interactive UPS with LCL filter

    NASA Astrophysics Data System (ADS)

    Zhang, X. Y.; Zhang, X. H.; Li, L.; Luo, F.; Zhang, Y. S.

    2016-08-01

A traditional uninterruptible power supply (UPS) has difficulty meeting output-voltage-quality and grid-side power-quality requirements at the same time, and usually suffers from disadvantages such as multi-stage conversion, complex structure, or harmonic current pollution of the utility grid. A three-phase three-level paralleled line-interactive UPS with an LCL filter is presented in this paper. It can control output voltage quality and grid-side power quality simultaneously with only a single power-conversion stage, but the multi-objective control strategy is difficult to design. Based on a detailed analysis of the circuit structure and operating mechanism, a new cascaded control strategy for power, voltage, and current is proposed. An outer current control loop based on resonant control theory is designed to ensure grid-side power quality. An inner voltage control loop based on capacitor voltage and capacitor current feedback is designed to ensure output voltage quality and avoid the resonance peak of the LCL filter. An improved repetitive controller is added to reduce distortion of the output voltage. The setting of the controller parameters is discussed in detail. A 100 kVA UPS prototype was built, and experiments under unbalanced resistive load and nonlinear load were carried out. Theoretical analysis and experimental results show the effectiveness of the control strategy. The paralleled line-interactive UPS not only maintains a constant, three-phase balanced output voltage, but also provides comprehensive power-quality management functions, with three-phase balanced grid active power input, low THD of output voltage and grid current, and reactive power compensation. The UPS is a green, grid-friendly load on the utility.

  1. The new moon illusion and the role of perspective in the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2015-01-01

    In the new moon illusion, the sun does not appear to be in a direction perpendicular to the boundary between the lit and dark sides of the moon, and aircraft jet trails appear to follow curved paths across the sky. In both cases, lines that are physically straight and parallel to the horizon appear to be curved. These observations prompted us to investigate the neglected question of how we are able to judge the straightness and parallelism of extended lines. To do this, we asked observers to judge the 2-D alignment of three artificial "stars" projected onto the dome of the Saint Petersburg Planetarium that varied in both their elevation and their separation in horizontal azimuth. The results showed that observers make substantial, systematic errors, biasing their judgments away from the veridical great-circle locations and toward equal-elevation settings. These findings further demonstrate that whenever information about the distance of extended lines or isolated points is insufficient, observers tend to assume equidistance, and as a consequence, their straightness judgments are biased toward the angular separation of straight and parallel lines.
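The "great-circle" criterion in the study has a simple vector formulation: three sky directions lie on one great circle exactly when their unit vectors are coplanar with the observer, i.e. when their scalar triple product vanishes. A small sketch (the star positions are hypothetical, not the planetarium settings used in the experiment):

```python
import math

def unit_vector(azimuth_deg, elevation_deg):
    """Direction on the sky dome for a given azimuth and elevation."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def triple_product(a, b, c):
    """Scalar triple product a . (b x c); zero iff the three sky directions
    lie on one great circle (i.e. are coplanar with the observer)."""
    bxc = (b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0])
    return a[0] * bxc[0] + a[1] * bxc[1] + a[2] * bxc[2]

# Three "stars" at equal elevation 30 deg, spread in azimuth: NOT on a
# great circle (they lie on a small circle parallel to the horizon).
equal_el = triple_product(unit_vector(-40, 30), unit_vector(0, 30),
                          unit_vector(40, 30))

# Three points on the same vertical circle through the zenith ARE coplanar.
great = triple_product(unit_vector(0, 20), unit_vector(0, 50),
                       unit_vector(0, 80))

print(abs(equal_el) > 1e-6, abs(great) < 1e-9)  # -> True True
```

The nonzero triple product for the equal-elevation triple is exactly the deviation toward which the observers' judgments were biased.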

  2. Parallelism at CERN: real-time and off-line applications in the GP-MIMD2 project

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo

    1997-02-01

A wide range of general-purpose high-energy physics applications, from Monte Carlo simulation to data acquisition, and from interactive data analysis to on-line filtering, have been ported or developed and run in parallel on the IBM SP-2 and Meiko CS-2, CERN's large multi-processor machines. The ESPRIT project GP-MIMD2 has been a catalyst for interest in parallel computing at CERN. The project provided the 128-processor Meiko CS-2 system that is now successfully integrated into the CERN computing environment. The CERN experiment NA48 has been involved in the GP-MIMD2 project from the beginning. NA48 physicists run, as part of their day-to-day work, simulation and analysis programs parallelized using the Message Passing Interface (MPI). The CS-2 is also a vital component of the experiment's data acquisition system and will be used to calibrate, in real time, the 13 000-channel liquid-krypton calorimeter.

  3. Parallel heat flux and flow acceleration in open field line plasmas with magnetic trapping

    SciTech Connect

    Guo, Zehua; Tang, Xian-Zhu; McDevitt, Chris

    2014-10-15

    The magnetic field strength modulation in a tokamak scrape-off layer (SOL) provides both flux expansion next to the divertor plates and magnetic trapping in a large portion of the SOL. Previously, we have focused on a flux expander with long mean-free-path, motivated by the high temperature and low density edge anticipated for an absorbing boundary enabled by liquid lithium surfaces. Here, the effects of magnetic trapping and a marginal collisionality on parallel heat flux and parallel flow acceleration are examined. The various transport mechanisms are captured by kinetic simulations in a simple but representative mirror-expander geometry. The observed parallel flow acceleration is interpreted and elucidated with a modified Chew-Goldberger-Low model that retains temperature anisotropy and finite collisionality.

  4. Resolving magnetic field line stochasticity and parallel thermal transport in MHD simulations

    SciTech Connect

    Nishimura, Y.; Callen, J.D.; Hegna, C.C.

    1998-12-31

Heat transport along braided, or chaotic, magnetic field lines is key to understanding the disruptive phase of tokamak operation, both the major disruption and the internal disruption (sawtooth oscillation). Recent sawtooth experimental results in the Tokamak Fusion Test Reactor (TFTR) suggest that magnetic field line stochasticity in the vicinity of the q = 1 inversion radius plays an important role in rapid changes in the magnetic field structure and the resultant thermal transport. In this study, the characteristic Lyapunov exponents and the spatial correlation of field line behavior are calculated to extract the characteristic scale length of the microscopic magnetic field structure (which is important for net radial global transport). These statistical values are used to model the effect of finite thermal transport along magnetic field lines in a physically consistent manner.
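As a toy stand-in for field-line stochasticity (the Chirikov standard map rather than the paper's tokamak field-line geometry), a characteristic Lyapunov exponent can be estimated by evolving a tangent vector along an orbit and renormalizing at each step:

```python
import math

def lyapunov_standard_map(K, steps=20000, x0=0.1, p0=0.2):
    """Largest Lyapunov exponent of the Chirikov standard map
        p' = p + K sin(x),  x' = x + p'   (mod 2*pi)
    estimated by evolving a tangent vector and renormalizing."""
    x, p = x0, p0
    dx, dp = 1.0, 0.0
    total = 0.0
    for _ in range(steps):
        # Jacobian of the map at (x, p): dp' = dp + K cos(x) dx ; dx' = dx + dp'
        dpn = dp + K * math.cos(x) * dx
        dxn = dx + dpn
        p = (p + K * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
        norm = math.hypot(dxn, dpn)
        total += math.log(norm)
        dx, dp = dxn / norm, dpn / norm
    return total / steps

# Strongly chaotic regime: the exponent is positive, roughly ln(K/2) for large K.
lam = lyapunov_standard_map(K=5.0)
print(f"lambda ~ {lam:.2f} (ln(K/2) = {math.log(2.5):.2f})")
```

A positive exponent means neighboring field lines separate exponentially, which sets the correlation scale the study extracts for the transport model.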

  5. The proposed planning method as a parallel element to a real service system for dynamic sharing of service lines.

    PubMed

    Klampfer, Saša; Chowdhury, Amor

    2015-07-01

This paper presents a solution to the bottleneck problem in the dynamic sharing or leasing of service capacities. From this perspective, using the proposed method as a parallel element in service-capacity sharing is very important, because it enables minimization of the number of interfaces, and consequently of the number of leased lines, by combining two service systems with time-opposite peak loads. In this paper we present a new approach, methodology, models, and algorithms that solve the problems of dynamic leasing and sharing of service capacities.
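The core saving claimed for systems with time-opposite peak loads can be illustrated with a toy calculation (hypothetical hourly load profiles, not data from the paper; one unit of load per leased line):

```python
import math

def lines_needed(load_profile, capacity_per_line=1.0):
    """Leased lines needed to carry the peak of an hourly load profile."""
    return math.ceil(max(load_profile) / capacity_per_line)

# Hypothetical hourly loads for two services with time-opposite peaks:
# a daytime-heavy service and a nighttime-heavy service (arbitrary units).
day_service   = [1, 1, 1, 2, 5, 8, 9, 8, 6, 3, 2, 1]
night_service = [8, 9, 8, 6, 3, 1, 1, 1, 2, 5, 7, 8]

separate = lines_needed(day_service) + lines_needed(night_service)
combined = lines_needed([a + b for a, b in zip(day_service, night_service)])
print(separate, combined)  # -> 18 10
```

Because the peaks do not coincide, the combined peak is far below the sum of the individual peaks, so sharing the lines reduces the leased count.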

  6. Wave-particle interaction in parallel transport of long mean-free-path plasmas along open field magnetic field lines

    NASA Astrophysics Data System (ADS)

    Guo, Zehua; Tang, Xianzhu

    2012-03-01

A tokamak fusion reactor dumps a large amount of heat and particle flux onto the divertor through the scrape-off layer (SOL) plasma. Situations exist, whether by necessity or through deliberate design, in which the SOL plasma attains a long mean free path along large segments of the open field lines. The rapid parallel streaming of electrons requires a large parallel electric field to maintain ambipolarity. The confining effect of the parallel electric field on electrons leads to a trapped/passing boundary in velocity space for electrons. In the normal situation, where the upstream electron source populates both the trapped and passing regions, a mechanism must exist to produce a flux across the electron trapped/passing boundary. In a short mean-free-path plasma, this is provided by collisions. For long mean-free-path plasmas, wave-particle interaction is the primary candidate for detrapping the electrons. Here we present simulation results and a theoretical analysis using a model distribution function of trapped electrons. The dominant electromagnetic plasma instability, and the associated collisionless scattering that produces both particle and energy fluxes across the electron trapped/passing boundary in velocity space, are discussed.
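The electrostatic part of the trapped/passing boundary described above is simply the parallel speed at which an electron's kinetic energy equals the confining potential energy. A minimal sketch (the 100 V potential drop is hypothetical, and magnetic-mirror trapping is ignored):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def trap_passing_boundary(delta_phi):
    """Critical parallel speed below which an electron is electrostatically
    confined by a parallel potential drop delta_phi (volts):
    0.5 * m * v^2 = e * delta_phi."""
    return math.sqrt(2 * E_CHARGE * delta_phi / M_E)

def is_trapped(v_parallel, delta_phi):
    """Classify an electron by its parallel speed alone (mirror force ignored)."""
    return abs(v_parallel) < trap_passing_boundary(delta_phi)

# Hypothetical 100 V potential drop along the open field line.
vc = trap_passing_boundary(100.0)
print(f"boundary speed: {vc:.3e} m/s")
print(is_trapped(3.0e6, 100.0), is_trapped(9.0e6, 100.0))  # -> True False
```

Electrons below this boundary cannot escape without collisions or wave-particle scattering, which is exactly the detrapping flux the paper analyzes.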

  7. High-voltage isolation transformer for sub-nanosecond rise time pulses constructed with annular parallel-strip transmission lines.

    PubMed

    Homma, Akira

    2011-07-01

    A novel annular parallel-strip transmission line was devised to construct high-voltage high-speed pulse isolation transformers. The transmission lines can easily realize stable high-voltage operation and good impedance matching between primary and secondary circuits. The time constant for the step response of the transformer was calculated by introducing a simple low-frequency equivalent circuit model. Results show that the relation between the time constant and low-cut-off frequency of the transformer conforms to the theory of the general first-order linear time-invariant system. Results also show that the test transformer composed of the new transmission lines can transmit about 600 ps rise time pulses across the dc potential difference of more than 150 kV with insertion loss of -2.5 dB. The measured effective time constant of 12 ns agreed exactly with the theoretically predicted value. For practical applications involving the delivery of synchronized trigger signals to a dc high-voltage electron gun station, the transformer described in this paper exhibited advantages over methods using fiber optic cables for the signal transfer system. This transformer has no jitter or breakdown problems that invariably occur in active circuit components.

  8. Parallel Configuration For Fast Superconducting Strip Line Detectors With Very Large Area In Time Of Flight Mass Spectrometry

    SciTech Connect

    Casaburi, A.; Zen, N.; Suzuki, K.; Ohkubo, M.; Ejrnaes, M.; Cristiano, R.; Pagano, S.

    2009-12-16

We realized a very fast, large-area superconducting strip-line detector based on a parallel configuration of nanowires. The detector, with a size of 200 × 200 μm², recorded a sub-nanosecond pulse width of 700 ps FWHM (400 ps rise time and 530 ps relaxation time) for lysozyme monomer/multimer molecules accelerated at 175 keV in a time-of-flight mass spectrometer. This response is the fastest in the class of superconducting detectors and comparable to the fastest NbN superconducting single-photon detector of 10 × 10 μm². We succeeded in acquiring mass spectra as the first step toward a scale-up to ~mm pixel size for high-throughput MS analysis, while keeping a fast response.

  9. A germ cell determinant reveals parallel pathways for germ line development in Caenorhabditis elegans.

    PubMed

    Mainpal, Rana; Nance, Jeremy; Yanowitz, Judith L

    2015-10-15

    Despite the central importance of germ cells for transmission of genetic material, our understanding of the molecular programs that control primordial germ cell (PGC) specification and differentiation are limited. Here, we present findings that X chromosome NonDisjunction factor-1 (XND-1), known for its role in regulating meiotic crossover formation, is an early determinant of germ cell fates in Caenorhabditis elegans. xnd-1 mutant embryos display a novel 'one PGC' phenotype as a result of G2 cell cycle arrest of the P4 blastomere. Larvae and adults display smaller germ lines and reduced brood size consistent with a role for XND-1 in germ cell proliferation. Maternal XND-1 proteins are found in the P4 lineage and are exclusively localized to the nucleus in PGCs, Z2 and Z3. Zygotic XND-1 turns on shortly thereafter, at the ∼300-cell stage, making XND-1 the earliest zygotically expressed gene in worm PGCs. Strikingly, a subset of xnd-1 mutants lack germ cells, a phenotype shared with nos-2, a member of the conserved Nanos family of germline determinants. We generated a nos-2 null allele and show that nos-2; xnd-1 double mutants display synthetic sterility. Further removal of nos-1 leads to almost complete sterility, with the vast majority of animals without germ cells. Sterility in xnd-1 mutants is correlated with an increase in transcriptional activation-associated histone modification and aberrant expression of somatic transgenes. Together, these data strongly suggest that xnd-1 defines a new branch for PGC development that functions redundantly with nos-2 and nos-1 to promote germline fates by maintaining transcriptional quiescence and regulating germ cell proliferation. PMID:26395476

  10. Bidirectional buck boost converter

    DOEpatents

    Esser, Albert Andreas Maria

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero.
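The patent text does not give the converter's transfer ratio; for orientation, the textbook ideal continuous-conduction gain of this four-switch topology (two half-bridges joined by an inductor) is Vout/Vin = D_buck/(1 − D_boost), a standard relation rather than a claim from the patent:

```python
def four_switch_buck_boost_gain(d_buck, d_boost):
    """Idealized continuous-conduction voltage gain of a four-switch
    (two half-bridge) buck-boost stage: the input-side leg chops the input
    with duty d_buck, and the output-side leg steps up by 1/(1 - d_boost)."""
    assert 0.0 <= d_buck <= 1.0 and 0.0 <= d_boost < 1.0
    return d_buck / (1.0 - d_boost)

# Step 300 V down to 150 V: buck leg alone (boost leg passive).
v_down = 300 * four_switch_buck_boost_gain(0.5, 0.0)
# Step 300 V up to 400 V: buck leg fully on, boost leg at d = 0.25.
v_up = 300 * four_switch_buck_boost_gain(1.0, 0.25)
print(f"{v_down:.1f} V, {v_up:.1f} V")
```

Because either leg can be modulated, the same structure steps voltage up or down in either direction, which is what makes power flow independent of the relative voltage levels.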

  11. Bidirectional buck boost converter

    DOEpatents

    Esser, A.A.M.

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero. 20 figs.

  12. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  13. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

Oza, Nikunj C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
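The key idea of online bagging in this line of work is that showing each incoming example to each base model k times, with k drawn from Poisson(1), approximates batch bagging's sampling with replacement in a single pass over the data. A self-contained sketch (the toy base learner is illustrative, not from the paper):

```python
import random

class OnlineCentroid:
    """Toy online base learner: running per-class mean of 1-D inputs,
    predicting the class with the nearest centroid."""
    def __init__(self):
        self.sums = {0: 0.0, 1: 0.0}
        self.counts = {0: 0, 1: 0}
    def update(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1
    def predict(self, x):
        best, best_d = 0, float("inf")
        for y in (0, 1):
            if self.counts[y]:
                d = abs(x - self.sums[y] / self.counts[y])
                if d < best_d:
                    best, best_d = y, d
        return best

class OnlineBagging:
    """Oza-style online bagging: each incoming example is shown to each
    base model k times, with k ~ Poisson(1)."""
    def __init__(self, n_models=15, seed=0):
        self.models = [OnlineCentroid() for _ in range(n_models)]
        self.rng = random.Random(seed)
    def _poisson1(self):
        # Knuth's multiplication method for Poisson(lambda = 1).
        k, product = 0, self.rng.random()
        while product > 0.36787944117144233:   # e ** -1
            k += 1
            product *= self.rng.random()
        return k
    def update(self, x, y):
        for m in self.models:
            for _ in range(self._poisson1()):
                m.update(x, y)
    def predict(self, x):
        votes = sum(m.predict(x) for m in self.models)
        return int(votes * 2 >= len(self.models))

bag = OnlineBagging()
stream = [(0.1, 0), (0.3, 0), (0.2, 0), (0.9, 1), (1.1, 1), (0.8, 1)] * 5
for x, y in stream:
    bag.update(x, y)
print(bag.predict(0.15), bag.predict(1.0))  # -> 0 1
```

Each example is seen once and never stored, which is the one-pass property the paper contrasts with batch bagging.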

  14. Octane boosting catalyst

    SciTech Connect

Miller, J.G.; Pellet, R.J.; Shamshoun, E.S.; Rabo, J.A.

    1989-02-07

    The invention provides petroleum cracking and octane boosting catalysts containing a composite of an intermediate pore NZMS in combination with another non-zeolitic molecular sieve having the same framework structure, and processes for cracking of petroleum for the purpose of enhancing the octane rating of the gasoline produced.

  15. Stability of arsenic peptides in plant extracts: off-line versus on-line parallel elemental and molecular mass spectrometric detection for liquid chromatographic separation.

    PubMed

    Bluemlein, Katharina; Raab, Andrea; Feldmann, Jörg

    2009-01-01

The instability of metal and metalloid complexes during analytical processes has always been a source of uncertainty in their speciation in plant extracts. Two different speciation protocols were compared for the analysis of arsenic phytochelatin (As(III)PC) complexes in fresh plant material. As the final separation/detection step, both methods used RP-HPLC simultaneously coupled to ICP-MS and ES-MS. However, one method was the often-used off-line approach with two-dimensional separation, i.e. a pre-cleaning step using size-exclusion chromatography with subsequent fraction collection and freeze-drying prior to analysis by RP-HPLC-ICP-MS and/or ES-MS. This approach indicated that less than 2% of the total arsenic was bound to peptides such as phytochelatins in the root extract of an arsenate-exposed Thunbergia alata, whereas the direct on-line method showed that 83% of arsenic was bound to peptides, mainly as As(III)PC(3) and (GS)As(III)PC(2). Key analytical factors that destabilise As(III)PCs were identified. The low pH of the mobile phase (0.1% formic acid) in RP-HPLC-ICP-MS/ES-MS stabilises the arsenic peptide complexes in the plant extract as well as the free peptide concentration, as shown by a kinetic disintegration study of the model compound As(III)(GS)(3) at pH 2.2 and 3.8. However, only short half-lives of a few hours were determined for the arsenic glutathione complex. Although As(III)PC(3) showed a ten times longer half-life (23 h) in a plant extract, the pre-cleaning step with subsequent fractionation in a mobile phase of pH 5.6 contributes to the destabilisation of the arsenic peptides in the off-line method. Furthermore, it was found that during a freeze-drying process more than 90% of an As(III)PC(3) complex and smaller free peptides such as PC(2) and PC(3) can be lost. Although the two-dimensional off-line method has been used successfully for other metal complexes, it is concluded here that the fractionation and

  17. Line mixing in parallel and perpendicular bands of CO2: A further test of the refined Robert-Bonamy formalism

    NASA Astrophysics Data System (ADS)

    Boulet, C.; Ma, Q.; Tipping, R. H.

    2015-09-01

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modelling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modelling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  18. Line Mixing in Parallel and Perpendicular Bands of CO2: A Further Test of the Refined Robert-Bonamy Formalism

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Qiancheng; Tipping, R. H.

    2015-01-01

Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modeling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modeling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  19. Gradient boosting machines, a tutorial

    PubMed Central

    Natekin, Alexey; Knoll, Alois

    2013-01-01

Gradient boosting machines are a family of powerful machine-learning techniques that have shown considerable success in a wide range of practical applications. They are highly customizable to the particular needs of the application, for example by being learned with respect to different loss functions. This article gives a tutorial introduction to the methodology of gradient boosting methods, with a strong focus on the machine-learning aspects of modeling. The theoretical material is complemented with descriptive examples and illustrations covering all stages of gradient-boosting model design. Considerations for handling model complexity are discussed. Three practical examples of gradient boosting applications are presented and comprehensively analyzed. PMID:24409142
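A minimal least-squares gradient-boosting sketch in the spirit of the tutorial (regression stumps as base learners; the data and parameters are illustrative): each stump is fitted to the current residuals, which are the negative gradient of squared loss, and added with a shrinkage factor.

```python
class Stump:
    """Depth-1 regression tree: split on a threshold, predict leaf means."""
    def fit(self, xs, ys):
        best = None
        for t in sorted(set(xs)):
            left = [y for x, y in zip(xs, ys) if x <= t]
            right = [y for x, y in zip(xs, ys) if x > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((y - lm) ** 2 for y in left)
                   + sum((y - rm) ** 2 for y in right))
            if best is None or err < best[0]:
                best = (err, t, lm, rm)
        _, self.t, self.lm, self.rm = best
        return self
    def predict(self, x):
        return self.lm if x <= self.t else self.rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """Least-squares gradient boosting: start from the mean, then repeatedly
    fit a stump to the residuals and add it with shrinkage lr."""
    base = sum(ys) / len(ys)
    stumps = []
    residuals = [y - base for y in ys]
    for _ in range(n_rounds):
        s = Stump().fit(xs, residuals)
        stumps.append(s)
        residuals = [r - lr * s.predict(x) for x, r in zip(xs, residuals)]
    def predict(x):
        return base + lr * sum(s.predict(x) for s in stumps)
    return predict

# Fit a noiseless step function; boosting drives training error toward zero.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
model = gradient_boost(xs, ys)
mse = sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"training MSE: {mse:.4f}")
```

Swapping the residual computation for the gradient of another loss (absolute error, logistic, etc.) gives the loss-function customizability the tutorial emphasizes.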

  20. Observation of hole injection boost via two parallel paths in Pentacene thin-film transistors by employing Pentacene: 4, 4″-tris(3-methylphenylphenylamino) triphenylamine: MoO₃ buffer layer

    SciTech Connect

    Yan, Pingrui; Liu, Ziyang; Liu, Dongyang; Wang, Xuehui; Yue, Shouzhen; Zhao, Yi; Zhang, Shiming

    2014-11-01

Pentacene organic thin-film transistors (OTFTs) were prepared by introducing 4, 4″-tris(3-methylphenylphenylamino) triphenylamine (m-MTDATA): MoO₃, Pentacene: MoO₃, and Pentacene: m-MTDATA: MoO₃ as buffer layers. These OTFTs all showed significant performance improvements compared to the reference device. Significantly, we observe that the device employing the Pentacene: m-MTDATA: MoO₃ buffer layer takes advantage both of the charge-transfer complexes formed in the m-MTDATA: MoO₃ device and of the suitable energy-level alignment present in the Pentacene: MoO₃ device. These two parallel paths led to a high mobility of 0.72 cm²/V s, a low threshold voltage of −13.4 V, and a contact resistance of 0.83 kΩ at Vds = −100 V. This work enriches the understanding of MoO₃-doped organic materials for applications in OTFTs.

  1. VERY STRONG EMISSION-LINE GALAXIES IN THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY AND IMPLICATIONS FOR HIGH-REDSHIFT GALAXIES

    SciTech Connect

    Atek, H.; Colbert, J.; Shim, H.; Siana, B.; Bridge, C.; Scarlata, C.; Malkan, M.; Ross, N. R.; McCarthy, P.; Dressler, A.; Hathi, N. P.; Teplitz, H.; Henry, A.; Martin, C.; Bunker, A. J.; Fosbury, R. A. E.

    2011-12-20

    The WFC3 Infrared Spectroscopic Parallel Survey uses the Hubble Space Telescope (HST) infrared grism capabilities to obtain slitless spectra of thousands of galaxies over a wide redshift range, including the peak of the star formation history of the universe. We select a population of very strong emission-line galaxies with rest-frame equivalent widths (EWs) higher than 200 Å. A total of 176 objects are found over the redshift range 0.35 < z < 2.3 in the 180 arcmin² area that we have analyzed so far. This population consists of young and low-mass starbursts with high specific star formation rates (sSFR). After spectroscopic follow-up of one of these galaxies with Keck/Low Resolution Imaging Spectrometer, we report the detection at z = 0.7 of an extremely metal-poor galaxy with 12 + log(O/H) = 7.47 ± 0.11. After estimating the active galactic nucleus fraction in the sample, we show that the high-EW galaxies have higher sSFR than normal star-forming galaxies at any redshift. We find that the nebular emission lines can substantially affect the total broadband flux density, with a median brightening of 0.3 mag and some examples of line contamination producing brightening of up to 1 mag. We show that the presence of strong emission lines in low-z galaxies can mimic the color-selection criteria used in the z ≈ 8 dropout surveys. In order to effectively remove low-redshift interlopers, deep optical imaging is needed, at least 1 mag deeper than the bands in which the objects are detected. Without deep optical data, most of the interlopers cannot be ruled out in the wide shallow HST imaging surveys. Finally, we empirically demonstrate that strong nebular lines can lead to an overestimation of the mass and the age of galaxies derived from fitting of their spectral energy distribution (SED). Without removing emission lines, the age and the stellar mass estimates are overestimated by a factor of 2 on average and up to a factor of 10 for the high-EW galaxies.
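The brightening effect described here follows from a standard relation: a line of observed equivalent width EW adds flux on top of the continuum integrated over a filter of effective width W, changing the magnitude by Δm = −2.5 log₁₀(1 + EW/W). The sketch below uses that textbook formula with illustrative numbers; it is not taken from the paper.

```python
import math

def line_brightening_mag(ew_obs, filter_width):
    """Magnitude change (negative = brighter) from an emission line of
    observed equivalent width ew_obs in a filter of width filter_width,
    both in the same wavelength units (e.g. Angstroms)."""
    return -2.5 * math.log10(1.0 + ew_obs / filter_width)

# e.g. a rest-frame 200 A line at z = 1 has observed EW = 400 A;
# its effect grows as the filter narrows relative to the line.
dm = line_brightening_mag(400.0, 2700.0)
```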

  2. Analytic boosted boson discrimination

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-01

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D 2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  3. Analytic boosted boson discrimination

    DOE PAGESBeta

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  4. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGESBeta

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a worldwide search was initiated for detection technologies that could provide a feasible short-term alternative to 3He gas. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  5. Probing the Sizes of Absorbers: Correlations in the z ~ 3.5 Lyman-alpha Forest Between Parallel Lines of Sight

    NASA Astrophysics Data System (ADS)

    Becker, G.; Sargent, W. L. W.; Rauch, M.

    2003-12-01

    Studies of the intergalactic medium along parallel lines of sight towards quasar pairs offer valuable information on the sizes of intervening absorbers as well as provide the basis for a test of the cosmological constant. We present a study of two high-redshift pairs with moderate separation observed with Keck ESI, Q1422+2309A/Q1424+2255 (z = 3.63, θ = 39'') and Q1439-0034A/B (z = 4.25, θ = 33''). The cross-correlation of transmitted flux in the Lyα forest shows a strong peak at zero velocity lag in both pairs, suggesting that the Lyα absorbers are coherent over scales > 230-300 proper kpc. Two strong C IV systems at z = 3.4, closely separated along the line of sight, appear in Q1439B but not in Q1439A, consistent with the picture of outflowing material from an intervening galaxy. In contrast, a Mg II system at z = 1.68 does appear in both Q1439A and B. This suggests either a single absorber of size > 280 kpc or two separate, clustered absorbers. We additionally examine the impact of spectral characteristics on applying the Alcock-Paczynski test to quasar pairs, finding a strong dependence on resolution.

  6. Lines

    ERIC Educational Resources Information Center

    Mires, Peter B.

    2006-01-01

    National Geography Standards for the middle school years generally stress the teaching of latitude and longitude. There are many creative ways to explain the great grid that encircles our planet, but the author has found that students in his college-level geography courses especially enjoy human-interest stories associated with lines of latitude…

  7. An evaluation of relation between the relative parallelism of occlusal plane to ala-tragal line and variation in the angulation of Po-Na-ANS angle in dentulous subjects: A cephalometric study

    PubMed Central

    Shetty, Sanath; Shenoy, K. Kamalakanth; Ninan, Justin; Mahaseth, Pranay

    2015-01-01

    Aims: The aim was to evaluate whether any correlation exists between variation in the angulation of the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the different tragal levels of the ear in dentulous subjects. Methodology: A total of 200 subjects were selected for the study. A custom-made occlusal plane analyzer was used to determine the posterior point of the ala-tragal line. A lateral cephalogram was shot for each of the subjects. The points Porion, Nasion, and Anterior Nasal Spine were located, and the angle formed between these points was measured. Statistical Analysis Used: Fisher's exact test was used to find the correlation between the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. Results: Statistical analysis showed no significant correlation between the Po-Na-ANS angle and the relative parallelism of the occlusal plane at different tragal levels; an inferior point on the tragus was the most common. Conclusion: Irrespective of variations in the Po-Na-ANS angle, no correlation exists between the variation in the angulations of the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. Furthermore, in a large number of subjects (54%), the occlusal plane was found parallel to a line joining the inferior border of the ala of the nose and the inferior part of the tragus. PMID:26929506

  8. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
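The vector-quantization half of such a scheme maps each block of data to the nearest entry of a small codebook, and only the index is stored. A minimal sketch (toy codebook and blocks, not the MPP implementation):

```python
import numpy as np

def quantize_blocks(blocks, codebook):
    """Return the codebook index of the nearest centroid for each block
    (the lossy step: the decoder replaces each index by its centroid)."""
    # squared distances, shape (n_blocks, n_codes)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
blocks = np.array([[0.1, 0.2], [0.9, 0.8]])
idx = quantize_blocks(blocks, codebook)   # → array([0, 1])
```

On a massively parallel machine the distance computation for different blocks is independent, which is what makes this step a natural fit for per-processor parallelism.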

  9. Long-term effectiveness of initiating non-nucleoside reverse transcriptase inhibitor- versus ritonavir-boosted protease inhibitor-based antiretroviral therapy: implications for first-line therapy choice in resource-limited settings

    PubMed Central

    Lima, Viviane D; Hull, Mark; McVea, David; Chau, William; Harrigan, P Richard; Montaner, Julio SG

    2016-01-01

    Introduction In many resource-limited settings, combination antiretroviral therapy (cART) failure is diagnosed clinically or immunologically. As such, there is a high likelihood that patients may stay on a virologically failing regimen for a substantial period of time. Here, we compared the long-term impact of initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)- versus boosted protease inhibitor (bPI)-based cART in British Columbia (BC), Canada. Methods We followed prospectively 3925 ART-naïve patients who started NNRTIs (N=1963, 50%) or bPIs (N=1962; 50%) from 1 January 2000 until 30 June 2013 in BC. At six months, we assessed whether patients virologically failed therapy (a plasma viral load (pVL) >50 copies/mL), and we stratified them based on the pVL at the time of failure ≤500 versus >500 copies/mL. We then followed these patients for another six months and calculated their probability of achieving subsequent viral suppression (pVL <50 copies/mL twice consecutively) and of developing drug resistance. These probabilities were adjusted for fixed and time-varying factors, including cART adherence. Results At six months, virologic failure rates were 9.5 and 14.3 cases per 100 person-months for NNRTI and bPI initiators, respectively. NNRTI initiators who failed with a pVL ≤500 copies/mL had a 16% higher probability of achieving subsequent suppression at 12 months than bPI initiators (0.81 (25th–75th percentile 0.75–0.83) vs. 0.72 (0.61–0.75)). However, if failing NNRTI initiators had a pVL >500 copies/mL, they had a 20% lower probability of suppressing at 12 months than pVL-matched bPI initiators (0.37 (0.29–0.45) vs. 0.46 (0.38–0.54)). In terms of evolving HIV drug resistance, those who failed on NNRTI performed worse than bPI in all scenarios, especially if they failed with a viral load >500 copies/mL. Conclusions Our results show that patients who virologically failed at six months on NNRTI and continued on the same regimen had a

  10. On-line automated sample preparation for liquid chromatography using parallel supported liquid membrane extraction and microporous membrane liquid-liquid extraction.

    PubMed

    Sandahl, Margareta; Mathiasson, Lennart; Jönsson, Jan Ake

    2002-10-25

    An automated system was developed for the analysis of non-polar and polar ionisable compounds at trace levels in natural water. Sample work-up was performed in a flow system using two parallel membrane extraction units. This system was connected on-line to a reversed-phase HPLC system for final determination. One of the membrane units was used for supported liquid membrane (SLM) extraction, which is suitable for ionisable or permanently charged compounds. The other unit was used for microporous membrane liquid-liquid extraction (MMLLE), suitable for uncharged compounds. The fungicide thiophanate methyl and its polar metabolites carbendazim and 2-aminobenzimidazole were used as model compounds. The whole system was controlled by means of four syringe pumps. While one part of the sample was extracted using the SLM technique, the extract from the MMLLE extraction was analysed, and vice versa. This gave a total analysis time of 63 min for each sample, resulting in a sample throughput of 22 samples per 24 h.

  11. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  12. Early Boost and Slow Consolidation in Motor Skill Learning

    ERIC Educational Resources Information Center

    Hotermans, Christophe; Peigneux, Philippe; de Noordhout, Alain Maertens; Moonen, Gustave; Maquet, Pierre

    2006-01-01

    Motor skill learning is a dynamic process that continues covertly after training has ended and eventually leads to delayed increments in performance. Current theories suggest that this off-line improvement takes time and appears only after several hours. Here we show an early transient and short-lived boost in performance, emerging as early as…

  13. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  14. Early boost and slow consolidation in motor skill learning.

    PubMed

    Hotermans, Christophe; Peigneux, Philippe; Maertens de Noordhout, Alain; Moonen, Gustave; Maquet, Pierre

    2006-01-01

    Motor skill learning is a dynamic process that continues covertly after training has ended and eventually leads to delayed increments in performance. Current theories suggest that this off-line improvement takes time and appears only after several hours. Here we show an early transient and short-lived boost in performance, emerging as early as 5-30 min after training but no longer observed 4 h later. This early boost is predictive of the performance achieved 48 h later, suggesting its functional relevance for memory processes.

  15. Representing Arbitrary Boosts for Undergraduates.

    ERIC Educational Resources Information Center

    Frahm, Charles P.

    1979-01-01

    Presented is a derivation for the matrix representation of an arbitrary boost, a Lorentz transformation without rotation, suitable for undergraduate students with modest backgrounds in mathematics and relativity. The derivation uses standard vector and matrix techniques along with the well-known form for a special Lorentz transformation. (BT)
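For reference, the standard result such a derivation arrives at can be stated compactly. The following is the conventional textbook form of a pure boost with velocity parameter β⃗ = (βₓ, β_y, β_z) acting on (ct, x, y, z), shown here as context rather than quoted from the article:

```latex
\Lambda(\vec{\beta}) =
\begin{pmatrix}
\gamma & -\gamma\beta_x & -\gamma\beta_y & -\gamma\beta_z \\
-\gamma\beta_x & 1+(\gamma-1)\dfrac{\beta_x^{2}}{\beta^{2}} & (\gamma-1)\dfrac{\beta_x\beta_y}{\beta^{2}} & (\gamma-1)\dfrac{\beta_x\beta_z}{\beta^{2}} \\
-\gamma\beta_y & (\gamma-1)\dfrac{\beta_y\beta_x}{\beta^{2}} & 1+(\gamma-1)\dfrac{\beta_y^{2}}{\beta^{2}} & (\gamma-1)\dfrac{\beta_y\beta_z}{\beta^{2}} \\
-\gamma\beta_z & (\gamma-1)\dfrac{\beta_z\beta_x}{\beta^{2}} & (\gamma-1)\dfrac{\beta_z\beta_y}{\beta^{2}} & 1+(\gamma-1)\dfrac{\beta_z^{2}}{\beta^{2}}
\end{pmatrix},
\qquad \gamma=\frac{1}{\sqrt{1-\beta^{2}}}
```

Setting β⃗ = (β, 0, 0) recovers the well-known special Lorentz transformation along the x axis that the derivation starts from.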

  16. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  17. Interferometric resolution boosting for spectrographs

    SciTech Connect

    Erskine, D J; Edelstein, J

    2004-05-25

    Externally dispersed interferometry (EDI) is a technique for enhancing the performance of spectrographs for wide-bandwidth high-resolution spectroscopy and Doppler radial velocimetry. By placing a small angle-independent interferometer near the slit of a spectrograph, periodic fiducials are embedded on the recorded spectrum. Multiplying the stellar spectrum by the sinusoidal fiducial creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to detectably low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry. Previous demonstrations of ~2.5× resolution boost used an interferometer having a single fixed delay. We report new data indicating ~6× Gaussian resolution boost (140,000 from a spectrograph with 25,000 native resolving power), taken by using multiple exposures at widely different interferometer delays.
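The heterodyning idea can be illustrated numerically: multiplying a high-frequency feature by a sinusoidal fiducial produces a beat (moiré) component at the difference frequency, which is what survives a low-resolution instrument. The numbers below are toy values, not the paper's.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4000, endpoint=False)
spectrum = np.cos(2 * np.pi * 210 * x)       # fine spectral detail, 210 cycles
fiducial = 1 + np.cos(2 * np.pi * 200 * x)   # interferometer comb, 200 cycles
product = spectrum * fiducial

# product = cos(210) + 0.5*cos(10) + 0.5*cos(410): the 210-cycle detail is
# heterodyned down to a 10-cycle moire that a blurry spectrograph can record.
F = np.abs(np.fft.rfft(product))
```

Inspecting `F` shows strong peaks at bins 10, 210, and 410; only the low-frequency bin-10 component would survive heavy instrumental blurring.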

  18. Boosting Shift-Invariant Features

    NASA Astrophysics Data System (ADS)

    Hörnlein, Thomas; Jähne, Bernd

    This work presents a novel method for training shift-invariant features using a Boosting framework. Features performing local convolutions followed by subsampling are used to achieve shift-invariance. Other systems using this type of features, e.g. Convolutional Neural Networks, use complex feed-forward networks with multiple layers. In contrast, the proposed system adds features one at a time using smoothing-spline base classifiers. Feature training optimizes base classifier costs, and Boosting sample-reweighting ensures that features are both descriptive and independent. Our system has fewer design parameters than comparable systems, so adapting it to new problems is simple. Also, the stage-wise training makes it very scalable. Experimental results show the competitiveness of our approach.

  19. Online boosting for vehicle detection.

    PubMed

    Chang, Wen-Chung; Cho, Chih-Wei

    2010-06-01

    This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.
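The sample-reweighting at the heart of any AdaBoost cascade can be sketched in its batch form (the paper's online variant updates these quantities incrementally per frame; this is the textbook rule, not the authors' code):

```python
import math

def adaboost_round(weights, correct):
    """One boosting round: weighted error, classifier weight alpha,
    and renormalized sample weights (misclassified samples gain weight)."""
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1 - err) / err)
    new_w = [w * math.exp(-alpha if c else alpha)
             for w, c in zip(weights, correct)]
    s = sum(new_w)
    return alpha, [w / s for w in new_w]

# toy usage: 4 equally weighted samples, the last one misclassified
alpha, w = adaboost_round([0.25] * 4, [True, True, True, False])
```

After one round the single misclassified sample carries half of the total weight, which is what forces the next weak classifier to focus on it.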

  20. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderer use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  1. Classification of airborne laser scanning data using JointBoost

    NASA Astrophysics Data System (ADS)

    Guo, Bo; Huang, Xianfeng; Zhang, Fan; Sohn, Gunho

    2015-02-01

    The demands for automatic point cloud classification have dramatically increased with the widespread use of airborne LiDAR. Existing research has mainly concentrated on a few dominant objects such as terrain, buildings and vegetation. In addition to those key objects, this paper proposes a supervised classification method to identify other types of objects, including power-lines and pylons, from point clouds using a JointBoost classifier. The parameters for the learning model are estimated with various features computed based on the geometry and echo information of a LiDAR point cloud. In order to overcome the shortcomings stemming from the inclusion of bare ground data before classification, the proposed classifier directly distinguishes terrain using a step-off count feature. Feature selection is conducted using JointBoost to evaluate feature correlations, thus improving both classification accuracy and operational efficiency. In this paper, the contextual constraints for objects extracted by graph-cut segmentation are used to optimize the initial classification results obtained by the JointBoost classifier. Our experimental results show that the step-off count contributes significantly to classification. Seventeen effective features are selected for the initial classification using the JointBoost classifier. Our experiments indicate that the proposed features and method are effective for the classification of airborne LiDAR data from complex scenarios.

  2. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which we demonstrate for typical multi-variate scientific data.
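The point-line duality the density model builds on is easy to state: a 2-D data point (a, b) becomes the line segment joining (0, a) and (1, b) between two parallel axes, and all points on a scatterplot line b = m·a + c map to segments through a single dual point. A minimal sketch of both directions (illustrative, not the paper's algorithms):

```python
def pc_segment_y(a, b, t):
    """y-coordinate of the parallel-coordinates segment of point (a, b)
    at fraction t in [0, 1] between the two axes."""
    return (1 - t) * a + t * b

def dual_point(m, c):
    """Dual point in parallel coordinates of the scatterplot line
    b = m*a + c (defined for m != 1): every segment passes through it."""
    t = 1.0 / (1.0 - m)
    return t, c * t

# all points with b = -a + 2 cross at the dual point (0.5, 1.0)
```

The duality follows by substitution: for b = m·a + c, the segment value (1−t)·a + t·(m·a + c) loses its dependence on a exactly at t = 1/(1−m).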

  3. Electric rockets get a boost

    SciTech Connect

    Ashley, S.

    1995-12-01

    This article reports that xenon-ion thrusters are expected to replace conventional chemical rockets in many nonlaunch propulsion tasks, such as controlling satellite orbits and sending space probes on long exploratory missions. The space age dawned some four decades ago with the arrival of powerful chemical rockets that could propel vehicles fast enough to escape the grasp of earth's gravity. Today, chemical rocket engines still provide the only means to boost payloads into orbit and beyond. The less glamorous but equally important job of moving vessels around in space, however, may soon be assumed by a fundamentally different rocket engine technology that has been long in development: electric propulsion.

  4. Advanced parallel processing with supercomputer architectures

    SciTech Connect

    Hwang, K.

    1987-10-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers.

  5. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L₂ Boosting algorithm and provides a new statistical interpretation for L₂ Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
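The iteration itself is short: each pass smooths the current residual and adds the result back, so the fit after k passes is governed by powers of I − S. A sketch for a linear smoother given as a matrix (illustrative; convergence requires the eigenvalues of I − S to lie inside the unit circle, which connects to the spectrum condition in the abstract):

```python
import numpy as np

def l2_boost(S, y, n_iter):
    """Iterative bias correction with linear smoother matrix S:
    fit <- fit + S @ (y - fit), starting from fit = S @ y."""
    fit = S @ y
    for _ in range(n_iter):
        fit = fit + S @ (y - fit)   # smooth the residual, correct the bias
    return fit

# toy usage: S = 0.5*I shrinks toward zero; boosting undoes the shrinkage
y = np.array([1.0, 2.0, 3.0])
fit = l2_boost(0.5 * np.eye(3), y, 20)
```

For S = 0.5·I the fit after k passes equals (1 − 0.5^(k+1))·y, so it converges geometrically to the data; a smoother with eigenvalues of I − S outside (−1, 1) would instead diverge, matching the divergent examples the paper discusses.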

  6. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.

  7. Estimate of avoidance maneuver rate for HASTOL tether boost facility

    NASA Astrophysics Data System (ADS)

    Forward, Robert L.

    2002-01-01

    The Hypersonic Airplane Space Tether Orbital Launch (HASTOL) Architecture uses a hypersonic airplane (or reusable launch vehicle) to carry a payload from the surface of the Earth to 150 km altitude and a speed of Mach 17. The hypersonic airplane makes a rendezvous with the grapple at the tip of a long, rotating, orbiting space tether boost facility, which picks up the payload from the airplane. Release of the payload at the proper point in the tether rotation boosts the payload into a higher orbit, typically into a Geosynchronous Transfer Orbit (GTO), with lower orbits and Earth escape as other options. The HASTOL Tether Boost Facility will have a length of 636 km. Its center of mass will be in a 604 km by 890 km equatorial orbit. It is estimated that by the time of the start of operations of the HASTOL Tether Boost facility in the year 2020, there will be 500 operational spacecraft using the same volume of space as the HASTOL facility. These operational spacecraft would likely be made inoperative by an impact with one of the lines in the multiline HASTOL Hoytether™ and should be avoided. There will also be non-operational spacecraft and large pieces of orbital debris with effective size greater than five meters in diameter that could cut a number of lines in the HASTOL Hoytether™, and should also be avoided. It is estimated, using two different methods and combining them, that the HASTOL facility will need to make avoidance maneuvers about once every four days if the 500 operational spacecraft and large pieces of orbital debris greater than 5 m in diameter were each protected by a 2 km diameter miss distance protection sphere. If, by 2020, the ability to know the positions of operational spacecraft and large pieces of orbital debris improved to allow a 600 m diameter miss distance protection sphere around each object, then the number of HASTOL facility maneuvers needed drops to one every two weeks.

  8. Series Connected Buck-Boost Regulator

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G. (Inventor)

    2006-01-01

    A Series Connected Buck-Boost Regulator (SCBBR) is disclosed that switches only a fraction of the input power, resulting in relatively high efficiencies. The SCBBR has multiple operating modes, including buck, boost, and current-limiting modes, so that the output voltage of the SCBBR ranges from below the source voltage to above the source voltage.

  9. GPU-based parallel clustered differential pulse code modulation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Li, Wenze; Kong, Wanqiu

    2015-10-01

    Hyperspectral remote sensing technology is widely used in marine remote sensing, geological exploration, and atmospheric and environmental remote sensing. Owing to its rapid development, the resolution of hyperspectral images has increased dramatically, and so has their data size. To reduce storage and transmission costs, lossless compression of hyperspectral images has become an important research topic. In recent years, many algorithms have been proposed to reduce the redundancy between different spectra. Among them, the most classical and extensible is the Clustered Differential Pulse Code Modulation (C-DPCM) algorithm. The algorithm has three stages: first, cluster all spectral lines and train linear predictors for each band; second, use these predictors to predict pixels and obtain the residual image by subtracting the predicted image from the original; finally, encode the residual image. However, calculating the predictors is time-consuming. To improve the processing speed, we propose a parallel C-DPCM based on CUDA (Compute Unified Device Architecture) running on a GPU. General-purpose computing on GPUs has developed greatly in recent years: GPU capability improves rapidly as the number of processing units and storage control units increases, and CUDA, NVIDIA's parallel computing platform and programming model, gives developers direct access to the virtual instruction set and memory of the parallel computational elements in GPUs. Our core idea is to compute the predictors in parallel. By adopting global memory, shared memory, and register memory in turn, we obtain a substantial speedup.
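The predictor/residual stage of C-DPCM described above can be sketched in plain NumPy; the paper parallelizes exactly this per-band computation with CUDA kernels. The toy cube, the single-cluster simplification, and the first-order (previous-band) predictor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy hyperspectral cube: (bands, rows, cols).
rng = np.random.default_rng(0)
cube = rng.integers(0, 256, size=(4, 8, 8)).astype(np.float64)

# Stage 1 (clustering of spectral lines) is elided: assume one cluster.
# Stage 2: per band, fit a least-squares linear predictor from the
# previous band, predict, and form the residual image.
residuals = np.empty_like(cube)
residuals[0] = cube[0]  # first band has no predecessor; stored as-is
for b in range(1, cube.shape[0]):
    x = cube[b - 1].ravel()
    y = cube[b].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)  # slope + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals[b] = cube[b] - (A @ coef).reshape(cube[b].shape)
# Stage 3 would entropy-code `residuals`; the decoder reverses the
# subtraction with the same predictors, making the scheme lossless.
```

Each band's least-squares fit is independent of the others, which is why the predictor calculation maps naturally onto parallel GPU threads.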

  10. Bagging, boosting, and C4.5

    SciTech Connect

    Quinlan, J.R.

    1996-12-31

    Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting, bagging by generating replicated bootstrap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater benefit. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classifiers reduces this downside and also leads to slightly better results on most of the datasets considered.
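The distinction the paper draws — bagging votes over classifiers trained on bootstrap replicates, while boosting reweights training instances toward earlier mistakes — can be illustrated with decision stumps on a hypothetical 1-D dataset. This is a generic NumPy sketch, not Quinlan's C4.5 experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D dataset: label is +1 to the right of 0.5, -1 otherwise.
X = rng.random(200)
y = np.where(X > 0.5, 1, -1)

def stump_fit(X, y, w):
    """Best threshold stump (threshold, sign) under sample weights w."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(X):
        for sign in (1, -1):
            pred = np.where(X > t, sign, -sign)
            err = np.sum(w * (pred != y))
            if err < best[0]:
                best = (err, t, sign)
    return best[1], best[2]

def stump_predict(X, t, sign):
    return np.where(X > t, sign, -sign)

# Bagging: vote over stumps trained on bootstrap replicates.
bag_preds = []
for _ in range(11):
    idx = rng.integers(0, len(X), len(X))
    t, s = stump_fit(X[idx], y[idx], np.ones(len(X)) / len(X))
    bag_preds.append(stump_predict(X, t, s))
bag_vote = np.sign(np.sum(bag_preds, axis=0))

# Boosting (AdaBoost-style): reweight instances toward past mistakes
# and combine stumps by a weighted vote.
w = np.ones(len(X)) / len(X)
boost_score = np.zeros(len(X))
for _ in range(11):
    t, s = stump_fit(X, y, w)
    pred = stump_predict(X, t, s)
    err = max(np.sum(w * (pred != y)), 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)
    boost_score += alpha * pred
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
boost_vote = np.sign(boost_score)
```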

  11. Boosting human learning by hypnosis.

    PubMed

    Nemeth, Dezso; Janacsek, Karolina; Polner, Bertalan; Kovacs, Zoltan Ambrus

    2013-04-01

    Human learning and memory depend on multiple cognitive systems related to dissociable brain structures. These systems interact not only in cooperative but also sometimes competitive ways in optimizing performance. Previous studies showed that manipulations reducing the engagement of frontal lobe-mediated explicit attentional processes could lead to improved performance in striatum-related procedural learning. In our study, hypnosis was used as a tool to reduce the competition between these 2 systems. We compared learning in hypnosis and in the alert state and found that hypnosis boosted striatum-dependent sequence learning. Since frontal lobe-dependent processes are primarily affected by hypnosis, this finding could be attributed to the disruption of the explicit attentional processes. Our result sheds light not only on the competitive nature of brain systems in cognitive processes but also could have important implications for training and rehabilitation programs, especially for developing new methods to improve human learning and memory performance.

  12. Advanced Airfoils Boost Helicopter Performance

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Carson Helicopters Inc. licensed the Langley RC4 series of airfoils in 1993 to develop a replacement main rotor blade for its Sikorsky S-61 helicopters. The company's fleet of S-61 helicopters has been rebuilt to include Langley's patented airfoil design; the helicopters can now carry heavier loads and fly faster and farther, and the main rotor blades have twice the previous service life. In aerial firefighting, the performance-boosting airfoils have helped the U.S. Department of Agriculture's Forest Service control the spread of wildfires. In 2003, Carson Helicopters signed a contract with Ducommun AeroStructures Inc. to manufacture the composite blades for Carson Helicopters to sell.

  13. Boost-phase discrimination research

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.; Feiereisen, William J.

    1993-01-01

    The final report describes the combined work of the Computational Chemistry and Aerothermodynamics branches within the Thermosciences Division at NASA Ames Research Center directed at understanding the signatures of shock-heated air. Considerable progress was made in determining accurate transition probabilities for the important band systems of NO that account for much of the emission in the ultraviolet region. Research carried out under this project showed that in order to reproduce the observed radiation from the bow shock region of missiles in their boost phase it is necessary to include the Burnett terms in the constitutive equation, account for the non-Boltzmann energy distribution, correctly model the NO formation and rotational excitation process, and use accurate transition probabilities for the NO band systems. This work resulted in significant improvements in the computer code NEQAIR, which models both the radiation and fluid dynamics in the shock region.

  14. Riemann curvature of a boosted spacetime geometry

    NASA Astrophysics Data System (ADS)

    Battista, Emmanuele; Esposito, Giampiero; Scudellaro, Paolo; Tramontano, Francesco

    2016-10-01

    The ultrarelativistic boosting procedure has been applied in the literature to map the metric of Schwarzschild-de Sitter spacetime into a metric describing de Sitter spacetime plus a shock-wave singularity located on a null hypersurface. This paper evaluates the Riemann curvature tensor of the boosted Schwarzschild-de Sitter metric by means of numerical calculations, which make it possible to reach the ultrarelativistic regime gradually by letting the boost velocity approach the speed of light. Thus, for the first time in the literature, the singular limit of curvature, through Dirac’s δ distribution and its derivatives, is numerically evaluated for this class of spacetimes. Moreover, the analysis of the Kretschmann invariant and the geodesic equation shows that the spacetime possesses a “scalar curvature singularity” within a 3-sphere, and it is possible to define what we here call a “boosted horizon”, a sort of elastic wall from which all particles are surprisingly pushed away, as the numerical analysis demonstrates. This seems to suggest that such “boosted geometries” are ruled by a sort of “antigravity effect”, since all geodesics refuse to enter the “boosted horizon” and are “reflected” by it, even though their initial conditions are aimed at driving the particles toward the “boosted horizon” itself. Finally, the equivalence with the coordinate shift method is invoked in order to demonstrate that all δ² terms appearing in the Riemann curvature tensor give a vanishing contribution in the distributional sense.

  15. Boosting domain wall propagation by notches

    NASA Astrophysics Data System (ADS)

    Yuan, H. Y.; Wang, X. R.

    2015-08-01

    We report a counterintuitive finding that notches in an otherwise homogeneous magnetic nanowire can boost current-induced domain wall (DW) propagation. DW motion in notch-modulated wires can be classified into three phases: (1) A DW is pinned around a notch when the current density is below the depinning current density. (2) DW propagation velocity is boosted by notches above the depinning current density and when nonadiabatic spin-transfer torque strength β is smaller than the Gilbert damping constant α . The boost can be multifold. (3) DW propagation velocity is hindered when β >α . The results are explained by using the Thiele equation.

  16. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    SciTech Connect

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design provides ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  17. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence

    PubMed Central

    Vadakkan, Kunjumon I.

    2011-01-01

    The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation – namely, the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation – features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of the inter-postsynaptic functional LINKs to meet the expectations of Minsky’s K-lines, the basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system. PMID:21845180

  18. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  19. Skip Dinner and Maybe Boost Your Metabolism

    MedlinePlus

    ... 161845.html Skip Dinner and Maybe Boost Your Metabolism But, study didn't show overall changes in ... has an internal clock, and many aspects of metabolism are working best in the morning, according to ...

  20. Old Drug Boosts Brain's Memory Centers

    MedlinePlus

    ... gov/news/fullstory_159605.html Old Drug Boosts Brain's Memory Centers But more research needed before recommending ... called methylene blue may rev up activity in brain regions involved in short-term memory and attention, ...

  1. Avoiding Anemia: Boost Your Red Blood Cells

    MedlinePlus

    ... link, please review our exit disclaimer . Subscribe Avoiding Anemia Boost Your Red Blood Cells If you’re ... and sluggish, you might have a condition called anemia. Anemia is a common blood disorder that many ...

  2. Tools to Boost Steam System Efficiency

    SciTech Connect

    2005-05-01

    The Steam System Scoping Tool quickly evaluates your entire steam system operation and spots the areas that are the best opportunities for improvement. The tool suggests a range of ways to save steam energy and boost productivity.

  3. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-01-01

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  4. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-12-31

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  5. Anemia Boosts Stroke Death Risk, Study Finds

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_160476.html Anemia Boosts Stroke Death Risk, Study Finds Blood condition ... 2016 (HealthDay News) -- Older stroke victims suffering from anemia -- a lack of red blood cells -- may have ...

  6. Centaur liquid oxygen boost pump vibration test

    NASA Technical Reports Server (NTRS)

    Tang, H. M.

    1975-01-01

    The Centaur LOX boost pump was subjected to both the simulated Titan Centaur proof flight and confidence demonstration vibration test levels. For each test level, both sinusoidal and random vibration tests were conducted along each of the three orthogonal axes of the pump and turbine assembly. In addition to these tests, low frequency longitudinal vibration tests for both levels were conducted. All tests were successfully completed without damage to the boost pump.

  7. WIMPonium and boost factors for indirect dark matter detection

    NASA Astrophysics Data System (ADS)

    March-Russell, John; West, Stephen M.

    2009-06-01

    We argue that WIMP dark matter can annihilate via long-lived “WIMPonium” bound states in reasonable particle physics models of dark matter (DM). WIMPonium bound states can occur at or near threshold leading to substantial enhancements in the DM annihilation rate, closely related to the Sommerfeld effect. Large “boost factor” amplifications in the annihilation rate can thus occur without large density enhancements, possibly preferring colder less dense objects such as dwarf galaxies as locations for indirect DM searches. The radiative capture to and transitions among the WIMPonium states generically lead to a rich energy spectrum of annihilation products, with many distinct lines possible in the case of 2-body decays to γγ or γZ final states. The existence of multiple radiative capture modes further enhances the total annihilation rate, and the detection of the lines would give direct over-determined information on the nature and self-interactions of the DM particles.

  8. Tracking down hyper-boosted top quarks

    SciTech Connect

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  9. Tracking down hyper-boosted top quarks

    DOE PAGESBeta

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  10. Tracking down hyper-boosted top quarks

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-01

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.
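A minimal sketch of the radius prescription at the core of the strategy: the decay products of a particle of mass m and transverse momentum pT are separated by an angle of order 2m/pT, so the jet radius is taken to shrink inversely with the boost. The clipping bounds and the exact proportionality constant here are illustrative assumptions, not the paper's tuned values.

```python
# Boost-dependent jet radius: R ~ 2*m/pT, clipped to a workable range.
M_TOP_GEV = 173.0  # top quark mass (approximate)

def jet_radius(pt_gev: float, r_min: float = 0.05, r_max: float = 1.0) -> float:
    r = 2.0 * M_TOP_GEV / pt_gev
    return min(max(r, r_min), r_max)

# At multi-TeV boosts the radius collapses toward the size of single
# calorimeter cells, which is why track-based observables are folded in.
```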

  11. Visual tracking by separability-maximum boosting

    NASA Astrophysics Data System (ADS)

    Hou, Jie; Mao, Yao-bin; Sun, Jin-sheng

    2013-10-01

    Recently, visual tracking has been formulated as a classification problem whose task is to detect the object in the scene with a binary classifier. Boosting-based online feature selection methods, which adapt the classifier to appearance changes by choosing the most discriminative features, have been demonstrated to be effective for visual tracking. A major problem of such online feature selection methods is that an inaccurate classifier may give imprecise tracking windows. Tracking error accumulates when the tracker trains the classifier with misaligned samples, and this finally leads to drifting. Separability-maximum boosting (SMBoost), an alternative form of AdaBoost which characterizes the separability between the object and the scene by their means and covariance matrices, is proposed. SMBoost needs only the means and covariance matrices during training and can easily be adapted to online learning problems by estimating the statistics incrementally. Experiments on UCI machine learning datasets show that SMBoost is as accurate as offline AdaBoost and significantly outperforms Oza's online boosting. The more accurate classifier stabilizes the tracker on challenging video sequences. Empirical results also demonstrate improvements in tracking precision and speed compared with state-of-the-art trackers.
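Since the abstract notes that SMBoost needs only means and covariance matrices, estimated incrementally for online learning, a standard Welford-style running estimator captures the idea. This is an illustrative sketch of incremental statistics, not the authors' implementation.

```python
import numpy as np

class RunningStats:
    """Online estimation of the mean and covariance of feature vectors."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # sum of outer-product deviations

    def update(self, x):
        """Fold one sample into the running statistics (Welford update)."""
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    def covariance(self):
        """Unbiased sample covariance from the accumulated co-moments."""
        return self.m2 / (self.n - 1)
```

Because each update touches only the current sample, the tracker never needs to store past frames, which is what makes the statistics usable online.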

  12. Centrifugal compressor design for electrically assisted boost

    NASA Astrophysics Data System (ADS)

    Y Yang, M.; Martinez-Botas, R. F.; Zhuge, W. L.; Qureshi, U.; Richards, B.

    2013-12-01

    Electrically assisted boost is a prominent method for solving turbocharger transient lag, and it lets the compressor remain at an optimized operating condition by decoupling it from the turbine. A centrifugal compressor for gasoline engine boosting is usually operated at a rotational speed beyond the capability of electric motors on the market. In this paper a centrifugal compressor with a rotational speed of 120k RPM and a pressure ratio of 2.0 is specially developed for electrically assisted boost. A centrifugal compressor including the impeller, vaneless diffuser, and volute is designed by the meanline method followed by 3D detailed design. CFD is then employed to predict and analyse the performance of the designed compressor. The results show that the pressure ratio and efficiency at the design point are 2.07 and 78%, respectively.
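A back-of-envelope check on the design point above, treating air as an ideal gas; the inlet temperature, γ, and the total-to-total interpretation of the quoted efficiency are assumptions for illustration, not values from the paper.

```python
# Stagnation temperature rise implied by a total pressure ratio and an
# isentropic efficiency (ideal-gas air assumed).
GAMMA = 1.4  # ratio of specific heats for air (assumed)

def temperature_rise(t_inlet_k: float, pressure_ratio: float, efficiency: float) -> float:
    """Actual temperature rise: ideal isentropic rise divided by efficiency."""
    ideal = t_inlet_k * (pressure_ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    return ideal / efficiency

# Design point: PR = 2.07 at 78% efficiency from an assumed 293 K inlet.
dt = temperature_rise(293.0, 2.07, 0.78)
```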

  13. Modeling self-priming circuits for dielectric elastomer generators towards optimum voltage boost

    NASA Astrophysics Data System (ADS)

    Zanini, Plinio; Rossiter, Jonathan; Homer, Martin

    2016-04-01

    One of the main challenges for the practical implementation of dielectric elastomer generators (DEGs) is supplying high voltages. To address this issue, systems using self-priming circuits (SPCs) — which exploit the DEG voltage swing to increase its supplied voltage — have been used with success. A self-priming circuit consists of a charge pump implemented in parallel with the DEG circuit. At each energy harvesting cycle, the DEG receives a low voltage input and, through an almost constant charge cycle, generates a high voltage output. SPCs receive the high voltage output at the end of the energy harvesting cycle and supply it back as input for the following cycle, using the DEG as a voltage multiplier element. Although rules for designing self-priming circuits for dielectric elastomer generators exist, they have been obtained from intuitive observation of simulation results and lack a solid theoretical foundation. Here we report the development of a mathematical model to predict voltage boost using self-priming circuits. The voltage on the DEG attached to the SPC is described as a function of its initial conditions, circuit parameters/layout, and the DEG capacitance. Our mathematical model has been validated on an existing DEG implementation from the literature, and successfully predicts the voltage boost for each cycle. Furthermore, it allows us to understand the conditions for the boost to exist, and obtain the design rules that maximize the voltage boost.
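The "almost constant charge cycle" mentioned above is the voltage-multiplying mechanism itself: with Q = CV held fixed, a fall in DEG capacitance raises the voltage in proportion. A minimal sketch with hypothetical component values:

```python
def constant_charge_swing(v_prime: float, c_max: float, c_min: float) -> float:
    """Voltage after a constant-charge capacitance drop: V_out = Q / C_min."""
    q = c_max * v_prime   # charge deposited at maximum capacitance
    return q / c_min      # same charge held on the smaller capacitance

# Hypothetical 4:1 capacitance swing primed at 100 V yields about 400 V,
# which the SPC then feeds back as the next cycle's priming voltage.
v_out = constant_charge_swing(100.0, 4e-9, 1e-9)
```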

  14. Boost Converters for Gas Electric and Fuel Cell Hybrid Electric Vehicles

    SciTech Connect

    McKeever, JW

    2005-06-16

    Hybrid electric vehicles (HEVs) are driven by at least two prime energy sources, such as an internal combustion engine (ICE) and a propulsion battery. For a series HEV configuration, the ICE drives only a generator, which maintains the state-of-charge (SOC) of propulsion and accessory batteries and drives the electric traction motor. For a parallel HEV configuration, the ICE is mechanically connected to directly drive the wheels as well as the generator, which likewise maintains the SOC of propulsion and accessory batteries and drives the electric traction motor. Today the prime energy source is an ICE; tomorrow it will very likely be a fuel cell (FC). Use of the FC eliminates the direct-drive capability, accentuating the importance of the battery charge and discharge systems. In both systems, the electric traction motor may use the voltage directly from the batteries or from a boost converter that raises the voltage. If low battery voltage is used directly, some special control circuitry, such as dual mode inverter control (DMIC), which adds a small cost, is necessary to drive the electric motor above base speed. If high voltage is chosen for more efficient motor operation or for high-speed operation, the propulsion battery voltage must be raised, which would require some type of two-quadrant bidirectional chopper with an additional cost. Two common direct current (dc)-to-dc converters are: (1) the transformer-based boost or buck converter, which inverts a dc voltage, feeds the resulting alternating current (ac) into a transformer to raise or lower the voltage, and rectifies it to complete the conversion; and (2) the inductor-based switch-mode boost or buck converter [1]. The switch-mode boost and buck features are discussed in this report as they operate in a bidirectional chopper. A benefit of the transformer-based boost converter is that it isolates the high voltage from the low voltage. Usually the transformer is large, further increasing the cost.
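The ideal steady-state relations behind the two chopper modes compared in the report are compact enough to state directly; these are the textbook lossless, continuous-conduction formulas, not the report's measured hardware values.

```python
def boost_vout(v_in: float, duty: float) -> float:
    """Ideal boost converter: V_out = V_in / (1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must satisfy 0 <= D < 1")
    return v_in / (1.0 - duty)

def buck_vout(v_in: float, duty: float) -> float:
    """Ideal buck converter: V_out = D * V_in."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must satisfy 0 <= D <= 1")
    return duty * v_in

# In a two-quadrant bidirectional chopper the same inductor boosts the
# battery voltage in one direction and bucks it back while charging.
```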

  15. The Attentional Boost Effect and Context Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro

    2016-01-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…

  16. Schools Enlisting Defense Industry to Boost STEM

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Defense contractors Northrop Grumman Corp. and Lockheed Martin Corp. are joining forces in an innovative partnership to develop high-tech simulations to boost STEM--or science, technology, engineering, and mathematics--education in the Baltimore County schools. The Baltimore County partnership includes the local operations of two major military…

  17. Cleanouts boost Devonian shale gas flow

    SciTech Connect

    Not Available

    1991-02-04

    Cleaning shale debris from the well bores is an effective way to boost flow rates from old open-hole Devonian shale gas wells, research begun in 1985 on six West Virginia wells has shown. Officials involved with the study say the Appalachian basin could see 20-year recoverable gas reserves hiked by 315 bcf if the process is used on a wide scale.

  18. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  19. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License version 3. No. of lines in distributed program, including test data, etc.: 54975. No. of bytes in distributed program, including test data, etc.: 974015. Distribution format: tar.gz. Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes. RAM: 10 MB-10 GB per process. Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]. Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and

  20. Discussion of "the evolution of boosting algorithms" and "extending statistical boosting".

    PubMed

    Bühlmann, P; Gertheiss, J; Hieke, S; Kneib, T; Ma, S; Schumacher, M; Tutz, G; Wang, C-Y; Wang, Z; Ziegler, A

    2014-01-01

    This article is part of a For-Discussion-Section of Methods of Information in Medicine about the papers "The Evolution of Boosting Algorithms - From Machine Learning to Statistical Modelling" and "Extending Statistical Boosting - An Overview of Recent Methodological Developments", written by Andreas Mayr and co-authors. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the Mayr et al. papers. In subsequent issues the discussion can continue through letters to the editor.

  1. Occurrence of perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) in N.E. Spanish surface waters and their removal in a drinking water treatment plant that combines conventional and advanced treatments in parallel lines.

    PubMed

    Flores, Cintia; Ventura, Francesc; Martin-Alonso, Jordi; Caixach, Josep

    2013-09-01

    Perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) are two emerging contaminants that have been detected in all environmental compartments. However, while most of the studies in the literature deal with their presence or removal in wastewater treatment, few are devoted to their detection in treated drinking water and their fate during drinking water treatment. In this study, analyses of PFOS and PFOA have been carried out in river water samples and in the different stages of a drinking water treatment plant (DWTP) which has recently improved its conventional treatment process by adding ultrafiltration and reverse osmosis in a parallel treatment line. Conventional and advanced treatments have been studied in several pilot plants and in the DWTP, which offers the opportunity to compare both treatments operating simultaneously. From the results obtained, neither preoxidation, sand filtration, nor ozonation removed these perfluorinated compounds. Among the advanced treatments, reverse osmosis proved more effective than reverse electrodialysis at removing PFOA and PFOS in the different pilot-plant configurations assayed. Granular activated carbon, with average elimination efficiencies of 64±11% and 45±19% for PFOS and PFOA, respectively, and especially reverse osmosis, which was able to remove ≥99% of both compounds, were the sole effective treatment steps. Trace levels of PFOS (3.0-21 ng/L) and PFOA (<4.2-5.5 ng/L) detected in treated drinking water were significantly lower than those measured in preceding years. These concentrations represent overall removal efficiencies of 89±22% for PFOA and 86±7% for PFOS.

  2. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, presenting rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  3. Bioactive Molecule Prediction Using Extreme Gradient Boosting.

    PubMed

    Babajide Mustapha, Ismail; Saeed, Faisal

    2016-01-01

    Following the explosive growth in chemical and biological data, the shift from traditional methods of drug discovery to computer-aided means has made data mining and machine learning methods integral parts of today's drug discovery process. In this paper, extreme gradient boosting (Xgboost), which is an ensemble of Classification and Regression Trees (CART) and a variant of the Gradient Boosting Machine, was investigated for the prediction of biological activity based on a quantitative description of the compound's molecular structure. Seven datasets, well known in the literature, were used in this paper, and experimental results show that Xgboost can outperform machine learning algorithms like Random Forest (RF), Support Vector Machines (LSVM), Radial Basis Function Neural Network (RBFN) and Naïve Bayes (NB) for the prediction of biological activities. In addition to its ability to detect minority activity classes in highly imbalanced datasets, it showed remarkable performance on both high and low diversity datasets. PMID:27483216
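
    The gradient-boosting approach the record evaluates can be sketched minimally. The example below uses scikit-learn's GradientBoostingClassifier as a stand-in for Xgboost; the 20 "descriptors" and the activity labels are random placeholders, not the paper's datasets.

```python
# Gradient-boosted trees for activity classification, sketched with
# scikit-learn's GradientBoostingClassifier as a stand-in for Xgboost.
# The descriptors and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))             # mock molecular descriptors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # mock active/inactive label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)           # held-out accuracy
```

    Swapping in the `xgboost` package's `XGBClassifier` would follow the same fit/score pattern.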

  4. Voltage-Boosting Driver For Switching Regulator

    NASA Technical Reports Server (NTRS)

    Trump, Ronald C.

    1990-01-01

    Driver circuit assures availability of 10- to 15-V gate-to-source voltage needed to turn on n-channel metal oxide/semiconductor field-effect transistor (MOSFET) acting as switch in switching voltage regulator. Includes voltage-boosting circuit efficiently providing gate voltage 10 to 15 V above supply voltage. Contains no exotic parts and does not require additional power supply. Consists of NAND gate and dual voltage booster operating in conjunction with pulse-width modulator part of regulator.

  5. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high quality image from a single low resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors resident in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights: larger weights are applied to the higher frequency details while the low frequency details are smoothed. The experimental results illustrate significant performance improvements, both quantitative and perceptual. The proposed edge boosting algorithm demonstrates high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
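
    A rough sketch of the weighting idea follows; the weights and threshold are our own placeholders, not the paper's estimators, but they show how larger weights can be applied to high-gradient (edge) details while flat regions are smoothed.

```python
# Sketch of gradient-weighted detail boosting (placeholder weights, not
# the paper's estimators): split the image into base and detail layers,
# then amplify the detail where the local gradient is large and damp it
# where the image is flat.
import numpy as np

def edge_boost(img, w_edge=1.5, w_flat=0.5, thresh=0.1):
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur as the low-frequency "base" layer
    base = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    detail = img - base                          # high-frequency layer
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)                      # local gradient magnitude
    weights = np.where(grad > thresh, w_edge, w_flat)
    return base + weights * detail               # boost edges, smooth flats

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                 # a vertical step edge
out = edge_boost(img)                            # step contrast is amplified
```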

  6. Exposure fusion using boosting Laplacian pyramid.

    PubMed

    Shen, Jianbing; Zhao, Ying; Yan, Shuicheng; Li, Xuelong

    2014-09-01

    This paper proposes a new exposure fusion approach for producing a high quality image result from multiple exposure images. Based on the local weight and global weight by considering the exposure quality measurement between different exposure images, and the just noticeable distortion-based saliency weight, a novel hybrid exposure weight measurement is developed. This new hybrid weight is guided not only by a single image's exposure level but also by the relative exposure level between different exposure images. The core of the approach is our novel boosting Laplacian pyramid, which is based on the structure of boosting the detail and base signal, respectively, and the boosting process is guided by the proposed exposure weight. Our approach can effectively blend the multiple exposure images for static scenes while preserving both color appearance and texture structure. Our experimental results demonstrate that the proposed approach successfully produces visually pleasing exposure fusion images with better color appearance and more texture details than the existing exposure fusion techniques and tone mapping operators. PMID:25137687
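
    The "boost the detail, keep the base" structure can be illustrated with a toy 1-D Laplacian pyramid; the real method uses 2-D pyramids and the per-pixel exposure weights described above.

```python
# Toy 1-D Laplacian pyramid (average-pool down, repeat up) showing the
# boost-the-detail structure; a gain of 1 at every level reconstructs
# the signal exactly, while a gain > 1 amplifies that detail level.
import numpy as np

def build_pyramid(x, levels):
    laps = []
    for _ in range(levels):
        down = x.reshape(-1, 2).mean(axis=1)   # coarse (base) signal
        up = np.repeat(down, 2)                # crude upsampling
        laps.append(x - up)                    # detail (Laplacian) layer
        x = down
    return laps, x                             # detail layers + final base

def reconstruct(laps, base, gains):
    x = base
    for lap, g in zip(reversed(laps), reversed(gains)):
        x = np.repeat(x, 2) + g * lap          # g > 1 boosts that level
    return x

signal = np.array([0.0, 1.0, 0.0, 2.0, 4.0, 4.0, 1.0, 0.0])
laps, base = build_pyramid(signal, levels=2)
exact = reconstruct(laps, base, gains=[1.0, 1.0])      # perfect reconstruction
boosted = reconstruct(laps, base, gains=[1.5, 1.0])    # amplified fine detail
print(np.allclose(exact, signal))   # -> True
```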

  7. MPP Parallel FORTH

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the Parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed, followed by a description of how Parallel FORTH is implemented on the MPP.

  8. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  9. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  10. Hybrid Recovery-less Method Soft Switching Boost Chopper Circuit

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masayoshi; Toda, Hirotaka; Kawashima, Takahiro; Yoshida, Toshiyuki

    The conventional recovery-less boost converter cannot achieve soft switching at the turn-off transition. In this paper, a novel hybrid recovery-less boost converter, which can achieve a soft-switching turn-off transition, is proposed. Furthermore, the proposed converter can switch between the conventional recovery-less mode and the proposed soft-switching mode. In general, the efficiency of a soft-switching converter is lower than that of a hard-switching converter at low output power. Using the mode-switching function, however, the proposed hybrid recovery-less boost converter achieves high efficiency over the whole output power range in spite of the soft-switching operation. The proposed converter is evaluated and discussed from an experimental point of view.

  11. Boost matrix converters in clean energy systems

    NASA Astrophysics Data System (ADS)

    Karaman, Ekrem

    This dissertation describes an investigation of novel power electronic converters, based on the ultra-sparse matrix topology and characterized by the minimum number of semiconductor switches. The Z-source, Quasi Z-source, Series Z-source and Switched-inductor Z-source networks were originally proposed for boosting the output voltage of power electronic inverters. These ideas were extended here to three-phase to three-phase and three-phase to single-phase indirect matrix converters. For the three-phase to three-phase matrix converters, the Z-source networks are placed between the three-switch input rectifier stage and the output six-switch inverter stage. A brief shoot-through state produces the voltage boost. An optimal pulse width modulation technique was developed to achieve high boosting capability and minimum switching losses in the converter. For the three-phase to single-phase matrix converters, those networks are placed similarly. For control purposes, a new modulation technique has been developed. As an example application, the proposed converters constitute a viable alternative to the existing solutions in residential wind-energy systems, where a low-voltage variable-speed generator feeds power to the higher-voltage fixed-frequency grid. Comprehensive analytical derivations and simulations were carried out to investigate the operation of the proposed converters. The proposed converters were then compared with each other as well as with conventional converters. The operation of the converters was experimentally validated using a laboratory prototype.

  12. Boosting family income to promote child development.

    PubMed

    Duncan, Greg J; Magnuson, Katherine; Votruba-Drzal, Elizabeth

    2014-01-01

    Families who live in poverty face disadvantages that can hinder their children's development in many ways, write Greg Duncan, Katherine Magnuson, and Elizabeth Votruba-Drzal. As they struggle to get by economically, and as they cope with substandard housing, unsafe neighborhoods, and inadequate schools, poor families experience more stress in their daily lives than more affluent families do, with a host of psychological and developmental consequences. Poor families also lack the resources to invest in things like high-quality child care and enriched learning experiences that give more affluent children a leg up. Often, poor parents also lack the time that wealthier parents have to invest in their children, because poor parents are more likely to be raising children alone or to work nonstandard hours and have inflexible work schedules. Can increasing poor parents' incomes, independent of any other sort of assistance, help their children succeed in school and in life? The theoretical case is strong, and Duncan, Magnuson, and Votruba-Drzal find solid evidence that the answer is yes--children from poor families that see a boost in income do better in school and complete more years of schooling, for example. But if boosting poor parents' incomes can help their children, a crucial question remains: Does it matter when in a child's life the additional income appears? Developmental neurobiology strongly suggests that increased income should have the greatest effect during children's early years, when their brains and other systems are developing rapidly, though we need more evidence to prove this conclusively. The authors offer examples of how policy makers could incorporate the findings they present to create more effective programs for families living in poverty. And they conclude with a warning: if a boost in income can help poor children, then a drop in income--for example, through cuts to social safety net programs like food stamps--can surely harm them.

  13. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  14. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  15. Boosting salt resistance of short antimicrobial peptides.

    PubMed

    Chu, Hung-Lun; Yu, Hui-Yuan; Yip, Bak-Sau; Chih, Ya-Han; Liang, Chong-Wen; Cheng, Hsi-Tsung; Cheng, Jya-Wei

    2013-08-01

    The efficacies of many antimicrobial peptides are greatly reduced under high salt concentrations, therefore limiting their use as pharmaceutical agents. Here, we describe a strategy to boost salt resistance and serum stability of short antimicrobial peptides by adding the nonnatural bulky amino acid β-naphthylalanine to their termini. The activities of the short salt-sensitive tryptophan-rich peptide S1 were diminished at high salt concentrations, whereas the activities of its β-naphthylalanine end-tagged variants were less affected.

  16. Boost covariant gluon distributions in large nuclei

    NASA Astrophysics Data System (ADS)

    McLerran, Larry; Venugopalan, Raju

    1998-04-01

    It has been shown recently that there exist analytical solutions of the Yang-Mills equations for non-Abelian Weizsäcker-Williams fields which describe the distribution of gluons in large nuclei at small x. These solutions however depend on the color charge distribution at large rapidities. We here construct a model of the color charge distribution of partons in the fragmentation region and use it to compute the boost covariant momentum distributions of wee gluons. The phenomenological applications of our results are discussed.
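
    As a generic reminder of what boost covariance demands (this is textbook relativistic kinematics, not the paper's Yang-Mills computation), a longitudinal Lorentz boost changes a four-momentum's components while preserving the invariant p²:

```python
# Longitudinal Lorentz boost of a four-momentum (E, px, py, pz): the
# components transform, but the invariant mass squared E^2 - |p|^2 is
# preserved. Generic kinematics, not the paper's gluon-field solution.
import numpy as np

def boost_z(p, beta):
    """Boost four-momentum p along z with velocity beta (units c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E, px, py, pz = p
    return np.array([gamma * (E + beta * pz), px, py, gamma * (pz + beta * E)])

def inv_mass2(p):
    E, px, py, pz = p
    return E**2 - px**2 - py**2 - pz**2

p = np.array([5.0, 1.0, 2.0, 3.0])               # some four-momentum
pb = boost_z(p, beta=0.9)
print(np.isclose(inv_mass2(p), inv_mass2(pb)))   # -> True
```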

  17. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  18. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. 
Computer: Any PC or
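
    The per-process independent streams that SPRNG-style libraries provide can be imitated with NumPy's SeedSequence.spawn (our illustration; GASPRNG's actual interface differs): one master seed deterministically yields an independent, reproducible generator per rank.

```python
# Independent, reproducible per-rank random streams via NumPy's
# SeedSequence.spawn, as a conceptual stand-in for SPRNG/GASPRNG
# stream creation (not GASPRNG's actual API).
import numpy as np

root = np.random.SeedSequence(12345)
streams = [np.random.default_rng(s) for s in root.spawn(4)]   # one per "rank"
draws = [rng.random(3) for rng in streams]

# respawning from the same master seed reproduces every rank's stream
streams2 = [np.random.default_rng(s)
            for s in np.random.SeedSequence(12345).spawn(4)]
match = all(np.allclose(d, r.random(3)) for d, r in zip(draws, streams2))
print(match)   # -> True
```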

  19. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.
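
    The impedance arithmetic behind such a transformer is standard (and assumed here, since the patent abstract gives no numbers): N matched lines paralleled at one end and stacked in series at the other yield an N² impedance ratio.

```python
# Textbook impedance relation for an N-line transmission line
# transformer (assumed, not from the patent): parallel at the input
# gives Z0/N, series at the output gives N*Z0, so the impedance ratio
# is N**2 (a voltage step-up of N).
N, Z0 = 4, 50.0          # e.g. four 50-ohm cables
z_in = Z0 / N            # parallel connection at the input
z_out = N * Z0           # series connection at the output
print(z_in, z_out, z_out / z_in)   # -> 12.5 200.0 16.0
```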

  20. Low temperature operation of a boost converter

    SciTech Connect

    Moss, B.S.; Boudreaux, R.R.; Nelms, R.M.

    1996-12-31

    The development of satellite power systems capable of operating at low temperatures on the order of 77 K would reduce the heating system required on deep space vehicles. The power supplies in the satellite power system must be capable of operating at these temperatures. This paper presents the results of a study into the operation of a boost converter at temperatures close to 77 K. The boost converter is designed to supply an output voltage and power of 42 V and 50 W from a 28 V input source. The entire system, except the 28 V source, is placed in the environmental chamber. This is important because the system does not require any manual adjustments to maintain a constant output voltage with a high efficiency. The constant 42 V output of this converter is a benefit of the application of a CMOS microcontroller in the feedback path. The switch duty cycle is adjusted by the microcontroller to maintain a constant output voltage. The efficiency of the system varied less than 1% over the temperature range of 22 °C to −184 °C and was approximately 94.2% when the temperature was −184 °C.
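
    The reported 28 V to 42 V design point corresponds to a duty cycle of one third under the ideal boost-converter relation V_out = V_in/(1 − D), a textbook formula assumed here rather than taken from the paper:

```python
# Ideal boost-converter relation V_out = V_in / (1 - D), applied to the
# reported 28 V -> 42 V design point (textbook formula, not from the
# paper itself).
v_in, v_out = 28.0, 42.0
duty = 1.0 - v_in / v_out      # solve V_out = V_in / (1 - D) for D
print(round(duty, 3))          # -> 0.333
```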

  1. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. This approach contains two main stages: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, DAB is superior.
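
    A hypothetical sketch of the selection loop follows; the clustering and acceptance rule here are our own simplifications, not the paper's algorithm. Source-domain clusters join the training set only when they do not hurt an AdaBoost model's score on a small target-domain validation set.

```python
# Simplified DAB-style selection (our own acceptance rule, not the
# paper's): add a source-domain cluster to the training set only if the
# resulting AdaBoost model does not get worse on a small target-domain
# validation set.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
# target domain: a small labeled validation set (rule: x0 > 0)
X_val = rng.normal(0.0, 1.0, (60, 5))
y_val = (X_val[:, 0] > 0).astype(int)
# source domain: three clusters, the last one far from the target
means = (0.0, 0.2, 3.0)
clusters = [rng.normal(m, 1.0, (80, 5)) for m in means]
labels = [(c[:, 0] > m).astype(int) for c, m in zip(clusters, means)]

X_tr = np.empty((0, 5))
y_tr = np.empty(0, dtype=int)
best = 0.0
for Xc, yc in zip(clusters, labels):
    X_cand = np.vstack([X_tr, Xc])
    y_cand = np.concatenate([y_tr, yc])
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_cand, y_cand)
    score = clf.score(X_val, y_val)
    if score >= best:              # keep clusters that help the target domain
        X_tr, y_tr, best = X_cand, y_cand, score
```

    Multisource adaptation extends naturally by drawing candidate clusters from several source datasets.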

  2. Jet substructures of boosted polarized top quarks

    NASA Astrophysics Data System (ADS)

    Kitadono, Yoshio; Li, Hsiang-nan

    2014-06-01

    We study jet substructures of a boosted polarized top quark, which undergoes the semileptonic decay t→bℓν, in the perturbative QCD framework. The jet mass distribution (energy profile) is factorized into the convolution of a hard top-quark decay kernel with the bottom-quark jet function (jet energy function). Computing the hard kernel to leading order in QCD and inputting the latter functions from the resummation formalism, we observe that the jet mass distribution is not sensitive to the helicity of the top quark, but the energy profile is: energy is accumulated faster within a left-handed top jet than within a right-handed one, a feature related to the V-A structure of the weak interaction. It is pointed out that the energy profile is a simple and useful jet observable for helicity discrimination of a boosted top quark, which helps in the identification of physics beyond the standard model at the Large Hadron Collider. The extension of our analysis to other jet substructures, including those associated with a hadronically decaying polarized top quark, is proposed.

  3. Brain glucosamine boosts protective glucoprivic feeding.

    PubMed

    Osundiji, Mayowa A; Zhou, Ligang; Shaw, Jill; Moore, Stephen P; Yueh, Chen-Yu; Sherwin, Robert; Heisler, Lora K; Evans, Mark L

    2010-04-01

    The risk of iatrogenic hypoglycemia is increased in diabetic patients who lose defensive glucoregulatory responses, including the important warning symptom of hunger. Protective hunger symptoms during hypoglycemia may be triggered by hypothalamic glucose-sensing neurons by monitoring changes downstream of glucose phosphorylation by the specialized glucose-sensing hexokinase, glucokinase (GK), during metabolism. Here we investigated the effects of intracerebroventricular (ICV) infusion of glucosamine (GSN), a GK inhibitor, on food intake at normoglycemia and protective feeding responses during glucoprivation and hypoglycemia in chronically catheterized rats. ICV infusion of either GSN or mannoheptulose, a structurally different GK inhibitor, dose-dependently stimulated feeding at normoglycemia. Consistent with an effect of GSN to inhibit competitively glucose metabolism, ICV coinfusion of d-glucose but not l-glucose abrogated the orexigenic effect of ICV GSN at normoglycemia. Importantly, ICV infusion of a low GSN dose (15 nmol/min) that was nonorexigenic at normoglycemia boosted feeding responses to glucoprivation in rats with impaired glucose counterregulation. ICV infusion of 15 nmol/min GSN also boosted feeding responses to threatened hypoglycemia in rats with defective glucose counterregulation. Altogether our findings suggest that GSN may be a potential therapeutic candidate for enhancing defensive hunger symptoms during hypoglycemia.

  4. Parallel Climate Analysis Toolkit (ParCAT)

    SciTech Connect

    Smith, Brian Edward

    2013-06-30

    The Parallel Climate Analysis Toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sums/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated into scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.
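
    The point-wise operations ParCAT parallelizes can be shown serially with NumPy (this is our illustration, not ParCAT's command-line syntax):

```python
# Per-gridpoint time averages and the difference-of-averages of two
# mock simulation datasets, computed serially with NumPy. ParCAT
# distributes these same operations; this is not its CLI.
import numpy as np

rng = np.random.default_rng(0)
# two mock datasets shaped (time, lat, lon)
a = rng.normal(15.0, 2.0, (120, 4, 8))
b = rng.normal(14.0, 2.0, (120, 4, 8))

avg_a = a.mean(axis=0)                  # point-wise average over time
diff_of_avg = a.mean(axis=0) - b.mean(axis=0)
avg_of_diff = (a - b).mean(axis=0)      # equal when both use the same time subset
print(np.allclose(diff_of_avg, avg_of_diff))   # -> True
```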

  5. Parallelizing the XSTAR Photoionization Code

    NASA Astrophysics Data System (ADS)

    Noble, M. S.; Ji, L.; Young, A.; Lee, J. C.

    2009-09-01

    We describe two means by which XSTAR, a code which computes physical conditions and emission spectra of photoionized gases, has been parallelized. The first is pvmxstar, a wrapper which can be used in place of the serial xstar2xspec script to foster concurrent execution of the XSTAR command line application on independent sets of parameters. The second is pmodel, a plugin for the Interactive Spectral Interpretation System (ISIS) which allows arbitrary components of a broad range of astrophysical models to be distributed across processors during fitting and confidence limits calculations, by scientists with little training in parallel programming. Plugging the XSTAR family of analytic models into pmodel enables multiple ionization states (e.g., of a complex absorber/emitter) to be computed simultaneously, alleviating the often prohibitive expense of the traditional serial approach. Initial performance results indicate that these methods substantially enlarge the problem space to which XSTAR may be applied within practical timeframes.

  6. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
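
    The kind of question the report answers can be sketched with a Monte Carlo estimate; the head-loss relation and the distributions below are toy assumptions, not the report's pump or network model.

```python
# Monte Carlo reliability sketch (toy model, not the report's): estimate
# the probability that a line's flow rate falls below its specified
# minimum when the pump head and line resistance are uncertain.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
head = rng.normal(50.0, 5.0, n)         # uncertain pump head
resistance = rng.normal(2.0, 0.2, n)    # uncertain line resistance
flow = np.sqrt(head / resistance)       # toy relation: head = resistance * flow**2
p_fail = np.mean(flow < 4.5)            # P(flow below its specified minimum)
print(0.0 < p_fail < 0.5)               # -> True
```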

  7. Boost in radiotherapy: external beam sunset, brachytherapy sunrise

    PubMed Central

    2009-01-01

    Radiobiological limitations for dose escalation in external radiotherapy are presented. Biological and clinical concept of brachytherapy boost to increase treatment efficacy is discussed, and different methods are compared. Oncentra Prostate 3D conformal real-time ultrasound-guided brachytherapy is presented as a solution for boost or sole therapy.

  8. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  9. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear speedup in some cases, are possible.

  10. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images either from the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times, resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
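    The aliasing produced by undersampling can be seen directly in a toy one-dimensional experiment. The sketch below (NumPy code written for this summary, not code from the article) keeps every other k-space sample, as an R = 2 parallel-imaging acquisition would, and shows that the naive reconstruction is the image superimposed with a copy shifted by half the field of view:

    ```python
    import numpy as np

    # Build a simple 1-D "image" and its k-space (Fourier) representation.
    n = 128
    image = np.zeros(n)
    image[40:50] = 1.0                      # a bright feature away from center

    kspace = np.fft.fft(image)

    # Retain only every other k-space sample (R = 2 undersampling).
    undersampled = np.zeros_like(kspace)
    undersampled[::2] = kspace[::2]

    aliased = np.fft.ifft(undersampled).real

    # Undersampling by 2 superimposes the image with a copy shifted by n/2:
    expected = 0.5 * (image + np.roll(image, n // 2))
    assert np.allclose(aliased, expected, atol=1e-10)
    ```

    Methods such as SENSE then use the distinct sensitivity profiles of the receiver coils to separate the overlapped copies.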

  11. Boosting jet power in black hole spacetimes

    PubMed Central

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W.; Liebling, Steven L.; Motl, Patrick M.; Garrett, Travis

    2011-01-01

    The extraction of rotational energy from a spinning black hole via the Blandford–Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux. PMID:21768341

  12. Hydrodynamic approach to boost invariant free streaming

    NASA Astrophysics Data System (ADS)

    Calzetta, E.

    2015-08-01

    We consider a family of exact boost invariant solutions of the transport equation for free-streaming massless particles, where the one-particle distribution function is defined in terms of a function of a single variable. The evolution of second and third moments of the one-particle distribution function [the second moment being the energy momentum tensor (EMT) and the third moment the nonequilibrium current (NEC)] depends only on two moments of that function. Given those two moments, we show how to build a nonlinear hydrodynamic theory which reproduces the early time evolution of the EMT and the NEC. The structure of these theories may give insight on nonlinear hydrodynamic phenomena on short time scales.

  13. Boosting low-mass hadronic resonances

    NASA Astrophysics Data System (ADS)

    Shimmin, Chase; Whiteson, Daniel

    2016-09-01

    Searches for new hadronic resonances typically focus on high-mass spectra due to overwhelming QCD backgrounds and detector trigger rates. We present a study of searches for relatively low-mass hadronic resonances at the LHC in the case that the resonance is boosted by recoiling against a well-measured high-pT probe such as a muon, photon, or jet. The hadronic decay of the resonance is then reconstructed either as a single large-radius jet or as a resolved pair of standard narrow-radius jets, balanced in transverse momentum to the probe. We show that the existing 2015 LHC data set of pp collisions with ∫L dt = 4 fb⁻¹ should already have powerful sensitivity to a generic Z' model which couples only to quarks, for Z' masses ranging from 20-500 GeV/c².

  14. Boosted X Waves in Nonlinear Optical Systems

    SciTech Connect

    Arevalo, Edward

    2010-01-15

    X waves are spatiotemporal optical waves with intriguing superluminal and subluminal characteristics. Here we theoretically show that for a given initial carrier frequency of the system localized waves with genuine superluminal or subluminal group velocity can emerge from initial X waves in nonlinear optical systems with normal group velocity dispersion. Moreover, we show that this temporal behavior depends on the wave detuning from the carrier frequency of the system and not on the particular X-wave biconical form. A spatial counterpart of this behavior is also found when initial X waves are boosted in the plane transverse to the direction of propagation, so a fully spatiotemporal motion of localized waves can be observed.

  15. Boosting jet power in black hole spacetimes.

    PubMed

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W; Liebling, Steven L; Motl, Patrick M; Garrett, Travis

    2011-08-01

    The extraction of rotational energy from a spinning black hole via the Blandford-Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux.

  16. Boosted X waves in nonlinear optical systems.

    PubMed

    Arévalo, Edward

    2010-01-15

    X waves are spatiotemporal optical waves with intriguing superluminal and subluminal characteristics. Here we theoretically show that for a given initial carrier frequency of the system localized waves with genuine superluminal or subluminal group velocity can emerge from initial X waves in nonlinear optical systems with normal group velocity dispersion. Moreover, we show that this temporal behavior depends on the wave detuning from the carrier frequency of the system and not on the particular X-wave biconical form. A spatial counterpart of this behavior is also found when initial X waves are boosted in the plane transverse to the direction of propagation, so a fully spatiotemporal motion of localized waves can be observed.

  17. On the maximum regulation range in boost and buck-boost converters

    NASA Astrophysics Data System (ADS)

    Ninomiya, T.; Harada, K.; Nakahara, M.

    Two types of instability conditions in boost and buck-boost converters with a feedback loop are analyzed by means of the steady-state characteristic and dynamic small-signal modeling. Type I instability involves a drastic voltage drop, and in Type II instability, a limit-cycle oscillation arises and the output voltage oscillates at low frequencies. The maximum regulation range is derived analytically for the load variation and verified experimentally. For high feedback gain, it is determined by the Type II instability condition, whereas for low feedback gain, it is determined by the Type I instability condition. Type II instability can be suppressed by decreasing the reactor inductance or by increasing the capacitance of a smoothing capacitor. However, Type I instability is found to be independent of these values.
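    For reference, the ideal steady-state conversion ratios of the two topologies can be sketched as follows (these are textbook continuous-conduction-mode relations, not results derived in the paper):

    ```python
    def boost_gain(duty):
        """Ideal boost converter in continuous conduction: Vout/Vin = 1/(1 - D)."""
        assert 0.0 <= duty < 1.0
        return 1.0 / (1.0 - duty)

    def buck_boost_gain(duty):
        """Ideal buck-boost converter: |Vout|/Vin = D/(1 - D) (output inverted)."""
        assert 0.0 <= duty < 1.0
        return duty / (1.0 - duty)

    # Both gains grow without bound as the duty cycle D approaches 1, which
    # is why a feedback loop pushed toward high D is the regime where the
    # regulation limits analyzed in the paper become important.
    print(boost_gain(0.5))        # 2.0
    print(buck_boost_gain(0.75))  # 3.0
    ```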

  18. Boosting for multi-graph classification.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Cai, Zhihua

    2015-03-01

    In this paper, we formulate a novel graph-based learning problem, multi-graph classification (MGC), which aims to learn a classifier from a set of labeled bags each containing a number of graphs inside the bag. A bag is labeled positive, if at least one graph in the bag is positive, and negative otherwise. Such a multi-graph representation can be used for many real-world applications, such as webpage classification, where a webpage can be regarded as a bag with texts and images inside the webpage being represented as graphs. This problem is a generalization of multi-instance learning (MIL) but with vital differences, mainly because instances in MIL share a common feature space whereas no feature is available to represent graphs in a multi-graph bag. To solve the problem, we propose a boosting based multi-graph classification framework (bMGC). Given a set of labeled multi-graph bags, bMGC employs dynamic weight adjustment at both bag- and graph-levels to select one subgraph in each iteration as a weak classifier. In each iteration, bag and graph weights are adjusted such that an incorrectly classified bag will receive a higher weight because its predicted bag label conflicts with the genuine label, whereas an incorrectly classified graph will receive a lower weight value if the graph is in a positive bag (or a higher weight if the graph is in a negative bag). Accordingly, bMGC is able to differentiate graphs in positive and negative bags to derive effective classifiers to form a boosting model for MGC. Experiments and comparisons on real-world multi-graph learning tasks demonstrate the algorithm performance.
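    The bag- and graph-level reweighting described above can be illustrated with a simplified, AdaBoost-style exponential update (an illustrative sketch only; this is not the actual bMGC update rule, and `alpha` is an assumed fixed step size):

    ```python
    import math

    def update_weights(bag_w, bag_ok, graph_w, graph_ok, graph_in_pos_bag,
                       alpha=0.5):
        """One round of bag- and graph-level reweighting (simplified sketch).

        bag_ok / graph_ok: did the current weak classifier get this item right?
        graph_in_pos_bag:  label of the bag each graph belongs to.
        """
        # Incorrectly classified bags receive a higher weight.
        new_bag_w = [w * math.exp(-alpha if ok else alpha)
                     for w, ok in zip(bag_w, bag_ok)]
        # Incorrectly classified graphs: lower weight if in a positive bag,
        # higher weight if in a negative bag.
        new_graph_w = []
        for w, ok, pos in zip(graph_w, graph_ok, graph_in_pos_bag):
            if ok:
                new_graph_w.append(w)
            else:
                new_graph_w.append(w * math.exp(-alpha if pos else alpha))
        # Normalize each weight set to sum to 1.
        zb, zg = sum(new_bag_w), sum(new_graph_w)
        return [w / zb for w in new_bag_w], [w / zg for w in new_graph_w]
    ```

    Over several rounds this steers subgraph selection toward features that distinguish graphs in positive bags from those in negative bags.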

  19. Development of cassava periclinal chimera may boost production.

    PubMed

    Bomfim, N; Nassar, N M A

    2014-02-10

    Plant periclinal chimeras are genotypic mosaics arranged concentrically. Trials to produce them by combining different species have been done, but practical results have not been achieved. We report for the second time the development of a very productive interspecific periclinal chimera in cassava. It has very large edible roots up to 14 kg per plant at one year old compared to 2-3 kg in common varieties. The epidermal tissue formed was from Manihot esculenta cultivar UnB 032, and the subepidermal and internal tissue from the wild species, Manihot fortalezensis. We determined the origin of tissues by meiotic and mitotic chromosome counts, plant anatomy and morphology. Epidermal features displayed useful traits to deduce tissue origin: cell shape and size, trichome density and stomatal length. Chimera roots had a wholly tuberous and edible constitution with smaller starch granule size and similar distribution compared to cassava. Root size enlargement might have been due to an epigenetic effect. These results suggest a new line of improved crop based on the development of interspecific chimeras composed of different combinations of wild and cultivated species. It promises to boost cassava production through exceptional root enlargement.

  20. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  1. HASPRNG: Hardware Accelerated Scalable Parallel Random Number Generators

    NASA Astrophysics Data System (ADS)

    Lee, JunKyu; Bi, Yu; Peterson, Gregory D.; Hinde, Robert J.; Harrison, Robert J.

    2009-12-01

    The Scalable Parallel Random Number Generators library (SPRNG) supports fast and scalable random number generation with good statistical properties for parallel computational science applications. In order to accelerate SPRNG in high performance reconfigurable computing systems, we present the Hardware Accelerated SPRNG library (HASPRNG). Ported to the Xilinx University Program (XUP) and Cray XD1 reconfigurable computing platforms, HASPRNG includes the reconfigurable logic for Field Programmable Gate Arrays (FPGAs) along with a programming interface which performs integer random number generation that produces identical results with SPRNG. This paper describes the reconfigurable logic of HASPRNG exploiting the mathematical properties and data parallelism residing in the SPRNG algorithms to produce high performance and also describes how to use the programming interface to minimize the communication overhead between FPGAs and microprocessors. The programming interface allows a user to be able to use HASPRNG the same way as SPRNG 2.0 on platforms such as the Cray XD1. We also describe how to install HASPRNG and use it. For HASPRNG usage we discuss a FPGA π-estimator for a High Performance Reconfigurable Computer (HPRC) sample application and compare to a software π-estimator. HASPRNG shows 1.7x speedup over SPRNG on the Cray XD1 and is able to obtain substantial speedup for a HPRC application. Program summary: Program title: HASPRNG. Catalogue identifier: AEER_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEER_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 594 928. No. of bytes in distributed program, including test data, etc.: 6 509 724. Distribution format: tar.gz. Programming language: VHDL (XUP and Cray XD1), C++ (XUP), C (Cray XD1). Computer: PowerPC 405
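    The π-estimator used as the sample application follows the standard Monte Carlo scheme: the fraction of uniform random points in the unit square that fall inside the quarter-circle tends to π/4. A plain-Python sketch of the idea (using the standard library generator rather than SPRNG/HASPRNG streams):

    ```python
    import random

    def estimate_pi(n_samples, seed=0):
        """Monte Carlo estimate of pi via quarter-circle hit counting."""
        rng = random.Random(seed)
        inside = sum(1 for _ in range(n_samples)
                     if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        return 4.0 * inside / n_samples

    print(estimate_pi(100_000))  # close to 3.14
    ```

    In the HPRC setting the random-number generation dominates this loop, which is why offloading it to FPGA logic pays off.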

  2. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  3. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  4. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
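    The block decomposition underlying such parallel sieves can be sketched as follows: one process computes the base primes up to √N, and each worker then sieves its own contiguous segment independently. This is a minimal sketch using Python's `multiprocessing`; the paper's scattered decomposition and hypercube communication patterns are not modeled here.

    ```python
    from math import isqrt
    from multiprocessing import Pool

    def sieve_upto(n):
        """Plain Sieve of Eratosthenes up to n (used for the base primes)."""
        is_prime = bytearray([1]) * (n + 1)
        is_prime[0:2] = b"\x00\x00"
        for p in range(2, isqrt(n) + 1):
            if is_prime[p]:
                is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
        return [i for i, f in enumerate(is_prime) if f]

    def sieve_segment(args):
        """Sieve one contiguous block [lo, hi) given the shared base primes."""
        lo, hi, base_primes = args
        flags = bytearray([1]) * (hi - lo)
        for p in base_primes:
            start = max(p * p, (lo + p - 1) // p * p)  # first multiple in block
            flags[start - lo:hi - lo:p] = bytearray(len(range(start, hi, p)))
        return [lo + i for i, f in enumerate(flags) if f]

    def parallel_sieve(n, workers=4):
        """Find all primes <= n by handing one block to each worker process."""
        base = sieve_upto(isqrt(n))
        step = (n + workers) // workers
        tasks = [(lo, min(lo + step, n + 1), base)
                 for lo in range(2, n + 1, step)]
        with Pool(workers) as pool:
            return [p for seg in pool.map(sieve_segment, tasks) for p in seg]

    if __name__ == "__main__":
        print(parallel_sieve(50))  # [2, 3, 5, 7, 11, ..., 47]
    ```

    Fixing the block size per worker while growing N is what yields the near-constant-efficiency ("fixed problem size per processor") scaling regime the abstract mentions.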

  6. Series-Connected Buck Boost Regulators

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2005-01-01

    A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltage. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power
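    The benefit of switching only the make-up power can be quantified with a rough estimate. The formula below is an idealized, lossless illustration written for this summary, not one given in the article:

    ```python
    def switched_power_fraction(v_in, v_out):
        """Fraction of the throughput power that the DC-to-DC converter must
        switch when only the input/output difference is processed (idealized
        estimate: relative voltage difference, illustrative only)."""
        return abs(v_out - v_in) / max(v_in, v_out)

    # Regulating a 100 V source up to a 110 V bus: only ~9% of the power is
    # switched, so switching loss shrinks accordingly.
    print(switched_power_fraction(100.0, 110.0))  # 0.0909...
    ```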

  7. Linked-View Parallel Coordinate Plot Renderer

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  8. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for improving the computer processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action for speeding up the execution of any computer program: optimizing the data structures, optimizing the application architecture, and improving the hardware. For the first, the data structures usually employed in SAR processing are studied and parallel alternatives proposed, together with the way the parallelization of the algorithms employed in the process is implemented. For the second, the parallel application architecture classifies processes as fine- or coarse-grained; these are assigned to individual processors or divided among several processors, each in their corresponding architectures. For the third, the hardware employed in the parallel computation used for SAR handling is studied, covering several kinds of platforms on which the SAR process is implemented: shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them gives guidelines for obtaining maximum throughput with minimum latency and maximum effectiveness with minimum cost, together with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development for the computing-power-hungry Synthetic Aperture Radar applications in the coming years.

  9. 39% access time improvement, 11% energy reduction, 32 kbit 1-read/1-write 2-port static random-access memory using two-stage read boost and write-boost after read sensing scheme

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi

    2016-04-01

    We propose novel circuit techniques for 1 clock (1CLK) 1 read/1 write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. Two-stage read boost (TSR-BST) and write word line boost after read sensing (WWL-BST) schemes are proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed using a 40 nm CMOS process.

  10. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  11. Novel Control for Voltage Boosted Matrix Converter based Wind Energy Conversion System with Practicality

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod; Joshi, Raghuveer Raj; Yadav, Dinesh Kumar; Garg, Rahul Kumar

    2016-06-01

    This paper presents the implementation and investigation of a novel voltage-boosted matrix converter (MC) based permanent-magnet wind energy conversion system (WECS). In this paper, an on-line tuned adaptive fuzzy control algorithm cooperating with a reversed MC is proposed to yield maximum energy. The control system is implemented on a dSPACE DS1104 real-time board. Feasibility of the proposed system has been experimentally verified using a laboratory 1.2 kW prototype of the WECS under steady-state and dynamic conditions.

  12. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  13. Exploiting tRNAs to Boost Virulence.

    PubMed

    Albers, Suki; Czech, Andreas

    2016-01-01

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage-either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5'- and 3'-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification. PMID:26797637

  14. Exploiting tRNAs to Boost Virulence

    PubMed Central

    Albers, Suki; Czech, Andreas

    2016-01-01

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage—either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5′- and 3′-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification. PMID:26797637

  15. Refiners boost crude capacity; Petrochemical production up

    SciTech Connect

    Corbett, R.A.

    1988-03-21

    Continuing demand strength in refined products and petrochemical markets caused refiners to boost crude-charging capacity slightly again last year, and petrochemical producers to increase production worldwide. Product demand strength is, in large part, due to stable product prices resulting from a stabilization of crude oil prices. Crude prices strengthened somewhat in 1987. That, coupled with fierce product competition, unfortunately drove refining margins negative in many regions of the U.S. during the last half of 1987. But with continued strong demand for gasoline, and an increased demand for higher octane gasoline, margins could turn positive by 1989 and remain so for a few years. U.S. refiners also had to have facilities in place to meet the final requirements of the U.S. Environmental Protection Agency's lead phase-down rules on Jan. 1, 1988. In petrochemicals, plastics demand kept basic petrochemical plants at good utilization levels worldwide. U.S. production of basics such as ethylene and propylene showed solid increases. Many of the derivatives of the basic petrochemical products also showed good production gains. Increased petrochemical production and high plant utilization rates didn't spur plant construction projects, however. Worldwide petrochemical plant projects declined slightly from 1986 figures.

  16. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  17. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  18. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  19. Every Day in The Womb Boosts Babies' Brain Development

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_161778.html Every Day in the Womb Boosts Babies' Brain Development: Study ... What this study shows us is that every day and every week of in utero development is ...

  20. Could Slight Brain Zap During Sleep Boost Memory?

    MedlinePlus

    ... medlineplus.gov/news/fullstory_160135.html Could Slight Brain Zap During Sleep Boost Memory? Small study says ... HealthDay News) -- Stimulating a targeted area of the brain with small doses of weak electricity while you ...

  1. Zika's Delivery Via Mosquito Bite May Boost Its Effect

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_159484.html Zika's Delivery Via Mosquito Bite May Boost Its Effect ... The inflammation caused by a mosquito bite helps Zika and other viruses spread through the body more ...

  2. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur

    2015-10-01

    Boosting is a classification method which has been proven useful in non-satellite image processing while it is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm in a new land cover classification scenario based on utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are automatically calculated adaptively from data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in short computational time and overall accuracy of 94.4% compared to 94.0% obtained by using AdaBoost.M1 with trees and 93.8% achieved using Random Forest. The influence of a manipulation of the final threshold of the strong classifier on classification results is reported.
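    The build-a-strong-classifier-from-weak-threshold-classifiers idea can be sketched as follows. This is a generic AdaBoost-with-stumps illustration in Python (1-D features, exhaustive threshold search), not the paper's adaptive, statistics-driven thresholding:

```python
import math

def stump_predict(x, thr, sign):
    """Weak threshold classifier: returns sign if x > thr, else -sign."""
    return sign if x > thr else -sign

def train_adaboost(xs, ys, rounds=10):
    """AdaBoost.M1-style training over 1-D threshold stumps; labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []                          # list of (alpha, thr, sign)
    for _ in range(rounds):
        best = None                        # exhaustive weak-learner search
        for thr in sorted(set(xs)):
            for sign in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thr, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        if err >= 0.5:                     # no better than chance: stop
            break
        err = max(err, 1e-12)              # guard log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # re-weight so misclassified samples count more in the next round
        w = [wi * math.exp(-alpha * y * stump_predict(x, thr, sign))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong classifier: weighted vote of the weak stumps."""
    score = sum(a * stump_predict(x, thr, sign) for a, thr, sign in ensemble)
    return 1 if score >= 0 else -1
```

    The same loop works with any weak learner; the abstract's classifiers additionally use contextual information, which would simply enlarge the feature search above.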

  3. Insurance Mandates Boost U.S. Autism Diagnoses

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_159812.html Insurance Mandates Boost U.S. Autism Diagnoses Early treatment provides ... the Penn researchers analyzed inpatient and outpatient health insurance claims from 2008 through 2012 for more than ...

  4. Testosterone Therapy May Boost Older Men's Sex Lives

    MedlinePlus

    ... 159622.html Testosterone Therapy May Boost Older Men's Sex Lives Gel hormone treatment led to improved libido ... experienced a moderate but significant improvement in their sex drive, sexual activity and erectile function compared to ...

  5. High-temperature alloys: Single-crystal performance boost

    NASA Astrophysics Data System (ADS)

    Schütze, Michael

    2016-08-01

    Titanium aluminide alloys are lightweight and have attractive properties for high-temperature applications. A new growth method that enables single-crystal production now boosts their mechanical performance.

  6. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  7. Breakdown of Spatial Parallel Coding in Children's Drawing

    ERIC Educational Resources Information Center

    De Bruyn, Bart; Davis, Alyson

    2005-01-01

    When drawing real scenes or copying simple geometric figures young children are highly sensitive to parallel cues and use them effectively. However, this sensitivity can break down in surprisingly simple tasks such as copying a single line where robust directional errors occur despite the presence of parallel cues. Before we can conclude that this…

  8. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than line relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. The parallel implementation of a V-cycle multiple semi-coarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers is addressed. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. A mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited is described. The result is a robust and effective multigrid algorithm for distributed-memory machines.
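    Point-relaxation smoothers of the kind these algorithms rely on are simple to state. A minimal textbook two-grid V-cycle for the 1-D Poisson problem, illustrating point smoothing plus coarse-grid correction (not the MSG algorithm itself; all parameters are illustrative):

```python
def jacobi(u, f, h, sweeps, omega=2.0/3.0):
    """Weighted-Jacobi point relaxation for -u'' = f with u[0] = u[-1] = 0.
    Every interior point updates independently, which is what makes point
    smoothers attractive on massively parallel machines."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    """Discrete residual r = f - A u for the 1-D Laplacian."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def two_grid(u, f, h):
    """One V-cycle of a two-grid correction scheme for the 1-D model problem."""
    u = jacobi(u, f, h, sweeps=3)                 # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):                    # full-weighting restriction
        rc[i] = 0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
    ec = jacobi([0.0]*nc, rc, 2*h, sweeps=200)    # (over-)solve the coarse problem
    for i in range(1, nc - 1):                    # prolongate: coincident points
        u[2*i] += ec[i]
    for i in range(nc - 1):                       # prolongate: midpoints
        u[2*i + 1] += 0.5 * (ec[i] + ec[i+1])
    return jacobi(u, f, h, sweeps=3)              # post-smooth
```

    In an MSG method the single coarse grid above is replaced by several semi-coarsened grids, but the smoother remains a pointwise update of exactly this kind.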

  9. Forward vehicle detection using cluster-based AdaBoost

    NASA Astrophysics Data System (ADS)

    Baek, Yeul-Min; Kim, Whoi-Yul

    2014-10-01

    A camera-based forward vehicle detection method with range estimation for a forward collision warning system (FCWS) is presented. Previous vehicle detection methods that use conventional classifiers are not robust in a real driving environment because they lack the effectiveness of classifying vehicle samples with high intraclass variation and noise. Therefore, an improved AdaBoost, named cluster-based AdaBoost (C-AdaBoost), for classifying noisy samples along with a forward vehicle detection method are presented in this manuscript. The experiments performed consist of two parts: performance evaluations of C-AdaBoost and forward vehicle detection. The proposed C-AdaBoost shows better performance than conventional classification algorithms on the synthetic as well as various real-world datasets. In particular, when the dataset has more noisy samples, C-AdaBoost outperforms conventional classification algorithms. The proposed method is also tested with an experimental vehicle on a proving ground and on public roads, ~62 km in length. The proposed method shows a 97% average detection rate and requires only 9.7 ms per frame. The results show the reliability of the proposed method for FCWS in terms of both detection rate and processing time.

  10. Our intraoperative boost radiotherapy experience and applications

    PubMed Central

    Günay, Semra; Alan, Ömür; Yalçın, Orhan; Türkmen, Aygen; Dizdar, Nihal

    2016-01-01

    Objective: To present our experience since November 2013, and case selection criteria for intraoperative boost radiotherapy (IObRT) that significantly reduces the local recurrence rate after breast conserving surgery in patients with breast cancer. Material and Methods: Patients who were suitable for IObRT were identified within the group of patients who were selected for breast conserving surgery at our breast council. A MOBETRON (mobile linear accelerator for IObRT) was used for IObRT during surgery. Results: Patients younger than 60 years old with <3 cm invasive ductal cancer in one focus (or two foci within 2 cm), with a histologic grade of 2–3, and a high possibility of local recurrence were admitted for IObRT application. Informed consent was obtained from all participants. Lumpectomy and sentinel lymph node biopsy were performed and advancement flaps were prepared according to the size and inclination of the conus following evaluation of tumor size and surgical margins by pathology. Distance to the thoracic wall was measured, and a radiation oncologist and radiation physicist calculated the required dose. Anesthesia was regulated with slower ventilation frequency, without causing hypoxia. The skin and incision edges were protected, the field was irradiated (with a 6 MeV electron beam at 10 Gy) and the incision was closed. In our cases, there were no major postoperative surgical or early radiotherapy related complications. Conclusion: The completion of another stage of local therapy with IObRT during surgery positively affects the sequencing of other treatments like chemotherapy, hormonotherapy and radiotherapy, if required. IObRT increases disease free and overall survival, as well as quality of life in breast cancer patients. PMID:26985156

  11. Engineering RENTA, a DNA prime-MVA boost HIV vaccine tailored for Eastern and Central Africa.

    PubMed

    Nkolola, J P; Wee, E G-T; Im, E-J; Jewell, C P; Chen, N; Xu, X-N; McMichael, A J; Hanke, T

    2004-07-01

    For the development of human immunodeficiency virus type 1 (HIV-1) vaccines, traditional approaches inducing virus-neutralizing antibodies have so far failed. Thus the effort is now focused on elicitation of cellular immunity. We are currently testing in clinical trials in the United Kingdom and East Africa a T-cell vaccine consisting of HIV-1 clade A Gag-derived immunogen HIVA delivered in a prime-boost regimen by a DNA plasmid and modified vaccinia virus Ankara (MVA). Here, we describe engineering and preclinical development of a second immunogen RENTA, which will be used in combination with the present vaccine in a four-component DNA/HIVA-RENTA prime-MVA/HIVA-RENTA boost formulation. RENTA is a fusion protein derived from consensus HIV clade A sequences of Tat, reverse transcriptase, Nef and gp41. We inactivated the natural biological activities of the HIV components and confirmed immunogenicities of the pTHr.RENTA and MVA.RENTA vaccines in mice. Furthermore, we demonstrated in mice and rhesus monkeys broadening of HIVA-elicited T-cell responses by a parallel induction of HIVA- and RENTA-specific responses recognizing multiple HIV epitopes.

  12. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching Yuen

    1991-01-01

    A new Lagrangian formulation of the Euler equation is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.

  13. Flow cytometric application of helper adenovirus (HAd) containing GFP gene flanked by two parallel loxP sites to evaluation of 293 cre-complementing cell line and monitoring of HAd in Gutless Ad production.

    PubMed

    Park, Min Tae; Hwang, Su-Jeong; Lee, Gyun Min

    2004-01-01

    Gutless adenoviruses (GAds), namely, all gene-deleted adenoviruses, were developed to minimize their immune responses and toxic effects for a successful gene delivery tool in gene therapy. The Cre/loxP system has been widely used for GAd production. To produce GAd with a low amount of helper adenovirus (HAd) as byproduct, it is indispensable to use 293Cre cells expressing a high level of Cre for GAd production. In this study, we constructed the HAd containing enhanced green fluorescent protein gene flanked by two parallel loxP sites (HAd/GFP). The use of HAd/GFP with flow cytometry allows one to select 293Cre cells expressing a high level of Cre without using conventional Western blot analysis. Unlike conventional HAd titration methods such as plaque assay and end-point dilution assay, it also allows one to monitor rapidly the HAd as byproduct in earlier stages of GAd amplification. Taken together, the use of HAd/GFP with flow cytometry facilitates bioprocess development for efficient GAd production.

  14. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
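    The flavor of multigrid reduction in time can be conveyed by parareal, its two-level special case. A sketch for the scalar model problem y' = lam*y, using backward Euler for both propagators (illustrative only, not this package's implementation):

```python
def parareal(lam, y0, T, n_coarse, n_fine, iters):
    """Parareal iteration for y' = lam*y, y(0) = y0, a two-level special case
    of multigrid reduction in time. The fine propagator F is applied to all
    coarse intervals independently (the parallel-in-time step), while the
    cheap coarse propagator G sweeps sequentially."""
    dT = T / n_coarse

    def G(y):                    # coarse propagator: one backward-Euler step
        return y / (1.0 - lam * dT)

    def F(y):                    # fine propagator: n_fine backward-Euler substeps
        dt = dT / n_fine
        for _ in range(n_fine):
            y = y / (1.0 - lam * dt)
        return y

    U = [y0]                     # initial guess: sequential coarse sweep
    for _ in range(n_coarse):
        U.append(G(U[-1]))
    for _ in range(iters):
        Fv = [F(U[k]) for k in range(n_coarse)]   # embarrassingly parallel part
        Gv = [G(U[k]) for k in range(n_coarse)]
        newU = [y0]
        for k in range(n_coarse):                 # sequential correction sweep
            newU.append(G(newU[-1]) + Fv[k] - Gv[k])
        U = newU
    return U                     # approximate solution at the coarse time points
```

    After n_coarse iterations the method reproduces the serial fine solution exactly; MGRIT generalizes this two-level correction to a full multilevel hierarchy in time.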

  16. Speckle Reduction and Structure Enhancement by Multichannel Median Boosted Anisotropic Diffusion

    NASA Astrophysics Data System (ADS)

    Yang, Zhi; Fox, Martin D.

    2004-12-01

    We propose a new approach to reduce speckle noise and enhance structures in speckle-corrupted images. It utilizes a median-anisotropic diffusion compound scheme. The median-filter-based reaction term acts as a guided energy source to boost the structures in the image being processed. In addition, it regularizes the diffusion equation to ensure the existence and uniqueness of a solution. We also introduce a decimation and back reconstruction scheme to further enhance the processing result. Before the iteration of the diffusion process, the image is decimated and a subpixel shifted image set is formed. This allows a multichannel parallel diffusion iteration, and more importantly, the speckle noise is broken into impulsive or salt-and-pepper noise, which is easy to remove by median filtering. The advantage of the proposed technique is clear when it is compared to other diffusion algorithms and the well-known adaptive weighted median filtering (AWMF) scheme in both simulation and real medical ultrasound images.
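    A single-channel sketch of the compound scheme, assuming a Perona-Malik-style edge-stopping diffusivity for the diffusion term and a 3x3 median for the reaction term; the parameter names and values (dt, beta, kappa) are illustrative, not the paper's:

```python
import math
import statistics

def median3x3(u, i, j):
    """Median over the 3x3 neighbourhood of pixel (i, j)."""
    return statistics.median(u[a][b] for a in (i-1, i, i+1) for b in (j-1, j, j+1))

def median_boosted_diffusion(img, steps=10, dt=0.2, beta=0.3, kappa=30.0):
    """One diffusion channel of a median-boosted scheme: an edge-stopping
    diffusion term plus a reaction term pulling each pixel toward its 3x3
    median (the median-guided "energy source" of the abstract). Borders are
    left untouched for simplicity."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(steps):
        new = [row[:] for row in u]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                grads = (u[i-1][j] - u[i][j], u[i+1][j] - u[i][j],
                         u[i][j-1] - u[i][j], u[i][j+1] - u[i][j])
                # edge-stopping diffusivity: large gradients diffuse little
                flux = sum(g * math.exp(-(g / kappa) ** 2) for g in grads)
                new[i][j] = (u[i][j] + dt * flux
                             + beta * (median3x3(u, i, j) - u[i][j]))
        u = new
    return u
```

    In the full scheme this iteration would run on each of the subpixel-shifted decimated channels in parallel, where speckle appears as the impulsive noise the median term removes well.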

  17. Parallel programming interface for distributed data

    NASA Astrophysics Data System (ADS)

    Wang, Manhui; May, Andrew J.; Knowles, Peter J.

    2009-12-01

    The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit, or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform. Program summary: Program title: PPIDD. Catalogue identifier: AEEF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEF_1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 17 698. No. of bytes in distributed program, including test data, etc.: 166 173. Distribution format: tar.gz. Programming language: Fortran, C. Computer: many parallel systems. Operating system: various. Has the code been vectorised or parallelized?: yes, 2-256 processors used. RAM: 50 Mbytes. Classification: 6.5. External routines: Global Arrays or MPI-2. Nature of problem: many scientific applications require management and communication of data that is global, and the standard MPI-2 protocol provides only low-level methods for the required one-sided remote memory access. Solution method: the PPIDD library delivers communications and global data management behind an interface that can be built on either the Global Arrays toolkit or a standard MPI-2 library, so portable parallel codes can use whichever communications library is available on a given platform. Running time: problem dependent. The test provided with

  18. A Data Parallel Algorithm for XML DOM Parsing

    NASA Astrophysics Data System (ADS)

    Shah, Bhavik; Rao, Praveen R.; Moon, Bongki; Rajagopalan, Mohan

    The extensible markup language XML has become the de facto standard for information representation and interchange on the Internet. XML parsing is a core operation performed on an XML document for it to be accessed and manipulated. This operation is known to cause performance bottlenecks in applications and systems that process large volumes of XML data. We believe that parallelism is a natural way to boost performance. Leveraging multicore processors can offer a cost-effective solution, because future multicore processors will support hundreds of cores, and will offer a high degree of parallelism in hardware. We propose a data parallel algorithm called ParDOM for XML DOM parsing, that builds an in-memory tree structure for an XML document. ParDOM has two phases. In the first phase, an XML document is partitioned into chunks and parsed in parallel. In the second phase, partial DOM node tree structures created during the first phase, are linked together (in parallel) to build a complete DOM node tree. ParDOM offers fine-grained parallelism by adopting a flexible chunking scheme - each chunk can contain an arbitrary number of start and end XML tags that are not necessarily matched. ParDOM can be conveniently implemented using a data parallel programming model that supports map and sort operations. Through empirical evaluation, we show that ParDOM yields better scalability than PXP [23] - a recently proposed parallel DOM parsing algorithm - on commodity multicore processors. Furthermore, ParDOM can process a wide variety of XML datasets with complex structures which PXP fails to parse.
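    The two-phase map-then-link structure can be sketched as below. This simplification chunks the document at matched record boundaries and assumes records are not nested, whereas ParDOM's chunks may contain arbitrary unmatched tags:

```python
import re
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def parse_chunked(xml_text, record_tag, workers=4):
    """Two-phase, data-parallel DOM build in the spirit of ParDOM, heavily
    simplified: split the document into matched <record>...</record> chunks,
    parse each chunk into a partial tree concurrently (the map step), then
    link the partial trees under a single root.  Assumes 'record_tag'
    elements are not nested inside one another."""
    # phase 0: chunk the document at record boundaries
    pattern = r'<%s\b.*?</%s>' % (record_tag, record_tag)
    chunks = re.findall(pattern, xml_text, flags=re.S)
    # phase 1: parse chunks concurrently into partial DOM trees
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial = list(pool.map(ET.fromstring, chunks))
    # phase 2: link the partial trees under one root
    root = ET.Element('collection')
    for elem in partial:
        root.append(elem)
    return root
```

    ParDOM's extra machinery (arbitrary chunk boundaries, sort-based linking of dangling subtrees) is what removes this sketch's matched-boundary assumption.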

  19. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing impact to productivity.
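    The underlying idea of replacing a serial cp with concurrent ranged I/O can be sketched as follows; this is an illustrative POSIX sketch (stream count and chunk size are arbitrary), not the OLCF toolkit's implementation:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(src, dst, nstreams=4):
    """Copy a file by splitting it into byte ranges and copying the ranges
    concurrently with positional reads/writes (os.pread/os.pwrite, so the
    streams never share a file offset).  Requires a POSIX platform."""
    size = os.path.getsize(src)
    with open(dst, 'wb') as f:
        f.truncate(size)                 # pre-extend so streams can pwrite anywhere
    step = (size + nstreams - 1) // nstreams or 1
    src_fd = os.open(src, os.O_RDONLY)
    dst_fd = os.open(dst, os.O_WRONLY)

    def copy_range(offset):
        remaining = min(step, size - offset)
        while remaining > 0:
            buf = os.pread(src_fd, min(1 << 20, remaining), offset)
            if not buf:                  # unexpected EOF guard
                break
            os.pwrite(dst_fd, buf, offset)
            offset += len(buf)
            remaining -= len(buf)

    try:
        with ThreadPoolExecutor(max_workers=nstreams) as pool:
            list(pool.map(copy_range, range(0, size, step)))
    finally:
        os.close(src_fd)
        os.close(dst_fd)
    return size
```

    On a striped parallel filesystem, each stream can land on different storage targets, which is where the order-of-magnitude gains the abstract describes come from.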

  20. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  1. Improvements on Pulsed Current Sharing in Driving Parallel MOSFETs

    NASA Astrophysics Data System (ADS)

    Takagi, Hajime; Orihara, Masato; Yamada, Tsutomu; Yanagidaira, Takeshi

    To switch high-voltage and high-current pulses by using MOS (Metal Oxide Semiconductor) transistors, it is necessary to distribute the voltage and current evenly to each element connected in series and parallel. In parallel connection, the current flowing in each element differs depending on the series resistance and wiring inductance. We verified improvements on pulsed current sharing in parallel transistors which were arranged in line on a printed circuit board. Although the gate and drain wirings differ in length, the pulsed current was evenly distributed by using transmission line transformers. Dissipation in the transistors was equalized and four transistors were driven simultaneously near the rated current.

  2. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  3. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. 
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  4. SPINning parallel systems software.

    SciTech Connect

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  5. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  6. Boosted Fast Flux Loop Final Report

    SciTech Connect

    Boosted Fast Flux Loop Project Staff

    2009-09-01

    The Boosted Fast Flux Loop (BFFL) project was initiated to determine basic feasibility of designing, constructing, and installing in a host irradiation facility, an experimental vehicle that can replicate with reasonable fidelity the fast-flux test environment needed for fuels and materials irradiation testing for advanced reactor concepts. Originally called the Gas Test Loop (GTL) project, the activity included (1) determination of requirements that must be met for the GTL to be responsive to potential users, (2) a survey of nuclear facilities that may successfully host the GTL, (3) conceptualizing designs for hardware that can support the needed environments for neutron flux intensity and energy spectrum, atmosphere, flow, etc. needed by the experimenters, and (4) examining other aspects of such a system, such as waste generation and disposal, environmental concerns, needs for additional infrastructure, and requirements for interfacing with the host facility. A revised project plan included requesting an interim decision, termed CD-1A, that had objectives of establishing the site for the project at the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL), deferring the CD-1 application, and authorizing a research program that would resolve the most pressing technical questions regarding GTL feasibility, including issues relating to the use of booster fuel in the ATR. Major research tasks were (1) hydraulic testing to establish flow conditions through the booster fuel, (2) mini-plate irradiation tests and post-irradiation examination to alleviate concerns over corrosion at the high heat fluxes planned, (3) development and demonstration of booster fuel fabrication techniques, and (4) a review of the impact of the GTL on the ATR safety basis. A revised cooling concept for the apparatus was conceptualized, which resulted in renaming the project to the BFFL. Before the subsequent CD-1 approval request could be made, a decision was made in April 2006

  7. Gene network-based cancer prognosis analysis with sparse boosting

    PubMed Central

    Ma, Shuangge; Huang, Yuan; Huang, Jian; Fang, Kuangnan

    2013-01-01

    High-throughput gene profiling studies have been extensively conducted, searching for markers associated with cancer development and progression. In this study, we analyse cancer prognosis studies with right censored survival responses. With gene expression data, we adopt the weighted gene co-expression network analysis (WGCNA) to describe the interplay among genes. In network analysis, nodes represent genes. There are subsets of nodes, called modules, which are tightly connected to each other. Genes within the same modules tend to have co-regulated biological functions. For cancer prognosis data with gene expression measurements, our goal is to identify cancer markers, while properly accounting for the network module structure. A two-step sparse boosting approach, called Network Sparse Boosting (NSBoost), is proposed for marker selection. In the first step, for each module separately, we use a sparse boosting approach for within-module marker selection and construct module-level ‘super markers’. In the second step, we use the super markers to represent the effects of all genes within the same modules and conduct module-level selection using a sparse boosting approach. Simulation study shows that NSBoost can more accurately identify cancer-associated genes and modules than alternatives. In the analysis of breast cancer and lymphoma prognosis studies, NSBoost identifies genes with important biological implications. It outperforms alternatives including the boosting and penalization approaches by identifying a smaller number of genes/modules and/or having better prediction performance. PMID:22950901
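    The within-module selection step can be illustrated with generic componentwise L2 boosting, the basic engine that sparse boosting extends with a sparsity penalty. The sketch below is a simplified illustration (squared-error loss, no penalty or censoring handling), not the paper's NSBoost implementation:

```python
def componentwise_boosting(X, y, n_steps=50, nu=0.1):
    """Minimal componentwise L2 boosting: at each step, pick the single
    feature that best fits the current residual and take a small step nu
    on its least-squares coefficient. Features never selected keep a
    zero coefficient, which is how boosting performs marker selection."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(n_steps):
        best_j, best_b, best_gain = 0, 0.0, -1.0
        for j in range(p):
            xj = [row[j] for row in X]
            ss = sum(v * v for v in xj)
            if ss == 0:
                continue
            b = sum(v * r for v, r in zip(xj, resid)) / ss
            gain = b * b * ss  # residual sum-of-squares reduction
            if gain > best_gain:
                best_j, best_b, best_gain = j, b, gain
        beta[best_j] += nu * best_b
        resid = [r - nu * best_b * row[best_j] for r, row in zip(resid, X)]
    return beta

# y depends only on feature 0; boosting concentrates weight there.
X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
beta = componentwise_boosting(X, y)
assert abs(beta[0]) > abs(beta[1])
```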

  8. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  9. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  10. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  11. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  12. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
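    The optimization classes above hinge on the mathematical properties of the reduction operation: an associative operation lets the sequential fold be restructured as a balanced combining tree whose levels can run concurrently. A minimal sketch of that restructuring (an illustrative helper, not Sisal itself):

```python
from functools import reduce

def tree_reduce(op, values):
    """Combine values pairwise in a balanced tree. With an associative
    op, all pairs on one level are independent and could be evaluated
    concurrently, giving O(log n) depth instead of an O(n) chain."""
    vals = list(values)
    if not vals:
        raise ValueError("empty reduction")
    while len(vals) > 1:
        paired = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries to the next level
            paired.append(vals[-1])
        vals = paired
    return vals[0]

# For an associative op (sum), the tree and the sequential fold agree.
data = list(range(10))
assert tree_reduce(lambda a, b: a + b, data) == reduce(lambda a, b: a + b, data) == 45
```

    Associativity is the key property; a non-associative reduction (e.g. floating-point subtraction) cannot be reordered this way without changing the result, which is why the classification of reduction operations drives which optimizations apply.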

  13. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N{sup 2}) time. The parallel time complexity estimates for our algorithms are O(N/n{sub p}) for uniform point distributions and O( (N/n{sub p}) log (N/n{sub p}) + n{sub p}log n{sub p}) for non-uniform distributions using n{sub p} CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
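    The quantity being accelerated is the discrete Gauss transform. A direct evaluation, written here in one dimension for illustration, is the O(N^2) baseline the parallel algorithms improve upon:

```python
import math

def direct_gauss_transform(sources, targets, weights, delta):
    """Naive evaluation of G(y_j) = sum_i w_i * exp(-|y_j - x_i|^2 / delta).
    Cost is O(len(sources) * len(targets)); the fast algorithms in the
    record approximate this sum in near-linear time."""
    return [
        sum(w * math.exp(-(y - x) ** 2 / delta)
            for x, w in zip(sources, weights))
        for y in targets
    ]

# G(0) with unit weights at x = 0 and x = 1, delta = 1: exp(0) + exp(-1)
vals = direct_gauss_transform([0.0, 1.0], [0.0], [1.0, 1.0], 1.0)
assert abs(vals[0] - (1.0 + math.exp(-1.0))) < 1e-12
```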

  14. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  15. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  16. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  17. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  18. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  19. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  20. (In)Direct detection of boosted dark matter

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2016-05-01

    We present a new multi-component dark matter model with a novel experimental signature that mimics neutral current interactions at neutrino detectors. In our model, the dark matter is composed of two particles, a heavier dominant component that annihilates to produce a boosted lighter component that we refer to as boosted dark matter. The lighter component is relativistic and scatters off electrons in neutrino experiments to produce Cherenkov light. This model combines the indirect detection of the dominant component with the direct detection of the boosted dark matter. Directionality can be used to distinguish the dark matter signal from the atmospheric neutrino background. We discuss the viable region of parameter space in current and future experiments.

  1. Behavior of entanglement and Cooper pairs under relativistic boosts

    SciTech Connect

    Palge, Veiko; Dunningham, Jacob A.; Vedral, Vlatko

    2011-10-15

    Recent work [J. A. Dunningham, V. Palge, and V. Vedral, Phys. Rev. A 80, 044302 (2009)] has shown how single-particle entangled states are transformed when boosted in relativistic frames for certain restricted geometries. Here we extend that work to consider completely general inertial boosts. We then apply our single-particle results to multiparticle entanglements by focusing on Cooper pairs of electrons. We show that a standard Cooper pair state consisting of a spin-singlet acquires spin-triplet components in a relativistically boosted inertial frame, regardless of the geometry. We also show that, if we start with a spin-triplet pair, two out of the three triplet states acquire a singlet component, the size of which depends on the geometry. This transformation between the different singlet and triplet superconducting pairs may lead to a better understanding of unconventional superconductivity.

  2. A feedforward compensation design in critical conduction mode boost power factor correction for low-power low total harmonic distortion

    NASA Astrophysics Data System (ADS)

    Yani, Li; Yintang, Yang; Zhangming, Zhu; Wei, Qiang

    2012-03-01

    For low-power low total harmonic distortion (THD), based on the CSMC 0.5 μm BCD process, a novel boost power factor correction (PFC) converter in critical conduction mode is discussed and analyzed. Feedforward compensation design is introduced in order to increase the PWM duty cycle and supply more conversion energy near the input voltage zero-crossing points, thus regulating the inductor current of the PFC converter and compensating the system loop gain change with ac line voltage. Both theoretical and practical results reveal that the proposed PFC converter with feedforward compensation cell has better power factor and THD performance, and is suitable for low-power low THD design applications. The experimental THD of the boost PFC converter is 4.5%, the start-up current is 54 μA, the stable operating current is 3.85 mA, the power factor is 0.998 and the efficiency is 95.2%.

  3. Digital parallel-to-series pulse-train converter

    NASA Technical Reports Server (NTRS)

    Hussey, J.

    1971-01-01

    Circuit converts number represented as two level signal on n-bit lines to series of pulses on one of two lines, depending on sign of number. Converter accepts parallel binary input data and produces number of output pulses equal to number represented by input data.
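    The circuit's behaviour can be modelled in a few lines. The function below is a hypothetical software analogue (names are invented for illustration), emitting one pulse per unit of the input magnitude on a line selected by the sign bit:

```python
def parallel_to_pulses(bits, sign_bit):
    """Model of a parallel-to-series converter: an n-bit magnitude on
    parallel input lines becomes a burst of that many pulses, steered
    to the 'plus' or 'minus' output line by the sign bit."""
    n = int("".join(str(b) for b in bits), 2)   # parallel lines -> integer
    pulses = [1] * n                            # one pulse per count
    return ("minus", pulses) if sign_bit else ("plus", pulses)

line, pulses = parallel_to_pulses([1, 0, 1], sign_bit=0)  # binary 101 = 5
assert line == "plus" and len(pulses) == 5
```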

  4. 10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND SOUTH PIER. LOOKING SOUTHEAST. - Route 31 Bridge, New Jersey Route 31, crossing disused main line of Central Railroad of New Jersey (C.R.R.N.J.) (New Jersey Transit's Raritan Valley Line), Hampton, Hunterdon County, NJ

  5. Boosted Fast Flux Loop Alternative Cooling Assessment

    SciTech Connect

    Glen R. Longhurst; Donna Post Guillen; James R. Parry; Douglas L. Porter; Bruce W. Wallace

    2007-08-01

    The Gas Test Loop (GTL) Project was instituted to develop the means for conducting fast neutron irradiation tests in a domestic radiation facility. It made use of booster fuel to achieve the high neutron flux, a hafnium thermal neutron absorber to attain the high fast-to-thermal flux ratio, a mixed gas temperature control system for maintaining experiment temperatures, and a compressed gas cooling system to remove heat from the experiment capsules and the hafnium thermal neutron absorber. This GTL system was determined to provide a fast (E > 0.1 MeV) flux greater than 1.0E+15 n/cm2-s with a fast-to-thermal flux ratio in the vicinity of 40. However, the estimated system acquisition cost from earlier studies was deemed to be high. That cost was strongly influenced by the compressed gas cooling system for experiment heat removal. Designers were challenged to find a less expensive way to achieve the required cooling. This report documents the results of the investigation leading to an alternatively cooled configuration, referred to now as the Boosted Fast Flux Loop (BFFL). This configuration relies on a composite material comprised of hafnium aluminide (Al3Hf) in an aluminum matrix to transfer heat from the experiment to pressurized water cooling channels while at the same time providing absorption of thermal neutrons. Investigations into the performance that this configuration might achieve showed that it should perform at least as well as its gas-cooled predecessor. Physics calculations indicated that the fast neutron flux averaged over the central 40 cm (16 inches) relative to ATR core mid-plane in irradiation spaces would be about 1.04E+15 n/cm2-s. The fast-to-thermal flux ratio would be in excess of 40. Further, the particular configuration of cooling channels was relatively unimportant compared with the total amount of water in the apparatus in determining performance. Thermal analyses conducted on a candidate configuration showed the design of the water coolant and

  6. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  7. The Lateral Decubitus Breast Boost: Description, Rationale, and Efficacy

    SciTech Connect

    Ludwig, Michelle S.; McNeese, Marsha D.; Buchholz, Thomas A.; Perkins, George H.; Strom, Eric A.

    2010-01-15

    Purpose: To describe and evaluate the modified lateral decubitus boost, a breast irradiation technique. Patients are repositioned and resimulated for electron boost to minimize the necessary depth for the electron beam and optimize target volume coverage. Methods and Materials: A total of 2,606 patients were treated with post-lumpectomy radiation at our institution between January 1, 2000, and February 1, 2008. Of these, 231 patients underwent resimulation in the lateral decubitus position with electron boost. Distance from skin to the maximal depth of target volume was measured in both the original and boost plans. Age, body mass index (BMI), boost electron energy, and skin reaction were evaluated. Results: Resimulation in the lateral decubitus position reduced the distance from skin to maximal target volume depth in all patients. Average depth reduction by repositioning was 2.12 cm, allowing for an average electron energy reduction of approximately 7 MeV. Mean skin entrance dose was reduced from about 90% to about 85% (p < 0.001). Only 14 patients (6%) experienced moist desquamation in the boost field at the end of treatment. Average BMI of these patients was 30.4 (range, 17.8-50.7). BMI greater than 30 was associated with more depth reduction by repositioning and increased risk of moist desquamation. Conclusions: The lateral decubitus position allows for a decrease in the distance from the skin to the target volume depth, improving electron coverage of the tumor bed while reducing skin entrance dose. This is a well-tolerated regimen for a patient population with a high BMI or deep tumor location.

  8. Self-boosting vaccines and their implications for herd immunity.

    PubMed

    Arinaminpathy, Nimalan; Lavine, Jennie S; Grenfell, Bryan T

    2012-12-01

    Advances in vaccine technology over the past two centuries have facilitated far-reaching impact in the control of many infections, and today's emerging vaccines could likewise open new opportunities in the control of several diseases. Here we consider the potential, population-level effects of a particular class of emerging vaccines that use specific viral vectors to establish long-term, intermittent antigen presentation within a vaccinated host: in essence, "self-boosting" vaccines. In particular, we use mathematical models to explore the potential role of such vaccines in situations where current immunization raises only relatively short-lived protection. Vaccination programs in such cases are generally limited in their ability to raise lasting herd immunity. Moreover, in certain cases mass vaccination can have the counterproductive effect of allowing an increase in severe disease, through reducing opportunities for immunity to be boosted through natural exposure to infection. Such dynamics have been proposed, for example, in relation to pertussis and varicella-zoster virus. In this context we show how self-boosting vaccines could open qualitatively new opportunities, for example by broadening the effective duration of herd immunity that can be achieved with currently used immunogens. At intermediate rates of self-boosting, these vaccines also alleviate the potential counterproductive effects of mass vaccination, through compensating for losses in natural boosting. Importantly, however, we also show how sufficiently high boosting rates may introduce a new regime of unintended consequences, wherein the unvaccinated bear an increased disease burden. Finally, we discuss important caveats and data needs arising from this work.
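    The qualitative effect described can be seen in a toy two-compartment waning model (an illustration constructed here, not the paper's model): immunity passes from a recently protected class R into a waning class W before being lost, and self-boosting returns individuals from W to R before protection lapses.

```python
def immune_fraction(waning_rate, boost_rate, t_end=50.0, dt=0.01):
    """Toy model (illustrative only): R = recently protected, W = waning.
    R moves to W at waning_rate; W loses protection at waning_rate but is
    returned to R by self-boosting at boost_rate. Forward-Euler integration."""
    R, W = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        dR = -waning_rate * R + boost_rate * W
        dW = waning_rate * R - (waning_rate + boost_rate) * W
        R += dR * dt
        W += dW * dt
    return R + W   # fraction of the cohort still protected

# A higher self-boosting rate keeps more of the cohort protected,
# effectively broadening the duration of herd immunity.
assert immune_fraction(0.1, 1.0) > immune_fraction(0.1, 0.0)
```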

  9. 2001 BUDGET: Research Gets Hefty Boost in 2001 Defense Budget.

    PubMed

    Malakoff, D

    2000-09-01

    Next year's $289 billion defense budget, which President Bill Clinton signed last month, includes big boosts for a host of science programs, from endangered species research to developing laser weapons. And with the two major presidential candidates pledging further boosts, the Pentagon's portfolio is attracting increasing attention from the life sciences community as well. But some analysts worry that Congress and the Pentagon may be shortchanging long-term, high-risk research in favor of projects with a more certain payoff. PMID:17811142

  10. Boosted Objects: A Probe of Beyond the Standard Model Physics

    SciTech Connect

    Abdesselam, A.; Kuutmann, E.Bergeaas; Bitenc, U.; Brooijmans, G.; Butterworth, J.; Bruckman de Renstrom, P.; Buarque Franzosi, D.; Buckingham, R.; Chapleau, B.; Dasgupta, M.; Davison, A.; Dolen, J.; Ellis, S.; Fassi, F.; Ferrando, J.; Frandsen, M.T.; Frost, J.; Gadfort, T.; Glover, N.; Haas, A.; Halkiadakis, E.; /more authors..

    2012-06-12

    We present the report of the hadronic working group of the BOOST2010 workshop held at the University of Oxford in June 2010. The first part contains a review of the potential of hadronic decays of highly boosted particles as an aid for discovery at the LHC and a discussion of the status of tools developed to meet the challenge of reconstructing and isolating these topologies. In the second part, we present new results comparing the performance of jet grooming techniques and top tagging algorithms on a common set of benchmark channels. We also study the sensitivity of jet substructure observables to the uncertainties in Monte Carlo predictions.

  11. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.
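    The final consensus stage amounts to a weighted combination of the stage-network outputs followed by an argmax decision; a minimal sketch of that combining rule (illustrative, not the authors' code):

```python
def consensual_classify(stage_outputs, weights):
    """Consensus stage of a parallel consensual network: each stage
    classifier emits a per-class score vector; the vectors are weighted,
    summed, and the class with the largest combined score is chosen."""
    n_classes = len(stage_outputs[0])
    combined = [0.0] * n_classes
    for scores, w in zip(stage_outputs, weights):
        for c in range(n_classes):
            combined[c] += w * scores[c]
    return max(range(n_classes), key=lambda c: combined[c])

# Two stages agree on class 1; the third (down-weighted) prefers class 0.
stages = [[0.2, 0.8], [0.1, 0.9], [0.7, 0.3]]
assert consensual_classify(stages, [1.0, 1.0, 0.5]) == 1
```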

  12. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
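    The DFT-IDFT overlap-and-save method underlying these architectures can be sketched in NumPy. This is a standard scalar implementation for illustration (the block size `nfft` is an arbitrary choice here), not the VLSI subfilter decomposition itself:

```python
import numpy as np

def overlap_save(x, h, nfft=16):
    """Overlap-and-save FIR filtering: the full linear convolution of x
    with h, computed block-by-block with DFT/IDFT pairs of size nfft.
    Each block reuses the last M-1 input samples of its predecessor and
    discards the M-1 outputs corrupted by circular wrap-around."""
    M = len(h)
    step = nfft - (M - 1)                      # new samples consumed per block
    H = np.fft.rfft(h, nfft)                   # subfilter spectrum, computed once
    x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(step)])
    out = []
    for start in range(0, len(x), step):
        block = x_pad[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        out.append(y[M - 1:])                  # drop the aliased prefix
    return np.concatenate(out)[:len(x) + M - 1]

x = np.random.default_rng(0).standard_normal(50)
h = np.array([1.0, 0.5, 0.25])
assert np.allclose(overlap_save(x, h), np.convolve(x, h))
```

    As the record notes, the DFT size is set by the desired reduction in per-sample processing rate rather than by the filter order, since long filters can be split across frequency-domain subfilters.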

  13. Parallel grid population

    SciTech Connect

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
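    The two phases of the method can be modelled sequentially; in the claimed method each phase is distributed across the n processors. A one-dimensional sketch with interval objects (names and geometry are illustrative):

```python
def populate_grid(objects, grid_min, grid_max, n):
    """Sequential model of the two-phase parallel grid population:
    phase 1 maps each object (here an interval (lo, hi)) to every grid
    portion it at least partially bounds; phase 2 would then let each
    'processor' populate its own portion independently."""
    width = (grid_max - grid_min) / n
    # Phase 1: for each object, find the portions it touches.
    portions = [[] for _ in range(n)]
    for obj_id, (lo, hi) in enumerate(objects):
        first = max(0, int((lo - grid_min) // width))
        last = min(n - 1, int((hi - grid_min) // width))
        for p in range(first, last + 1):
            portions[p].append(obj_id)
    # Phase 2: each portion's list is populated independently (parallelizable).
    return portions

# Object 0 spans portions 0-2 of a [0, 4) grid split 4 ways; object 1 sits in portion 3.
assert populate_grid([(0.5, 2.5), (3.1, 3.9)], 0.0, 4.0, 4) == [[0], [0], [0], [1]]
```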

  14. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  15. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  16. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  17. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  18. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  19. Collisionless parallel shocks

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  20. Early childhood investments substantially boost adult health.

    PubMed

    Campbell, Frances; Conti, Gabriella; Heckman, James J; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-03-28

    High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report on the long-term health effects of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143 millimeters of mercury (mm Hg), whereas it is only 126 mm Hg among the treated. One in four males in the control group is affected by metabolic syndrome, whereas none in the treatment group are affected. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health. PMID:24675955
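
    The exact permutation test the authors rely on for small samples can be sketched directly: enumerate every relabeling of the pooled sample and count how often the difference in means is at least as extreme as the observed one. The blood-pressure numbers below are hypothetical, chosen only to echo the reported group means, and are not the study's data.

```python
import itertools
import numpy as np

def exact_permutation_pvalue(treated, control):
    """Two-sample exact permutation test on the difference of means.
    Enumerating all relabelings keeps the test valid at small sample sizes."""
    pooled = np.concatenate([treated, control])
    n_t = len(treated)
    observed = treated.mean() - control.mean()
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_t):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        diff = pooled[mask].mean() - pooled[~mask].mean()
        count += abs(diff) >= abs(observed) - 1e-12   # two-sided count
        total += 1
    return count / total

# Hypothetical systolic readings (mm Hg), not the Abecedarian data
control = np.array([143.0, 150.0, 138.0, 146.0])
treated = np.array([126.0, 124.0, 131.0, 129.0])
p = exact_permutation_pvalue(treated, control)
```

    With completely separated groups like these, only the original labeling and its mirror are as extreme, so the two-sided p-value is 2 divided by the number of relabelings.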

  1. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  2. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.
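
    The abstract does not detail its "optimal method" for compositing, but the final divide-render-composite step can be sketched with a depth-based (z-buffer) merge of partial images: at each pixel, keep the sample closest to the viewer. The toy 2x2 images below are illustrative.

```python
import numpy as np

def composite(partials):
    """Composite partial images by keeping, per pixel, the sample with the
    smallest depth, as in z-buffer compositing."""
    colors = np.stack([p["color"] for p in partials])   # (k, h, w)
    depths = np.stack([p["depth"] for p in partials])   # (k, h, w)
    winner = depths.argmin(axis=0)                      # nearest sample per pixel
    h, w = winner.shape
    return colors[winner, np.arange(h)[:, None], np.arange(w)]

# Two partial images rendered from disjoint halves of a data set (toy values)
a = {"color": np.full((2, 2), 1.0), "depth": np.array([[0.2, 0.9], [0.9, 0.2]])}
b = {"color": np.full((2, 2), 2.0), "depth": np.array([[0.5, 0.1], [0.1, 0.5]])}
img = composite([a, b])
```

    On a MIMD machine each node renders its own partial color/depth pair independently; only this merge requires communication, which is why the pattern scales.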

  3. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is, to the extent possible, to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  4. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT
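
    The frame-length argument can be checked in a few lines: at boost factor gamma, the plasma column Lorentz-contracts by gamma while the counter-propagating laser pulse stretches by roughly (1 + beta) * gamma, collapsing the four-order-of-magnitude disparity. The lengths and gamma below are illustrative numbers, not values from the dissertation.

```python
import math

def boosted_lengths(plasma_len, laser_len, gamma):
    """Lab-frame plasma length contracts by gamma; the laser pulse length
    expands by approximately (1 + beta) * gamma in the boosted frame."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return plasma_len / gamma, laser_len * (1.0 + beta) * gamma

# Illustrative: 1 cm plasma, 1 micron pulse, boost gamma = 50
plasma_boosted, laser_boosted = boosted_lengths(1e-2, 1e-6, 50.0)
ratio_lab = 1e-2 / 1e-6                     # ~10^4 disparity in the lab frame
ratio_boosted = plasma_boosted / laser_boosted
```

    The two lengths become comparable in the boosted frame, which is what allows the quoted savings of more than four orders of magnitude in computational cost.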

  6. Secondary School Mathematics, Chapter 17, Perpendiculars and Parallels (II), Chapter 18, Coordinate Geometry. Student's Text.

    ERIC Educational Resources Information Center

    Stanford Univ., CA. School Mathematics Study Group.

    The first chapter, Perpendiculars and Parallels (II), of the ninth unit in this SMSG series includes a discussion of the properties of triangles, circles and perpendiculars, parallels in space, perpendicular lines and planes, and parallel planes. The next chapter, on coordinate geometry, covers distance; midpoints; algebraic descriptions of…

  7. OKVAR-Boost: a novel boosting algorithm to infer nonlinear dynamics and interactions in gene regulatory networks

    PubMed Central

    Lim, Néhémy; Şenbabaoğlu, Yasin; Michailidis, George; d’Alché-Buc, Florence

    2013-01-01

    Motivation: Reverse engineering of gene regulatory networks remains a central challenge in computational systems biology, despite recent advances facilitated by benchmark in silico challenges that have aided in calibrating their performance. A number of approaches using either perturbation (knock-out) or wild-type time-series data have appeared in the literature addressing this problem, with the latter using linear temporal models. Nonlinear dynamical models are particularly appropriate for this inference task, given the generation mechanism of the time-series data. In this study, we introduce a novel nonlinear autoregressive model based on operator-valued kernels that simultaneously learns the model parameters, as well as the network structure. Results: A flexible boosting algorithm (OKVAR-Boost) that shares features from L2-boosting and randomization-based algorithms is developed to perform the tasks of parameter learning and network inference for the proposed model. Specifically, at each boosting iteration, a regularized Operator-valued Kernel-based Vector AutoRegressive model (OKVAR) is trained on a random subnetwork. The final model consists of an ensemble of such models. The empirical estimation of the ensemble model’s Jacobian matrix provides an estimation of the network structure. The performance of the proposed algorithm is first evaluated on a number of benchmark datasets from the DREAM3 challenge and then on real datasets related to the In vivo Reverse-Engineering and Modeling Assessment (IRMA) and T-cell networks. The high-quality results obtained strongly indicate that it outperforms existing approaches. Availability: The OKVAR-Boost Matlab code is available as the archive: http://amis-group.fr/sourcecode-okvar-boost/OKVARBoost-v1.0.zip. Contact: florence.dalche@ibisc.univ-evry.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23574736

  8. 46 CFR 69.181 - Locating the line of the second deck.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) If the second deck is not stepped, the line of the second deck is the longitudinal line of the... following this paragraph), the line of the second deck is a longitudinal line extended parallel to...

  9. Lock-in-detection-free line-scan stimulated Raman scattering microscopy for near video-rate Raman imaging.

    PubMed

    Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2016-09-01

    We report on the development of a unique lock-in-detection-free line-scan stimulated Raman scattering microscopy technique based on a linear detector with a large full well capacity controlled by a field-programmable gate array (FPGA) for near video-rate Raman imaging. With the use of parallel excitation and detection scheme, the line-scan SRS imaging at 20 frames per second can be acquired with a ∼5-fold lower excitation power density, compared to conventional point-scan SRS imaging. The rapid data communication between the FPGA and the linear detector allows a high line-scanning rate to boost the SRS imaging speed without the need for lock-in detection. We demonstrate this lock-in-detection-free line-scan SRS imaging technique using the 0.5 μm polystyrene and 1.0 μm poly(methyl methacrylate) beads mixed in water, as well as living gastric cancer cells. PMID:27607947

  11. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  12. Elf Atochem boosts production of CFC substitutes

    SciTech Connect

    Not Available

    1992-05-01

    To carve out a larger share of the market for acceptable chlorofluorocarbon substitutes, Elf Atochem (Paris) is expanding its production of HFC-134a, HCFC-141b and HCFC-142b in the U.S. and in France. This paper reports that the company is putting the finishing touches on a plant at its Pierre-Benite (France) facility, to bring 9,000 m.t./yr (19.8 million lb) of HFC-134a capacity on-line by September. Construction is scheduled to begin next year at the company's Calvert City, Ky., plant, where a 15,000-m.t./yr (33-million-lb) unit for HFC-134a will come onstream by 1995.

  13. Boost compensator for use with internal combustion engine with supercharger

    SciTech Connect

    Asami, T.

    1988-04-12

    A boost compensator for controlling the position of a control rack of a fuel injection pump to supply fuel to an internal combustion engine with a supercharger in response to a boost pressure to be applied to the engine is described. The control rack is movable in a first direction increasing an amount of fuel to be supplied by the fuel injection pump to the engine and in a second direction, opposite to the first direction, decreasing the amount of fuel. The boost compensator comprises: a push rod disposed for forward and rearward movement in response to the boost pressure; a main lever disposed for angular movement about a first pivot; an auxiliary lever disposed for angular movement about a second pivot; return spring means associated with the first portion of the auxiliary lever for resiliently biasing same in one direction about the second pivot; and abutment means mounted on the second portion of the auxiliary lever and engageable with the second portion of the main lever.

  14. Boosting Teachers' Self-Esteem: A Dropout Prevention Strategy.

    ERIC Educational Resources Information Center

    Ruben, Ann Moliver

    Good teachers leave teaching not because pay is low but because of poor working conditions and too little recognition. Since students can be strongly affected by teachers, teachers who feel negatively about themselves can adversely affect students. A five-evening workshop was developed in Dade County, Florida to boost teachers' self-esteem and to…

  15. Balance-Boosting Footwear Tips for Older People

    MedlinePlus

    Balance in all aspects of life is a good ... mental equilibrium isn't the only kind of balance that's important in life. Good physical balance can ...

  16. Gentle Nearest Neighbors Boosting over Proper Scoring Rules.

    PubMed

    Nock, Richard; Ali, Wafa Bel Haj; D'Ambrosio, Roberto; Nielsen, Frank; Barlaud, Michel

    2015-01-01

    Tailoring nearest neighbors algorithms to boosting is an important problem. Recent papers study an approach, UNN, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the UNN algorithm, at the possible expense of the algorithm's convergence and performance. In this paper, we propose a lightweight Newton-Raphson alternative optimizing proper scoring rules from a very broad set, and establish formal convergence rates under the boosting framework that compete with those known for UNN. To the best of our knowledge, no such boosting-compliant convergence rates were previously known in the popular Gentle Adaboost's lineage. We provide experiments on a dozen domains, including Caltech and SUN computer vision databases, comparing our approach to major families including support vector machines, (Ada)boosting and stochastic gradient descent. They support three major conclusions: (i) GNNB significantly outperforms UNN, in terms of convergence rate and quality of the outputs, (ii) GNNB performs on par with or better than computationally intensive large margin approaches, (iii) on large domains that rule out those latter approaches for computational reasons, GNNB provides a simple and competitive contender to stochastic gradient descent. Experiments include a divide-and-conquer improvement of GNNB exploiting the link with proper scoring rules optimization. PMID:26353210

  17. Inverse ultravelocity slings for boost-phase defense

    SciTech Connect

    Canavan, G.H.

    1991-04-01

    Existing booster technology, brilliant pebble interceptors, survivable platforms, and developed warning, command, control, and communication could provide boost-phase defenses with the capability and flexibility required to significantly reduce the effectiveness of submarine and theater attacks. 7 refs., 2 figs.

  18. Boosting NAD(+) for the prevention and treatment of liver cancer.

    PubMed

    Djouder, Nabil

    2015-01-01

    Hepatocellular carcinoma (HCC) is the third leading cause of cancer death worldwide yet has limited therapeutic options. We recently demonstrated that inhibition of de novo nicotinamide adenine dinucleotide (NAD(+)) synthesis is responsible for DNA damage, thereby initiating hepatocarcinogenesis. We propose that boosting NAD(+) levels might be used as a prophylactic or therapeutic approach in HCC. PMID:27308492

  19. Real-World Connections Can Boost Journalism Program.

    ERIC Educational Resources Information Center

    Schrier, Kathy; Bott, Don; McGuire, Tim

    2001-01-01

    Describes various ways scholastic journalism advisers have attempted to make real-world connections to boost their journalism programs: critiques of student publications by invited guest speakers (professional journalists); regional workshops where professionals offer short presentations; local media offering programming or special sections aimed…

  20. Could Weight-Loss Surgery Boost Odds of Preemie Birth?

    MedlinePlus

    Monitoring is ... (HealthDay News) -- Mothers-to-be who've had weight-loss surgery may have increased odds for premature delivery, ...

  1. Graph ensemble boosting for imbalanced noisy graph stream classification.

    PubMed

    Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi

    2015-05-01

    Many applications involve stream data with structural dependency, graph representations, and continuously increasing volumes. For these applications, it is very common that their class distributions are imbalanced with minority (or positive) samples being only a small portion of the population, which imposes significant challenges for learning models to accurately identify minority samples. This problem is further complicated with the presence of noise, because they are similar to minority samples and any treatment for the class imbalance may falsely focus on the noise and result in deterioration of accuracy. In this paper, we propose a classification model to tackle imbalanced graph streams with noise. Our method, graph ensemble boosting, employs an ensemble-based framework to partition graph stream into chunks each containing a number of noisy graphs with imbalanced class distributions. For each individual chunk, we propose a boosting algorithm to combine discriminative subgraph pattern selection and model learning as a unified framework for graph classification. To tackle concept drifting in graph streams, an instance level weighting mechanism is used to dynamically adjust the instance weight, through which the boosting framework can emphasize on difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-life imbalanced graph streams demonstrate clear benefits of our boosting design for handling imbalanced noisy graph stream.
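
    The instance-level weighting mechanism can be sketched generically: raise the weight of misclassified (difficult) graph samples so later boosting rounds emphasize them, then renormalize. This is a weight update in the spirit described above, not the paper's exact rule, and the names and step size are illustrative.

```python
import numpy as np

def reweight(weights, correct, eta=0.5):
    """Instance-level weighting: scale up weights of misclassified samples
    so subsequent boosting rounds focus on the difficult ones."""
    w = weights * np.exp(eta * (~correct))   # penalty only where incorrect
    return w / w.sum()                       # renormalize to a distribution

# Four samples, one misclassified in the current round
w = np.full(4, 0.25)
correct = np.array([True, True, False, True])
w = reweight(w, correct)
```

    Repeating this across rounds within a chunk makes the chunk's ensemble concentrate on minority and boundary samples, which is the stated goal of the boosting design.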

  2. Boosting Imagination: Incorporating Creative Play into the Writing Room

    ERIC Educational Resources Information Center

    Nelson, Angela; Schmidt, Jamie; Verbais, Chad

    2006-01-01

    Incorporating creative play in the writing lab or classroom is a unique way to pique students' interest and boost their imagination. Exercises varying from describing Hershey's Kisses®, to using tape recorders for discussing voice, to using magnetic poetry to practice grammar are all ways that stimulate learning through the lens of play. Play…

  3. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
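
    The tables described above are built on the reciprocal-sum rule for parallel resistance, 1/R_total = 1/R_1 + 1/R_2 + ..., which is easy to verify directly:

```python
def parallel_resistance(resistors):
    """Total resistance of resistors in parallel: 1/R = sum of 1/Ri."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Two 10-ohm resistors in parallel give a whole-number total of 5 ohms,
# the kind of value such classroom tables are built around
r_total = parallel_resistance([10.0, 10.0])
```

    Pairs like 6 and 3 ohms (total 2 ohms) likewise land on whole numbers, which keeps the arithmetic transparent for students.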

  4. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  5. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  6. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  7. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: E[-log t] = (Nr - r + 1)^-1 + (Nr - r + 2)^-1 + ... + Nr^-1, for shrinkage t with Nr live samples and r samples discarded at each iteration. The equation for the variance, Var(-log t) = (Nr - r + 1)^-2 + (Nr - r + 2)^-2 + ... + Nr^-2, is used to find the appropriate number of live samples Nr to use with r > 1 to match the variance achieved with N1 live samples and r = 1. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
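
    The two quoted formulas can be implemented directly, and the matching condition solved by a simple search; `matching_live_count` is an illustrative helper name, not from the paper.

```python
def log_shrinkage_moments(n_live, r):
    """E[-log t] and Var(-log t) when r of n_live live samples are
    discarded per iteration (the general formulas quoted above)."""
    terms = [1.0 / k for k in range(n_live - r + 1, n_live + 1)]
    mean = sum(terms)
    var = sum(t * t for t in terms)
    return mean, var

def matching_live_count(n1, r):
    """Smallest N_r whose variance with r discards per step does not
    exceed the variance of n1 live samples with single discards."""
    _, target = log_shrinkage_moments(n1, 1)
    n = r
    while log_shrinkage_moments(n, r)[1] > target:
        n += 1
    return n
```

    For example, matching the precision of 100 live samples with single discards while discarding 4 at a time requires roughly twice as many live samples, so each parallel iteration does about four times the work with about twice the population, which pays off when the replacements run concurrently.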

  8. Benefit of Radiation Boost After Whole-Breast Radiotherapy

    SciTech Connect

    Livi, Lorenzo; Borghesi, Simona; Saieva, Calogero; Fambrini, Massimiliano; Iannalfi, Alberto; Greto, Daniela; Paiar, Fabiola; Scoccianti, Silvia; Simontacchi, Gabriele; Bianchi, Simonetta; Cataliotti, Luigi; Biti, Giampaolo

    2009-11-15

    Purpose: To determine whether a boost to the tumor bed after breast-conserving surgery (BCS) and radiotherapy (RT) to the whole breast affects local control and disease-free survival. Methods and Materials: A total of 1,138 patients with pT1 to pT2 breast cancer underwent adjuvant RT at the University of Florence. We analyzed only patients with a minimum follow-up of 1 year (range, 1-20 years), with negative surgical margins. The median age of the patient population was 52.0 years (±7.9 years). The breast cancer relapse incidence probability was estimated by the Kaplan-Meier method, and differences between patient subgroups were compared by the log rank test. Cox regression models were used to evaluate the risk of breast cancer relapse. Results: On univariate survival analysis, boost to the tumor bed reduced breast cancer recurrence (p < 0.0001). Age and tamoxifen also significantly reduced breast cancer relapse (p = 0.01 and p = 0.014, respectively). On multivariate analysis, the boost and middle age (45-60 years) were found to be inversely related to breast cancer relapse (hazard ratio [HR], 0.27; 95% confidence interval [95% CI], 0.14-0.52, and HR 0.61; 95% CI, 0.37-0.99, respectively). The effect of the boost was more evident in younger patients (HR, 0.15 and 95% CI, 0.03-0.66 for patients <45 years of age; and HR, 0.31 and 95% CI, 0.13-0.71 for patients 45-60 years) on multivariate analyses stratified by age, although it was not a significant predictor in women older than 60 years. Conclusion: Our results suggest that boost to the tumor bed reduces breast cancer relapse and is more effective in younger patients.

  9. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
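    The final compositing step (merging per-node partial images into one) can be illustrated with a hedged sketch. Assume each processor produces, for every pixel, the depth and color of its nearest sphere fragment, and that partial images are merged pairwise over log2(P) rounds; this illustrates tree compositing in general and is not the paper's specific "optimal method":

```python
def composite(img_a, img_b):
    """Merge two partial images: per pixel, keep the fragment nearer to
    the viewer (smaller depth). Each pixel is a (depth, color) tuple."""
    return [a if a[0] <= b[0] else b for a, b in zip(img_a, img_b)]

def tree_composite(partials):
    """Combine partial images pairwise; on a MIMD machine each round's
    merges are independent and can run concurrently on separate nodes."""
    while len(partials) > 1:
        merged = [composite(partials[i], partials[i + 1])
                  for i in range(0, len(partials) - 1, 2)]
        if len(partials) % 2:          # odd image carries over unchanged
            merged.append(partials[-1])
        partials = merged
    return partials[0]
```

    Background pixels can be represented with infinite depth, so any rendered fragment wins the per-pixel comparison.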

  10. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (an XML file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program performs the plug-in checkouts in parallel. After the feature is parsed, a checkout request is issued for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
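    The scheme described (parse a feature XML file, issue one checkout request per plug-in, serve the requests from a bounded thread pool) can be sketched in a few lines. PEPC itself is an Eclipse RCP tool, so everything below, including the checkout stub, is an illustrative Python analogy rather than PEPC's actual code:

```python
import concurrent.futures
import xml.etree.ElementTree as ET

def parse_feature(feature_xml):
    """Collect plug-in ids listed in an Eclipse feature.xml document."""
    root = ET.fromstring(feature_xml)
    return [p.get("id") for p in root.findall("plugin")]

def checkout(plugin_id):
    """Hypothetical checkout of one plug-in; in practice this would
    invoke the version-control client for the plug-in's repository."""
    return plugin_id

def parallel_checkout(feature_xml, workers=8):
    """Check out every plug-in in the feature using a thread pool with
    a configurable number of threads."""
    plugins = parse_feature(feature_xml)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(checkout, plugins))
```

    Because checkouts are network-bound, threads (rather than processes) are enough to keep the pipe saturated, which is the effect the abstract reports.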

  11. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  12. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  13. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analysis testbed, a blade-stiffened aluminum panel with a circular cutout, and the dynamic characteristics of a 60-meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  14. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  15. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  16. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  17. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  18. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  19. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  20. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  1. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  2. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  3. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  4. Xyce parallel electronic simulator : reference guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms, but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  5. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  6. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  7. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2(sup 3) is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  8. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
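    The per-user-table security model plus attribute query can be sketched with an in-memory stand-in for the MongoDB cluster. All names, paths, and records below are hypothetical, and a dictionary stands in for the database:

```python
# Hypothetical stand-in for the per-user tables: each user's table holds
# only the records that user may read, so queries never cross users.
ARCHIVE = {
    "alice": [
        {"path": "/archive/runs/run1.h5", "owner": "alice", "tag": "turbulence"},
        {"path": "/archive/runs/run2.h5", "owner": "alice", "tag": "laminar"},
    ],
    "bob": [
        {"path": "/archive/data/x.dat", "owner": "bob", "tag": "turbulence"},
    ],
}

def search(user, **criteria):
    """Return paths in `user`'s table matching every metadata criterion,
    e.g. search("alice", tag="turbulence")."""
    table = ARCHIVE.get(user, [])   # security boundary: one table per user
    return [rec["path"] for rec in table
            if all(rec.get(k) == v for k, v in criteria.items())]
```

    In the real tool, the query would be issued through FUSE to MongoDB, but the visibility rule is the same: a user's search can only ever return records from that user's own table.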

  9. Some properties of a microwave boosted glow discharge source using neon as the operating gas.

    PubMed

    Leis, F; Steers, E B

    1996-07-01

    The use of neon as the operating gas for the analysis of aluminium samples with the microwave boosted glow discharge source has been studied. A new type of anode tube allowed the gas to enter the source near the sample surface so that more material was transported into the discharge. Erosion rates have been measured under conditions optimised for high line-to-background ratios and found to be lower than with argon (9 and 21 n/s, respectively). Despite the lower erosion rate the detection limits measured for a number of elements in aluminium are in the range 0.02-1 microg/g and comparable to those obtained with argon as the operating gas.

  10. Coronal Kink Instability With Parallel Thermal Conduction

    NASA Astrophysics Data System (ADS)

    Botha, Gert J. J.; Arber, Tony D.; Hood, Alan W.; Srivastava, A. K.

    2012-01-01

    Thermal conduction along magnetic field lines plays an important role in the evolution of the kink instability in coronal loops. In the nonlinear phase of the instability, local heating occurs due to reconnection, so that the plasma reaches high temperatures. To study the effect of parallel thermal conduction in this process, the 3D nonlinear magnetohydrodynamic (MHD) equations are solved for an initially unstable equilibrium. The initial state is a cylindrical loop with zero net current. Parallel thermal conduction reduces the local temperature, which leads to temperatures that are an order of magnitude lower than those obtained without thermal conduction. This process is important on the timescale of fast MHD phenomena; it reduces the kinetic energy released by an order of magnitude. The impact of this process on observational signatures is presented. Synthetic observables are generated that include spatial and temporal averaging to account for the resolution and exposure times of TRACE images. It was found that the inclusion of parallel thermal conductivity does not have as large an impact on observables as the order of magnitude reduction in the maximum temperature would suggest. The reason is that response functions sample a broad range of temperatures, so that the net effect of parallel thermal conduction is a blurring of internal features of the loop structure.

  11. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of the systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  12. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  13. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure...) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  14. Gain Purchasing Power the Newfangled Way--On-Line.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    1999-01-01

    Examines how San Diego State University uses computers to cut purchasing costs and boost efficiency and whether their solution can work for other business-to-business needs. How the school developed the totally self-sustaining, on-line and on-time purchasing system is discussed, including solutions to start-up problems. (GR)

  15. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  16. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
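    Cyclic odd-even reduction, the algorithm favored above for array computers, repeatedly eliminates the odd-indexed unknowns so that all eliminations at a given level are independent and could proceed in parallel. A serial sketch for systems of size n = 2^k - 1 (illustrative code, not from the paper; a[0] and c[n-1] must be zero):

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1]
    = d[i] by cyclic odd-even reduction. Requires n = 2**k - 1 so the
    reduction terminates at a single central equation."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    step = 1
    while step < n:                       # forward reduction levels
        for i in range(2 * step - 1, n, 2 * step):
            lo, hi = i - step, i + step   # neighbors to eliminate
            alpha = -a[i] / b[lo]
            beta = -c[i] / b[hi] if hi < n else 0.0
            b[i] += alpha * c[lo] + (beta * a[hi] if hi < n else 0.0)
            d[i] += alpha * d[lo] + (beta * d[hi] if hi < n else 0.0)
            a[i] = alpha * a[lo]          # now couples to x[i - 2*step]
            c[i] = beta * c[hi] if hi < n else 0.0
        step *= 2
    x = [0.0] * n
    while step >= 1:                      # back substitution levels
        for i in range(step - 1, n, 2 * step):
            s = d[i]
            if i - step >= 0:
                s -= a[i] * x[i - step]
            if i + step < n:
                s -= c[i] * x[i + step]
            x[i] = s / b[i]
        step //= 2
    return x
```

    Each reduction level halves the number of active equations, and every elimination within a level touches disjoint equations, which is exactly the structure an ILLIAC-4-style array machine exploits.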

  17. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  18. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.
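    The triangle-inequality pruning at the heart of the Anchors Hierarchy can be shown in a small sketch (hypothetical function names; this illustrates only the pruning idea, not the full hierarchy): if d(pivot_best, pivot_j) - best >= best, then pivot_j cannot beat the current best assignment, so d(x, pivot_j) never needs to be computed.

```python
def assign_to_anchors(points, pivots, dist):
    """Assign each point to its nearest pivot ("anchor"), using the
    triangle inequality to skip distance computations. Returns the
    assignments and how many distances were actually evaluated."""
    n = len(pivots)
    # Pivot-to-pivot distances are computed once and reused for all points.
    dp = [[dist(pivots[i], pivots[j]) for j in range(n)] for i in range(n)]
    assignment, evaluated = [], 0
    for x in points:
        best_i, best = 0, dist(x, pivots[0])
        evaluated += 1
        for j in range(1, n):
            # d(x, pivot_j) >= d(pivot_best, pivot_j) - d(x, pivot_best)
            if dp[best_i][j] - best >= best:
                continue        # pruned: cannot beat the current best
            dj = dist(x, pivots[j])
            evaluated += 1
            if dj < best:
                best_i, best = j, dj
        assignment.append(best_i)
    return assignment, evaluated
```

    In a document space where each distance means comparing two huge sparse vectors, every pruned comparison is a real saving, which is why the structure pays off at corpus scale.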

  19. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  20. Unified Parallel Software

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  1. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity and, (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30-times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  2. Spacecraft boost and abort guidance and control systems requirement study, boost dynamics and control analysis study. Exhibit A: Boost dynamics and control analysis

    NASA Technical Reports Server (NTRS)

    Williams, F. E.; Price, J. B.; Lemon, R. S.

    1972-01-01

    The simulation developments for use in dynamics and control analysis during boost from liftoff to orbit insertion are reported. Also included are wind response studies of the NR-GD 161B/B9T delta wing booster/delta wing orbiter configuration, the MSC 036B/280 inch solid rocket motor configuration, the MSC 040A/LOX-propane liquid injection TVC configuration, the MSC 040C/dual solid rocket motor configuration, and the MSC 049/solid rocket motor configuration. All of the latest math models (rigid and flexible body) developed for the MSC/GD Space Shuttle Functional Simulator are included.

  3. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.

  4. Shortened Intervals during Heterologous Boosting Preserve Memory CD8 T Cell Function but Compromise Longevity.

    PubMed

    Thompson, Emily A; Beura, Lalit K; Nelson, Christine E; Anderson, Kristin G; Vezys, Vaiva

    2016-04-01

    Developing vaccine strategies to generate high numbers of Ag-specific CD8 T cells may be necessary for protection against recalcitrant pathogens. Heterologous prime-boost-boost immunization has been shown to result in large quantities of functional memory CD8 T cells with protective capacities and long-term stability. Completing the serial immunization steps for heterologous prime-boost-boost can be lengthy, leaving the host vulnerable for an extensive period of time during the vaccination process. We show in this study that shortening the intervals between boosting events to 2 wk results in high numbers of functional and protective Ag-specific CD8 T cells. This protection is comparable to that achieved with long-term boosting intervals. Short-boosted Ag-specific CD8 T cells display a canonical memory T cell signature associated with long-lived memory and have identical proliferative potential to long-boosted T cells. Both populations robustly respond to antigenic re-exposure. Despite this, short-boosted Ag-specific CD8 T cells continue to contract gradually over time, which correlates to metabolic differences between short- and long-boosted CD8 T cells at early memory time points. Our studies indicate that shortening the interval between boosts can yield abundant, functional Ag-specific CD8 T cells that are poised for immediate protection; however, this is at the expense of forming stable long-term memory. PMID:26903479

  5. Bifurcation behaviours of peak current controlled PFC boost converter

    NASA Astrophysics Data System (ADS)

    Ren, Hai-Peng; Liu, Ding

    2005-07-01

    Bifurcation behaviours of the peak current controlled power-factor-correction (PFC) boost converter, including fast-scale instability and low-frequency bifurcation, are investigated in this paper. Conventionally, the PFC converter is analysed in continuous conduction mode (CCM). This prevents us from recognizing the overall dynamics of the converter. It has been pointed out that discontinuous conduction mode (DCM) can occur in the PFC boost converter, especially under light-load conditions. Therefore, the DCM model is employed to analyse the PFC converter to cover the possible DCM operation. In this way, the low-frequency bifurcation diagram is derived, which makes the route from period-doubling bifurcation to chaos clear. The bifurcation diagrams versus the load resistance and the output capacitance also indicate the stable operation boundary of the converter, which is useful for converter design.
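    The period-doubling route to chaos mentioned above can be illustrated with a generic iterated map. The sketch below uses the logistic map purely as a stand-in; it is not the peak current controlled converter model from the paper, only an illustration of how a bifurcation diagram's attractor period is sampled.

```python
# Period-doubling illustration with the logistic map x -> r*x*(1 - x).
# This generic toy map stands in for the converter's sampled-data model;
# sweeping r and recording the attractor yields a bifurcation diagram.

def iterate_map(r, x0=0.4, transient=500, keep=8):
    """Iterate the map, discard the transient, return the settled tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 6))
    return tail

def attractor_period(r):
    """Estimate the attractor period as the number of distinct tail values."""
    return len(set(iterate_map(r)))

# r = 2.8 settles on a fixed point; r = 3.2 on a period-2 cycle:
print(attractor_period(2.8))  # 1
print(attractor_period(3.2))  # 2
```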

  6. High Temperature Boost (HTB) Power Processing Unit (PPU) Formulation Study

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Bradley, Arthur T.; Iannello, Christopher J.; Carr, Gregory A.; Mojarradi, Mohammad M.; Hunter, Don J.; DelCastillo, Linda; Stell, Christopher B.

    2013-01-01

    This technical memorandum summarizes the Formulation Study conducted during fiscal year 2012 on the High Temperature Boost (HTB) Power Processing Unit (PPU). The effort is authorized and supported by the Game Changing Technology Division, NASA Office of the Chief Technologist. NASA center participation during the formulation includes LaRC, KSC and JPL. The Formulation Study continues into fiscal year 2013. The formulation study has focused on the power processing unit. The team has proposed a modular, power-scalable, new-technology-enabled High Temperature Boost (HTB) PPU, which offers a 5-10X improvement in PPU specific power/mass and over 30% in-space solar electric system mass savings.

  7. Externally Dispersed Interferometry for Resolution Boosting and Doppler Velocimetry

    SciTech Connect

    Erskine, D J

    2003-12-01

    Externally dispersed interferometry (EDI) is a rapidly advancing technique for wide bandwidth spectroscopy and radial velocimetry. By placing a small angle-independent interferometer near the slit of an existing spectrograph system, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducial creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry (under a Doppler shift the entire moiré pattern shifts in phase). A demonstration of ~2x resolution boosting (100,000 from 50,000) on the Lick Observatory echelle spectrograph is shown. Preliminary data indicating ~8x resolution boost (170,000 from 20,000) using multiple delays has been taken on a linear grating spectrograph.

  8. Boosting bonsai trees for handwritten/printed text discrimination

    NASA Astrophysics Data System (ADS)

    Ricquebourg, Yann; Raymond, Christian; Poirriez, Baptiste; Lemaitre, Aurélie; Coüasnon, Bertrand

    2013-12-01

    Boosting over decision stumps has proved its efficiency in natural language processing, essentially with symbolic features, and its good properties (fast, few and non-critical parameters, not sensitive to over-fitting) could be of great interest in the numeric world of pixel images. In this article we investigated the use of boosting over small decision trees, in image classification processing, for the discrimination of handwritten versus printed text. We then conducted experiments comparing it to the usual SVM-based classification, revealing convincing results with very close performance, but with faster predictions and behaving far less like a black box. These promising results encourage the use of this classifier in more complex recognition tasks such as multiclass problems.
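    As a rough illustration of boosting over decision stumps (the depth-0 limit of the small trees discussed above), here is a minimal AdaBoost sketch on hypothetical 1-D toy data. The data, thresholds, and round count are illustrative assumptions; the paper's image features and small-tree ("bonsai") learners are not reproduced.

```python
# A minimal AdaBoost over decision stumps on hypothetical 1-D toy data.
# Sketch of the boosting principle only, not the paper's classifier.
import math

def stump_predict(x, thr, sign):
    """A decision stump: predict `sign` below the threshold, else -sign."""
    return sign if x < thr else -sign

def best_stump(xs, ys, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best = None
    for thr in sorted(set(xs)) + [max(xs) + 1]:
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict(xi, thr, sign) != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, sign = best_stump(xs, ys, w)
        err = max(err, 1e-10)          # avoid log(inf) on separable data
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Up-weight misclassified points, then renormalize:
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thr, sign))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    vote = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return 1 if vote >= 0 else -1

# Toy labels separable at x = 4; boosting recovers the split:
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, -1, -1, -1]
model = adaboost(xs, ys)
```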

  9. The Voltage Boost Enabled by Luminescence Extraction in Solar Cells

    DOE PAGES Beta

    Ganapati, Vidya; Steiner, Myles A.; Yablonovitch, Eli

    2016-07-01

    Over the past few years, the application of the physical principle of 'luminescence extraction' has produced record voltages and efficiencies in photovoltaic cells. Luminescence extraction is the use of optical design, such as a back mirror or textured surfaces, to help internal photons escape out of the front surface of a solar cell. The principle of luminescence extraction is exemplified by the mantra 'a good solar cell should also be a good LED.' Basic thermodynamics says that the voltage boost should be related to the concentration ratio C of a resource by ΔV = (kT/q) ln(C). In light trapping (i.e., when the solar cell is textured and has a perfect back mirror), the concentration ratio of photons is C = 4n², where n is the refractive index; therefore, one would expect a voltage boost of ΔV = (kT/q) ln(4n²) over a solar cell with no texture and zero back reflectivity. Nevertheless, there has been ambiguity over the voltage benefit to be expected from perfect luminescence extraction. Do we gain an open-circuit voltage boost of ΔV = (kT/q) ln(n²), ΔV = (kT/q) ln(2n²), or ΔV = (kT/q) ln(4n²)? What is responsible for this voltage ambiguity ΔV = (kT/q) ln(4) ≈ 36 mV? Finally, we show that different results come about depending on whether the photovoltaic cell is optically thin or thick to its internal luminescence. In realistic intermediate cases of optical thickness, the voltage boost falls in between: ln(n²) < qΔV/kT < ln(4n²).
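    The bounds quoted above can be checked numerically. The sketch below evaluates ΔV = (kT/q) ln(C) at the two limits; the refractive index n = 3.5 and temperature T = 300 K are assumed illustrative values, not figures taken from the paper.

```python
import math

# Numeric illustration of the voltage-boost bounds from the abstract,
# dV = (kT/q) * ln(C), with photon concentration C between n^2 and 4n^2.
# n = 3.5 (typical of GaAs) and T = 300 K are assumed values.

K_BOLTZMANN = 1.380649e-23    # J/K
Q_ELECTRON = 1.602176634e-19  # C
T = 300.0                     # K

def voltage_boost(concentration):
    """dV = (kT/q) ln(C), in volts."""
    return (K_BOLTZMANN * T / Q_ELECTRON) * math.log(concentration)

n = 3.5
lower = voltage_boost(n ** 2)        # optically thick limit
upper = voltage_boost(4 * n ** 2)    # light-trapping (4n^2) limit
print(f"{lower * 1e3:.1f} mV to {upper * 1e3:.1f} mV")
# The gap between the limits is (kT/q) ln 4, about 36 mV at room
# temperature, matching the ambiguity discussed in the abstract.
```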

  10. Fast interceptors for theater boost-phase intercept

    SciTech Connect

    Canavan, G.H.

    1993-04-01

    Boost-phase theater intercept concepts are needed for known and existing countermeasures to current systems. Fast kinetic energy interceptors could be developed from existing and improved propulsion technology and miniaturized sensors to provide that capability. High velocity interceptors with achievable acceleration could achieve the ranges needed for protection of bases and populations, addressing most theater threats. Propulsion requires development. Drag and heating are largely predictable and controllable. Fast interceptors would also have useful applications in national and global missile defense.

  11. (In)direct detection of boosted dark matter

    SciTech Connect

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2014-10-01

    We initiate the study of novel thermal dark matter (DM) scenarios where present-day annihilation of DM in the galactic center produces boosted stable particles in the dark sector. These stable particles are typically a subdominant DM component, but because they are produced with a large Lorentz boost in this process, they can be detected in large volume terrestrial experiments via neutral-current-like interactions with electrons or nuclei. This novel DM signal thus combines the production mechanism associated with indirect detection experiments (i.e. galactic DM annihilation) with the detection mechanism associated with direct detection experiments (i.e. DM scattering off terrestrial targets). Such processes are generically present in multi-component DM scenarios or those with non-minimal DM stabilization symmetries. As a proof of concept, we present a model of two-component thermal relic DM, where the dominant heavy DM species has no tree-level interactions with the standard model and thus largely evades direct and indirect DM bounds. Instead, its thermal relic abundance is set by annihilation into a subdominant lighter DM species, and the latter can be detected in the boosted channel via the same annihilation process occurring today. Especially for dark sector masses in the 10 MeV–10 GeV range, the most promising signals are electron scattering events pointing toward the galactic center. These can be detected in experiments designed for neutrino physics or proton decay, in particular Super-K and its upgrade Hyper-K, as well as the PINGU/MICA extensions of IceCube. This boosted DM phenomenon highlights the distinctive signatures possible from non-minimal dark sectors.

  12. (In)direct detection of boosted dark matter

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2014-10-01

    We initiate the study of novel thermal dark matter (DM) scenarios where present-day annihilation of DM in the galactic center produces boosted stable particles in the dark sector. These stable particles are typically a subdominant DM component, but because they are produced with a large Lorentz boost in this process, they can be detected in large volume terrestrial experiments via neutral-current-like interactions with electrons or nuclei. This novel DM signal thus combines the production mechanism associated with indirect detection experiments (i.e. galactic DM annihilation) with the detection mechanism associated with direct detection experiments (i.e. DM scattering off terrestrial targets). Such processes are generically present in multi-component DM scenarios or those with non-minimal DM stabilization symmetries. As a proof of concept, we present a model of two-component thermal relic DM, where the dominant heavy DM species has no tree-level interactions with the standard model and thus largely evades direct and indirect DM bounds. Instead, its thermal relic abundance is set by annihilation into a subdominant lighter DM species, and the latter can be detected in the boosted channel via the same annihilation process occurring today. Especially for dark sector masses in the 10 MeV-10 GeV range, the most promising signals are electron scattering events pointing toward the galactic center. These can be detected in experiments designed for neutrino physics or proton decay, in particular Super-K and its upgrade Hyper-K, as well as the PINGU/MICA extensions of IceCube. This boosted DM phenomenon highlights the distinctive signatures possible from non-minimal dark sectors.

  13. Perception of straightness and parallelism with minimal distance information.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2016-07-01

    The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface. PMID:27025213

  14. Masking reveals parallel form systems in the visual brain

    PubMed Central

    Lo, Yu Tung; Zeki, Semir

    2014-01-01

    It is generally supposed that there is a single, hierarchically organized pathway dedicated to form processing, in which complex forms are elaborated from simpler ones, beginning with the orientation-selective cells of V1. In this psychophysical study, we undertook to test another hypothesis, namely that the brain’s visual form system consists of multiple parallel systems and that complex forms are other than the sum of their parts. Inspired by imaging experiments which show that forms of increasing perceptual complexity (lines, angles, and rhombuses) constituted from the same elements (lines) activate the same visual areas (V1, V2, and V3) with the same intensity and latency (Shigihara and Zeki, 2013, 2014), we used backward masking to test the supposition that these forms are processed in parallel. We presented subjects with lines, angles, and rhombuses as different target-mask pairs. Evidence in favor of our supposition would be if masking is the most effective when target and mask are processed by the same system and least effective when they are processed in different systems. Our results showed that rhombuses were strongly masked by rhombuses but only weakly masked by lines or angles, but angles and lines were well masked by each other. The relative resistance of rhombuses to masking by low-level forms like lines and angles suggests that complex forms like rhombuses may be processed in a separate parallel system, whereas lines and angles are processed in the same one. PMID:25120460

  15. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give us a more connective view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data quantities have become very large. This can slow the algorithm down or make it impossible to obtain a result within limited memory or time. To solve this problem, our research proposed a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm.
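    The row-block decomposition typical of such parallel dilate algorithms can be sketched with the standard library alone. This is an assumed thread-based toy with a 3x3 structuring element, not the authors' MPI-based implementation, and it skips all remote-sensing data handling.

```python
# Row-block parallel binary dilation (3x3 structuring element) as a
# stdlib-only sketch; each worker dilates a strip of rows, with the
# neighboring rows of the shared input acting as the halo region.
from concurrent.futures import ThreadPoolExecutor

def dilate_rows(grid, r0, r1):
    """Dilate rows r0..r1-1 of a 0/1 grid with a 3x3 neighborhood max."""
    h, w = len(grid), len(grid[0])
    out = []
    for r in range(r0, r1):
        row = []
        for c in range(w):
            val = max(
                grid[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, h))
                for cc in range(max(c - 1, 0), min(c + 2, w))
            )
            row.append(val)
        out.append(row)
    return out

def parallel_dilate(grid, workers=4):
    """Split rows into blocks, dilate blocks concurrently, reassemble."""
    h = len(grid)
    step = max(1, (h + workers - 1) // workers)
    bounds = [(r, min(r + step, h)) for r in range(0, h, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        blocks = pool.map(lambda b: dilate_rows(grid, *b), bounds)
    return [row for block in blocks for row in block]

# A single foreground pixel grows into a 3x3 square:
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
result = parallel_dilate(grid)
```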

  16. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.

  17. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  18. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

  19. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, that can collect observations at hundreds of bands, have been operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension Reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.

  20. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works in parallel with the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space-variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space-invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.

  1. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  2. Measure Lines

    ERIC Educational Resources Information Center

    Crissman, Sally

    2011-01-01

    One tool for enhancing students' work with data in the science classroom is the measure line. As a coteacher and curriculum developer for The Inquiry Project, the author has seen how measure lines--a number line in which the numbers refer to units of measure--help students not only represent data but also analyze it in ways that generate…

  3. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
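    For reference, the serial kernel being pipelined here is the classic Thomas algorithm for one tridiagonal line. The sketch below shows only that per-line kernel with its forward and backward sweeps; the pipelined processor schedule and the IB-PTA variant developed in the article are not reproduced.

```python
# The Thomas algorithm for a single tridiagonal line: the forward
# elimination sweep followed by the backward substitution sweep (the
# two steps that pipelined variants interleave across processors).

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main, and
    super-diagonals (a[0] and c[-1] unused), d the right-hand side."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A 3x3 system with sub/main/super diagonals (1, 2, 1) and a right-hand
# side chosen so the exact solution is [1, 2, 3]:
x = thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8])
```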

  4. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  5. Pharmacodynamics of long-acting folic acid-receptor targeted ritonavir boosted atazanavir nanoformulations

    PubMed Central

    Puligujja, Pavan; Balkundi, Shantanu; Kendrick, Lindsey; Baldridge, Hannah; Hilaire, James; Bade, Aditya N.; Dash, Prasanta K.; Zhang, Gang; Poluektova, Larisa; Gorantla, Santhi; Liu, Xin-Ming; Ying, Tianlei; Feng, Yang; Wang, Yanping; Dimitrov, Dimiter S.; McMillan, JoEllyn M.; Gendelman, Howard E.

    2014-01-01

    Long-acting nanoformulated antiretroviral therapy (nanoART) that targets monocytes-macrophages could improve the drug's half-life and protein binding capacities while facilitating cell and tissue depots. To this end, ART nanoparticles that target the folic acid (FA) receptor and permit cell-based drug depots were examined using pharmacokinetic and pharmacodynamic (PD) tests. FA receptor-targeted poloxamer 407 nanocrystals, containing ritonavir-boosted atazanavir (ATV/r), significantly affected several therapeutic factors: drug bioavailability increased as much as 5 times and PD activity improved as much as 100 times. Drug particles administered to human peripheral blood lymphocyte-reconstituted NOD.Cg-PrkdcscidIl2rgtm1Wjl/SzJ mice infected with HIV-1ADA at a tissue culture infective dose50 of 104 infectious viral particles/ml led to ATV/r drug concentrations that paralleled FA receptor beta staining in both the macrophage-rich parafollicular areas of spleen and lymph nodes. Drug levels were higher in these tissues than what could be achieved by either native drug or untargeted nanoART particles. The data also mirrored potent reductions in viral loads, tissue viral RNA and numbers of HIV-1p24+ cells in infected and treated animals. We conclude that FA-P407 coating of ART nanoparticles readily facilitates drug carriage and antiretroviral responses. PMID:25522973

  6. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  7. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
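
The scalability limit mentioned above can be illustrated with a toy sketch (plain Python, not the VTK/Titan C++ API; all helper names are hypothetical): each "processor" tabulates joint counts over its shard of the data, and the reduction merges the partial tables by summing counts. Unlike descriptive statistics, whose reduction is a few scalars, the merged state here grows with the number of distinct category pairs, which is what limits parallel speed-up.

```python
from collections import Counter

def contingency_table(xs, ys):
    """Joint counts of two categorical variables over one data shard."""
    return Counter(zip(xs, ys))

def merge(tables):
    """Reduction step: sum the partial tables from all processors."""
    total = Counter()
    for t in tables:
        total.update(t)          # adds counts key-by-key
    return total

# Two "processors" each tabulate a shard of the data, then merge.
shard1 = contingency_table(["a", "a", "b"], ["x", "y", "x"])
shard2 = contingency_table(["b", "a"], ["x", "x"])
table = merge([shard1, shard2])
```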

  8. Illusory movement of dotted lines

    PubMed Central

    Ito, Hiroyuki; Anstis, Stuart; Cavanagh, Patrick

    2013-01-01

    When oblique rows of black and white dots drifted horizontally across a mid-grey surround, the perceived direction of motion was shifted to be almost parallel to the dotted lines and was often nearly orthogonal to the real motion. The reason is that the black/white contrast signals between adjacent dots along the length of the line are stronger than black/grey or white/grey contrast signals across the line, and the motion is computed as a vector sum of local contrast-weighted motion signals. PMID:19911636
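
The contrast-weighted vector-sum computation described above can be sketched numerically. This is an illustrative toy under assumed numbers, not the authors' model: each local motion signal is a direction with a contrast weight, and the perceived direction is the direction of the weighted resultant, which is pulled toward the stronger along-line signals.

```python
import math

def perceived_direction(signals):
    """Direction (degrees) of the vector sum of contrast-weighted motion signals.

    signals: iterable of (angle_deg, contrast_weight) pairs.
    """
    vx = sum(w * math.cos(math.radians(a)) for a, w in signals)
    vy = sum(w * math.sin(math.radians(a)) for a, w in signals)
    return math.degrees(math.atan2(vy, vx)) % 360.0

# Hypothetical numbers: strong black/white signals along the line (45 deg)
# outweigh a weaker dot/surround signal in the true direction (0 deg),
# so the resultant is biased toward the line orientation.
signals = [(45.0, 1.0), (45.0, 1.0), (0.0, 0.3)]
direction = perceived_direction(signals)
```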

  9. "With One Lip, With Two Lips"; Parallelism in Nahuatl.

    ERIC Educational Resources Information Center

    Bright, William

    1990-01-01

    Texts in Classical Nahuatl from 1524, in the genre of formal oratory, reveal extensive use of lines showing parallel morphosyntactic and semantic structure. Analysis and translation of a passage point to the applicability of structural analysis to "expressive" as well as "referential" texts; and the importance of understanding oral literatures in…

  10. Tumor bed boost radiotherapy in breast cancer. A review of current techniques.

    PubMed

    Bahadur, Yasir A; Constantinescu, Camelia T

    2012-04-01

    Various breast boost irradiation techniques were studied and compared. The most commonly used techniques are external beam radiation therapy (EBRT) (photons or electrons) and high dose rate (HDR) interstitial brachytherapy, but recent studies have also revealed the use of advanced radiotherapy techniques, such as intensity modulated radiation therapy (IMRT), intra-operative radiation therapy (IORT), tomotherapy, and protons. The purpose of this study is to systematically review the literature concerning breast boost radiotherapy techniques, and suggest evidence based guidelines for each. A search for literature was performed in the National Library of Medicine's (PubMed) database for English-language articles published from 1st January 1990 to 5th April 2011. The key words were "breast boost radiotherapy", "breast boost irradiation", and "breast boost irradiation AND techniques". Randomized trials comparing the long-term results of boost irradiation techniques, balancing the local control, and cosmesis against logistic resources, and including cost-benefit analysis are further needed. PMID:22485229

  11. How citation boosts promote scientific paradigm shifts and nobel prizes.

    PubMed

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-01-01

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. PMID:21573229

  12. How Citation Boosts Promote Scientific Paradigm Shifts and Nobel Prizes

    PubMed Central

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-01-01

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the “boosting effect” of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying “boost factor” is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. PMID:21573229

  13. How citation boosts promote scientific paradigm shifts and nobel prizes.

    PubMed

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-05-04

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.

  14. HPC Infrastructure for Solid Earth Simulation on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Nakajima, K.; Chen, L.; Okuda, H.

    2004-12-01

    Recently, various types of parallel computers with various architectures and processing elements (PEs) have emerged, including PC clusters and the Earth Simulator. Moreover, users can easily access these computer resources over the network in a Grid environment. It is well known that thorough tuning is required for programmers to achieve excellent performance on each computer. The tuning method depends strongly on the type of PE and the architecture. Optimization by tuning is very tough work, especially for application developers. Moreover, parallel programming using a message passing library such as MPI is another big task for application programmers. In the GeoFEM project (http://gefeom.tokyo.rist.or.jp), the authors have developed a parallel FEM platform for solid earth simulation on the Earth Simulator, which supports parallel I/O, parallel linear solvers and parallel visualization. This platform efficiently hides complicated procedures for parallel programming and optimization on vector processors from application programmers. This type of infrastructure is very useful: source code developed on a single-processor PC is easily optimized on a massively parallel computer by linking it to the parallel platform installed on the target computer. This parallel platform, called the HPC Infrastructure, will provide dramatic efficiency, portability and reliability in the development of scientific simulation codes. For example, the number of source code lines is expected to be less than 10,000, and porting legacy codes to a parallel computer takes 2 or 3 weeks. The original GeoFEM platform supports only I/O, linear solvers and visualization. In the present work, further development for adaptive mesh refinement (AMR) and dynamic load-balancing (DLB) has been carried out. In this presentation, examples of large-scale solid earth simulation using the Earth Simulator will be demonstrated.
Moreover, recent results of a parallel computational steering tool using an

  15. An Electronic Ballast Using Back-boost Converter

    NASA Astrophysics Data System (ADS)

    Yokozeki, I.; Kato, Y.; Kuratani, T.; Okamura, Y.; Ohkita, M.; Takahashi, N.

    A new electronic ballast circuit for a fluorescent lamp, based on the back-boost converter, is proposed. This circuit offers several advantages over a circuit that combines a rectifying converter with a lighting inverter, including a reduced component count and improved circuit efficiency. The circuit also satisfies the IEC Class C limits on the relative harmonic content of the input current. Experimental results demonstrate the effectiveness of the proposed circuit.

  16. Cylindrical array luminescent solar concentrators: performance boosts by geometric effects.

    PubMed

    Videira, Jose J H; Bilotti, Emiliano; Chatten, Amanda J

    2016-07-11

    This paper presents an investigation of the geometric effects within a cylindrical array luminescent solar concentrator (LSC). Photon concentration of a cylindrical LSC increases linearly with cylinder length up to 2 metres. Raytrace modelling on the shading effects of circles on their neighbours demonstrates effective incident light trapping in a cylindrical LSC array at angles of incidence between 60-70 degrees. Raytrace modelling with real-world lighting conditions shows optical efficiency boosts when the sun's angle of incidence is within this angle range. On certain days, 2 separate times of peak optical efficiency can be attained over the course of sunrise-solar noon. PMID:27410904

  17. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  18. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.
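
The parallel-prefix technique mentioned above can be illustrated with the classic two-pass (up-sweep/down-sweep) exclusive scan. The sketch below runs sequentially, but every iteration of each inner loop touches disjoint elements, which is exactly what makes each level parallelizable on a shared-memory machine; the total work stays linear while the depth is logarithmic.

```python
def prefix_sum(a):
    """Blelloch-style exclusive prefix sum (power-of-two length for simplicity).

    Each inner `for` loop level is data-independent, so on a shared-memory
    multiprocessor its iterations can run concurrently.
    """
    n = len(a)
    assert n and n & (n - 1) == 0, "power-of-two length for simplicity"
    t = list(a)
    d = 1
    while d < n:                        # up-sweep: build partial sums
        for i in range(0, n, 2 * d):    # independent iterations
            t[i + 2 * d - 1] += t[i + d - 1]
        d *= 2
    t[n - 1] = 0                        # identity for the exclusive scan
    d = n // 2
    while d >= 1:                       # down-sweep: distribute prefixes
        for i in range(0, n, 2 * d):    # independent iterations
            left = t[i + d - 1]
            t[i + d - 1] = t[i + 2 * d - 1]
            t[i + 2 * d - 1] += left
        d //= 2
    return t
```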

  19. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
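
The rsync-style comparison of each node's checkpoint against a stored template can be sketched as follows. This is an illustrative toy, not the patented protocol: the block size, digest choice, and helper names are all hypothetical. Only blocks whose checksum differs from the template need to be transmitted and stored, which is the source of the claimed savings.

```python
import hashlib

BLOCK = 4  # bytes per block; tiny here purely for illustration

def checksums(data):
    """Per-block digests, as a template checkpoint file would store them."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(template_sums, new_data):
    """Return only the blocks whose checksum differs from the template."""
    changed = {}
    for i in range(0, len(new_data), BLOCK):
        block = new_data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        idx = i // BLOCK
        if idx >= len(template_sums) or template_sums[idx] != digest:
            changed[idx] = block        # this block must be saved
    return changed

template = b"aaaabbbbccccdddd"
current = b"aaaaXXXXccccdddd"   # node state with one modified block
to_send = delta(checksums(template), current)
```

Here only block 1 is transmitted; an unchanged checkpoint produces an empty delta.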

  20. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of low-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  1. Experimental Parallel-Processing Computer

    NASA Technical Reports Server (NTRS)

    Mcgregor, J. W.; Salama, M. A.

    1986-01-01

    Master processor supervises slave processors, each with its own memory. Computer with parallel processing serves as inexpensive tool for experimentation with parallel mathematical algorithms. Speed enhancement obtained depends on both nature of problem and structure of algorithm used. In parallel-processing architecture, "bank select" and control signals determine which one, if any, of N slave processor memories accessible to master processor at any given moment. When so selected, slave memory operates as part of master computer memory. When not selected, slave memory operates independently of main memory. Slave processors communicate with each other via input/output bus.

  2. Final Technical Report for the BOOST2013 Workshop. Hosted by the University of Arizona

    SciTech Connect

    Johns, Kenneth

    2015-02-20

    BOOST 2013 was the 5th International Joint Theory/Experiment Workshop on Phenomenology, Reconstruction and Searches for Boosted Objects in High Energy Hadron Collisions. It was locally organized and hosted by the Experimental High Energy Physics Group at the University of Arizona and held at Flagstaff, Arizona on August 12-16, 2013. The workshop provided a forum for theorists and experimentalists to present and discuss the latest findings related to the reconstruction of boosted objects in high energy hadron collisions and their use in searches for new physics. This report gives the outcomes of the BOOST 2013 Workshop.

  3. Adaptive optics parallel near-confocal scanning ophthalmoscopy.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2016-08-15

    We present an adaptive optics parallel near-confocal scanning ophthalmoscope (AOPCSO) using a digital micromirror device (DMD). The imaging light is modulated to be a line of point sources by the DMD, illuminating the retina simultaneously. By using a high-speed line camera to acquire the image and using adaptive optics to compensate the ocular wave aberration, the AOPCSO can image the living human eye with cellular level resolution at the frame rate of 100 Hz. AOPCSO has been demonstrated with improved spatial resolution in imaging of the living human retina compared with adaptive optics line scan ophthalmoscopy.

  4. Adaptive optics parallel near-confocal scanning ophthalmoscopy.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2016-08-15

    We present an adaptive optics parallel near-confocal scanning ophthalmoscope (AOPCSO) using a digital micromirror device (DMD). The imaging light is modulated to be a line of point sources by the DMD, illuminating the retina simultaneously. By using a high-speed line camera to acquire the image and using adaptive optics to compensate the ocular wave aberration, the AOPCSO can image the living human eye with cellular level resolution at the frame rate of 100 Hz. AOPCSO has been demonstrated with improved spatial resolution in imaging of the living human retina compared with adaptive optics line scan ophthalmoscopy. PMID:27519106

  5. Real-Time Road Sign Detection Using Fuzzy-Boosting

    NASA Astrophysics Data System (ADS)

    Yoon, Changyong; Lee, Heejin; Kim, Euntai; Park, Mignon

    This paper describes a vision-based, real-time system for detecting road signs from within a moving vehicle. The system architecture proposed in this paper consists of two parts: the learning part and the detection part for road sign images. The proposed system has the standard architecture with the adaboost algorithm. Adaboost is a popular algorithm used to detect objects in real time. To improve the detection rate of the adaboost algorithm, this paper proposes a new method for combining classifiers in every stage. When detecting road signs in real environments, it can be ambiguous which class input images belong to. To overcome this problem, we propose a method that applies the fuzzy measure and fuzzy integral, which use the importance and the evaluated values of the classifiers within one stage. This is called fuzzy-boosting in this paper. Also, to improve the speed of the road sign detection algorithm using adaboost at the detection step, we propose a method which chooses several candidates by using an MC generator. As the sub-windows of chosen candidates pass the classifiers built by fuzzy-boosting, we decide whether a road sign is detected or not. Using experimental results, we analyze and compare the detection speed and the classification error rate of the proposed algorithm under various environments and conditions.
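
The adaboost base of the system above can be illustrated with a minimal sketch on 1-D data using threshold stumps. This is plain AdaBoost only; the paper's fuzzy-measure/fuzzy-integral combination within a stage is not modeled here, and the data and names are hypothetical. Each round picks the stump with the lowest weighted error and re-weights the training examples so that misclassified ones count more in the next round.

```python
import math

def adaboost(xs, ys, rounds=3):
    """Minimal AdaBoost with threshold stumps on 1-D data; ys in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                       # (alpha, threshold, polarity) triples
    for _ in range(rounds):
        best = None
        for thr in xs:                  # candidate stump thresholds
            for pol in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= thr else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-12)           # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # re-weight: boost the examples this stump got wrong
        w = [wi * math.exp(-alpha * y * (pol if x >= thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the stumps."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
```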

  6. Jet substructures of boosted polarized hadronic top quarks

    NASA Astrophysics Data System (ADS)

    Kitadono, Yoshio; Li, Hsiang-nan

    2016-03-01

    We study jet substructures of a boosted polarized top quark, which undergoes the hadronic decay t →b u d ¯, in the perturbative QCD framework, focusing on the energy profile and the differential energy profile. These substructures are factorized into the convolution of a hard top-quark decay kernel with a bottom-quark jet function and a W -boson jet function, where the latter is further factorized into the convolution of a hard W -boson decay kernel with two light-quark jet functions. Computing the hard kernels to leading order in QCD and including the resummation effect in the jet functions, we show that the differential jet energy profile is a useful observable for differentiating the helicity of a boosted hadronic top quark: a right-handed top jet exhibits quick descent of the differential energy profile with the inner test cone radius r , which is attributed to the V -A structure of weak interaction and the dead-cone effect associated with the W -boson jet. The above helicity differentiation may help reveal the chiral structure of physics beyond the standard model at high energies.

  7. A TEG Efficiency Booster with Buck-Boost Conversion

    NASA Astrophysics Data System (ADS)

    Wu, Hongfei; Sun, Kai; Zhang, Junjun; Xing, Yan

    2013-07-01

    A thermoelectric generator (TEG) efficiency booster with buck-boost conversion and power management is proposed as a TEG battery power conditioner suitable for a wide TEG output voltage range. An inverse-coupled inductor is employed in the buck-boost converter, which is used to achieve smooth current with low ripple on both the TEG and battery sides. Furthermore, benefiting from the magnetic flux counteraction of the two windings on the coupled inductor, the core size and power losses of the filter inductor are reduced, which can achieve both high efficiency and high power density. A power management strategy is proposed for this power conditioning system, which involves maximum power point tracking (MPPT), battery voltage control, and battery current control. A control method is employed to ensure smooth switching among different working modes. A modified MPPT control algorithm with improved dynamic and steady-state characteristics is presented and applied to the TEG battery power conditioning system to maximize energy harvesting. A 500-W prototype has been built, and experimental tests carried out on it. The power efficiency of the prototype at full load is higher than 96%, and peak efficiency of 99% is attained.
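
The MPPT component can be illustrated with a minimal perturb-and-observe loop, the common hill-climbing approach; this is a generic sketch, not the paper's modified algorithm, and the toy TEG model and numbers are assumed. The controller perturbs the operating voltage and keeps the perturbation direction that increased the measured power.

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=50):
    """Hill-climbing MPPT: perturb the operating voltage, keep the direction
    that raised the harvested power (power_at stands in for a measurement)."""
    v, p = v0, power_at(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_at(v_new)
        if p_new < p:
            direction = -direction      # power fell: reverse the perturbation
        v, p = v_new, p_new
    return v

# Toy TEG model: open-circuit voltage 8 V, internal resistance 1 ohm,
# so maximum power transfer occurs at half the open-circuit voltage (4 V).
power = lambda v: v * (8.0 - v) / 1.0
v_mpp = perturb_and_observe(power, v0=2.0)
```

Once converged, the operating point oscillates within one step of the maximum power point, which is why refined algorithms (like the paper's) adapt the step to improve steady-state behavior.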

  8. Hyperdynamics boost factor achievable with an ideal bias potential

    DOE PAGESBeta

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  9. Hyperdynamics boost factor achievable with an ideal bias potential

    NASA Astrophysics Data System (ADS)

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-01

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Finally, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  10. Hyperdynamics boost factor achievable with an ideal bias potential

    SciTech Connect

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.
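
The boost factor itself has a simple form in standard hyperdynamics: each MD step on the biased surface accrues hypertime scaled by exp(ΔV/kT), so the overall boost is the trajectory average of that factor. A minimal sketch with hypothetical bias samples (the sample values and kT are assumptions for illustration):

```python
import math

def boost_factor(bias_energies, kT):
    """Hyperdynamics boost: trajectory average of exp(dV / kT), where dV is
    the bias potential sampled along the biased trajectory."""
    factors = [math.exp(dv / kT) for dv in bias_energies]
    return sum(factors) / len(factors)

# Hypothetical bias samples (eV) along a short trajectory at kT = 0.025 eV.
samples = [0.0, 0.05, 0.10, 0.05, 0.0]
boost = boost_factor(samples, kT=0.025)
```

Because the average is dominated by the largest exponentials, even a modest bias in the wells produces a large boost, which is why a near-ideal bias potential can approach the theoretical limit discussed above.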

  11. Boosting the Light: X-ray Physics in Confinement

    ScienceCinema

    Rhisberger, Ralf [HASYLAB/DESY]

    2016-07-12

    Remarkable effects are observed if light is confined to dimensions comparable to its wavelength. The lifetime of atomic resonances excited by the radiation is strongly reduced in photonic traps, such as cavities or waveguides. Moreover, one observes an anomalous boost of the intensity scattered from the resonant atoms. These phenomena result from the strong enhancement of the photonic density of states in such geometries. Many of these effects are currently being explored in the regime of visible light due to their relevance for optical information processing. It is thus appealing to study these phenomena also at much shorter wavelengths. This talk illuminates recent experiments where synchrotron x-rays were trapped in planar waveguides to resonantly excite atoms ([57]Fe nuclei) embedded in them. In fact, one observes that the radiative decay of these excited atoms is strongly accelerated. The temporal acceleration of the decay goes along with a strong boost of the radiation coherently scattered from the confined atoms. This can be exploited to obtain a high signal-to-noise ratio from tiny quantities of material, leading to manifold applications in the investigation of nanostructured materials. One application is the use of ultrathin probe layers to image the internal structure of magnetic layer systems.

  12. Angular observables for spin discrimination in boosted diboson final states

    NASA Astrophysics Data System (ADS)

    Buschmann, Malte; Yu, Felix

    2016-09-01

    We investigate the prospects for spin determination of a heavy diboson resonance using angular observables. Focusing in particular on boosted fully hadronic final states, we detail both the differences in signal efficiencies and distortions of differential distributions resulting from various jet substructure techniques. We treat the 2 TeV diboson excess as a case study, but our results are generally applicable to any future discovery in the diboson channel. Scrutinizing ATLAS and CMS analyses at 8 TeV and 13 TeV, we find that the specific cuts employed in these analyses have a tremendous impact on the discrimination power between different signal hypotheses. We discuss modified cuts that can offer a significant boost to spin sensitivity in a post-discovery era. Even without altered cuts, we show that CMS, and partly also ATLAS, will be able to distinguish between spin 0, 1, or 2 new physics diboson resonances at the 2 σ level with 30 fb-1 of 13 TeV data, for our 2 TeV case study.

  13. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.
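
The AdaBoost sample weighting mentioned above can be sketched in a few lines. This is one round of generic discrete AdaBoost, not the paper's exact kernel-estimation scheme; the toy weights and labels are illustrative:

```python
import math

def adaboost_round(weights, predictions, labels):
    """One discrete-AdaBoost round: compute the weak learner's vote
    alpha from its weighted error, then upweight the samples it got
    wrong so the next learner focuses on them."""
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-12))
    new_w = [w * math.exp(-alpha * p * y)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)  # renormalize so the weights remain a distribution
    return [w / z for w in new_w], alpha

# Four samples, one misclassified: its weight grows to 0.5
w, alpha = adaboost_round([0.25] * 4, [1, 1, 1, -1], [1, 1, 1, 1])
```

Each new convolution kernel is then estimated against the reweighted samples, which is what gives the "seamless integration between linear learner estimation and classification."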

  14. IHPRPT Phase I Solid Boost Demonstrator: A Success Story

    NASA Astrophysics Data System (ADS)

    Glaittli, Steven R.

    2001-06-01

The Integrated High-Payoff Rocket Propulsion Technology (IHPRPT) program seeks to double the launch capability of the United States by the year 2010. The program is organized into three phases, with a technology demonstrator at the end of each phase. The IHPRPT Phase I Solid Boost Demonstrator Program is presented. Materials and processing technologies developed under the IHPRPT program and on other contracted technology and privately funded programs were combined into one full-scale booster demonstrator, culminating six years of new technology work. New materials and processes were used in all components of the demonstration motor to achieve the cost and performance goals identified for the Phase I Boost & Orbit Transfer Propulsion mission area in the IHPRPT program. New materials utilized in the motor included low-cost, high-performance carbon fibers in the composite case, energetic ingredients in the propellant, net-molded structural parts in the nozzle, and an all-new electromechanical Thrust Vector Actuation (TVA) system. The demonstrator was successfully static tested on 16 November 2000. The static test has been heralded as a success by government and industry observers alike.

  15. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  16. Parallel architectures and neural networks

    SciTech Connect

Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  17. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
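
The combination rules the straw demonstration teaches reduce to two formulas, sketched here with illustrative resistor values:

```python
def series(resistances):
    """Series resistances add directly: R = R1 + R2 + ..."""
    return sum(resistances)

def parallel(resistances):
    """Parallel resistances add as reciprocals: 1/R = sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Two 100-ohm "straws": 200 ohms end to end, 50 ohms side by side
r_series = series([100.0, 100.0])
r_parallel = parallel([100.0, 100.0])
```

The straw analogy works because airflow resistance, like electrical resistance, doubles when paths are stacked end to end and halves when identical paths are offered side by side.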

  18. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
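
The effect being demonstrated follows the standard result for long parallel wires, F/L = μ0·I1·I2/(2πd); a small sketch with illustrative currents and spacing:

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Magnitude of the force per unit length (N/m) between two long
    parallel wires carrying currents i1, i2 (A) separated by d (m).
    Parallel currents attract; antiparallel currents repel."""
    return MU_0 * i1 * i2 / (2.0 * math.pi * d)

# Two wires, 10 A each, 1 cm apart: 2e-3 N/m
f = force_per_length(10.0, 10.0, 0.01)
```

The small magnitude is why the classroom version needs a high-current supply and light, freely hanging wires to make the deflection visible.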

  19. Metal structures with parallel pores

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  20. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.

  1. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.
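
The prefix-sums primitive at the heart of the reduction can be illustrated with a Hillis-Steele scan, in which each doubling step consists of fully independent updates that could run on separate processors. This generic sketch is not the authors' exact construction:

```python
def prefix_sums(xs):
    """Inclusive prefix sums via the Hillis-Steele doubling scheme:
    O(log n) parallel steps; within a step every element's update is
    independent of the others, so the list comprehension models one
    synchronous parallel round."""
    xs = list(xs)
    step = 1
    while step < len(xs):
        xs = [x + (xs[i - step] if i >= step else 0)
              for i, x in enumerate(xs)]
        step *= 2
    return xs
```

In decoding, a scan like this over per-symbol offsets locates codeword boundaries, which is how the message can be split among processors without sequentially walking the code.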

  2. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  3. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

In advanced robot control problems, on-line computation of inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  4. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphic application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
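
The finite-difference update for the vibrating string can be sketched as follows; the interior-point loop is exactly the work a parallel implementation distributes across processors (grid values and parameters here are illustrative):

```python
def step_wave(u_prev, u_curr, c, dx, dt):
    """One explicit finite-difference step of the 1-D wave equation
    u_tt = c^2 u_xx with fixed (zero) endpoints. Each interior point
    depends only on the previous two time levels, so every iteration
    of the loop is independent and parallelizes naturally."""
    r2 = (c * dt / dx) ** 2  # CFL stability requires c*dt/dx <= 1
    u_next = [0.0] * len(u_curr)
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

The synchronization issue the abstract mentions arises because all processors must finish time level n before any can start level n+1; strategies such as coarser domain blocks amortize that barrier cost.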

  5. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  6. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

  7. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  9. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework

    PubMed Central

    Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155
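
A Phred-scaled mapping quality derived from a posterior over candidate alignments can be sketched as below. This illustrates the general Bayesian idea, not AlignerBoost's actual model, priors, or cap value:

```python
import math

def mapping_quality(alignment_likelihoods):
    """Phred-scaled mapping quality from the likelihoods of all
    candidate alignments of one read: the posterior of the best hit
    under a uniform prior, reported as -10*log10(P(wrong))."""
    total = sum(alignment_likelihoods)
    post = max(alignment_likelihoods) / total
    if post >= 1.0:
        return 250.0  # illustrative cap for a unique hit
    return -10.0 * math.log10(1.0 - post)

# A read with two equally good hits is capped near mapQ ~ 3,
# matching the intuition that it is wrong half the time.
q_ambiguous = mapping_quality([0.5, 0.5])
```

Filtering on such a quality is what lets a pipeline keep confidently placed reads in repetitive regions instead of discarding every non-unique mapping outright.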

  10. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
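
The "Krylov" ingredient needs only Jacobian-vector products, which Jacobian-free Newton-Krylov codes approximate by differencing the residual. A minimal sketch of that product (the quadratic residual function here is an illustrative stand-in, not a CFD residual):

```python
def jacobian_vector_product(residual, u, v, eps=1e-7):
    """Jacobian-free approximation J(u) @ v ~ (F(u + eps*v) - F(u)) / eps:
    the matrix-vector product a Krylov iteration requests during a
    Newton step, without ever forming the Jacobian matrix."""
    f0 = residual(u)
    f1 = residual([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(f1, f0)]

# For F(u) = (u0^2, u1^2), the exact product is (2*u0*v0, 2*u1*v1)
jv = jacobian_vector_product(lambda u: [u[0] ** 2, u[1] ** 2],
                             [1.0, 2.0], [1.0, 1.0])
```

Because each residual evaluation uses only local and neighbor data, this product parallelizes the same way the explicit residual does, which is what makes the inner Krylov iteration "highly parallelizable."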

  11. Locus stabilization with parallel rectilinear dot-progressions.

    PubMed

    Holden, E A; Chmiko, M A

    1975-02-01

    Illusory displacement measures ("straight" or "crooked") were compared for single vs parallel rectilinear light progressions in 24 educable mental retardates and 24 normals of equal MA. For both groups, frequency of perceived "straight" responses was greater for the parallel- than for the single-line progressions. It was concluded that concurrent stimulation by two proximate visual stimuli rather than the presence of supplementary interstimulus mediatory referents is sufficient to facilitate veridical perception of successive light positions. Comparable performance by the normals and retardates substantiates previous findings.

  12. Enhanced algorithm performance for land cover classification from remotely sensed data using bagging and boosting

    USGS Publications Warehouse

    Chan, J.C.-W.; Huang, C.; DeFries, R.

    2001-01-01

    Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.

  13. Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo

Conventional machine learning algorithms like boosting tend to treat all misclassification errors equally, which is not adequate for certain cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting that directly modify the weighting strategy of the corresponding original algorithms have been proposed and reported, they are heuristic in nature, proved effective only by empirical results, and lack sound theoretical analysis. This paper develops a framework from a statistical insight that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA and CSLB, which respectively correspond to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost. Experimental results on the application of face detection have shown the effectiveness of the proposed learning framework in the reduction of the cumulative misclassification cost.
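
One common way such extensions modify the boosting weight update is to scale the exponent by a class-dependent cost. The sketch below is a generic illustration of that idea, not a reproduction of CSDA, CSRA, CSGA, or CSLB; all numeric values are toy data:

```python
import math

def cost_sensitive_round(weights, predictions, labels, c_pos, c_neg):
    """One reweighting step of an asymmetric AdaBoost variant: the
    update exponent is scaled by a class-dependent cost, so errors on
    the expensive class (positives, when c_pos > c_neg) gain weight
    faster than symmetric AdaBoost would allow."""
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-12))
    new_w = [w * math.exp(-alpha * p * y * (c_pos if y > 0 else c_neg))
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)
    return [w / z for w in new_w]

# Sample 0 is a misclassified positive; with c_pos = 2*c_neg it ends
# up dominating the weight distribution.
w = cost_sensitive_round([0.25] * 4, [-1, 1, -1, -1], [1, 1, -1, -1], 2.0, 1.0)
```

The paper's contribution is precisely to replace such ad hoc exponent surgery with updates derived from stage-wise fitting of an asymmetric logistic model.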

  14. Breast Conserving Treatment for Breast Cancer: Dosimetric Comparison of Sequential versus Simultaneous Integrated Photon Boost

    PubMed Central

    Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark

    2014-01-01

    Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine. PMID:25162031

  15. Boost Your High: Cigarette Smoking to Enhance Alcohol and Drug Effects among Southeast Asian American Youth.

    PubMed

    Lipperman-Kreda, Sharon; Lee, Juliet P

    2011-01-01

The current study examined: 1) whether using cigarettes to enhance the effects of other drugs (here referred to as "boosting") is a unique practice related to blunts (i.e., small cheap cigars hollowed out and filled with cannabis) or marijuana use only; 2) the prevalence of boosting among drug-using young people; and 3) the relationship between boosting and other drug-related risk behaviors. We present data collected from 89 Southeast Asian American youth and young adults in Northern California (35 females). 72% of respondents reported any lifetime boosting. Controlling for gender, results of linear regression analyses show a significant positive relationship between frequency of boosting to enhance alcohol high and number of drinks per occasion. Boosting was also found to be associated with use of blunts but not other forms of marijuana and with the number of blunts on a typical day. The findings indicate that boosting may be common among drug-using Southeast Asian youths. These findings also indicate a need for further research on boosting as an aspect of cigarette uptake and maintenance among drug- and alcohol-involved youths.

  16. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... mechanical system. The power portion includes the power source (such as hydraulic pumps), and such items as... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system;......

  17. Parallel computation and computers for artificial intelligence

    SciTech Connect

Kowalik, J.S.

    1988-01-01

This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Model; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of AI Application Oriented Parallel Processing Research in Japan.

  18. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  19. Line-on-Line Coincidence: A New Type of Epitaxy Found in Organic-Organic Heterolayers

    NASA Astrophysics Data System (ADS)

    Mannsfeld, Stefan C.; Leo, Karl; Fritz, Torsten

    2005-02-01

    We propose a new type of epitaxy, line-on-line coincidence (LOL), which explains the ordering in the organic-organic heterolayer system PTCDA on HBC on graphite. LOL epitaxy is similar to point-on-line coincidence (POL) in the sense that all overlayer molecules lie on parallel, equally spaced lines. The key difference to POL is that these lines are not restricted to primitive lattice lines of the substrate lattice. Potential energy calculations demonstrate that this new type of epitaxy is indeed characterized by a minimum in the overlayer-substrate interaction potential.

  20. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  1. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
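
Because contingency-table counts combine by simple addition, the reduce step is associative and fits a map-reduce pattern, as the abstract notes. A minimal sketch of that pattern (function names here are hypothetical, not from the paper's implementation):

```python
from collections import Counter

def local_table(pairs):
    """Map step: contingency table of (x, y) category pairs seen by
    one processor's share of the data."""
    return Counter(pairs)

def merge_tables(tables):
    """Reduce step: per-processor tables combine by adding counts;
    addition is associative, so the merge can run as a binary tree
    across processors rather than serially."""
    total = Counter()
    for t in tables:
        total.update(t)
    return total

def marginal(table, axis):
    """Marginal counts over one variable of the merged table, from
    which marginal probabilities and chi-square terms follow."""
    m = Counter()
    for pair, n in table.items():
        m[pair[axis]] += n
    return m
```

The communication cost the paper analyzes comes from the size of these tables: unlike fixed-size moment accumulators, the number of (x, y) keys, and hence the merge traffic, can grow with the data when the input is quasi-diffuse.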

  2. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts oriented toward solving problems in computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations rather than on programming and debugging difficulties. Efforts toward the parallelization of AT have been undertaken to bring its performance up to modern standards of computing. We utilized the MatlabMPI and pMatlab packages, developed by MIT Lincoln Laboratory, to set up a message-passing environment callable from within MATLAB, which provided the necessary prerequisites for parallel processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we demonstrated highly efficient per-processor speed gains in AT's beam-tracking functions. Extrapolating from these predictions, we expect to reduce week-long computation runtimes to less than 15 minutes. This is a large performance improvement with significant implications for the future computing power of the accelerator physics group at SSRL. However, one of the drawbacks of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work on AT's parallelization will focus on the development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
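
    The two quoted figures are mutually consistent if the "nearly 380%" speed increase is read as a speed-up factor S ≈ 3.8 on four cores, since parallel efficiency is E = S/p. A minimal sketch of that arithmetic, with hypothetical timings (the formulas are standard, not taken from this work):

```python
def speedup(t_serial, t_parallel):
    """Speed-up S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(s, n_procs):
    """Parallel efficiency E = S / p."""
    return s / n_procs

# Hypothetical timings giving S ~= 3.8 on a quad-core machine:
s = speedup(100.0, 26.3)
e = efficiency(s, 4)   # ~= 0.95, matching the ~95% efficiency quoted

# Amdahl's law bounds the attainable speed-up: with parallel
# fraction f on n processors, S_max = 1 / ((1 - f) + f / n).
def amdahl(f, n):
    return 1.0 / ((1.0 - f) + f / n)
```

    The near-ideal efficiency reported suggests the beam-tracking workload is close to perfectly parallel (f near 1 in Amdahl's terms), which is what makes the extrapolation to many nodes plausible.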

  3. Writing about testing worries boosts exam performance in the classroom.

    PubMed

    Ramirez, Gerardo; Beilock, Sian L

    2011-01-14

    Two laboratory and two randomized field experiments tested a psychological intervention designed to improve students' scores on high-stakes exams and to increase our understanding of why pressure-filled exam situations undermine some students' performance. We expected that sitting for an important exam leads to worries about the situation and its consequences that undermine test performance. We tested whether having students write down their thoughts about an upcoming test could improve test performance. The intervention, a brief expressive writing assignment that occurred immediately before taking an important test, significantly improved students' exam scores, especially for students habitually anxious about test taking. Simply writing about one's worries before a high-stakes exam can boost test scores. PMID:21233387

  4. Revamp of Ukraine VCM plant will boost capacity, reduce emissions

    SciTech Connect

    1996-05-13

    Oriana Concern (formerly P.O. Chlorvinyl) is revamping its 250,000 metric ton/year (mty) vinyl chloride monomer (VCM) plant at Kalusch, Ukraine. At the core of the project are a new ethylene dichloride (EDC) cracking furnace and direct chlorination unit, and a revamp of an oxychlorination unit to use oxygen rather than air. The plant expansion and modernization will boost capacity to 370,000 mty. New facilities for by-product recycling and recovery, waste water treatment, and emissions reduction will improve the plant's environmental performance. This paper shows expected feedstock and utility consumption for VCM production. Techmashimport and P.O. Chlorvinyl commissioned the Kalusch plant in 1975. The plant was built by Uhde GmbH, Dortmund, Germany. The paper also provides a schematic of the Hoechst/Uhde VCM process being used for the plant revamp. The diagram is divided into processing sections.

  5. Prediction and control of limit cycling motions in boosting rockets

    NASA Astrophysics Data System (ADS)

    Newman, Brett

    An investigation of the prediction and control of observed limit cycling behavior in a boosting rocket is presented. The suspected source of the nonlinear behavior is the presence of Coulomb friction in the nozzle pivot mechanism. A classical sinusoidal describing function analysis is used to accurately recreate and predict the observed oscillatory characteristic. In so doing, insight is offered into the limit cycling mechanism and confidence is gained in the closed-loop system design. Nonlinear simulation results are further used to support and verify the results obtained from describing function theory. Insight into the limit cycling behavior is, in turn, used to adjust control system parameters in order to passively control the oscillatory tendencies. Tradeoffs with guidance and control system stability/performance are also noted. Finally, active control of the limit cycling behavior, using a novel feedback algorithm to adjust the inherent nozzle sticking-unsticking characteristics, is considered.
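
    The kind of prediction made here can be illustrated with the standard sinusoidal describing function for a Coulomb-friction (relay-type) nonlinearity of level F, N(A) = 4F/(πA), where A is the sinusoidal input amplitude; a limit cycle is predicted where the linear loop satisfies G(jω) = -1/N(A). A minimal numeric sketch, with a hypothetical friction level and Nyquist crossing point (not taken from the paper's model):

```python
import math

def describing_function_relay(F, A):
    """Sinusoidal-input describing function of an ideal relay of level F,
    the standard quasi-linear model for Coulomb friction: N(A) = 4F/(pi*A)."""
    return 4.0 * F / (math.pi * A)

# A limit cycle is predicted where G(jw) = -1/N(A), i.e. where the
# Nyquist plot of G crosses the negative real axis at -pi*A/(4F).
# Suppose (hypothetically) that crossing occurs at -0.5; then the
# predicted oscillation amplitude solves pi*A/(4F) = 0.5:
F = 2.0                       # assumed Coulomb friction level
A = 0.5 * 4.0 * F / math.pi   # predicted limit-cycle amplitude
```

    The oscillation frequency follows from the same crossing: it is the frequency at which G(jω) is real and negative, which is why the method both predicts the limit cycle and shows which loop parameters move or suppress it.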

  6. An update on Shankhpushpi, a cognition-boosting Ayurvedic medicine.

    PubMed

    Sethiya, Neeraj Kumar; Nahata, Alok; Mishra, Sri Hari; Dixit, Vinod Kumar

    2009-11-01

    Shankhpushpi is an Ayurvedic drug used for its action on the central nervous system, especially for boosting memory and improving intellect. Information gathered from Ayurvedic and other Sanskrit literature reveals the existence of four different plant species under the name Shankhpushpi, which is used in various Ayurvedic prescriptions described in ancient texts, singly or in combination with other herbs. The sources comprise the entire herbs of the following species: Convulvulus pluricaulis Choisy. (Convulvulaceae), Evolvulus alsinoides Linn. (Convulvulaceae), Clitoria ternatea Linn. (Papilionaceae) and Canscora decussata Schult. (Gentianaceae). A review of the available scientific information on the pharmacognostical characteristics, chemical constituents, pharmacological activities, and preclinical and clinical applications of the controversial sources of Shankhpushpi is presented, with a view to surveying the scientific work undertaken on Shankhpushpi. It may provide parameters of differentiation and permit appreciation of the variability of drug action when different botanical sources are used. PMID:19912732

  7. Defined three-dimensional microenvironments boost induction of pluripotency

    NASA Astrophysics Data System (ADS)

    Caiazzo, Massimiliano; Okawa, Yuya; Ranga, Adrian; Piersigilli, Alessandra; Tabata, Yoji; Lutolf, Matthias P.

    2016-03-01

    Since the discovery of induced pluripotent stem cells (iPSCs), numerous approaches have been explored to improve the original protocol, which is based on a two-dimensional (2D) cell-culture system. Surprisingly, nothing is known about the effect of a more biologically faithful 3D environment on somatic-cell reprogramming. Here, we report a systematic analysis of how reprogramming of somatic cells occurs within engineered 3D extracellular matrices. By modulating microenvironmental stiffness, degradability and biochemical composition, we have identified a previously unknown role for biophysical effectors in the promotion of iPSC generation. We find that the physical cell confinement imposed by the 3D microenvironment boosts reprogramming through an accelerated mesenchymal-to-epithelial transition and increased epigenetic remodelling. We conclude that 3D microenvironmental signals act synergistically with reprogramming transcription factors to increase somatic plasticity.

  8. Boosting thermoelectric efficiency using time-dependent control.

    PubMed

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to a device's load to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement and, importantly, a significant amount of the power supplied by the control is utilised to convert wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated into existing devices, provides a beneficial scheme for boosting thermoelectric efficiency.

  9. Emerging applications for the Peacekeeper Post Boost Vehicle

    NASA Astrophysics Data System (ADS)

    Blake, Jack

    1992-07-01

    The use of the Peacekeeper Post Boost Vehicle (PBV) is considered for applications beyond its original use as stage IV of the ICBM system. The PBV is described in the context of the Peacekeeper mission, and the axial engine and attitude-control engines are illustrated. The capability of the PBV is found to make the engine appropriate for use as a liquid-plume generator for the Space Pallet Satellite Experiment. The PBV could also be used as a system basis for transporting large payloads into LEO, and a PBV-based platform could remain in orbit to serve as an earth/payload communication link. The PBV offers technologies and capabilities required for such missions as a plume generator, space-experiment module, or as a transfer vehicle for Space Station logistics and resupply.

  11. Traction drive for cryogenic boost pump. [hydrogen oxygen rocket engines

    NASA Technical Reports Server (NTRS)

    Meyer, S.; Connelly, R. E.

    1981-01-01

    Two versions of a Nasvytis multiroller traction drive were tested in liquid oxygen for possible application as cryogenic boost pump speed reduction drives for advanced hydrogen-oxygen rocket engines. The roller drive, with a 10.8:1 reduction ratio, was successfully run at up to 70,000 rpm input speed and up to 14.9 kW (20 hp) input power level. Three drive assemblies were tested for a total of about three hours of which approximately one hour was at nominal full speed and full power conditions. Peak efficiency of 60 percent was determined. There was no evidence of slippage between rollers for any of the conditions tested. The ball drive, a version using balls instead of one row of rollers, and having a 3.25:1 reduction ratio, failed to perform satisfactorily.

  12. Boosting thermoelectric efficiency using time-dependent control.

    PubMed

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to a device's load to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement and, importantly, a significant amount of the power supplied by the control is utilised to convert wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated into existing devices, provides a beneficial scheme for boosting thermoelectric efficiency. PMID:26464021

  13. Usefulness of effective field theory for boosted Higgs production

    SciTech Connect

    Dawson, S.; Lewis, I. M.; Zeng, Mao

    2015-04-07

    The Higgs + jet channel at the LHC is sensitive to the effects of new physics both in the total rate and in the transverse momentum distribution at high pT. We examine the production process in an effective field theory (EFT) language and discuss the possibility of determining the nature of the underlying high-scale physics from boosted Higgs production. The effects of heavy color-triplet scalars and top-partner fermions with TeV-scale masses are considered as examples, and Higgs-gluon couplings of dimension 5 and dimension 7 are included in the EFT. As a byproduct of our study, we examine the region of validity of the EFT. Dimension-7 contributions in realistic new physics models give effects in the high-pT tail of the Higgs signal that are so tiny that they are likely to be unobservable.

  14. Boosting thermoelectric efficiency using time-dependent control

    PubMed Central

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to a device's load to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement and, importantly, a significant amount of the power supplied by the control is utilised to convert wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated into existing devices, provides a beneficial scheme for boosting thermoelectric efficiency. PMID:26464021

  15. Measuring Intuition: Nonconscious Emotional Information Boosts Decision Accuracy and Confidence.

    PubMed

    Lufityanto, Galang; Donkin, Chris; Pearson, Joel

    2016-05-01

    The notion of intuition has long garnered much attention, both academically and popularly. Although most people agree that there is such a phenomenon as intuition, involving emotionally charged, rapid, unconscious processes, little compelling evidence supports this notion. Here, we introduce a technique in which subliminal emotional information is presented to subjects while they make fully conscious sensory decisions. Our behavioral and physiological data, along with evidence-accumulator models, show that nonconscious emotional information can boost accuracy and confidence in a concurrent emotion-free decision task, while also speeding up response times. Moreover, these effects were contingent on the specific predictive arrangement of the nonconscious emotional valence and motion direction in the decisional stimulus. A model that simultaneously accumulates evidence from both physiological skin conductance and conscious decisional information provides an accurate description of the data. These findings support the notion that nonconscious emotions can bias concurrent nonemotional behavior, a process of intuition.

  16. Metabolic engineering of resveratrol and other longevity boosting compounds.

    PubMed

    Wang, Yechun; Chen, Hui; Yu, Oliver

    2010-01-01

    Resveratrol, a compound commonly found in red wine, has attracted much attention recently. It is a diphenolic natural product that accumulates in grapes and a few other species under stress conditions. It possesses a special ability to increase the life span of eukaryotic organisms, ranging from yeast to fruit fly to obese mouse. The demand for resveratrol as a food and nutrition supplement has increased significantly in recent years. Extensive work has been carried out to increase the production of resveratrol in plants and microbes. In this review, we discuss the biosynthetic pathway of resveratrol and engineering methods to heterologously express the pathway in various organisms. We outline the shortcomings and limitations of common engineering efforts. We also briefly discuss the features and engineering challenges of other longevity-boosting compounds. PMID:20848556

  17. Boosting thermoelectric efficiency using time-dependent control

    NASA Astrophysics Data System (ADS)

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-10-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to a device's load to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement and, importantly, a significant amount of the power supplied by the control is utilised to convert wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated into existing devices, provides a beneficial scheme for boosting thermoelectric efficiency.

  18. Metabolic engineering of resveratrol and other longevity boosting compounds.

    SciTech Connect

    Wang, Y; Chen, H; Yu, O

    2010-09-16

    Resveratrol, a compound commonly found in red wine, has attracted much attention recently. It is a diphenolic natural product that accumulates in grapes and a few other species under stress conditions. It possesses a special ability to increase the life span of eukaryotic organisms, ranging from yeast to fruit fly to obese mouse. The demand for resveratrol as a food and nutrition supplement has increased significantly in recent years. Extensive work has been carried out to increase the production of resveratrol in plants and microbes. In this review, we discuss the biosynthetic pathway of resveratrol and engineering methods to heterologously express the pathway in various organisms. We outline the shortcomings and limitations of common engineering efforts. We also briefly discuss the features and engineering challenges of other longevity-boosting compounds.

  19. Syntactic priming during sentence comprehension: evidence for the lexical boost.

    PubMed

    Traxler, Matthew J; Tooley, Kristen M; Pickering, Martin J

    2014-07-01

    Syntactic priming occurs when structural information from one sentence influences processing of a subsequently encountered sentence (Bock, 1986; Ledoux et al., 2007). This article reports 2 eye-tracking experiments investigating the effects of a prime sentence on the processing of a target sentence that shared aspects of syntactic form. The experiments were designed to determine the degree to which lexical overlap between prime and target sentences produced larger effects, comparable to the widely observed "lexical boost" in production experiments (Pickering & Branigan, 1998; Pickering & Ferreira, 2008). The current experiments showed that priming effects during online comprehension were in fact larger when a verb was repeated across the prime and target sentences (see also Tooley et al., 2009). The finding of larger priming effects with lexical repetition supports accounts under which syntactic form representations are connected to individual lexical items (e.g., Tomasello, 2003; Vosse & Kempen, 2000, 2009).

  20. A mechatronic power boosting design for piezoelectric generators

    SciTech Connect

    Liu, Haili; Liang, Junrui; Ge, Cong

    2015-10-05

    It has been shown that piezoelectric power generation can be boosted by using synchronized-switch power conditioning circuits. This letter reports a self-powered and self-sensing mechatronic design that substitutes for the auxiliary electronics, towards a compact and universal synchronized-switch solution. The design criteria are derived based on conceptual waveforms and a two-degree-of-freedom analytical model. Experimental results show that, compared to the standard bridge rectifier interface, the mechatronic design leads to an extra 111% increase in generated power from the prototyped piezoelectric generator under the same deflection-magnitude excitation. The proposed design introduces valuable physical insight into the electromechanical synergy underlying improved piezoelectric power generation.

  1. Boosting magnetic reconnection by viscosity and thermal conduction

    NASA Astrophysics Data System (ADS)

    Minoshima, Takashi; Miyoshi, Takahiro; Imada, Shinsuke

    2016-07-01

    Nonlinear evolution of magnetic reconnection is investigated by means of magnetohydrodynamic simulations including uniform resistivity, uniform viscosity, and anisotropic thermal conduction. When viscosity exceeds resistivity (the magnetic Prandtl number Pr_m > 1), the viscous dissipation dominates outflow dynamics and leads to a decrease in the plasma density inside a current sheet. The low-density current sheet supports the excitation of a vortex. The thickness of the vortex is broader than that of the current sheet for Pr_m > 1. The broader vortex flow more efficiently carries the upstream magnetic flux toward the reconnection region and consequently boosts the reconnection. The reconnection rate increases with viscosity provided that thermal conduction is fast enough to carry away the thermal energy increased by the viscous dissipation (the fluid Prandtl number Pr < 1). The result suggests the need to control the Prandtl numbers when modeling reconnection, in contrast to the conventional resistive model.

  2. A mechatronic power boosting design for piezoelectric generators

    NASA Astrophysics Data System (ADS)

    Liu, Haili; Liang, Junrui; Ge, Cong

    2015-10-01

    It has been shown that piezoelectric power generation can be boosted by using synchronized-switch power conditioning circuits. This letter reports a self-powered and self-sensing mechatronic design that substitutes for the auxiliary electronics, towards a compact and universal synchronized-switch solution. The design criteria are derived based on conceptual waveforms and a two-degree-of-freedom analytical model. Experimental results show that, compared to the standard bridge rectifier interface, the mechatronic design leads to an extra 111% increase in generated power from the prototyped piezoelectric generator under the same deflection-magnitude excitation. The proposed design introduces valuable physical insight into the electromechanical synergy underlying improved piezoelectric power generation.

  3. Boosted di-boson from a mixed heavy stop

    SciTech Connect

    Ghosh, Diptimoy

    2013-12-01

    The lighter mass eigenstate ($\widetilde{t}_1$) of the two top squarks, the scalar superpartners of the top quark, is extremely difficult to discover if it is almost degenerate with the lightest neutralino ($\widetilde{\chi}_1^0$), the lightest and stable supersymmetric particle in R-parity-conserving supersymmetry. The current experimental bound on the $\widetilde{t}_1$ mass in this scenario stands at only around 200 GeV. For such a light $\widetilde{t}_1$, the heavier top squark ($\widetilde{t}_2$) can also be around the TeV scale. Moreover, the high value of the Higgs ($h$) mass prefers the left- and right-handed top squarks to be highly mixed, allowing the possibility of a considerable branching ratio for $\widetilde{t}_2 \to \widetilde{t}_1 h$ and $\widetilde{t}_2 \to \widetilde{t}_1 Z$. In this paper, we explore the above possibility together with the pair production of $\widetilde{t}_2 \widetilde{t}_2^*$, giving rise to the spectacular di-boson + missing transverse energy final state. For an approximately 1 TeV $\widetilde{t}_2$ and a few-hundred-GeV $\widetilde{t}_1$, the final-state particles can be moderately boosted, which encourages us to propose a novel search strategy employing the jet substructure technique to tag the boosted $h$ and $Z$. The reconstruction of the $h$ and $Z$ momenta also allows us to construct the stransverse mass $M_{T2}$, providing an additional efficient handle to fight the backgrounds. We show that a 4--5$\sigma$ signal can be observed at the 14 TeV LHC for a $\sim$ 1 TeV $\widetilde{t}_2$ with 100 fb$^{-1}$ of integrated luminosity.

  4. Esophageal Cancer Dose Escalation Using a Simultaneous Integrated Boost Technique

    SciTech Connect

    Welsh, James; Palmer, Matthew B.; Ajani, Jaffer A.; Liao Zhongxing; Swisher, Steven G.; Hofstetter, Wayne L.; Allen, Pamela K.; Settle, Steven H.; Gomez, Daniel; Likhacheva, Anna; Cox, James D.; Komaki, Ritsuko

    2012-01-01

    Purpose: We previously showed that 75% of radiation therapy (RT) failures in patients with unresectable esophageal cancer occur in the gross tumor volume (GTV). We performed a planning study to evaluate whether a simultaneous integrated boost (SIB) technique could selectively deliver a boost dose of radiation to the GTV in patients with esophageal cancer. Methods and Materials: Treatment plans were generated using four different approaches (two-dimensional conformal radiotherapy [2D-CRT] to 50.4 Gy, 2D-CRT to 64.8 Gy, intensity-modulated RT [IMRT] to 50.4 Gy, and SIB-IMRT to 64.8 Gy) and optimized for 10 patients with distal esophageal cancer. All plans were constructed to deliver the target dose in 28 fractions using heterogeneity corrections. Isodose distributions were evaluated for target coverage and normal tissue exposure. Results: The 50.4 Gy IMRT plan was associated with significant reductions in mean cardiac, pulmonary, and hepatic doses relative to the 50.4 Gy 2D-CRT plan. The 64.8 Gy SIB-IMRT plan produced a 28% increase in GTV dose and normal tissue doses comparable to those of the 50.4 Gy IMRT plan; compared with the 50.4 Gy 2D-CRT plan, the 64.8 Gy SIB-IMRT plan produced significant dose reductions to all critical structures (heart, lung, liver, and spinal cord). Conclusions: The use of SIB-IMRT allowed us to selectively increase the dose to the GTV, the area at highest risk of failure, while simultaneously reducing the dose to the normal heart, lung, and liver. The clinical implications warrant systematic evaluation.
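
    The quoted 28% GTV dose escalation follows directly from the two prescriptions being delivered in the same 28 fractions; as a quick arithmetic check:

```python
# Both plans deliver 28 fractions; the SIB plan escalates only the GTV.
gtv_boost = 64.8 / 50.4            # ~= 1.286, i.e. the ~28% increase quoted
per_fraction_standard = 50.4 / 28  # 1.8 Gy per fraction
per_fraction_sib = 64.8 / 28       # ~= 2.31 Gy per fraction to the GTV
```

    Keeping the fraction count fixed means the escalation also raises the dose per fraction to the GTV (2.31 vs. 1.8 Gy), which is the essence of the simultaneous integrated boost as opposed to a sequential boost phase.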

  5. Comparison of composite prostate radiotherapy plan doses with dependent and independent boost phases.

    PubMed

    Narayanasamy, Ganesh; Avila, Gabrielle; Mavroidis, Panayiotis; Papanikolaou, Niko; Gutierrez, Alonso; Baacke, Diana; Shi, Zheng; Stathakis, Sotirios

    2016-09-01

    Prostate cases commonly involve dual-phase planning, with a primary plan followed by a boost. Traditionally, the boost phase is planned independently of the primary plan, with the risk of generating hot or cold spots in the composite plan. Alternatively, the boost phase can be planned taking the primary dose into account. The aim of this study was to compare the composite plans from independently and dependently planned boosts using dosimetric and radiobiological metrics. Ten consecutive prostate patients previously treated at our institution were used to conduct this study on the Raystation™ 4.0 treatment planning system. For each patient, two composite plans were developed: a primary plan with an independently planned boost, and a primary plan with a dependently planned boost phase. The primary plan was prescribed to 54 Gy in 30 fractions to the primary planning target volume (PTV1), which includes the prostate and seminal vesicles, while the boost phases were prescribed to 24 Gy in 12 fractions to the boost planning target volume (PTV2), which targets only the prostate. PTV coverage, maximum dose, median dose, target conformity, dose homogeneity, dose to OARs, and the probabilities of benefit, injury, and complication-free tumor control (P+) were compared. Statistical significance was tested using either a two-tailed Student's t-test or the Wilcoxon signed-rank test. Dosimetrically, the composite plan with a dependent boost phase exhibited smaller hotspots and a lower maximum dose to the target, without any significant change in normal tissue dose. Radiobiologically, for all but one patient, the percent difference in the P+ values between the two methods was not significant. A large percent difference in the P+ value could be attributed to an inferior primary plan. The benefit of considering the primary dose while planning the boost is not significant unless the primary plan is poor.

  6. Anti-parallel and Component Reconnection at the Magnetopause

    NASA Astrophysics Data System (ADS)

    Trattner, K. J.; Mulcock, J. S.; Petrinec, S. M.; Fuselier, S. A.

    2007-05-01

    Reconnection at the magnetopause is clearly the dominant mechanism by which magnetic fields in different regions change topology to create open magnetic field lines that allow energy and momentum to flow into the magnetosphere. Observations and data analysis methods have reached the maturity needed to address one of the major outstanding questions about magnetic reconnection: the location of the reconnection site. Two scenarios are discussed in the literature: (a) anti-parallel reconnection, where shear angles between the magnetospheric field and the IMF are near 180 degrees, and (b) component reconnection, where shear angles are as low as 50 degrees. One popular component reconnection model is the tilted neutral line model. Both reconnection scenarios have a profound impact on the location of the X-line and on plasma transfer into the magnetosphere. We have analyzed 3D plasma measurements observed by the Polar satellite in the northern hemisphere cusp region during southward IMF conditions. These 3D plasma measurements are used to estimate the distance to the reconnection line by applying the low-velocity cutoff technique to precipitating and mirrored magnetosheath populations in the cusp. The calculated distances are subsequently traced back along geomagnetic field lines to the expected reconnection sites at the magnetopause. The Polar survey of northern cusp passes reveals that both reconnection scenarios occur at the magnetopause. The IMF clock angle appears to be the dominant parameter in determining whether the anti-parallel or the tilted X-line reconnection scenario occurs.

  7. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
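
    The block hierarchy described above can be sketched with a toy quad-tree in which refining a block gives it four half-size children, and the leaves of the tree are the active grid blocks. This illustrates the data structure only; it is not PARAMESH's actual Fortran 90 interface:

```python
class Block:
    """A quad-tree grid block: each block carries its own logically
    Cartesian mesh; refining attaches four half-size children."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self):
        # Split this block into 2x2 children, each covering one quadrant.
        half = self.size / 2
        self.children = [
            Block(self.x + dx * half, self.y + dy * half, half, self.level + 1)
            for dx in (0, 1) for dy in (0, 1)
        ]

    def leaves(self):
        # The leaf blocks are the active grid blocks of the AMR hierarchy.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Block(0.0, 0.0, 1.0)
root.refine()              # 4 leaf blocks covering the unit square
root.children[0].refine()  # refine one quadrant again -> 7 leaves
```

    In three dimensions the same scheme splits each block into 2x2x2 children (an oct-tree); a parallel library additionally distributes the leaf blocks across processors and fills guard cells between neighbors.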

  8. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. Several serial Voronoi tessellation codes exist; however, no open-source parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
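    As a rough illustration of the quantities PARAVT computes (the real code parallelizes the tessellation with MPI and builds it with the Qhull library), the following pure-Python toy assigns sample points to the Voronoi cell of the nearest seed particle and tallies cell membership as a crude density proxy:

```python
# Toy Voronoi cell assignment: each grid point belongs to the nearest seed.
import math

seeds = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.8)]

def nearest_seed(p, seeds):
    """Index of the seed whose Voronoi cell contains point p."""
    return min(range(len(seeds)), key=lambda i: math.dist(p, seeds[i]))

# Sample the unit square on a coarse grid and tally cell membership.
counts = [0] * len(seeds)
n = 20
for i in range(n):
    for j in range(n):
        p = ((i + 0.5) / n, (j + 0.5) / n)
        counts[nearest_seed(p, seeds)] += 1

print(counts, sum(counts))   # cell areas in grid-cell units; total is n*n = 400
```

    A production code replaces the brute-force nearest-seed search with the true tessellation and decomposes the point set across MPI ranks, with consistent treatment of cells that straddle task boundaries.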

  9. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance with a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels; it also requires both static and dynamic characterizations. Static or average-behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false-color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  10. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted at applications which require very fast polygon rendering for extremely large sets of polygons, such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  11. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  12. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts, relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and the changes in RF methodology needed to construct massively parallel MRI detector arrays, and we show some state-of-the-art examples of highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  13. Substrate oscillations boost recombinant protein release from Escherichia coli.

    PubMed

    Jazini, Mohammadhadi; Herwig, Christoph

    2014-05-01

    Intracellular production of recombinant proteins in prokaryotes necessitates subsequent disruption of cells for protein recovery. Since the cell disruption and subsequent purification steps largely contribute to the total production cost, scalable tools for protein release into the extracellular space are of utmost importance. Although there are several ways of enhancing protein release, changing culture conditions is a rather simple and scalable approach compared to, for example, molecular cell design. This contribution aimed at quantitatively studying process technological means to boost protein release of a periplasmic recombinant protein (alkaline phosphatase) from E. coli. Quantitative analysis of protein in independent bioreactor runs demonstrated that a defined oscillatory feeding profile improved protein release by about 60% compared to the conventional constant feeding rate. The process technology comprised an oscillatory post-induction feed profile with a period of 4 min. The feed rate was oscillated triangularly between a maximum (1.3-fold of the maximum feed rate achieved at the end of the fed-batch phase) and a minimum (45% of the maximum). The significant improvement indicates the potential to maximize the production rate, while this oscillatory feed profile can be easily scaled to industrial processes. Moreover, quantitative analysis of the primary metabolism revealed that the carbon dioxide yield can be used to identify the preferred feeding profile. This approach is therefore in line with the initiative of process analytical technology for science-based process understanding in process development and process control strategies.
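    The triangular oscillation described above is easy to state precisely. The sketch below (hypothetical function and parameter names) swings the feed rate between 45% and 130% of the maximum fed-batch feed rate with a 4-minute period:

```python
# Triangular oscillatory feed profile: low*f_max -> high*f_max -> low*f_max
# over one period (assumed parameter names; values taken from the abstract).
def feed_rate(t_min, f_max, period=4.0, low=0.45, high=1.30):
    """Feed rate at time t_min (minutes after induction)."""
    phase = (t_min % period) / period          # 0..1 within one oscillation
    tri = 2 * phase if phase < 0.5 else 2 * (1 - phase)   # triangle wave 0..1
    return f_max * (low + (high - low) * tri)

f_max = 10.0
print(feed_rate(0.0, f_max))   # -> 4.5  (minimum, 45% of f_max)
print(feed_rate(2.0, f_max))   # -> 13.0 (peak, 130% of f_max)
```

    Any feed controller that accepts a time-varying set point can implement such a profile, which is part of why the strategy scales to industrial processes.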

  14. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work, first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to O(nm log m) time, and its space requirement is reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
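    The chain-partitioning problem underlying these linear-array mappings can be illustrated with a simple dynamic program. This is the textbook O(n·m^2) formulation, not the faster O(nm log m) algorithm the paper derives: split a chain of module costs into n contiguous groups, one per processor, minimizing the bottleneck (maximum) group load.

```python
# Contiguous chain partitioning to minimize the bottleneck processor load.
from itertools import accumulate

def best_bottleneck(costs, n):
    m = len(costs)
    prefix = [0] + list(accumulate(costs))       # prefix sums of module costs
    INF = float("inf")
    # dp[k][i]: best bottleneck mapping the first i modules onto k processors
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for k in range(1, n + 1):
        for i in range(1, m + 1):
            for j in range(k - 1, i):            # last group is modules j..i-1
                load = prefix[i] - prefix[j]
                dp[k][i] = min(dp[k][i], max(dp[k - 1][j], load))
    return dp[n][m]

print(best_bottleneck([4, 1, 3, 2, 5, 2], 3))   # -> 7  (e.g. [4,1] [3,2] [5,2])
```

    The improvements in the paper come from exploiting monotonicity of the bottleneck cost in the split point, which this naive triple loop ignores.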

  15. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  16. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides an IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with an experimental evaluation and case studies of its applications. The evaluation results demonstrate both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in applications of the telecommunication industry.

  17. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128-processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.
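    A toy sketch of the idea (hypothetical structure, not the BBN TC2000 implementation): all processors in a gang are awakened together for one time quantum, then the whole gang is put back to sleep; here the quantum simply shrinks with the priority number, standing in for fair-share accounting.

```python
# Round-robin gang scheduling: whole gangs share the machine in time slices.
from collections import deque

def gang_schedule(gangs, total_time):
    """gangs: list of (name, n_procs, priority). Returns the run log."""
    queue = deque(gangs)
    log, t = [], 0
    while t < total_time and queue:
        name, n_procs, priority = queue.popleft()
        quantum = max(1, 10 // priority)        # higher priority -> longer slice
        log.append((t, name, n_procs, quantum)) # whole gang runs together
        t += quantum
        queue.append((name, n_procs, priority)) # back of the queue (time sharing)
    return log

log = gang_schedule([("fft", 64, 1), ("nbody", 32, 2)], total_time=30)
for start, name, procs, q in log:
    print(f"t={start:2d}: gang '{name}' on {procs} procs for {q} units")
```

    The essential property is that a gang's processes are scheduled and descheduled as a unit, so cooperating processes never spin waiting on a peer that has been put to sleep.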

  18. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate of up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  19. Dynamically reconfigurable optical interconnect architecture for parallel multiprocessor systems

    NASA Astrophysics Data System (ADS)

    Girard, Mary M.; Husbands, Charles R.; Antoszewska, Reza

    1991-12-01

    The progress in parallel processing technology in recent years has resulted in increased requirements to process large amounts of data in real time. The massively parallel architectures proposed for these applications require the use of a high speed interconnect system to achieve processor-to-processor connectivity without incurring excessive delays. The characteristics of optical components permit high speed operation, while the nonconductive nature of the optical medium eliminates the ground loop and transmission line problems normally associated with a conductive medium. The MITRE Corp. is evaluating an optical wavelength division multiple access interconnect network design to improve interconnectivity within parallel processor systems and to allow reconfigurability of processor communication paths. This paper describes the architecture and control of, and highlights the results from, an 8-channel multiprocessor prototype with an effective throughput of 3.2 Gigabits per second (Gbps).

  20. A parallel trajectory optimization tool for aerospace plane guidance

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.; Park, Kihong

    1991-01-01

    A parallel trajectory optimization algorithm is being developed. One possible mission is to provide real-time, on-line guidance for the National Aerospace Plane. The algorithm solves a discrete-time problem via the augmented Lagrangian nonlinear programming algorithm. The algorithm exploits the dynamic programming structure of the problem to achieve parallelism in calculating cost functions, gradients, constraints, Jacobians, Hessian approximations, search directions, and merit functions. Special additions to the augmented Lagrangian algorithm achieve robust convergence, achieve (almost) superlinear local convergence, and deal with constraint curvature efficiently. The algorithm can handle control and state inequality constraints such as angle-of-attack and dynamic pressure constraints. Portions of the algorithm have been tested. The nonlinear programming core algorithm performs well on a variety of static test problems and on an orbit transfer problem. The parallel search direction algorithm can reduce wall clock time by a factor of 10 for this part of the computation task.
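    The augmented Lagrangian scheme at the core of the algorithm can be shown on a toy problem (this is a generic sketch, not the paper's trajectory optimizer): minimize f(x) = (x - 3)^2 subject to x - 1 = 0. Each outer iteration minimizes the augmented Lagrangian L(x) = f(x) + lam*(x - 1) + 0.5*mu*(x - 1)^2 exactly (it is quadratic here), then updates the multiplier lam.

```python
# Augmented Lagrangian iteration on a 1-D equality-constrained toy problem.
def augmented_lagrangian(mu=10.0, iters=20):
    lam, x = 0.0, 0.0
    for _ in range(iters):
        # stationary point of L: 2(x - 3) + lam + mu(x - 1) = 0
        x = (6.0 - lam + mu) / (2.0 + mu)
        lam += mu * (x - 1.0)               # multiplier update
    return x, lam

x, lam = augmented_lagrangian()
print(round(x, 6), round(lam, 6))   # x -> 1.0, lam -> 4.0 (the exact multiplier)
```

    In the trajectory setting, x becomes the full discrete-time state/control vector and the inner minimization is where the dynamic programming structure yields parallelism.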

  1. Parallel computation of geometry control in adaptive truss structures

    NASA Technical Reports Server (NTRS)

    Ramesh, A. V.; Utku, S.; Wada, B. K.

    1992-01-01

    The fast computation of geometry control in adaptive truss structures involves two distinct parts: the efficient integration of the inverse kinematic differential equations that govern the geometry control, and the fast computation of the Jacobian, which appears on the right-hand side of the inverse kinematic equations. This paper presents an efficient parallel implementation of the Jacobian computation on an MIMD machine. Large speedup from the parallel implementation is obtained, which reduces the Jacobian computation to an O(M^2/n) procedure on an n-processor machine, where M is the number of members in the adaptive truss. The parallel algorithm given here is a good candidate for on-line geometry control of adaptive structures using attached processors.

  2. Parallel algorithm for target recognition using a multiclass hash database

    NASA Astrophysics Data System (ADS)

    Uddin, Mosleh; Myler, Harley R.

    1998-07-01

    A method for recognition of unknown targets using large databases of model targets is discussed. Our approach is based on parallel processing of multi-class hash databases that are generated off-line. A geometric hashing technique is used on feature points of model targets to create each class database. Bit-level coding is then performed to represent the models in an image format. Parallelism is achieved during the recognition phase. Feature points of an unknown target are passed to parallel processors, each accessing an individual class database. Each processor reads a particular class of hash database and indexes the feature points of the unknown target. A simple voting technique is applied to determine the model that best matches the unknown. The paper discusses our technique and the results from testing with unknown FLIR targets.
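    A much-simplified geometric-hashing sketch conveys the indexing-plus-voting idea (this toy is translation-invariant only and uses made-up model names; the paper uses full basis pairs and a bit-coded image format): model feature points are stored in a hash table keyed by quantized coordinates relative to a basis point, and an unknown target votes for the model whose entries it hits most often.

```python
# Translation-invariant geometric hashing with a simple vote count.
from collections import defaultdict

def quantize(p, basis, step=1.0):
    return (round((p[0] - basis[0]) / step), round((p[1] - basis[1]) / step))

def build_table(models):
    table = defaultdict(set)                    # hash key -> set of model names
    for name, points in models.items():
        for basis in points:                    # every point may serve as basis
            for p in points:
                table[quantize(p, basis)].add(name)
    return table

def recognize(table, unknown):
    votes = defaultdict(int)
    basis = unknown[0]                          # in practice: try every basis
    for p in unknown:
        for name in table[quantize(p, basis)]:
            votes[name] += 1
    return max(votes, key=votes.get)

models = {"tank":  [(0, 0), (2, 0), (2, 1), (4, 1)],
          "truck": [(0, 0), (1, 2), (3, 2), (3, 4)]}
table = build_table(models)
print(recognize(table, [(5, 5), (7, 5), (7, 6), (9, 6)]))  # -> tank
```

    The parallel scheme in the paper runs one such lookup-and-vote pass per processor, each against its own class database, and merges the votes.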

  3. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  4. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  5. The AIS-5000 parallel processor

    SciTech Connect

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared to two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  6. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter), but these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000-word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
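    The password-checker example can be sketched as follows (hypothetical helper names; SHA-256 hashing stands in for the crypt routine, and a thread pool stands in for the distributed workers): split the dictionary into chunks, check each chunk concurrently, and gather the matches — distribute the information, coordinate the work.

```python
# Parallel dictionary check: distribute chunks to workers, gather matches.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(target, words):
    """Return the dictionary words in this chunk matching the target hash."""
    return [w for w in words if hashlib.sha256(w.encode()).hexdigest() == target]

def crack(target, dictionary, nworkers=4):
    chunks = [dictionary[i::nworkers] for i in range(nworkers)]   # distribute
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        futures = [pool.submit(check_chunk, target, c) for c in chunks]
    return [w for f in futures for w in f.result()]               # gather

dictionary = ["aardvark", "hunter2", "secret", "zymurgy"]
target = hashlib.sha256(b"hunter2").hexdigest()
print(crack(target, dictionary))   # -> ['hunter2']
```

    On a cluster, the scatter and gather steps become explicit messages, but the program structure is the same.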

  7. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated to PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.

  8. Measures of effectiveness for BMD mid-course tracking on MIMD massively parallel computers

    SciTech Connect

    VanDyke, J.P.; Tomkins, J.L.; Furnish, M.D.

    1995-05-01

    The TRC code, a mid-course tracking code for ballistic missiles, has previously been implemented on a 1024-processor MIMD (Multiple Instruction -- Multiple Data) massively parallel computer. Measures of Effectiveness (MOE) for this algorithm have been developed for this computing environment. The MOE code is run in parallel with the TRC code. Particularly useful MOEs include the number of missed objects (real objects for which the TRC algorithm did not construct a track); of ghost tracks (tracks not corresponding to a real object); of redundant tracks (multiple tracks corresponding to a single real object); and of unresolved objects (multiple objects corresponding to a single track). All of these are expressed as a function of time, and tend to maximize during the time in which real objects are spawned (multiple reentry vehicles per post-boost vehicle). As well, it is possible to measure the track-truth separation as a function of time. A set of calculations is presented illustrating these MOEs as a function of time for a case with 99 post-boost vehicles, each of which spawns 9 reentry vehicles.
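    A toy sketch of these measures (hypothetical data layout, not the TRC code) matches tracks to truth objects by distance at one time step and counts missed objects, ghost tracks, and redundant tracks:

```python
# Tracking measures of effectiveness: missed, ghost, and redundant counts.
import math

def moes(objects, tracks, gate=1.0):
    """objects, tracks: dicts of id -> (x, y) at one time step."""
    assoc = {}                                   # track id -> nearest object id
    for tid, tpos in tracks.items():
        best = min(objects, key=lambda oid: math.dist(tpos, objects[oid]),
                   default=None)
        if best is not None and math.dist(tpos, objects[best]) <= gate:
            assoc[tid] = best
    tracked = list(assoc.values())
    missed = sum(1 for oid in objects if oid not in tracked)
    ghosts = sum(1 for tid in tracks if tid not in assoc)
    redundant = sum(tracked.count(oid) - 1 for oid in set(tracked))
    return {"missed": missed, "ghosts": ghosts, "redundant": redundant}

objects = {"rv1": (0.0, 0.0), "rv2": (5.0, 5.0), "rv3": (9.0, 0.0)}
tracks = {"t1": (0.1, 0.0),    # good track on rv1
          "t2": (0.3, 0.1),    # redundant second track on rv1
          "t3": (20.0, 20.0)}  # ghost: no object nearby
print(moes(objects, tracks))   # -> {'missed': 2, 'ghosts': 1, 'redundant': 1}
```

    Run at each time step, such counters naturally maximize while reentry vehicles are being spawned, exactly as the abstract describes.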

  9. Line-by-line spectroscopic simulations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow-band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns, and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of the processor resources available, and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it can be done with affordable additional resources compared to what is necessary to perform simulations of fluid dynamics alone. Program summary: Program title: GPU4RE. Catalogue identifier: ADZY_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 62 776. No. of bytes in distributed program, including test data, etc.: 1 513 247. Distribution format: tar.gz. Programming language: C++. Computer: x86 PC. Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C

  10. Evaluation of stereotactic body radiotherapy (SBRT) boost in the management of endometrial cancer.

    PubMed

    Demiral, S; Beyzadeoglu, M; Uysal, B; Oysul, K; Kahya, Y Elcim; Sager, O; Dincoglan, F; Gamsiz, H; Dirican, B; Surenkok, S

    2013-01-01

    The purpose of this study is to evaluate the use of linear accelerator (LINAC)-based stereotactic body radiotherapy (SBRT) boost with a multileaf collimator technique after pelvic radiotherapy (RT) in patients with endometrial cancer. Consecutive patients with endometrial cancer treated using a LINAC-based SBRT boost after pelvic RT were enrolled in the study. All patients had undergone surgery including total abdominal hysterectomy and bilateral salpingo-oophorectomy ± pelvic/paraaortic lymphadenectomy before RT. The prescribed external pelvic RT dose was 45 Gray (Gy) in 1.8 Gy daily fractions. All patients were treated with an SBRT boost after pelvic RT. The prescribed SBRT boost dose to the upper two thirds of the vagina, including the vaginal vault, was 18 Gy delivered in 3 fractions at 1-week intervals. Gastrointestinal and genitourinary toxicity was assessed using the Common Terminology Criteria for Adverse Events version 3 (CTCAE v3). Between April 2010 and May 2011, 18 patients with stage I-III endometrial cancer were treated with a LINAC-based SBRT boost after pelvic RT. At a median follow-up of 24 (8-26) months with magnetic resonance imaging (MRI) and gynecological examination, the local control rate of the study group was 100% with negligible acute and late toxicity. LINAC-based SBRT boost to the vaginal cuff is a feasible gynecological cancer treatment modality with excellent local control and minimal toxicity that may replace the traditional brachytherapy boost in the management of endometrial cancer. PMID:23374003

  11. A novel sparse boosting method for crater detection in the high resolution planetary image

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Yang, Gang; Guo, Lei

    2015-09-01

    Impact craters distributed on planetary surfaces are one of the main obstacles during the soft landing of planetary probes. To accelerate crater detection, in this paper we present a new sparse boosting (SparseBoost) method for automatic detection of sub-kilometer craters. The SparseBoost method integrates an improved sparse kernel density estimator (RSDE-WL1) into the Boost algorithm; the RSDE-WL1 estimator is obtained by introducing a weighted l1 penalty term into the reduced set density estimator. An iterative algorithm is proposed to implement the RSDE-WL1. The SparseBoost algorithm has the advantage of fewer selected features and simpler representation of the weak classifiers compared with the Boost algorithm. Our SparseBoost-based crater detection method is evaluated on a large, high resolution image of the Martian surface. Experimental results demonstrate that the proposed method achieves lower computational complexity than other crater detection methods in terms of selected features.
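    The boosting scheme that SparseBoost builds on can be shown with a minimal AdaBoost-style sketch (generic boosting with 1-D threshold stumps on stand-in labels, not the paper's SparseBoost or its RSDE-WL1 density estimator): reweight examples, fit a weak classifier to the weighted data, and combine the weak classifiers with weights alpha.

```python
# Minimal AdaBoost with 1-D decision stumps (illustrative stand-in labels).
import math

def fit_stump(xs, ys, w):
    """Best weighted threshold classifier sign(x - t) or sign(t - x)."""
    best = None
    for t in xs:
        for sgn in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if sgn * (1 if xi > t else -1) != yi)
            if best is None or err < best[0]:
                best = (err, t, sgn)
    return best

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, sgn = fit_stump(xs, ys, w)
        err = max(err, 1e-12)                       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # weak-classifier weight
        ensemble.append((alpha, t, sgn))
        w = [wi * math.exp(-alpha * yi * sgn * (1 if xi > t else -1))
             for xi, yi, wi in zip(xs, ys, w)]      # upweight mistakes
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score > 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]            # crater vs. non-crater stand-in labels
model = adaboost(xs, ys)
print([predict(model, x) for x in xs])   # -> [-1, -1, -1, 1, 1, 1]
```

    SparseBoost's contribution is replacing the weak classifiers with sparse kernel density estimates so that far fewer features are selected per round.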

  12. Parallel execution of LISP programs

    SciTech Connect

    Weening, J.S.

    1989-01-01

    This dissertation considers several issues in the execution of Lisp programs on shared-memory multiprocessors. An overview of constructs for explicit parallelism in Lisp is first presented. The problems of partitioning a program into processes and scheduling these processes are then described, and a number of methods for performing these tasks are proposed. These include cutting off process creation based on properties of the computation tree of the program, and basing partitioning decisions on the state of the system at runtime instead of on the program. An experimental study of these methods has been performed using a simulator for parallel Lisp. The simulator, written in Common Lisp using a continuation-passing style, is described in detail. This is followed by a description of the experiments that were performed and an analysis of the results. Two programs are used as illustrations: a Fast Fourier Transform, which has an abundance of parallelism, and the Cocke-Younger-Kasami parsing algorithm, for which good speedup is not as easy to obtain. The difficulty of using cutoff-based partitioning methods, and the differences between various scheduling methods, are shown. A combination of partitioning and scheduling methods which the author calls dynamic partitioning is analyzed in more detail. This method is based on examining the machine's runtime state; it requires that the programmer only identify parallelism in the program, without deciding which potential parallelism is actually useful. Several theorems are proved providing upper bounds on the amount of overhead produced by this method. He concludes that for programs whose computation trees have small height relative to their total size, dynamic partitioning can achieve asymptotically minimal overhead in the cost of process creation.

  13. Effects of parallel electron dynamics on plasma blob transport

    SciTech Connect

    Angus, Justin R.; Krasheninnikov, Sergei I.; Umansky, Maxim V.

    2012-08-15

    The 3D effects on sheath-connected plasma blobs that result from parallel electron dynamics are studied by allowing for the variation of blob density and potential along the magnetic field line and using collisional Ohm's law to model the parallel current density. The parallel current density from linear sheath theory, typically used in the 2D model, is implemented through parallel boundary conditions. This model includes electrostatic 3D effects, such as resistive drift waves and blob spinning, while retaining all of the fundamental 2D physics of sheath-connected plasma blobs. If the growth time of unstable drift waves is comparable to the 2D advection time scale of the blob, then the blob's density gradient will be depleted, resulting in a much more diffusive blob with little radial motion. Furthermore, blob profiles that initially vary along the field line drive the potential to a Boltzmann relation that spins the blob and thereby acts as an additional sink of the 2D potential. Basic dimensionless parameters are presented to estimate the relative importance of these two 3D effects. The deviation of blob dynamics from that predicted by 2D theory in the appropriate limits of these parameters is demonstrated by a direct comparison of 2D and 3D seeded blob simulations.

  14. Semi-coarsening multigrid methods for parallel computing

    SciTech Connect

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
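
    The distinction between full and semi-coarsening can be shown in a few lines (an illustrative sketch, not the paper's implementation): full coarsening halves the resolution in every coordinate direction, while semi-coarsening halves it in only one, preserving resolution in the direction of strong coupling.

```python
import numpy as np

def full_coarsen(u):
    """Standard coarsening: keep every other point in both directions."""
    return u[::2, ::2]

def semi_coarsen_x(u):
    """Semi-coarsening: coarsen in x only; y resolution is preserved."""
    return u[::2, :]

u = np.zeros((9, 9))
print(full_coarsen(u).shape)    # (5, 5)
print(semi_coarsen_x(u).shape)  # (5, 9)
```

    In a parallel setting the semi-coarsened grids retain more points per level, which changes the work and communication balance the paper investigates.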

  15. Impact of the Radiation Boost on Outcomes After Breast-Conserving Surgery and Radiation

    SciTech Connect

    Murphy, Colin; Anderson, Penny R.; Li Tianyu; Bleicher, Richard J.; Sigurdson, Elin R.; Goldstein, Lori J.; Swaby, Ramona; Denlinger, Crystal; Dushkin, Holly; Nicolaou, Nicos; Freedman, Gary M.

    2011-09-01

    Purpose: We examined the impact of radiation tumor bed boost parameters in early-stage breast cancer on local control and cosmetic outcomes. Methods and Materials: A total of 3,186 women underwent postlumpectomy whole-breast radiation with a tumor bed boost for Tis to T2 breast cancer from 1970 to 2008. Boost parameters analyzed included size, energy, dose, and technique. Endpoints were local control, cosmesis, and fibrosis. The Kaplan-Meier method was used to estimate actuarial incidence, and a Cox proportional hazards model was used to determine independent predictors of outcomes on multivariate analysis (MVA). The median follow-up was 78 months (range, 1-305 months). Results: The crude cosmetic results were excellent in 54%, good in 41%, and fair/poor in 5% of patients. The 10-year estimate of excellent cosmesis was 66%. On MVA, independent predictors of excellent cosmesis were use of an electron boost, lower electron energy, adjuvant systemic therapy, and whole-breast IMRT. Fibrosis was reported in 8.4% of patients. The actuarial incidence of fibrosis was 11% at 5 years and 17% at 10 years. On MVA, independent predictors of fibrosis were larger cup size and higher boost energy. The 10-year actuarial local failure was 6.3%. There was no significant difference in local control by boost method, cut-out size, dose, or energy. Conclusions: The likelihood of excellent cosmesis or of fibrosis is associated with boost technique, electron energy, and cup size. However, because of the high local control and rare incidence of fair/poor cosmesis with a boost, the anatomy of the patient and tumor cavity should ultimately determine the necessary boost parameters.

  16. Parallel Quantum Circuit in a Tunnel Junction.

    PubMed

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-01-01

    Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because the electronic transparency is normalised to unity and the transduction acts as a low-pass filter, large Ωab(N) values cannot be captured by the tunnel junction. The broadly used concept of an electrical contact between a metallic nanopad and a molecular device must be better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N(2) power law is preserved for Ωab(N) and for Vab(N). PMID:27453262

  17. Parallel Quantum Circuit in a Tunnel Junction

    NASA Astrophysics Data System (ADS)

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-07-01

    Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because the electronic transparency is normalised to unity and the transduction acts as a low-pass filter, large Ωab(N) values cannot be captured by the tunnel junction. The broadly used concept of an electrical contact between a metallic nanopad and a molecular device must be better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N2 power law is preserved for Ωab(N) and for Vab(N).

  18. Parallel Quantum Circuit in a Tunnel Junction.

    PubMed

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-07-25

    Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because the electronic transparency is normalised to unity and the transduction acts as a low-pass filter, large Ωab(N) values cannot be captured by the tunnel junction. The broadly used concept of an electrical contact between a metallic nanopad and a molecular device must be better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N(2) power law is preserved for Ωab(N) and for Vab(N).

  19. Parallel Quantum Circuit in a Tunnel Junction

    PubMed Central

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-01-01

    Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because the electronic transparency is normalised to unity and the transduction acts as a low-pass filter, large Ωab(N) values cannot be captured by the tunnel junction. The broadly used concept of an electrical contact between a metallic nanopad and a molecular device must be better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N2 power law is preserved for Ωab(N) and for Vab(N). PMID:27453262

  20. Retroperitoneal Sarcoma (RPS) High Risk Gross Tumor Volume Boost (HR GTV Boost) Contour Delineation Agreement Among NRG Sarcoma Radiation and Surgical Oncologists

    PubMed Central

    Baldini, Elizabeth H.; Bosch, Walter; Kane, John M.; Abrams, Ross A.; Salerno, Kilian E.; Deville, Curtiland; Raut, Chandrajit P.; Petersen, Ivy A.; Chen, Yen-Lin; Mullen, John T.; Millikan, Keith W.; Karakousis, Giorgos; Kendrick, Michael L.; DeLaney, Thomas F.; Wang, Dian

    2015-01-01

    Purpose Curative intent management of retroperitoneal sarcoma (RPS) requires gross total resection. Preoperative radiotherapy (RT) often is used as an adjuvant to surgery, but recurrence rates remain high. To enhance RT efficacy with acceptable tolerance, there is interest in delivering “boost doses” of RT to high-risk areas of gross tumor volume (HR GTV) judged to be at risk for positive resection margins. We sought to evaluate variability in HR GTV boost target volume delineation among collaborating sarcoma radiation and surgical oncologist teams. Methods Radiation planning CT scans for three cases of RPS were distributed to seven paired radiation and surgical oncologist teams at six institutions. Teams contoured HR GTV boost volumes for each case. Analysis of contour agreement was performed using the simultaneous truth and performance level estimation (STAPLE) algorithm and kappa statistics. Results HR GTV boost volume contour agreement between the seven teams was “substantial” or “moderate” for all cases. Agreement was best on the torso wall posteriorly (abutting the posterior chest and abdominal wall) and medially (abutting the ipsilateral paravertebral space and great vessels). Contours varied more significantly abutting visceral organs because of differing surgical opinions regarding planned partial organ resection. Conclusions Agreement of RPS HR GTV boost volumes between sarcoma radiation and surgical oncologist teams was substantial to moderate. Differences were most striking in regions abutting visceral organs, highlighting the importance of collaboration between the radiation and surgical oncologists for “individualized” target delineation on the basis of areas deemed at risk and the planned resection. PMID:26018727

  1. RS-34 Phoenix (Peacekeeper Post Boost Propulsion System) Utilization Study

    NASA Technical Reports Server (NTRS)

    Esther, Elizabeth A.; Kos, Larry; Bruno, Cy

    2012-01-01

    The Advanced Concepts Office (ACO) at the NASA Marshall Space Flight Center (MSFC), in conjunction with Pratt & Whitney Rocketdyne, conducted a study to evaluate potential in-space applications for the Rocketdyne-produced RS-34 propulsion system. The existing RS-34 propulsion system is a remaining asset from the decommissioned United States Air Force Peacekeeper ICBM program; specifically the pressure-fed storable bipropellant Stage IV Post Boost Propulsion System, renamed Phoenix. MSFC gained experience with the RS-34 propulsion system on the successful Ares I-X flight test program flown in October 2009. RS-34 propulsion system components were harvested from stages supplied by the USAF and used on the Ares I-X roll control system (RoCS). The heritage hardware proved extremely robust and reliable and sparked interest in further utilization for other potential in-space applications. Subsequently, MSFC is working closely with the USAF to obtain all the remaining RS-34 stages for re-use opportunities. Prior to pursuing the hardware, MSFC commissioned the Advanced Concepts Office to understand the capability of the RS-34 Phoenix stage and its potential applications benefiting NASA, the DoD, and commercial industry. As originally designed, the RS-34 Phoenix provided in-space six-degrees-of-freedom operational maneuvering to deploy multiple payloads at various orbital locations. The RS-34 Phoenix Utilization Study sought to understand the unique capabilities of the RS-34 Phoenix and their application to six candidate missions: 1) small satellite delivery (SSD), 2) orbital debris removal (ODR), 3) ISS re-supply, 4) an SLS kick stage, 5) a manned GEO servicing precursor mission, and 6) an Earth-Moon L-2 Waypoint mission. The small satellite delivery and orbital debris removal missions were found to closely mimic the heritage RS-34 mission. It is believed that this technology will enable a small, low-cost multiple satellite delivery to multiple orbital locations with a single

  2. RS-34 Phoenix (Peacekeeper Post Boost Propulsion System) Utilization Study

    NASA Technical Reports Server (NTRS)

    Esther, Elizabeth A.; Kos, Larry; Burnside, Christopher G.; Bruno, Cy

    2013-01-01

    The Advanced Concepts Office (ACO) at the NASA Marshall Space Flight Center (MSFC), in conjunction with Pratt & Whitney Rocketdyne, conducted a study to evaluate potential in-space applications for the Rocketdyne-produced RS-34 propulsion system. The existing RS-34 propulsion system is a remaining asset from the de-commissioned United States Air Force Peacekeeper ICBM program, specifically the pressure-fed storable bipropellant Stage IV Post Boost Propulsion System, renamed Phoenix. MSFC gained experience with the RS-34 propulsion system on the successful Ares I-X flight test program flown in October 2009. RS-34 propulsion system components were harvested from stages supplied by the USAF and used on the Ares I-X roll control system (RoCS). The heritage hardware proved extremely robust and reliable and sparked interest in further utilization for other potential in-space applications. MSFC is working closely with the USAF to obtain RS-34 stages for re-use opportunities. Prior to pursuing the hardware, MSFC commissioned the Advanced Concepts Office to understand the capability of the RS-34 Phoenix stage and its potential applications benefiting NASA, the DoD, and commercial industry. As originally designed, the RS-34 Phoenix provided in-space six-degrees-of-freedom operational maneuvering to deploy multiple payloads at various orbital locations. The RS-34 Phoenix Utilization Study sought to understand the unique capabilities of the RS-34 Phoenix and their application to six candidate missions: 1) small satellite delivery (SSD), 2) orbital debris removal (ODR), 3) ISS re-supply, 4) an SLS kick stage, 5) a manned GEO servicing precursor mission, and 6) an Earth-Moon L-2 Waypoint mission. The small satellite delivery and orbital debris removal missions were found to closely mimic the heritage RS-34 mission. It is believed that this technology will enable a small, low-cost multiple satellite delivery to multiple orbital locations with a single boost. For both the small

  3. Application of transmission-line super theory to classical transmission lines with risers

    NASA Astrophysics Data System (ADS)

    Rambousky, R.; Nitsch, J.; Tkachenko, S.

    2015-11-01

    By applying the Transmission-Line Super Theory (TLST) to a practical transmission-line configuration (two risers and a horizontal part of the line parallel to the ground plane), the physical and geometrical conditions are established under which the horizontal part of the transmission line can be represented by a classical telegrapher equation with a sufficiently accurate description of the physical properties of the line. The risers, together with the part of the horizontal line close to them, are treated as separate lines using the TLST. Novel frequency- and position-dependent reflection coefficients are introduced to take into account the action of the bends and their radiation; they can be derived from the matrizant elements of the TLST solution. It is shown that the solution of the resulting network and the TLST solution of the entire line agree for certain line configurations. The physical and geometrical parameters for these corresponding configurations are determined in this paper.

  4. Sassena — X-ray and neutron scattering calculated from molecular dynamics trajectories using massively parallel computers

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin; Smith, Jeremy C.

    2012-07-01

    Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has now become a high performance computing problem in itself. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. The strong scaling tested on the Jaguar Petaflop Cray XT5 at Oak Ridge National Laboratory exhibits virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism are supported, the software is flexible enough to cover scaling demands for different types of scattering calculations. The result is a high performance tool capable of unifying large-scale supercomputing and a wide variety of neutron/synchrotron technologies.
    Catalogue identifier: AELW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 1 003 742
    No. of bytes in distributed program, including test data, etc.: 798
    Distribution format: tar.gz
    Programming language: C++, OpenMPI
    Computer: Distributed Memory, Cluster of Computers with high performance network, Supercomputer
    Operating system: UNIX, LINUX, OSX
    Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. Tested with up to 7000 processors
    RAM: Up to 1 Gbytes/core
    Classification: 6.5, 8
    External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK
    Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to
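
    As a minimal illustration of the kind of quantity such software computes (a sketch, not Sassena's actual API), the coherent static scattering intensity at a wave vector q can be averaged over trajectory frames; in a parallel code, each frame or block of frames would be assigned to a different process.

```python
import numpy as np

# Sketch: coherent scattering intensity I(q) = <|sum_j b_j exp(i q.r_j)|^2>,
# averaged over trajectory frames. Names and shapes are illustrative.

def coherent_intensity(frames, q, b=None):
    """frames: (n_frames, n_atoms, 3) positions; q: (3,) scattering vector."""
    n_frames, n_atoms, _ = frames.shape
    if b is None:
        b = np.ones(n_atoms)            # unit scattering lengths
    phases = frames @ q                 # (n_frames, n_atoms) dot products q.r_j
    amps = (b * np.exp(1j * phases)).sum(axis=1)   # one amplitude per frame
    return float(np.mean(np.abs(amps) ** 2))

rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 10.0, size=(5, 100, 3))
print(coherent_intensity(frames, np.array([1.0, 0.0, 0.0])))
```

    The per-frame amplitudes are independent, which is exactly why the calculation parallelizes well over thousands of cores.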

  5. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  6. A generalized parallel replica dynamics

    SciTech Connect

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming–Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.
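
    The core accounting of parallel replica dynamics can be sketched with a toy memoryless escape process (illustrative only; the paper's contribution, the Fleming-Viot particle system with convergence diagnostics, is omitted here): N replicas explore the metastable state independently, and when the first replica escapes after t steps, roughly N·t of physical time is credited.

```python
import random

# Toy parallel replica sketch: escape from a metastable state is modeled
# as a geometric (memoryless) process. Running N independent replicas and
# taking the first escape, scaled by N, reproduces the single-replica
# escape-time statistics while the wall-clock time drops by ~N.

def escape_time(p_escape, rng):
    """Steps until a single replica escapes the metastable state."""
    t = 0
    while True:
        t += 1
        if rng.random() < p_escape:
            return t

def parrep_escape_time(n_replicas, p_escape, seed=0):
    rng = random.Random(seed)
    t_first = min(escape_time(p_escape, rng) for _ in range(n_replicas))
    return n_replicas * t_first   # credited physical time

print(parrep_escape_time(8, 0.001))
```

    The memoryless property is what makes the N·t bookkeeping valid; for real dynamics, that is exactly why convergence to the quasi-stationary distribution must be established before the parallel stage begins.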

  7. Scans as primitive parallel operations

    SciTech Connect

    Blelloch, G.E. . Dept. of Computer Science)

    1989-11-01

    In most parallel random access machine (PRAM) models, memory references are assumed to take unit time. In practice, and in theory, certain scan operations, also known as prefix computations, can execute in no more time than these parallel memory references. This paper outlines an extensive study of the effect of including, in the PRAM models, such scan operations as unit-time primitives. The study concludes that the primitives improve the asymptotic running time of many algorithms by an O(log n) factor, greatly simplify the description of many algorithms, and are significantly easier to implement than memory references. The authors argue that the algorithm designer should feel free to use these operations as if they were as cheap as a memory reference. This paper describes five algorithms that clearly illustrate how the scan primitives can be used in algorithm design. These all run on an EREW PRAM with the addition of two scan primitives.
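
    The scan primitive itself is easy to state. Below is a sequential sketch of the work-efficient exclusive prefix sum (the classic up-sweep/down-sweep form, not the paper's machine primitives): each inner loop is one conceptually parallel PRAM step, and there are O(log n) such steps.

```python
def exclusive_scan(a):
    """Work-efficient exclusive prefix sum; len(a) must be a power of two."""
    x = list(a)
    n = len(x)
    # Up-sweep: build partial sums in a balanced tree.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):   # conceptually one parallel step
            x[i] += x[i - d]
        d *= 2
    # Down-sweep: push prefixes back down the tree.
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):   # conceptually one parallel step
            x[i - d], x[i] = x[i], x[i] + x[i - d]
        d //= 2
    return x

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```

    Replacing + with min, max, or logical OR gives the other scan variants the paper treats as equally cheap primitives.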

  8. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable-length linear genome to govern the mapping of a Backus-Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE), the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and a comparison with the standard coding of GEs are presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods are discussed and the architecture of their combination is described. An application is also discussed, and results on a real-world problem are described.

  9. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing, which avoids the signal crosstalk found in earlier feedback interferometers. The interferometer outputs two close parallel laser beams whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and an accuracy of 7.8 nm within a range of 100 μm.

  10. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID, and each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If the channel number is zero, however, it indicates that the frame of data represents a critical command only; that data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  11. Robust 3D face recognition by local shape difference boosting.

    PubMed

    Wang, Yueming; Liu, Jianzhuang; Tang, Xiaoou

    2010-10-01

    This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids the registration between an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as a mediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent with the FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery with 1,000 faces only takes about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient. PMID:20724762
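
    The boosting step that selects discriminative local features can be sketched generically (an AdaBoost-style toy, not the paper's actual CSDC training code): each weak classifier thresholds one feature, each round picks the weak classifier with the lowest weighted error, and misclassified samples are reweighted upward.

```python
import math

# Toy boosting round: weak classifier j predicts sign(feature_j).
# Each round selects the lowest-weighted-error feature and reweights
# the training samples (AdaBoost-style; illustrative names throughout).

def weak_error(preds, labels, w):
    return sum(wi for p, y, wi in zip(preds, labels, w) if p != y)

def boost(features, labels, rounds=3):
    n = len(labels)
    w = [1.0 / n] * n
    picked = []
    for _ in range(rounds):
        errs = []
        for j in range(len(features[0])):
            preds = [1 if row[j] > 0 else -1 for row in features]
            errs.append(weak_error(preds, labels, w))
        j = min(range(len(errs)), key=errs.__getitem__)
        err = max(errs[j], 1e-12)                   # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # classifier weight
        picked.append((j, alpha))
        preds = [1 if row[j] > 0 else -1 for row in features]
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, labels, preds)]
        s = sum(w)
        w = [wi / s for wi in w]                    # renormalize weights
    return picked

features = [[1.0, -1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, -1.0]]
labels = [1, 1, -1, -1]
print(boost(features, labels))
```

    The selected (feature, weight) pairs then assemble into a strong classifier, analogous to the paper's collective classifiers built from SSDM features.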

  12. Electrochemical, H2O2-Boosted Catalytic Oxidation System

    NASA Technical Reports Server (NTRS)

    Akse, James R.; Thompson, John O.; Schussel, Leonard J.

    2004-01-01

    An improved water-sterilizing aqueous-phase catalytic oxidation system (APCOS) is based partly on the electrochemical generation of hydrogen peroxide (H2O2). This H2O2-boosted system offers significant improvements over prior dissolved-oxygen water-sterilizing systems in the way in which it increases oxidation capabilities, supplies H2O2 when needed, reduces the total organic carbon (TOC) content of treated water to a low level, consumes less energy than prior systems do, reduces the risk of contamination, and costs less to operate. This system was developed as a variant of part of an improved waste-management subsystem of the life-support system of a spacecraft. Going beyond its original intended purpose, it offers the advantage of being able to produce H2O2 on demand for surface sterilization and/or decontamination: this is a major advantage inasmuch as the benign byproducts of this H2O2 system, unlike those of systems that utilize other chemical sterilants, place no additional burden of containment control on other spacecraft air- or water-reclamation systems.

  13. Tamoxifen reduces fat mass by boosting reactive oxygen species.

    PubMed

    Liu, L; Zou, P; Zheng, L; Linarelli, L E; Amarell, S; Passaro, A; Liu, D; Cheng, Z

    2015-01-01

    As the pandemic of obesity is growing, a variety of animal models have been generated to study the mechanisms underlying the increased adiposity and development of metabolic disorders. Tamoxifen (Tam) is widely used to activate Cre recombinase that spatiotemporally controls target gene expression and regulates adiposity in laboratory animals. However, a critical question remains as to whether Tam itself affects adiposity and possibly confounds the functional study of target genes in adipose tissue. Here we administered Tam to Cre-absent forkhead box O1 (FoxO1) floxed mice (f-FoxO1) and insulin receptor substrate Irs1/Irs2 double floxed mice (df-Irs) and found that Tam induced approximately 30% reduction (P<0.05) in fat mass with insignificant change in body weight. Mechanistically, Tam promoted reactive oxygen species (ROS) production, apoptosis and autophagy, which was associated with downregulation of adipogenic regulator peroxisome proliferator-activated receptor gamma and dedifferentiation of mature adipocytes. However, normalization of ROS potently suppressed Tam-induced apoptosis, autophagy and adipocyte dedifferentiation, suggesting that ROS may account, at least in part, for the changes. Importantly, Tam-induced ROS production and fat mass reduction lasted for 4-5 weeks in the f-FoxO1 and df-Irs mice. Our data suggest that Tam reduces fat mass via boosting ROS, thus making a recovery period crucial for posttreatment study. PMID:25569103

  14. Fault diagnosis algorithm based on switching function for boost converters

    NASA Astrophysics Data System (ADS)

    Cho, H.-K.; Kwak, S.-S.; Lee, S.-H.

    2015-07-01

    A fault diagnosis algorithm, which is necessary for constructing a reliable power conversion system, should detect fault occurrences as soon as possible to protect the entire system from the fatal damage that results from system malfunction. In this paper, a fault diagnosis algorithm is proposed to detect open- and short-circuit faults that occur in a boost converter switch. The inductor voltage is abnormally kept at a positive DC value during a short-circuit fault in the switch, or at a negative DC value during an open-circuit fault until the inductor current becomes zero. By exploiting these abnormal properties, the inductor voltage is compared with the switching function to detect each fault type, generating a fault alarm when a fault occurs. From the fault alarm, a decision is made in response to the fault occurrence and the fault type in less than two switching periods using the proposed algorithm, constructed in analogue circuits. In addition, the proposed algorithm has good robustness to discontinuous current-mode operation. As a result, the algorithm features the advantages of low cost and simplicity because of its simple analogue circuit configuration.
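
    The detection rule lends itself to a simple sketch (a hypothetical Python model of the comparison logic, not the paper's analogue circuit): in a healthy boost converter the inductor voltage sign tracks the switching function (ON: v_L = V_in > 0; OFF: v_L = V_in - V_out < 0 in continuous conduction), so a persistent sign mismatch flags the fault type.

```python
# Hedged sketch of the diagnosis idea: v_L stuck positive while the
# switch is commanded OFF indicates a short-circuit fault; v_L stuck
# negative while commanded ON indicates an open-circuit fault.

def diagnose(switch_states, v_L_samples):
    """switch_states: 1 = ON, 0 = OFF; v_L_samples: inductor voltage."""
    mismatch_on = all(v < 0 for s, v in zip(switch_states, v_L_samples) if s == 1)
    mismatch_off = all(v > 0 for s, v in zip(switch_states, v_L_samples) if s == 0)
    if mismatch_off and not mismatch_on:
        return "short-circuit fault"
    if mismatch_on and not mismatch_off:
        return "open-circuit fault"
    return "healthy"

V_in, V_out = 12.0, 24.0
states = [1, 1, 0, 0] * 5                          # switching function samples
healthy = [V_in if s else V_in - V_out for s in states]
shorted = [V_in for _ in states]                    # v_L pinned positive
opened = [V_in - V_out for _ in states]             # v_L pinned negative
print(diagnose(states, healthy))  # healthy
print(diagnose(states, shorted))  # short-circuit fault
print(diagnose(states, opened))   # open-circuit fault
```

    Requiring the mismatch to persist over full switching periods is what keeps the decision within the two-period bound while rejecting normal switching transients.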

  15. Controlled Vocabularies Boost International Participation and Normalization of Searches

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    2006-01-01

    The Global Change Master Directory's (GCMD) science staff set out to document Earth science data and provide a mechanism for its discovery, in fulfillment of a commitment to NASA's Earth Science program and to the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). At the time, the question of whether to offer a controlled-vocabulary search or a free-text search was resolved with a decision to support both. Feedback from the user community indicated that independently determining the appropriate English words for a free-text search would be very difficult; the preference was to be prompted for relevant keywords through a hierarchy of well-designed science keywords. The controlled keywords serve to normalize the search through knowledgeable input by metadata providers. Earth science keyword taxonomies were developed, and rules for additions, deletions, and modifications were created. Secondary sets of controlled vocabularies for related descriptors such as projects, data centers, instruments, platforms, related data set link types, and locations, along with free-text searches, assist users in further refining their search results. Through this robust search-and-refine capability, the GCMD directs users to the data and services they seek. The next step in guiding users more directly to the resources they desire is to build a reasoning capability for search through the use of ontologies. Incorporating twelve sets of Earth science keyword taxonomies has boosted the GCMD's ability to help users define and more directly retrieve data of choice.

  16. Memory boosting effect of Citrus limon, Pomegranate and their combinations.

    PubMed

    Riaz, Azra; Khan, Rafeeq Alam; Algahtani, Hussein A

    2014-11-01

    Memory is greatly influenced by factors like food, stress, and quality of sleep; hence, the present study was designed to evaluate the effect of Citrus limon and Pomegranate juices on the memory of mice using a Harvard Panlab Passive Avoidance response apparatus controlled through an LE2708 Programmer. Passive avoidance is a fear-motivated test used to assess the short- or long-term memory of small animals, which measures latency to enter the black compartment. Animals at MCLD showed highly significant and significant increases in latency to enter the black compartment after 3 and 24 hours, respectively, compared with control; animals at HCLD showed a significant increase in latency only after 3 hours. Animals at both low and moderate doses of pomegranate showed a significant increase in test latency after 3 hours, while animals at the high dose showed highly significant and significant increases in latency after 3 and 24 hours, respectively. There were highly significant and significant increases in latency in animals given the CPJ-1 combination after 3 and 24 hours, respectively; however, animals that received the CPJ-2 combination showed a significant increase in latency only after 3 hours compared with control. These results suggest that Citrus limon and Pomegranate contain phytochemicals and essential nutrients that boost memory, particularly short-term memory. Hence it may be concluded that flavonoids in these juices may be responsible for the memory-enhancing effects, and a synergistic effect is observed with the CPJ-1 and CPJ-2 combinations. PMID:25362607

  17. Clinton budget squeezes EPA, boosts federal R D

    SciTech Connect

    Begley, R.

    1993-04-21

    Although Environmental Protection Agency chief Carol Browner tried to portray the numbers in a positive light, a budget cut is a budget cut, and that is what she was handed by her new boss. Despite Clinton Administration rhetoric on the environment, the $6.4-billion EPA budget for fiscal 1994 is down almost 8% from 1993. The Superfund program is hit hardest, down 6%, to $1.5 billion. Browner counts funds from the President's 1993 stimulus bill--currently in limbo in Congress--in her 1994 budget to arrive at an increase. She says 1994 will bring greater emphasis on pollution prevention, collaborative programs with industry on toxic releases, and improvement in EPA's science and research activities. EPA's air and pesticides programs will get more money, as will hazardous waste, where EPA says it will "eliminate unnecessary and burdensome requirements" on industry and speed up corrective action. Water quality programs will be cut, as will the toxic substances program, although the Toxic Release Inventory will get a boost.

  18. Negative emotion boosts quality of visual working memory representation.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2016-08-01

    Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulated WM capacity (quantity) or resolution (quality), 2 independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking a best-matching color on a continuous color wheel. On each trial, before the visual WM task, 1 of 3 emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the 3 emotion conditions. These effects generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first 2 experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of quantitative and qualitative aspects of visual WM representation. PMID:27078744

  19. ArborZ: PHOTOMETRIC REDSHIFTS USING BOOSTED DECISION TREES

    SciTech Connect

    Gerdes, David W.; Sypniewski, Adam J.; McKay, Timothy A.; Hao, Jiangang; Weis, Matthew R.; Wechsler, Risa H.; Busha, Michael T.

    2010-06-01

    Precision photometric redshifts will be essential for extracting cosmological parameters from the next generation of wide-area imaging surveys. In this paper, we introduce a photometric redshift algorithm, ArborZ, based on the machine-learning technique of boosted decision trees. We study the algorithm using galaxies from the Sloan Digital Sky Survey (SDSS) and from mock catalogs intended to simulate both the SDSS and the upcoming Dark Energy Survey. We show that it improves upon the performance of existing algorithms. Moreover, the method naturally leads to the reconstruction of a full probability density function (PDF) for the photometric redshift of each galaxy, not merely a single 'best estimate' and error, and also provides a photo-z quality figure of merit for each galaxy that can be used to reject outliers. We show that the stacked PDFs yield a more accurate reconstruction of the redshift distribution N(z). We discuss limitations of the current algorithm and ideas for future work.
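    The PDF-and-stacking idea can be illustrated with a generic boosted-tree classifier over redshift bins. This is not the ArborZ code; the synthetic "colors", bin count, and hyperparameters below are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, n_bins = 500, 8
z_true = rng.uniform(0.0, 1.6, n)                     # synthetic redshifts
# Fake photometric "colors": noisy functions of redshift.
colors = np.column_stack([z_true + rng.normal(0, 0.1, n) for _ in range(4)])
z_bin = np.minimum((z_true / 1.6 * n_bins).astype(int), n_bins - 1)

# Boosted decision trees trained to predict the redshift bin.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=2)
clf.fit(colors, z_bin)

pdfs = clf.predict_proba(colors)    # per-galaxy photo-z PDF over the bins
n_of_z = pdfs.sum(axis=0) / n       # stacked PDFs estimate N(z)
```

    Each row of `pdfs` plays the role of a per-galaxy photo-z PDF rather than a single best estimate; stacking (averaging) the rows gives an estimate of the redshift distribution N(z), as in the record above.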

  20. Predictors of virologic response to ritonavir-boosted protease inhibitors.

    PubMed

    Marcelin, Anne-Genevieve; Flandre, Philippe; Peytavin, Gilles; Calvez, Vincent

    2005-01-01

    The primary mechanism of resistance to protease inhibitors involves the stepwise accumulation of mutations that alter and block the substrate binding site of HIV protease. The large degree of cross-resistance among the different protease inhibitors is a source of considerable concern for the management of patients after treatment failure. Although the output of HIV-resistance tests has been based on therapeutically arbitrary criteria, there is now an ongoing move towards correlating test interpretation with virologic outcomes on treatment. This approach is undeniably superior, in principle, for tests intended to guide drug choices. However, the predictive accuracy of a given stratagem that links genotype or phenotype to drug response is strongly influenced by the study design, data capture and the analytical methodology used to derive it. There is no definitively superior methodology for generating a genotype-response association for use in interpreting a resistance test, and the various approaches used to date all have their strengths and weaknesses. Combining the information of therapeutic drug monitoring and resistance tests is likely to be of greatest clinical utility in antiretroviral-experienced patients harboring HIV strains with reduced susceptibility. The combination of pharmacologic and virologic parameters as a predictor of the virologic response has been merged into the parameter known as "inhibitory quotient". This article discusses the potential interest of the use of inhibitory quotients as an approach for enhancing the potency and durability of boosted protease inhibitors against protease inhibitor-resistant viruses. PMID:16425962
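    As a rough illustration of the concept, an inhibitory quotient relates drug exposure to the susceptibility of the patient's virus. The function and numbers below are hypothetical; real definitions (genotypic, phenotypic, normalized IQ) differ in how each term is measured:

```python
def inhibitory_quotient(c_trough, ic50):
    """IQ = trough drug concentration / IC50 of the patient's isolate.

    Units must match (e.g. both in ng/mL); a higher IQ indicates drug
    exposure well above the level needed to inhibit the virus.
    """
    return c_trough / ic50

# Hypothetical example: a trough of 2000 ng/mL against an isolate with
# an IC50 of 500 ng/mL gives IQ = 4.
iq = inhibitory_quotient(2000.0, 500.0)
```

    Combining the pharmacologic term (trough concentration, from therapeutic drug monitoring) with the virologic term (IC50, from resistance testing) is what makes the quotient more predictive than either measurement alone.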

  1. Boosting training for myoelectric pattern recognition using Mixed-LDA.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Zhu, Xiangyang

    2014-01-01

    Pattern recognition based myoelectric prostheses (MP) need a training procedure for calibrating the classifier. Due to the non-stationarity inherent in surface electromyography (sEMG) signals, the system should be retrained day by day in long-term use of MP. To boost the training procedure in later periods, we propose a method, namely Mixed-LDA, which computes the parameters of LDA by combining the model estimated on the incoming training samples of the current day with the prior models available from earlier days. An experiment spanning 10 days with 5 subjects was carried out to simulate the long-term use of MP. Results show that Mixed-LDA is significantly better than the baseline method (LDA) when few samples are used as the training set on the new (current) day. For instance, in the task including 13 hand and wrist motions, the average classification rate of Mixed-LDA is 88.74% when the number of training samples is 104 (LDA: 79.32%). This implies that the approach has the potential to improve the usability of MP based on pattern recognition by reducing the training time.
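    A minimal sketch of blending a prior-day model with current-day estimates is given below. The convex-combination rule, the weight `alpha`, and the function name are assumptions and do not reproduce the paper's exact Mixed-LDA update:

```python
import numpy as np

def mixed_lda_params(prior_means, prior_cov, X_new, y_new, alpha=0.5):
    """Blend prior-day LDA parameters with estimates from today's samples.

    prior_means: dict class -> mean vector from earlier days
    prior_cov:   shared covariance matrix from earlier days
    alpha:       weight on the new day's data (assumed mixing rule)
    """
    classes = sorted(prior_means)
    new_means = {c: X_new[y_new == c].mean(axis=0) for c in classes}
    means = {c: alpha * new_means[c] + (1 - alpha) * prior_means[c]
             for c in classes}
    # Pooled within-class covariance of today's samples, blended with prior.
    centered = np.vstack([X_new[y_new == c] - new_means[c] for c in classes])
    cov = alpha * np.cov(centered.T) + (1 - alpha) * prior_cov
    return means, cov
```

    The blended means and shared covariance can then be plugged into a standard LDA discriminant, so only a few sEMG samples need to be recorded on a new day.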

  2. Memory boosting effect of Citrus limon, Pomegranate and their combinations.

    PubMed

    Riaz, Azra; Khan, Rafeeq Alam; Algahtani, Hussein A

    2014-11-01

    Memory is greatly influenced by factors like food, stress, and quality of sleep; hence, the present study was designed to evaluate the effect of Citrus limon and Pomegranate juices on the memory of mice using a Harvard Panlab Passive Avoidance response apparatus controlled through an LE2708 Programmer. Passive avoidance is a fear-motivated test used to assess the short- or long-term memory of small animals, which measures latency to enter the black compartment. Animals at MCLD showed highly significant and significant increases in latency to enter the black compartment after 3 and 24 hours, respectively, compared with control; animals at HCLD showed a significant increase in latency only after 3 hours. Animals at both low and moderate doses of pomegranate showed a significant increase in test latency after 3 hours, while animals at the high dose showed highly significant and significant increases in latency after 3 and 24 hours, respectively. There were highly significant and significant increases in latency in animals given the CPJ-1 combination after 3 and 24 hours, respectively; however, animals that received the CPJ-2 combination showed a significant increase in latency only after 3 hours compared with control. These results suggest that Citrus limon and Pomegranate contain phytochemicals and essential nutrients that boost memory, particularly short-term memory. Hence it may be concluded that flavonoids in these juices may be responsible for the memory-enhancing effects, and a synergistic effect is observed with the CPJ-1 and CPJ-2 combinations.

  3. OBSERVATIONS OF DOPPLER BOOSTING IN KEPLER LIGHT CURVES

    SciTech Connect

    Van Kerkwijk, Marten H.; Breton, Rene P.; Justham, Stephen; Rappaport, Saul A.; Podsiadlowski, Philipp; Han, Zhanwen

    2010-05-20

    Among the initial results from Kepler were two striking light curves, for KOI 74 and KOI 81, in which the relative depths of the primary and secondary eclipses showed that the more compact, less luminous object was hotter than its stellar host. That result became particularly intriguing because a substellar mass had been derived for the secondary in KOI 74, which would make the high temperature challenging to explain; in KOI 81, the mass range for the companion was also reported to be consistent with a substellar object. We re-analyze the Kepler data and demonstrate that both companions are likely to be white dwarfs. We also find that the photometric data for KOI 74 show a modulation in brightness as the more luminous star orbits, due to Doppler boosting. The magnitude of the effect is sufficiently large that we can use it to infer a radial velocity amplitude accurate to 1 km s^-1. As far as we are aware, this is the first time a radial-velocity curve has been measured photometrically. Combining our velocity amplitude with the inclination and primary mass derived from the eclipses and primary spectral type, we infer a secondary mass of 0.22 ± 0.03 M_sun. We use our estimates to consider the likely evolutionary paths and mass-transfer episodes of these binary systems.

  4. Heterodyning Time Resolution Boosting for Velocimetry and Reflectivity Measurements

    SciTech Connect

    Erskine, D J

    2004-08-02

    A theoretical technique is described for boosting, by several times, the temporal resolving power of detectors such as streak cameras in experiments that measure light reflected from or transmitted through a target, including velocity interferometer (VISAR) measurements. This is a means of effectively increasing the number of resolvable time bins in a streak camera record past the limit imposed by the input slit width and blur on the output phosphor screen. The illumination intensity is modulated sinusoidally at a frequency similar to the limiting time response of the detector. A heterodyning effect beats the high-frequency science signal down to a lower-frequency beat signal, which is recorded together with the conventional science signal. Using 3 separate illuminating channels having different phases, the beat term is separated algebraically from the conventional signal. By numerically reversing the heterodyning and combining with the ordinary signal, the science signal can be reconstructed to better effective time resolution than the detector provides alone. The effective time resolution can be approximately halved for a single modulation frequency, and decreases further in inverse proportion to the number of independent modulation frequencies employed.
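    The beating of a fast signal against a sinusoidal modulation down to a low-frequency component can be seen in a toy numerical example. The frequencies and sample rate below are arbitrary choices, not values from the record:

```python
import numpy as np

fs = 10_000.0                                 # sample rate, Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)
f_sig, f_mod = 440.0, 400.0                   # "science" tone and modulation
product = np.sin(2 * np.pi * f_sig * t) * np.sin(2 * np.pi * f_mod * t)

# The product contains only the difference (40 Hz) and sum (840 Hz) tones;
# a slow detector retains the 40 Hz beat, from which the fast signal can be
# reconstructed numerically once the modulation is known.
spec = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = sorted(freqs[np.argsort(spec)[-2:]])  # the two dominant lines
```

    The same trigonometric identity underlies the technique: a detector too slow to follow `f_sig` directly can still record the beat at `f_sig - f_mod`.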

  5. Hard matching for boosted tops at two loops

    NASA Astrophysics Data System (ADS)

    Hoang, André H.; Pathak, Aditya; Pietrulewicz, Piotr; Stewart, Iain W.

    2015-12-01

    Cross sections for top quarks provide very interesting physics opportunities, being both sensitive to new physics and also perturbatively tractable due to the large top quark mass. Rigorous factorization theorems for top cross sections can be derived in several kinematic scenarios, including the boosted regime in the peak region that we consider here. In the context of the corresponding factorization theorem for e+e- collisions we extract the last missing ingredient that is needed to evaluate the cross section differential in the jet mass at two-loop order, namely the matching coefficient at the scale μ ≃ m_t. Our extraction also yields the final ingredients needed to carry out logarithmic resummation at next-to-next-to-leading logarithmic order (or N3LL if we ignore the missing 4-loop cusp anomalous dimension). This coefficient exhibits an amplitude-level rapidity logarithm starting at O(α_s^2) due to virtual top quark loops, which we treat using rapidity renormalization group (RG) evolution. Interestingly, this rapidity RG evolution appears in the matching coefficient between two effective theories around the heavy quark mass scale μ ≃ m_t.

  6. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. PMID:18348941
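    The weak-learner setup described here, AdaBoost over decision stumps, can be reproduced generically with scikit-learn. The toy features and labels below stand in for real connection records and are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Toy stand-in for connection records: 2 continuous features, label 1 = attack.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Depth-1 trees are exactly the "decision stumps" used as weak classifiers.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
clf.fit(X, y)
acc = clf.score(X, y)
```

    For continuous features a stump is a single threshold test; the paper's contribution includes analogous decision rules for categorical features, so both feature types combine in one strong classifier without forced conversions.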

  7. Predicting Fusarium head blight epidemics with boosted regression trees.

    PubMed

    Shah, D A; De Wolf, E D; Paul, P A; Madden, L V

    2014-07-01

    Predicting major Fusarium head blight (FHB) epidemics allows for the judicious use of fungicides in suppressing disease development. Our objectives were to investigate the utility of boosted regression trees (BRTs) for predictive modeling of FHB epidemics in the United States, and to compare the predictive performances of the BRT models with those of logistic regression models we had developed previously. The data included 527 FHB observations from 15 states over 26 years. BRTs were fit to a training data set of 369 FHB observations, in which FHB epidemics were classified as either major (severity ≥ 10%) or non-major (severity < 10%), linked to a predictor matrix consisting of 350 weather-based variables and categorical variables for wheat type (spring or winter), presence or absence of corn residue, and cultivar resistance. Predictive performance was estimated on a test (holdout) data set consisting of the remaining 158 observations. BRTs had a misclassification rate of 0.23 on the test data, which was 31% lower than the average misclassification rate over 15 logistic regression models we had presented earlier. The strongest predictors were generally one of mean daily relative humidity, mean daily temperature, and the number of hours in which the temperature was between 9 and 30°C and relative humidity ≥ 90% simultaneously. Moreover, the predicted risk of major epidemics increased substantially when mean daily relative humidity rose above 70%, which is a lower threshold than previously modeled for most plant pathosystems. BRTs led to novel insights into the weather-epidemic relationship.

  8. The dark matter annihilation boost from low-temperature reheating

    NASA Astrophysics Data System (ADS)

    Erickcek, Adrienne L.

    2015-11-01

    The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.

  9. Parallel fabrication of nanogap electrodes.

    PubMed

    Johnston, Danvers E; Strachan, Douglas R; Johnson, A T Charlie

    2007-09-01

    We have developed a technique for simultaneously fabricating large numbers of nanogaps in a single processing step using feedback-controlled electromigration. Parallel nanogap formation is achieved by a balanced simultaneous process that uses a novel arrangement of nanoscale shorts between narrow constrictions where the nanogaps form. Because of this balancing, the fabrication of multiple nanoelectrodes is similar to that of a single nanogap junction. The technique should be useful for constructing complex circuits of molecular-scale electronic devices.

  10. 2. LOOKING DOWN THE LINED POWER CANAL AS IT WINDS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. LOOKING DOWN THE LINED POWER CANAL AS IT WINDS ITS WAY TOWARD THE CEMENT MILL Photographer: Walter J. Lubken, November 19, 1907 - Roosevelt Power Canal & Diversion Dam, Parallels Salt River, Roosevelt, Gila County, AZ

  11. 16 CFR 1203.11 - Marking the impact test line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (HPI), with the brow parallel to the basic plane. Place a 5-kg (11-lb) preload ballast on top of the... helmet coinciding with the intersection of the surface of the helmet with the impact line planes...

  12. Massively parallel femtosecond laser processing.

    PubMed

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM, N_SLM, that is imaged at the pupil plane of an objective lens, and a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at N_SLM of 250 and p_d of 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  13. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  14. How One Clinic Got a Big Boost in HPV Vaccination Rates

    MedlinePlus

    ... One Clinic Got a Big Boost in HPV Vaccination Rates The cervical cancer vaccine was treated as ... the United States, lagging far behind other recommended vaccinations in this age group. But, by lumping HPV ...

  15. Experimental Treatment for Duchenne Muscular Dystrophy Gets Boost from Existing Medication

    MedlinePlus

    ... 2013 March 2013 (historical) Experimental Treatment for Duchenne Muscular Dystrophy Gets Boost from Existing Medication A readily available ... effects of a promising experimental treatment for Duchenne muscular dystrophy (DMD), according to research partially funded by the ...

  16. The Effect of Element Formulation on the Prediction of Boost Effects in Numerical Tube Bending

    SciTech Connect

    Bardelcik, A.; Worswick, M.J.

    2005-08-05

    This paper presents advanced FE models of the pre-bending process to investigate the effect of element formulation on the prediction of boost effects in tube bending. Tube bending experiments are conducted with 3'' (OD) IF (Interstitial-Free) steel tube on a fully instrumented Eagle EPT-75 servo-hydraulic mandrel-rotary draw tube bender. Experiments were performed in which the bending boost was varied at three levels and resulted in consistent trends in the strain and thickness distribution within the pre-bent tubes. A numerical model of the rotary draw tube bender was used to simulate pre-bending of the IF tube with the three levels of boost from the experiments. To examine the effect of element formulation on the prediction of boost, the tube was modeled with shell and solid elements. Both models predicted the overall strain and thickness results well, but showed different trends in each of the models.

  17. Stable detection of expanded target by the use of boosting random ferns

    NASA Astrophysics Data System (ADS)

    Deng, Li; Wang, Chunhong; Rao, Changhui

    2012-10-01

    This paper studies the problem of keypoint recognition for extended targets that lack texture information, and introduces an approach for stable detection of such targets called boosting random ferns (BRF). Since common descriptors do not work as well in this circumstance as in usual cases, matching of keypoints is turned into a classification task so as to exploit the trainable nature of a classifier. The kernel of BRF consists of random ferns as the classifier and AdaBoost (Adaptive Boosting) as the framework, so that the accuracy of the random ferns classifier can be boosted to a relatively high level. Experiments compare BRF with the widely used SURF descriptor and a single random ferns classifier. The results show that BRF obtains a higher recognition rate of keypoints. Besides, for image sequences, BRF provides stronger stability than SURF in target detection, which proves the efficiency of BRF for extended targets that lack texture information.
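    To make the base classifier concrete, here is a minimal, hypothetical random-ferns implementation (semi-naive Bayes over small groups of binary feature comparisons). It omits the AdaBoost wrapper that BRF adds on top, and all class and parameter names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomFerns:
    """Each fern applies `depth` binary tests (x[i] > x[j]); the test
    outcomes index one of 2**depth leaves, and classification multiplies
    the per-fern leaf likelihoods P(leaf | class) (semi-naive Bayes).
    Labels are assumed to be integers 0..K-1."""

    def __init__(self, n_ferns=20, depth=6, n_features=4):
        self.tests = rng.integers(0, n_features, size=(n_ferns, depth, 2))
        self.n_ferns, self.depth = n_ferns, depth

    def _leaves(self, x):
        bits = (x[self.tests[:, :, 0]] > x[self.tests[:, :, 1]]).astype(int)
        return bits.dot(1 << np.arange(self.depth))   # one leaf per fern

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Laplace-smoothed leaf counts per (fern, leaf, class).
        counts = np.ones((self.n_ferns, 2 ** self.depth, n_classes))
        for x, label in zip(X, y):
            counts[np.arange(self.n_ferns), self._leaves(x), label] += 1
        self.logp = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, X):
        rows = np.arange(self.n_ferns)
        return np.array([
            np.argmax(self.logp[rows, self._leaves(x)].sum(axis=0)) for x in X
        ])

# Toy demo: the label depends on comparing two features, which is exactly
# the kind of binary test a fern can pick up.
X = rng.random((500, 4))
y = (X[:, 0] > X[:, 1]).astype(int)
acc = (RandomFerns().fit(X, y).predict(X) == y).mean()
```

    In BRF, AdaBoost reweights training keypoints so that successive ferns concentrate on the patches the previous ones misclassified.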

  18. Boosted objects and jet substructure at the LHC: Report of BOOST2012, held at IFIC Valencia, 23rd-27th of July 2012

    SciTech Connect

    Altheimer, A.

    2014-03-21

    This report of the BOOST2012 workshop presents the results of four working groups that studied key aspects of jet substructure. We discuss the potential of first-principle QCD calculations to yield a precise description of the substructure of jets and study the accuracy of state-of-the-art Monte Carlo tools. Limitations of the experiments' ability to resolve substructure are evaluated, with a focus on the impact of additional (pile-up) proton-proton collisions on jet substructure performance in future LHC operating scenarios. The final section summarizes the lessons learnt from jet substructure analyses in searches for new physics in the production of boosted top quarks.

  19. Boosted objects and jet substructure at the LHC. Report of BOOST2012, held at IFIC Valencia, 23rd-27th of July 2012

    NASA Astrophysics Data System (ADS)

    Altheimer, A.; Arce, A.; Asquith, L.; Backus Mayes, J.; Kuutmann, E. Bergeaas; Berger, J.; Bjergaard, D.; Bryngemark, L.; Buckley, A.; Butterworth, J.; Cacciari, M.; Campanelli, M.; Carli, T.; Chala, M.; Chapleau, B.; Chen, C.; Chou, J. P.; Cornelissen, Th.; Curtin, D.; Dasgupta, M.; Davison, A.; de Almeida Dias, F.; de Cosa, A.; de Roeck, A.; Debenedetti, C.; Doglioni, C.; Ellis, S. D.; Fassi, F.; Ferrando, J.; Fleischmann, S.; Freytsis, M.; Gonzalez Silva, M. L.; de la Hoz, S. Gonzalez; Guescini, F.; Han, Z.; Hook, A.; Hornig, A.; Izaguirre, E.; Jankowiak, M.; Juknevich, J.; Kaci, M.; Kar, D.; Kasieczka, G.; Kogler, R.; Larkoski, A.; Loch, P.; Lopez Mateos, D.; Marzani, S.; Masetti, L.; Mateu, V.; Miller, D. W.; Mishra, K.; Nef, P.; Nordstrom, K.; Oliver Garcia, E.; Penwell, J.; Pilot, J.; Plehn, T.; Rappoccio, S.; Rizzi, A.; Rodrigo, G.; Safonov, A.; Salam, G. P.; Salt, J.; Schaetzel, S.; Schioppa, M.; Schmidt, A.; Scholtz, J.; Schwartzman, A.; Schwartz, M. D.; Segala, M.; Son, M.; Soyez, G.; Spannowsky, M.; Stewart, I.; Strom, D.; Swiatlowski, M.; Sanchez Martinez, V.; Takeuchi, M.; Thaler, J.; Thompson, E. N.; Tran, N. V.; Vermilion, C.; Villaplana, M.; Vos, M.; Wacker, J.; Walsh, J.

    2014-03-01

    This report of the BOOST2012 workshop presents the results of four working groups that studied key aspects of jet substructure. We discuss the potential of first-principle QCD calculations to yield a precise description of the substructure of jets and study the accuracy of state-of-the-art Monte Carlo tools. Limitations of the experiments' ability to resolve substructure are evaluated, with a focus on the impact of additional (pile-up) proton-proton collisions on jet substructure performance in future LHC operating scenarios. A final section summarizes the lessons learnt from jet substructure analyses in searches for new physics in the production of boosted top quarks.

  20. Parallel micromanipulation method for microassembly

    NASA Astrophysics Data System (ADS)

    Sin, Jeongsik; Stephanou, Harry E.

    2001-09-01

    Microassembly deals with micron or millimeter scale objects where the tolerance requirements are in the micron range. Typical applications include electronics components (silicon fabricated circuits), optoelectronics components (photo detectors, emitters, amplifiers, optical fibers, microlenses, etc.), and MEMS (Micro-Electro-Mechanical-System) dies. The assembly processes generally require not only high precision but also high throughput at low manufacturing cost. While conventional macroscale assembly methods have been utilized in scaled down versions for microassembly applications, they exhibit limitations on throughput and cost due to the inherently serialized process. Since the assembly process depends heavily on the manipulation performance, an efficient manipulation method for small parts will have a significant impact on the manufacturing of miniaturized products. The objective of this study on 'parallel micromanipulation' is to achieve these three requirements through the handling of multiple small parts simultaneously (in parallel) with high precision (micromanipulation). As a step toward this objective, a new manipulation method is introduced. The method uses a distributed actuation array for gripper free and parallel manipulation, and a centralized, shared actuator for simplified controls. The method has been implemented on a testbed 'Piezo Active Surface (PAS)' in which an actively generated friction force field is the driving force for part manipulation. Basic motion primitives, such as translation and rotation of objects, are made possible with the proposed method. This study discusses the design of the proposed manipulation method PAS, and the corresponding manipulation mechanism. The PAS consists of two piezoelectric actuators for X and Y motion, two linear motion guides, two sets of nozzle arrays, and solenoid valves to switch the pneumatic suction force on and off in individual nozzles. One array of nozzles is fixed relative to the surface on

  1. On parallel random number generation for accelerating simulations of communication systems

    NASA Astrophysics Data System (ADS)

    Brugger, C.; Weithoffer, S.; de Schryver, C.; Wasenmüller, U.; Wehn, N.

    2014-11-01

    Powerful compute clusters and multi-core systems have become widely available in research and industry nowadays. This boost in utilizable computational power tempts people to run compute-intensive tasks on those clusters, either for speed or accuracy reasons. Especially Monte Carlo simulations with their inherent parallelism promise very high speedups. Nevertheless, the quality of Monte Carlo simulations strongly depends on the quality of the employed random numbers. In this work we present a comprehensive analysis of state-of-the-art pseudo random number generators like the MT19937 or the WELL generator used for parallel stream generation in different settings. These random number generators can be realized in hardware as well as in software and help to accelerate the analysis (or simulation) of communications systems. We show that it is possible to generate high-quality parallel random number streams with both generators, as long as some configuration constraints are met. We furthermore depict that distributed simulations with those generator types are viable even to very high degrees of parallelism.
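    The practical upshot, independent parallel streams derived from one seed, is available off the shelf in NumPy. This sketch uses `SeedSequence.spawn` rather than the MT19937/WELL stream-partitioning configurations analyzed in the paper:

```python
import numpy as np

# One root seed yields any number of statistically independent child
# streams, safe to hand out to parallel Monte Carlo workers.
root = np.random.SeedSequence(12345)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

# Each worker draws from its own stream; no two streams overlap.
draws = [g.standard_normal(1000) for g in streams]
```

    Rerunning with the same root seed reproduces every stream exactly, which keeps a distributed simulation both parallel and repeatable.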

  2. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. 
of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such
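The forward- and central-difference formulas are easy to sketch (a stand-alone illustration; it does not reproduce the NDL interface, and the function and helper names are invented). Because each partial derivative is computed independently, the loop over components is exactly what the library distributes across processors:

```python
# O(h) forward-difference gradient of f at point x.
def grad_forward(f, x, h=1e-6):
    fx = f(x)
    return [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h
            for i in range(len(x))]

# O(h^2) central-difference gradient: two evaluations per component.
def grad_central(f, x, h=1e-5):
    g = []
    for i in range(len(x)):
        xp = x[:i] + [x[i] + h] + x[i + 1:]
        xm = x[:i] + [x[i] - h] + x[i + 1:]
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda v: v[0] ** 2 + 3 * v[1]  # toy objective
print(grad_central(f, [2.0, 1.0]))  # ≈ [4.0, 3.0]
```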

  3. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4866 No. of bytes in distributed program, including test data, etc.: 42 114 Distribution format: tar.gz Programming language: Fortran 90 and MPI Computer: Any system that supports Fortran 90 and MPI Operating system: developed and tested at the Pittsburgh Supercomputer Center, at the Barcelona Supercomputer (BSC/CNS) and on multi-processor Macs and PCs. For cases where distributed density matrix evaluation is invoked, the BLACS and SCALAPACK packages are needed. Has the code been vectorized or parallelized?: Yes Classification: 4.15 External routines: LAPACK, SCALAPACK, BLACS Nature of problem: Analysis of quantum computation algorithms and the effects of noise. Solution method: A Fortran 90/MPI package is provided that contains modular commands to create and analyze quantum circuits. Shor's factorization and Grover's search algorithms are explained in detail. Procedures for distributing state vector amplitudes over processors and for solving concurrent (multiverse) cases with noise effects are implemented. Density matrix and entropy evaluations are provided in both single and parallel versions. Running time: Test run takes less than 1 minute using 2 processors.

  4. Adaptive Replanning to Account for Lumpectomy Cavity Change in Sequential Boost After Whole-Breast Irradiation

    SciTech Connect

    Chen, Xiaojian; Qiao, Qiao; DeVries, Anthony; Li, Wenhui; Currey, Adam; Kelly, Tracy; Bergom, Carmen; Wilson, J. Frank; Li, X. Allen

    2014-12-01

Purpose: To evaluate the efficiency of standard image-guided radiation therapy (IGRT) to account for lumpectomy cavity (LC) variation during whole-breast irradiation (WBI) and propose an adaptive strategy to improve dosimetry if IGRT fails to address the interfraction LC variations. Methods and Materials: Daily diagnostic-quality CT data acquired during IGRT in the boost stage using an in-room CT for 19 breast cancer patients treated with sequential boost after WBI in the prone position were retrospectively analyzed. Contours of the LC, treated breast, ipsilateral lung, and heart were generated by populating contours from planning CTs to boost fraction CTs using an auto-segmentation tool with manual editing. Three plans were generated on each fraction CT: (1) a repositioning plan by applying the original boost plan with the shift determined by IGRT; (2) an adaptive plan by modifying the original plan according to a fraction CT; and (3) a reoptimization plan by a full-scale optimization. Results: Significant variations were observed in LC. The change in LC volume at the first boost fraction ranged from a 70% decrease to a 50% increase of that on the planning CT. The adaptive and reoptimization plans were comparable. Compared with the repositioning plans, the adaptive plans led to an improvement in target coverage for an increased LC case (1 of 19, 7.5% increase in planning target volume evaluation volume V95%), and breast tissue sparing for an LC decrease larger than 35% (3 of 19, 7.5% decrease in breast evaluation volume V50%; P=.008). Conclusion: Significant changes in LC shape and volume at the time of boost that deviate from the original plan for WBI with sequential boost can be addressed by adaptive replanning at the first boost fraction.

  5. CRT combined with a sequential VMAT boost in the treatment of upper thoracic esophageal cancer.

    PubMed

    Jin, Xiance; Yi, Jinling; Zhou, Yongqiang; Yan, Huawei; Han, Ce; Xie, Congying

    2013-09-06

The purpose of this study is to investigate the potential benefits of conformal radiotherapy (CRT) combined with a sequential volumetric-modulated arc therapy (VMAT) boost in the treatment of upper thoracic esophageal cancer. Ten patients with upper thoracic esophageal cancer previously treated with CRT plus a sequential VMAT boost plan were replanned with CRT plus an off-cord CRT boost plan and with a full course of VMAT. Dosimetric parameters were compared. Results indicated that CRT plus an off-cord CRT boost was inferior in planning target volume (PTV) coverage, as shown by the volume covered by 93% (p = 0.05) and 95% (p = 0.02) of the prescription dose. The full course VMAT plan was superior in conformal index (CI) and conformation number (CN), and produced the highest protection for the spinal cord. CRT plus a VMAT boost demonstrated significant advantages in decreasing the volume of the lung irradiated by a dose of 10 Gy (V10, p = 0.007), 13 Gy (V13, p = 0.003), and 20 Gy (V20, p = 0.001). The full course VMAT plan demonstrated the lowest volume of lung receiving a dose of 30 Gy. CRT plus a VMAT boost for upper thoracic esophageal cancer can improve target coverage and reduce the volume of lung irradiated by intermediate doses. This combination may be a promising treatment technique for patients with upper thoracic esophageal cancer.

  6. Classification of patterns for diffuse lung diseases in thoracic CT images by AdaBoost algorithm

    NASA Astrophysics Data System (ADS)

    Kuwahara, Masayuki; Kido, Shoji; Shouno, Hayaru

    2009-02-01

CT images are considered effective for the differential diagnosis of diffuse lung diseases. However, the diagnosis of diffuse lung diseases is a difficult problem for radiologists, because these diseases show a variety of patterns on CT images. Our purpose is therefore to construct a computer-aided diagnosis (CAD) system for the classification of patterns of diffuse lung diseases in thoracic CT images, which gives both quantitative and objective information as a second opinion, to decrease the burden on radiologists. In this article, we propose a CAD system based on the conventional pattern recognition framework, which consists of two sub-systems: a feature extraction part and a classification part. In the feature extraction part, we adopted a Gabor filter, which can extract patterns such as local edges and segments from input textures, for feature extraction from CT images. In the recognition part, we used a boosting method. Boosting is a voting method that combines several classifiers to improve decision precision. We applied the AdaBoost algorithm as the boosting method. First, we evaluated each boosting component classifier and confirmed that, individually, they did not perform well enough to classify the patterns of diffuse lung diseases. Next, we evaluated the performance of the boosting method. As a result, using our system, we could improve the classification rate of patterns of diffuse lung diseases.
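As a sketch of the boosting step (the Gabor feature extraction is omitted; the tiny 1-D dataset and the decision-stump weak learners are invented for illustration, and the paper's actual component classifiers may differ):

```python
import numpy as np

# Minimal discrete AdaBoost with decision stumps: weak learners vote
# with weights alpha, and misclassified samples are reweighted upward.
def fit_adaboost(X, y, rounds=5):
    n = len(y)
    w = np.full(n, 1.0 / n)          # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):  # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)        # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

X = np.array([[0.], [1.], [2.], [3.], [4.], [5.]])  # toy "features"
y = np.array([1, 1, 1, -1, -1, -1])                 # toy labels
model = fit_adaboost(X, y)
print(predict(model, X))  # recovers the labels: [ 1  1  1 -1 -1 -1]
```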

  7. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
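For context on why ADI maps well to GPUs: each ADI sweep reduces to one independent tridiagonal system per grid line, so the systems can be solved in parallel across lines (and PCR parallelizes within each system). A minimal serial Thomas-algorithm solve for a single line, as an illustrative baseline rather than the paper's GPU kernel:

```python
# Thomas algorithm: O(n) solve of a tridiagonal system
#   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
# with a[0] and c[n-1] unused (set to 0).
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One implicit 1-D diffusion step, (I - L) x = d with unit coefficients:
print(thomas([0, -1, -1], [3, 3, 3], [-1, -1, 0], [1, 1, 1]))
# → [4/7, 5/7, 4/7] ≈ [0.571, 0.714, 0.571]
```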

  8. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

The C Language Integrated Production System (CLIPS) is a forward-chaining rule-based language developed to provide training and delivery for expert systems. Conceptually, rule-based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism can also be employed for multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large-grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.
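The macroscopic rule-splitting strategy can be mimicked in a few lines (the facts, rules, and worker count here are invented for illustration, and Python threads stand in for the FLEX/32 processors):

```python
from concurrent.futures import ThreadPoolExecutor

# Shared fact base; each worker matches its share of the rules against it.
FACTS = {("temp", "high"), ("pressure", "low"), ("valve", "open")}

RULES = [
    ("shutdown", {("temp", "high"), ("pressure", "low")}),
    ("alarm", {("temp", "high")}),
    ("vent", {("valve", "closed")}),
    ("log", {("valve", "open")}),
]

def match(rule):
    """A rule fires when all of its conditions appear in the fact base."""
    name, conditions = rule
    return name if conditions <= FACTS else None

# Whole rules are partitioned among workers (the macroscopic split),
# not the condition elements of a single rule.
with ThreadPoolExecutor(max_workers=2) as pool:
    fired = [name for name in pool.map(match, RULES) if name]

print(sorted(fired))  # ['alarm', 'log', 'shutdown']
```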

  9. Postmastectomy radiotherapy with integrated scar boost using helical tomotherapy

    SciTech Connect

    Rong Yi; Yadav, Poonam; Welsh, James S.; Fahner, Tasha; Paliwal, Bhudatt

    2012-10-01

The purpose of this study was to evaluate helical tomotherapy dosimetry in postmastectomy patients undergoing treatment for chest wall and positive nodal regions with simultaneous integrated boost (SIB) in the scar region using strip bolus. Six postmastectomy patients were scanned with a 5-mm-thick strip bolus covering the scar planning target volume (PTV) plus a 2-cm margin. For all 6 cases, the chest wall received a total cumulative dose of 49.3-50.4 Gy with a daily fraction size of 1.7-2.0 Gy. Total dose to the scar PTV was prescribed to 58.0-60.2 Gy at 2.0-2.5 Gy per fraction. The supraclavicular PTV and mammary nodal PTV received 1.7-1.9 Gy per fraction. Two plans (with and without bolus) were generated for all 6 cases. To generate no-bolus plans, the strip bolus was contoured and overridden to air density before planning. The setup reproducibility and delivered dose accuracy were evaluated for all 6 cases. Dose-volume histograms were used to evaluate dose-volume coverage of targets and critical structures. We observed reduced air cavities with the strip bolus setup compared with what we normally see with the full bolus. Thermoluminescent dosimeter (TLD) in vivo dosimetry confirmed accurate dose delivery beneath the bolus. The verification plans performed on the first-day megavoltage computed tomography (MVCT) image verified that the daily setup and overall dose delivery were within 2% of the planned dose. The hotspot of the scar PTV in no-bolus plans was 111.4% of the prescribed dose averaged over the 6 cases, compared with 106.6% with strip bolus. With a strip bolus covering only the postmastectomy scar region, we observed increased dose uniformity in the scar PTV, higher setup reproducibility, and accurate dose delivered beneath the bolus. This study demonstrates the feasibility of using a strip bolus over the scar using tomotherapy for SIB dosimetry in postmastectomy treatments.

  10. Supervised hashing using graph cuts and boosted decision trees.

    PubMed

    Lin, Guosheng; Shen, Chunhua; Hengel, Anton van den

    2015-11-01

    To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. PMID:26440270
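The two-step decomposition can be sketched as follows (a heavily simplified stand-in: step 1 assigns fixed per-class codewords instead of solving the binary quadratic problems with graph cuts, and a single decision stump per bit stands in for the boosted decision trees; all data are invented):

```python
import numpy as np

# Step 1 (simplified): binary codes that preserve label similarity —
# here, points of the same class simply share a ±1 codeword.
labels = np.array([0, 0, 1, 1, 2, 2])
codes_per_class = np.array([            # 8-bit codeword per class
    [ 1,  1,  1, -1, -1, -1,  1, -1],
    [ 1, -1,  1,  1, -1,  1, -1, -1],
    [-1, -1,  1, -1,  1,  1,  1,  1]])
target_codes = codes_per_class[labels]  # step 1 output: a code per point

X = np.array([[0.1], [0.2], [1.1], [1.2], [2.1], [2.2]])  # toy features

# Step 2: learn one hash function per bit as a binary classifier; a
# decision stump (threshold rule) is the weak learner used here.
def fit_stump(x, bit):
    best = None
    for thr in x:
        for sign in (1, -1):
            pred = sign * np.where(x <= thr, 1, -1)
            err = int((pred != bit).sum())
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best[1], best[2]

stumps = [fit_stump(X[:, 0], target_codes[:, b]) for b in range(8)]

def hash_point(v):
    return np.array([s * (1 if v <= t else -1) for t, s in stumps])

# Hamming-distance retrieval: a query near class 1 retrieves class 1.
db = np.array([hash_point(v) for v in X[:, 0]])
query = hash_point(1.15)
print(labels[np.argmin((db != query).sum(axis=1))])  # → 1
```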

  11. Pharmacokinetics and pharmacodynamics of boosted once-daily darunavir.

    PubMed

    Kakuda, Thomas N; Brochot, Anne; Tomaka, Frank L; Vangeneugden, Tony; Van De Casteele, Tom; Hoetelmans, Richard M W

    2014-10-01

    The ability to dose antiretroviral agents once daily simplifies the often complex therapeutic regimens required for the successful treatment of HIV infection. Thus, once-daily dosing can lead to improved patient adherence to medication and, consequently, sustained virological suppression and reduction in the risk of emergence of drug resistance. Several trials have evaluated once-daily darunavir/ritonavir in combination with other antiretrovirals (ARTEMIS and ODIN trials) or as monotherapy (MONET, MONOI and PROTEA trials) in HIV-1-infected adults. Data from ARTEMIS and ODIN demonstrate non-inferiority of once-daily darunavir/ritonavir against a comparator and, together with pharmacokinetic data, have established the suitability of once-daily darunavir/ritonavir for treatment-naive and treatment-experienced patients with no darunavir resistance-associated mutations. The findings of ARTEMIS and ODIN have led to recent updates to treatment guidelines, whereby once-daily darunavir/ritonavir, given with other antiretrovirals, is now a preferred treatment option for antiretroviral-naive adult patients and a simplified treatment option for antiretroviral-experienced adults who have no darunavir resistance-associated mutations. Once-daily dosing with darunavir/ritonavir is an option for treatment-naive and for treatment-experienced paediatric patients with no darunavir resistance-associated mutations based on the findings of the DIONE trial and ARIEL substudy. This article reviews the pharmacokinetics, efficacy, safety and tolerability of once-daily boosted darunavir. The feasibility of darunavir/ritonavir monotherapy as a treatment approach for some patients is also discussed. Finally, data on a fixed-dose combination of 800/150 mg of darunavir/cobicistat once daily are presented, showing comparable darunavir bioavailability to that obtained with 800/100 mg of darunavir/ritonavir once daily. PMID:24951533

  12. Modeling laser wakefield accelerators in a Lorentz boosted frame

    SciTech Connect

Vay, J.-L.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D. P.

    2010-06-15

Modeling of laser-plasma wakefield accelerators in an optimal frame of reference is shown to produce orders of magnitude speed-up of calculations from first principles. Obtaining these speedups requires mitigation of a high-frequency instability that otherwise limits effectiveness, in addition to solutions for handling data input and output in a relativistically boosted frame of reference. The observed high-frequency instability is mitigated using methods including an electromagnetic solver with tunable coefficients, its extension to accommodate Perfectly Matched Layers and Friedman's damping algorithms, as well as an efficient large-bandwidth digital filter. It is shown that choosing the frame of the wake as the frame of reference allows for higher levels of filtering and damping than is possible in other frames for the same accuracy. Detailed testing also revealed serendipitously the existence of a singular time step at which the instability level is minimized, independently of numerical dispersion, thus indicating that the observed instability may not be due primarily to numerical Cerenkov effects as has been conjectured. The techniques developed for Cerenkov mitigation nonetheless prove to be very efficient at controlling the instability. Using these techniques, agreement at the percentage level is demonstrated between simulations using different frames of reference, with speedups reaching two orders of magnitude for 0.1 GeV class stages. The method then allows direct and efficient full-scale modeling of deeply depleted laser-plasma stages of 10 GeV-1 TeV for the first time, verifying the scaling of plasma accelerators to very high energies. Over 4, 5 and 6 orders of magnitude speedup is achieved for the modeling of 10 GeV, 100 GeV and 1 TeV class stages, respectively.

  13. Modeling laser wakefield accelerators in a Lorentz boosted frame

    SciTech Connect

    Vay, J.-L.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.

    2010-09-15

Modeling of laser-plasma wakefield accelerators in an optimal frame of reference [1] is shown to produce orders of magnitude speed-up of calculations from first principles. Obtaining these speedups requires mitigation of a high-frequency instability that otherwise limits effectiveness, in addition to solutions for handling data input and output in a relativistically boosted frame of reference. The observed high-frequency instability is mitigated using methods including an electromagnetic solver with tunable coefficients, its extension to accommodate Perfectly Matched Layers and Friedman's damping algorithms, as well as an efficient large-bandwidth digital filter. It is shown that choosing the frame of the wake as the frame of reference allows for higher levels of filtering and damping than is possible in other frames for the same accuracy. Detailed testing also revealed serendipitously the existence of a singular time step at which the instability level is minimized, independently of numerical dispersion, thus indicating that the observed instability may not be due primarily to numerical Cerenkov effects as has been conjectured. The techniques developed for Cerenkov mitigation nonetheless prove to be very efficient at controlling the instability. Using these techniques, agreement at the percentage level is demonstrated between simulations using different frames of reference, with speedups reaching two orders of magnitude for 0.1 GeV class stages. The method then allows direct and efficient full-scale modeling of deeply depleted laser-plasma stages of 10 GeV-1 TeV for the first time, verifying the scaling of plasma accelerators to very high energies. Over 4, 5 and 6 orders of magnitude speedup is achieved for the modeling of 10 GeV, 100 GeV and 1 TeV class stages, respectively.

  15. Hyperfractionated Concomitant Boost Proton Beam Therapy for Esophageal Carcinoma

    SciTech Connect

    Mizumoto, Masashi; Sugahara, Shinji; Okumura, Toshiyuki; Hashimoto, Takayuki; Oshiro, Yoshiko; Fukumitsu, Nobuyoshi; Nakahara, Akira; Terashima, Hideo; Tsuboi, Koji; Sakurai, Hideyuki

    2011-11-15

Purpose: To evaluate the efficacy and safety of hyperfractionated concomitant boost proton beam therapy (PBT) for patients with esophageal cancer. Methods and Materials: The study participants were 19 patients with esophageal cancer who were treated with hyperfractionated photon therapy and PBT between 1990 and 2007. The median total dose was 78 GyE (range, 70-83 GyE) over a median treatment period of 48 days (range, 38-53 days). Ten of the 19 patients were at clinical T Stage 3 or 4. Results: There were no cases in which treatment interruption was required because of radiation-induced esophagitis or hematologic toxicity. The overall 1- and 5-year actuarial survival rates for all 19 patients were 79.0% and 42.8%, respectively, and the median survival time was 31.5 months (95% limits: 16.7-46.3 months). Of the 19 patients, 17 (89%) showed a complete response within 4 months after completing treatment and 2 (11%) showed a partial response, giving a response rate of 100% (19/19). The 1- and 5-year local control rates for all 19 patients were 93.8% and 84.4%, respectively. Only 1 patient had late esophageal toxicity of Grade 3 at 6 months after hyperfractionated PBT. There were no other nonhematologic toxicities, including no cases of radiation pneumonia or cardiac failure of Grade 3 or higher. Conclusions: The results suggest that hyperfractionated PBT is safe and effective for patients with esophageal cancer. Further studies are needed to establish the appropriate role and treatment schedule for use of PBT for esophageal cancer.

  16. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  17. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

The use of Force, a parallel, portable FORTRAN for shared-memory parallel computers, is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  18. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  19. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine-grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message passing or one-sided communication, offer performance and scalability but they are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be specified by the programmer and hence managed. GA is related to global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. 
However, the Global Array programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving
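The get/compute/put pattern the toolkit encourages can be mimicked with a toy in-process class (this only mirrors the calling style; the real GA library distributes the array over many processes, and the method names below are loose, invented analogues of its get/put/accumulate operations):

```python
# Toy stand-in for a distributed global array: explicit transfers
# between "global" storage and a local buffer, as in the GA model.
class ToyGlobalArray:
    def __init__(self, n):
        self._data = [0.0] * n       # stands in for distributed storage

    def get(self, lo, hi):
        return self._data[lo:hi]     # copy a global section to a local buffer

    def put(self, lo, hi, buf):
        self._data[lo:hi] = buf      # write the local buffer back

    def acc(self, lo, hi, buf):
        for i, v in enumerate(buf):  # accumulate into the global section
            self._data[lo + i] += v

ga = ToyGlobalArray(8)
local = ga.get(0, 4)                 # fetch a patch into local storage
local = [x + 1.0 for x in local]     # compute locally (fast, no remote access)
ga.put(0, 4, local)                  # write back
ga.acc(2, 4, [10.0, 10.0])
print(ga._data)  # [1.0, 1.0, 11.0, 11.0, 0.0, 0.0, 0.0, 0.0]
```

The point of the pattern is exactly what the abstract describes: remote data is slower than local data, so the programmer explicitly stages sections into local storage, computes there, and writes back.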

  20. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. In as much as it seems clear that the application of such methods in nanotechnology will require powerful, highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to