Science.gov

Sample records for lines boost parallel

  1. Learning and Parallelization Boost Constraint Search

    ERIC Educational Resources Information Center

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  2. Development of a high speed parallel hybrid boost bearing

    NASA Technical Reports Server (NTRS)

    Winn, L. W.; Eusepi, M. W.

    1973-01-01

The analysis, design, and testing of the hybrid boost bearing are discussed. The hybrid boost bearing consists of a fluid film bearing coupled in parallel with a rolling element bearing. This coupling arrangement makes use of the inherent advantages of both the fluid film and rolling element bearings and at the same time minimizes their disadvantages and limitations. The analytical optimization studies that led to the final fluid film bearing design are reported. The bearing consisted of a centrifugally pressurized planar fluid film thrust bearing with oil feed through the shaft center. An analysis of the test ball bearing is also presented. The experimental determination of the hybrid bearing characteristics, obtained from individual bearing component tests and a combined hybrid bearing assembly, is discussed and compared to the analytically determined performance characteristics.

  3. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  4. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

Conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code; this problem is addressed here. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it lies sufficiently close to it) and intersection with a line (a point is on the line if it coincides with an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
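
The distance-to-a-line alternative maps naturally onto one pixel per processor: every pixel evaluates the same closed-form test independently, so runtime does not depend on line length or slope. A minimal sketch of that idea, using NumPy arrays to emulate the per-pixel SIMD evaluation (the function name and threshold are illustrative, not taken from the paper):

```python
import numpy as np

def draw_line(width, height, x0, y0, x1, y1, thickness=0.5):
    """Rasterize a segment by evaluating the same test at every pixel at
    once: a pixel is lit if it lies within `thickness` of the infinite
    line and its projection falls inside the segment."""
    ys, xs = np.mgrid[0:height, 0:width]          # one "processor" per pixel
    dx, dy = x1 - x0, y1 - y0
    length = np.hypot(dx, dy)
    # perpendicular distance from each pixel centre to the infinite line
    dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length
    # parametric position of each pixel's projection onto the segment
    t = ((xs - x0) * dx + (ys - y0) * dy) / (length * length)
    return (dist <= thickness) & (t >= 0.0) & (t <= 1.0)

img = draw_line(8, 8, 0, 0, 7, 7)                 # main diagonal of an 8x8 grid
```

Here `thickness` plays the role of "sufficiently close", and the clamp `0 <= t <= 1` restricts the infinite line to the segment between its endpoints.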

  5. Parallel line scanning ophthalmoscope for retinal imaging.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2015-11-15

A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of parallelism in illuminating the retina compared to traditional scanning laser ophthalmoscope systems utilizing scanning mirrors. The system operated at the shot-noise limit with a signal-to-noise ratio of 28 for an optical power of 100 μW measured at the cornea. To demonstrate the imaging capabilities of the system, the macula and the optic nerve head of a healthy volunteer were imaged. Confocal images show good contrast and lateral resolution over a 10°×10° field of view. PMID:26565868

  6. Electrowetting films on parallel line electrodes.

    PubMed

    Yeo, Leslie Y; Chang, Hsueh-Chia

    2006-01-01

A lubrication analysis is presented for the spreading dynamics of a high-permittivity polar dielectric liquid drop in an electric field sustained by parallel line electrode pairs separated by a distance R_e. The normal Maxwell stress, concentrated at the tip region near the apparent three-phase contact line, produces a negative capillary pressure that is responsible for pulling out a thin finger of liquid film ahead of the macroscopic drop, analogous to that obtained in self-similar gravity spreading. This front-running electrowetting film maintains a constant contact angle and volume as its front position advances in time t by the universal law 0.43 R_e (t/T_cap)^(1/3), independent of the drop dimension, surface tension, and wettability. T_cap = π² μ_l R_e / (8 ε₀ε_l V²) is the electrocapillary time scale, where μ_l is the liquid viscosity, ε₀ε_l the liquid permittivity, and V the applied voltage. The electrowetting film spreads much faster than the rest of the drop; after a short transient, the latter spreads over the electrowetting film by draining into it. By employing matched asymptotics, we are able to elucidate this unique mechanism, justified by the reasonable agreement with numerical and experimental results. Unlike the usual electrowetting-on-dielectric configuration, where the field singularity at the contact line produces a static change in the contact angle consistent with the Lippmann equation, we show that the parallel electrode configuration produces a bulk negative Maxwell pressure within the drop. This Maxwell pressure increases in magnitude toward the contact line due to field confinement and is responsible for a bulk pressure gradient that gives rise to a front-running spontaneous electrowetting film. PMID:16486159

  7. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

§ 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. …

  8. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

§ 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. …

  9. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

§ 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. …

  10. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

§ 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. …

  11. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

§ 23.393 Loads parallel to hinge line. (a) Control surfaces and supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. …

  12. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing pixel value calculations, balancing the load across the processors, and applying the results to the Z buffer efficiently in parallel require special virtual routing (sort computation) techniques developed by the author specifically for use on single-instruction multiple-data (SIMD) architectures.

  13. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  14. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  15. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  16. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  17. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12...

  18. VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN QUARRY AREA, FACING NORTHEAST - Granite Hill Plantation, Quarry No. 2, South side of State Route 16, 1.3 miles northeast of Sparta, Sparta, Hancock County, GA

  19. Parallel algorithms for line detection on a mesh

    SciTech Connect

Guerra, C.; Hambrusch, S. (Dept. of Computer Science)

    1989-02-01

The authors consider the problem of detecting lines in an n x n image on an n x n mesh of processors. They present two new and efficient parallel algorithms that detect lines by performing a Hough transform. Both algorithms perform only simple data movement operations over relatively short distances.
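
For context, the core of a Hough-transform line detector is an accumulator over (theta, rho) line parameters into which every edge pixel votes; on a mesh, the abstract's "simple data movement operations" shuttle such votes between processors. A sequential sketch of the voting step (illustrative only, not the authors' mesh algorithms):

```python
import numpy as np

def hough_accumulate(edge_img, n_theta=180):
    """Voting step of the Hough transform: each edge pixel (x, y) votes,
    for every candidate angle theta, for the line
    rho = x*cos(theta) + y*sin(theta) passing through it.
    Peaks in the accumulator correspond to detected lines."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))           # bound on |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1 # one vote per angle
    return acc, thetas, diag

# a perfect horizontal line y = 3 in a 10x10 image
img = np.zeros((10, 10), dtype=bool)
img[3, :] = True
acc, thetas, diag = hough_accumulate(img)
t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
```

All ten edge pixels vote for theta near π/2 (a horizontal line) at rho = 3, so the accumulator peak there reaches the full count of 10.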

  20. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

Versatile standardized pulse-modulation, nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series-switching buck regulator, (2) the pulse-modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  1. 15. ELEVATED CAMERA STAND, SHOWING LINE OF CAMERA STANDS PARALLEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. ELEVATED CAMERA STAND, SHOWING LINE OF CAMERA STANDS PARALLEL TO SLED TRACK. Looking west southwest down Camera Road. - Edwards Air Force Base, South Base Sled Track, Edwards Air Force Base, North of Avenue B, between 100th & 140th Streets East, Lancaster, Los Angeles County, CA

  2. Parallel inhomogeneity and the Alfven resonance. 1: Open field lines

    NASA Technical Reports Server (NTRS)

    Hansen, P. J.; Harrold, B. G.

    1994-01-01

    In light of a recent demonstration of the general nonexistence of a singularity at the Alfven resonance in cold, ideal, linearized magnetohydrodynamics, we examine the effect of a small density gradient parallel to uniform, open ambient magnetic field lines. To lowest order, energy deposition is quantitatively unaffected but occurs continuously over a thickened layer. This effect is illustrated in a numerical analysis of a plasma sheet boundary layer model with perfectly absorbing boundary conditions. Consequences of the results are discussed, both for the open field line approximation and for the ensuing closed field line analysis.

  3. Parallel line analysis: multifunctional software for the biomedical sciences.

    PubMed

    Swank, P R; Lewis, M L; Damron, K L; Morrison, D R

    1990-10-01

    An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time. PMID:2289387
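
The statistical core of a parallel-line assay is a fit of the standard and unknown dose-response data to two lines constrained to a common slope; the log potency ratio is then the horizontal offset between them. A minimal sketch of that calculation (illustrative only; the FORTRAN program described above additionally runs the covariance-based parallelism and linearity tests):

```python
import numpy as np

def parallel_line_potency(log_dose_s, resp_s, log_dose_u, resp_u):
    """Fit standard and unknown responses with separate intercepts but a
    common slope (the 'parallel lines'), then read the potency ratio off
    the horizontal displacement between the two fitted lines."""
    n_s = len(log_dose_s)
    X = np.zeros((n_s + len(log_dose_u), 3))
    X[:n_s, 0] = 1.0                  # intercept, standard
    X[n_s:, 1] = 1.0                  # intercept, unknown
    X[:, 2] = np.concatenate([log_dose_s, log_dose_u])  # shared slope column
    y = np.concatenate([resp_s, resp_u])
    a_s, a_u, b = np.linalg.lstsq(X, y, rcond=None)[0]
    log_ratio = (a_u - a_s) / b       # horizontal shift between the lines
    return 10 ** log_ratio

# synthetic assay in which the unknown is exactly twice as potent
d = np.log10([1.0, 2.0, 4.0, 8.0])
potency = parallel_line_potency(d, 10 + 5 * d, d, 10 + 5 * (d + np.log10(2)))
```

With noise-free synthetic data the fit is exact and the recovered potency ratio is 2; real assay data would also need the validity checks before the ratio is trusted.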

  4. Antiretroviral Therapy and Efficacy After Virologic Failure on First-line Boosted Protease Inhibitor Regimens

    PubMed Central

    Zheng, Yu; Hughes, Michael D.; Lockman, Shahin; Benson, Constance A.; Hosseinipour, Mina C.; Campbell, Thomas B.; Gulick, Roy M.; Daar, Eric S.; Sax, Paul E.; Riddler, Sharon A.; Haubrich, Richard; Salata, Robert A.; Currier, Judith S.

    2014-01-01

    Background. Virologic failure (VF) on a first-line ritonavir-boosted protease inhibitor (PI/r) regimen is associated with low rates of resistance, but optimal management after failure is unknown. Methods. The analysis included participants in randomized trials who experienced VF on a first-line regimen of PI/r plus 2 nucleoside reverse transcriptase inhibitors (NRTIs) and had at least 24 weeks of follow-up after VF. Antiretroviral management and virologic suppression (human immunodeficiency virus type 1 [HIV-1] RNA <400 copies/mL) after VF were assessed. Results. Of 209 participants, only 1 participant had major PI-associated treatment-emergent mutations at first-line VF. The most common treatment approach after VF (66%) was to continue the same regimen. The virologic suppression rate 24 weeks after VF was 64% for these participants, compared with 72% for those who changed regimens (P = .19). Participants remaining on the same regimen had lower NRTI resistance rates (11% vs 30%; P = .003) and higher CD4+ cell counts (median, 275 vs 213 cells/µL; P = .005) at VF than those who changed. Among participants remaining on their first-line regimen, factors at or before VF significantly associated with subsequent virologic suppression were achieving HIV-1 RNA <400 copies/mL before VF (odds ratio [OR], 3.39 [95% confidence interval {CI}, 1.32–8.73]) and lower HIV-1 RNA at VF (OR for <10 000 vs ≥10 000 copies/mL, 3.35 [95% CI, 1.40–8.01]). Better adherence after VF was also associated with subsequent suppression (OR for <100% vs 100%, 0.38 [95% CI, .15–.97]). For participants who changed regimens, achieving HIV-1 RNA <400 copies/mL before VF also predicted subsequent suppression. Conclusions. For participants failing first-line PI/r with no or limited drug resistance, remaining on the same regimen is a reasonable approach. Improving adherence is important to subsequent treatment success. PMID:24842909

  5. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    SciTech Connect

    Guo Zehua; Tang Xianzhu

    2012-06-15

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

  6. Migration Effects of Parallel Genetic Algorithms on Line Topologies of Heterogeneous Computing Resources

    NASA Astrophysics Data System (ADS)

    Gong, Yiyuan; Guan, Senlin; Nakamura, Morikazu

This paper investigates migration effects of parallel genetic algorithms (GAs) on a line topology of heterogeneous computing resources. The evolution process of parallel GAs is evaluated experimentally on two arrangements of the heterogeneous computing resources: ascending and descending order. Migration effects are evaluated from the viewpoints of scalability, chromosome diversity, migration frequency, and solution quality. The results reveal that the performance of parallel GAs depends strongly on the design of chromosome migration, which must take into account the arrangement of the heterogeneous computing resources, the migration frequency, and so on. The results provide a reference scheme for implementing parallel GAs on heterogeneous computing resources.
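
As background, the migration operator in an island-model parallel GA periodically moves good individuals between subpopulations; on a line topology each island exchanges only with its neighbors. A toy sketch of one one-directional migration step (the paper's actual scheme, direction, and frequency are not specified here):

```python
def migrate_line(islands, k=1):
    """One migration step on a line topology: each island copies its best
    k individuals to its right-hand neighbour, which replaces its worst k.
    Fitness here is simply the individual's value (maximization)."""
    for i in range(len(islands) - 1):
        best = sorted(islands[i], reverse=True)[:k]
        ranked = sorted(islands[i + 1], reverse=True)
        islands[i + 1] = ranked[:len(ranked) - k] + best
    return islands

# three islands of integer 'individuals' arranged on a line
islands = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
migrate_line(islands)
```

Because the step sweeps left to right, good genes can travel the whole line in a single pass in this direction, while information flows the other way only one island per step; this asymmetry is one reason resource arrangement matters on heterogeneous lines.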

  7. Parallel line raster eliminates ambiguities in reading timing of pulses less than 500 microseconds apart

    NASA Technical Reports Server (NTRS)

    Horne, A. P.

    1966-01-01

    Parallel horizontal line raster is used for precision timing of events occurring less than 500 microseconds apart for observation of hypervelocity phenomena. The raster uses a staircase vertical deflection and eliminates ambiguities in reading timing of pulses close to the end of each line.

  8. Highly parallel vector visualization using line integral convolution

    SciTech Connect

    Cabral, B.; Leedom, C.

    1995-12-01

    Line Integral Convolution (LIC) is an effective imaging operator for visualizing large vector fields. It works by blurring an input image along local vector field streamlines yielding an output image. LIC is highly parallelizable because it uses only local read-sharing of input data and no write-sharing of output data. Both coarse- and fine-grained implementations have been developed. The coarse-grained implementation uses a straightforward row-tiling of the vector field to parcel out work to multiple CPUs. The fine-grained implementation uses a series of image warps and sums to compute the LIC algorithm across the entire vector field at once. This is accomplished by novel use of high-performance graphics hardware texture mapping and accumulation buffers.
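
A direct, unoptimized rendering of the LIC operator makes its parallelism obvious: each output pixel depends only on read-shared input along its own streamline. A simplified sketch (fixed unit steps and nearest-neighbor sampling instead of proper streamline integration):

```python
import numpy as np

def lic(noise, vx, vy, length=10):
    """Line integral convolution: average the input noise texture along
    the local streamline through each pixel, in both directions.
    Output pixels only read-share the input, so the outer loops
    parallelize trivially."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):           # integrate forward, then backward
                x, y = float(j), float(i)
                for _ in range(length):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    total += noise[ii, jj]
                    count += 1
                    v = np.hypot(vx[ii, jj], vy[ii, jj])
                    if v == 0:
                        break                  # critical point: stop integrating
                    x += sign * vx[ii, jj] / v # unit step along the field
                    y += sign * vy[ii, jj] / v
            out[i, j] = total / count
    return out

rng = np.random.default_rng(0)
noise = rng.random((8, 8))
# uniform horizontal field: the convolution blurs each row toward its mean
img = lic(noise, np.ones((8, 8)), np.zeros((8, 8)))
```

The coarse-grained implementation in the paper corresponds to splitting the outer loop into row tiles; the fine-grained one replaces the per-pixel loops with whole-image warps and sums on graphics hardware.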

  9. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids locally optimal solutions. This article considers these two decision problems jointly for mass production lines with serial-parallel workplaces. The study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to formulate a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  10. Emission Line Galaxies in the STIS Parallel Survey. 1: Observations and Data Analysis

    NASA Technical Reports Server (NTRS)

    Teplitz, Harry I.; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Heap, Sara R.; Lindler, Don J.; Rhodes, Jason; Woodgate, Bruce E.

    2002-01-01

    In the first three years of operation STIS obtained slitless spectra of approximately 2500 fields in parallel to prime HST observations as part of the STIS Parallel Survey (SPS). The archive contains approximately 300 fields at high galactic latitude (|b| greater than 30) with spectroscopic exposure times greater than 3000 seconds. This sample contains 220 fields (excluding special regions and requiring a consistent grating angle) observed between 6 June 1997 and 21 September 2000, with a total survey area of approximately 160 square arcminutes. At this depth, the SPS detects an average of one emission line galaxy per three fields. We present the analysis of these data, and the identification of 131 low to intermediate redshift galaxies detected by optical emission lines. The sample contains 78 objects with emission lines that we infer to be redshifted [OII]3727 emission at 0.43 < z < 1.7. The comoving number density of these objects is comparable to that of Halpha-emitting galaxies in the NICMOS parallel observations. One quasar and three probable Seyfert galaxies are detected. Many of the emission-line objects show morphologies suggestive of mergers or interactions. The reduced data are available upon request from the authors.

  11. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and high resource requirements. Field-Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration with tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an associated FPGA architecture framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to improve computational accuracy by refining the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture, optimized for spatial parallelism, ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. The framework was evaluated on an ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, where it calculates straight line parameters in 15.59 ms on average per frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource usage, speed, and robustness. PMID:23867746

  12. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and high resource requirements. Field-Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration with tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an associated FPGA architecture framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to improve computational accuracy by refining the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture, optimized for spatial parallelism, ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. The framework was evaluated on an ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, where it calculates straight line parameters in 15.59 ms on average per frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource usage, speed, and robustness. PMID:23867746

  13. High-speed, digitally refocused retinal imaging with line-field parallel swept source OCT

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-03-01

MHz OCT mitigates the undesired influence of motion artifacts during retinal assessment, but in state-of-the-art point-scanning OCT this comes at the price of increased system complexity. By changing the paradigm from scanning to parallel OCT for in vivo retinal imaging, the three-dimensional (3D) acquisition time is reduced without a trade-off between speed, sensitivity, and technological requirements. Furthermore, the intrinsic phase stability allows digital refocusing methods to be applied, increasing the in-focus imaging depth range. Line-field parallel interferometric imaging (LPSI) utilizes a commercially available swept source, a single-axis galvo scanner, and a line scan camera to record 3D data at up to a 1 MHz A-scan rate. Besides line-focus illumination and parallel detection, we mitigate the need for high-speed sensor and laser technology through holographic full-range imaging, which allows the imaging speed to be increased by low sampling of the optical spectrum. High B-scan rates of up to 1 kHz further allow label-free optical angiography in 3D by calculating the inter-B-scan speckle variance. We achieve a detection sensitivity of 93.5 (96.5) dB at an equivalent A-scan rate of 1 (0.6) MHz and present 3D in vivo retinal structural and functional imaging utilizing digital refocusing. Our results demonstrate for the first time competitive imaging sensitivity, resolution, and speed with a parallel OCT modality. LPSI is in fact currently the fastest OCT device applied to retinal imaging, operating at a central wavelength window around 800 nm with a detection sensitivity higher than 93.5 dB.
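
The speckle-variance angiography step mentioned above reduces, per pixel, to the variance of intensity across repeated B-scans of the same location: static tissue stays correlated between scans (low variance) while flowing blood decorrelates (high variance). A minimal sketch with synthetic data:

```python
import numpy as np

def speckle_variance(bscans):
    """Inter-B-scan speckle variance: per-pixel variance of intensity
    across N repeated B-scans of the same location. Static tissue stays
    correlated (low variance); flowing blood decorrelates (high variance)."""
    return np.var(np.asarray(bscans), axis=0)

# 8 repeated 4x4 B-scans: constant background plus one fluctuating pixel
rng = np.random.default_rng(1)
frames = np.tile(np.full((4, 4), 5.0), (8, 1, 1))
frames[:, 2, 2] += rng.normal(0.0, 1.0, size=8)   # simulated flow pixel
sv = speckle_variance(frames)
```

Thresholding the variance map then yields the 3D angiogram; the number of repeated B-scans trades acquisition time against variance estimate quality.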

  14. Parlin, a general microcomputer program for parallel-line analysis of bioassays.

    PubMed

    Jesty, J; Godfrey, H P

    1986-04-01

Commonly used manual and calculator methods for analysis of clinically important parallel-line bioassays are subject to operator bias and provide neither confidence limits for the results nor any indication of their validity. To remedy this, the authors have written a general program for statistical analysis of these bioassays for the IBM Personal Computer and its compatibles. The program has been used to analyze bioassays for specific coagulation factors and inflammatory lymphokines and radioimmunoassays for prostaglandins. The program offers a choice of no transform, logarithmic, or logit transformation of the data, which are fitted to parallel lines for standard and unknown. It analyzes the fit for parallelism and linearity with an F test, and calculates the best estimate of the result and its 95% confidence limits. Comparison of results calculated by PARLIN with those previously obtained manually shows excellent correlation (r > 0.99). Results obtained using PARLIN are quickly available with current assay techniques and provide a complete evaluation of the bioassay at no increase in cost. PMID:3456698

  15. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
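
The three-phase pattern described in the claim can be illustrated by tagging each node of a mesh with the phase in which the broadcast reaches it. A toy model (not the patented implementation):

```python
import itertools

def line_plane_broadcast(dims, root):
    """Tag each node of an nx x ny x nz mesh with the phase in which the
    line-plane broadcast reaches it: phase 1 travels along the root's
    first-dimension line, phase 2 fans that line out into a plane along
    the second dimension, and phase 3 fills the volume along the third."""
    nx, ny, nz = dims
    rx, ry, rz = root
    phase = {}
    for x, y, z in itertools.product(range(nx), range(ny), range(nz)):
        if (x, y, z) == root:
            phase[(x, y, z)] = 0
        elif y == ry and z == rz:
            phase[(x, y, z)] = 1   # reached along the first dimension
        elif z == rz:
            phase[(x, y, z)] = 2   # reached along the second dimension
        else:
            phase[(x, y, z)] = 3   # reached along the third dimension
    return phase

p = line_plane_broadcast((4, 4, 4), (0, 0, 0))
```

Every node is reached in at most three phases, and within each phase all sends along an axis can proceed concurrently, which is what makes the scheme efficient on a torus or mesh network.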

  16. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  17. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamics (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  18. Compact Bandpass Filter Based on Parallel-coupled Lines and Quasi-lumped Structure

    NASA Astrophysics Data System (ADS)

    Ding, Chen; Li, Jiao; Wei, Feng; Shi, Xiao-Wei

    2016-01-01

    A compact microstrip bandpass filter (BPF) using quarter-wavelength resonators is proposed based on the parallel-coupled lines (PCLs) and quasi-lumped structure. A method based on the matrix and network transformation of cascaded-quadruplet (CQ) filters is investigated and successfully applied to the BPF design. The design formulas for the proposed BPF are analytically developed. Specifically, in order to verify the feasibility of the proposed method, three BPFs centered at 1.575 GHz with different fractional bandwidths (FBWs) are designed. Good agreement between the simulated and measured results is observed. Moreover, the designed filters can achieve a wide stopband.

  19. Acceleration on stretched meshes with line-implicit LU-SGS in parallel implementation

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-02-01

    The implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver is combined with the line-implicit technique to improve convergence on the very anisotropic grids necessary for resolving the boundary layers. The computational fluid dynamics code used is Edge, a Navier-Stokes flow solver for unstructured grids based on a dual grid and edge-based formulation. Multigrid acceleration is applied to accelerate the convergence to steady state. LU-SGS works in parallel and gives better linear scaling with respect to the number of processors than the explicit scheme. The ordering techniques investigated have shown that node numbering does influence the convergence and that the orderings from Delaunay and advancing-front generation were among the best tested. 2D Reynolds-averaged Navier-Stokes computations have clearly shown the strong efficiency of our novel line-implicit LU-SGS approach, which is four times faster than implicit LU-SGS and line-implicit Runge-Kutta. Implicit LU-SGS for Euler and line-implicit LU-SGS for Reynolds-averaged Navier-Stokes are at least twice as fast as explicit and line-implicit Runge-Kutta, respectively, for 2D and 3D cases. For 3D Reynolds-averaged Navier-Stokes, multigrid did not accelerate the convergence and therefore may not be needed.

  20. Line-field parallel swept source MHz OCT for structural and functional retinal imaging.

    PubMed

    Fechtig, Daniel J; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M; Drexler, Wolfgang; Leitgeb, Rainer A

    2015-03-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation. PMID:25798298

  1. Line-field parallel swept source MHz OCT for structural and functional retinal imaging

    PubMed Central

    Fechtig, Daniel J.; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-01-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation. PMID:25798298

  2. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from that for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an
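    Generalization (1), the proportionality of intersection probability to the ratio of target size to line spacing, can be checked with a Buffon's-needle-style Monte Carlo for a line-segment target under parallel-line search (a sketch under the uniform-orientation assumption; for a segment no longer than the spacing the exact probability is 2L/(πs)).

```python
import random, math

def crossing_probability(length, spacing, trials=200_000, seed=42):
    """Monte Carlo estimate of the probability that a randomly placed,
    randomly oriented line-segment target of the given length crosses a
    parallel-line search pattern with the given line spacing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        center = rng.uniform(0.0, spacing)   # position between two adjacent lines
        theta = rng.uniform(0.0, math.pi)    # orientation of the segment
        half_proj = (length / 2.0) * math.sin(theta)  # half-extent perpendicular to the lines
        # The segment crosses a line if its perpendicular extent reaches one.
        if center < half_proj or center > spacing - half_proj:
            hits += 1
    return hits / trials
```

    For length much smaller than the spacing the estimate tracks (2/π)·(L/s), i.e. it is proportional to the size-to-spacing ratio, as the first generalization states.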

  3. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Ginner, Laurin; Kumar, Abhishek; Pircher, Michael; Schmoll, Tilman; Wurster, Lara M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution and 3D acquisition rates in order to resolve individual photoreceptor cells. The phase stability of the system is manifested by the high phase-correlation across the lateral FOV on the level of individual photoreceptors. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line-focus is formed via a cylindrical lens and the back-propagated light from sample and reference arm is detected by a 2D array after passing a diffraction grating. The spot size of the line-focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results as compared to respective point scanning modalities and facilitates utilizing software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  4. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ginner, Laurin; Fechtig, Daniel J.; Schmoll, Tilman; Wurster, Lara M.; Pircher, Michael; Leitgeb, Rainer A.; Drexler, Wolfgang

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution and 3D acquisition rates in order to resolve individual photoreceptor cells. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line-focus is formed via a cylindrical lens and the back-propagated light from sample and reference arm is detected by a 2D array after passing a diffraction grating. The spot size of the line-focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results as compared to respective point scanning modalities and facilitates utilizing software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  5. An on-line learning tracking of non-rigid target combining multiple-instance boosting and level set

    NASA Astrophysics Data System (ADS)

    Chen, Mingming; Cai, Jingju

    2013-10-01

    Visual tracking algorithms based on online boosting generally use a rectangular bounding box to represent the position of the target, while the actual shape of the target is usually irregular. This causes the classifier to learn features of the non-target parts inside the rectangle, which degrades its performance and leads to drift. To avoid the limitations of the bounding box, we propose a novel tracking-by-detection algorithm incorporating level set segmentation, which ensures the classifier learns only the features of the real target area within the tracking box. Because the shape of the target changes only slightly between two adjacent frames, and the current level set algorithm avoids re-initialization of the signed distance function, only a few iterations are needed to converge to the target contour in the next frame. We also improve the level set energy function so that the zero level set is less likely to converge to a false contour. In addition, we use gradient boosting to improve the original multiple-instance learning (MIL) algorithm, as in the WMILtracker, which greatly speeds up the tracker. Our algorithm outperforms the original MILtracker in both speed and precision. Compared with the WMILtracker, our algorithm runs at almost the same speed, but avoids the drift caused by background learning, so its precision is better.

  6. Parametric analysis of hollow conductor parallel and coaxial transmission lines for high frequency space power distribution

    NASA Technical Reports Server (NTRS)

    Jeffries, K. S.; Renz, D. D.

    1984-01-01

    A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high power (100 kW or more) space missions. Large diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
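    For orientation, the ideal coaxial-line formulas underlying such parametric curves can be sketched as follows (vacuum dielectric assumed; the paper's hollow-conductor geometry adds finite-thickness and skin-effect corrections that this sketch omits):

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
EPS0 = 8.854187817e-12      # vacuum permittivity, F/m

def coax_per_length(a, b):
    """Per-unit-length inductance (H/m) and capacitance (F/m) of an ideal
    coaxial line with inner radius a and outer radius b, vacuum dielectric."""
    L = MU0 / (2.0 * math.pi) * math.log(b / a)
    C = 2.0 * math.pi * EPS0 / math.log(b / a)
    return L, C
```

    A useful sanity check is that 1/sqrt(L'C') recovers the speed of light for a vacuum line; close spacing (b/a near 1) drives the inductance down and the capacitance up, which is why the closely spaced coaxial configurations studied here have such low inductance per metre.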

  7. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

    To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach using acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals. This makes the PA signals reach a single-element transducer at different times. As a result, they can be properly received by single-channel DAQ electronics. However, due to their small diameter and fragility, using optical fiber as acoustic delay lines poses a number of challenges in the design, construction and packaging of the PADLs, thereby limiting their performances and use in real imaging applications. In this paper, we report the development of new silicon PADLs, which are directly made from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs were assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced with a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches well with the imaging target. Therefore, the silicon PADL array can provide a 16× reduction in the ultrasound DAQ channels for real-time PAT.
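    The time-delay multiplexing principle used here, delayed channels arriving at one transducer in non-overlapping windows, can be sketched as follows; the function names and sample rates are illustrative, not from the paper's hardware.

```python
import numpy as np

def multiplex(signals, delays, fs, total_time):
    """Sum per-channel signals after applying each channel's acoustic delay,
    mimicking what a single-element transducer records behind a PADL array."""
    n = int(round(total_time * fs))
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        start = int(round(d * fs))
        out[start:start + len(sig)] += sig
    return out

def demultiplex(recorded, delays, sig_len, fs):
    """Recover the channels by slicing the known non-overlapping time windows."""
    channels = []
    for d in delays:
        start = int(round(d * fs))
        channels.append(recorded[start:start + sig_len])
    return channels
```

    Recovery is exact as long as the delay increment between adjacent channels exceeds the PA signal duration, which is the design constraint the delay-line lengths must satisfy.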

  8. Handheld photoacoustic tomography probe built using optical-fiber parallel acoustic delay lines

    NASA Astrophysics Data System (ADS)

    Cho, Young; Chang, Cheng-Chung; Yu, Jaesok; Jeon, Mansik; Kim, Chulhong; Wang, Lihong V.; Zou, Jun

    2014-08-01

    The development of the first miniaturized parallel acoustic delay line (PADL) probe for handheld photoacoustic tomography (PAT) is reported. Using fused-silica optical fibers with low acoustic attenuation, we constructed two arrays of eight PADLs. Precision laser micromachining was conducted to produce robust and accurate mechanical support and alignment structures for the PADLs, with minimal acoustic distortion and interchannel coupling. The 16 optical-fiber PADLs, each with a different time delay, were arranged to form one input port and two output ports. A handheld PADL probe was constructed using two single-element transducers and two data acquisition channels (equal to a channel reduction ratio of 8∶1). Photoacoustic (PA) images of a black-ink target embedded in an optically scattering phantom were successfully acquired. After traveling through the PADLs, the eight channels of differently time-delayed PA signals reached each single-element ultrasonic transducer in a designated nonoverlapping time series, allowing clear signal separation for PA image reconstruction. Our results show that the PADL technique and the handheld probe can potentially enable real-time PAT, while significantly reducing the complexity and cost of the ultrasound receiver system.

  9. Handheld photoacoustic tomography probe built using optical-fiber parallel acoustic delay lines.

    PubMed

    Cho, Young; Chang, Cheng-Chung; Yu, Jaesok; Jeon, Mansik; Kim, Chulhong; Wang, Lihong V; Zou, Jun

    2014-08-01

    The development of the first miniaturized parallel acoustic delay line (PADL) probe for handheld photoacoustic tomography (PAT) is reported. Using fused-silica optical fibers with low acoustic attenuation, we constructed two arrays of eight PADLs. Precision laser micromachining was conducted to produce robust and accurate mechanical support and alignment structures for the PADLs, with minimal acoustic distortion and interchannel coupling. The 16 optical-fiber PADLs, each with a different time delay, were arranged to form one input port and two output ports. A handheld PADL probe was constructed using two single-element transducers and two data acquisition channels (equal to a channel reduction ratio of 8∶1). Photoacoustic (PA) images of a black-ink target embedded in an optically scattering phantom were successfully acquired. After traveling through the PADLs, the eight channels of differently time-delayed PA signals reached each single-element ultrasonic transducer in a designated nonoverlapping time series, allowing clear signal separation for PA image reconstruction. Our results show that the PADL technique and the handheld probe can potentially enable real-time PAT, while significantly reducing the complexity and cost of the ultrasound receiver system. PMID:25104413

  10. A penny saved is ten dollars earned: fifteen ways to lower overhead and boost the bottom line.

    PubMed

    Pollock, Kim

    2013-01-01

    As expenses rise and reimbursements remain flat or decline, it's more important than ever to scrutinize practice expenditures on a regular basis. This article provides tips for evaluating individual line items on the profit and loss statement and identifying expenses that can be reduced without sacrificing quality of care or patient satisfaction. When aggregated, even seemingly small reductions add up to big annual savings. PMID:23767128

  11. Analytical Solution for Two Parallel Traces on PCB in the Time Domain with Application to Hairpin Delay Lines

    NASA Astrophysics Data System (ADS)

    Xiao, Fengchao; Murano, Kimitoshi; Kami, Yoshio

    In this paper the time-domain analysis of two parallel traces is investigated. First, the telegrapher's equations for transmission line are applied to the parallel traces on printed circuit board (PCB), and are solved by using the mode decomposition technique. The time-domain solutions are then obtained by using the inverse Laplace transform. Although the Fourier-transform technique is also applicable for this problem, the solution is given numerically. Contrarily, the inverse Laplace transform successfully leads to an analytical expression for the transmission characteristics. The analytical expression is represented by series, which clearly explains the coupling mechanism. The analytical expression for the fundamental section of a meander delay line is investigated in detail. The analytical solution is validated by measurements, and the characteristics of the distortions in the output waveforms of meander delay lines due to the crosstalk are also investigated.
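    The mode-decomposition step can be sketched for the lossless case (a standard textbook form; the paper's analysis retains losses and applies the inverse Laplace transform to the resulting modal solutions):

```latex
% Coupled telegrapher's equations for two identical lossless parallel traces:
-\frac{\partial}{\partial z}
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix} L & L_m \\ L_m & L \end{pmatrix}
\frac{\partial}{\partial t}
\begin{pmatrix} i_1 \\ i_2 \end{pmatrix},
\qquad
-\frac{\partial}{\partial z}
\begin{pmatrix} i_1 \\ i_2 \end{pmatrix}
=
\begin{pmatrix} C & -C_m \\ -C_m & C \end{pmatrix}
\frac{\partial}{\partial t}
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.
% Even/odd (common/differential) modes decouple the system:
v_{e,o} = \tfrac{1}{2}\,(v_1 \pm v_2), \qquad
L_{e,o} = L \pm L_m, \qquad
C_{e,o} = C \mp C_m,
% giving two independent lines with modal velocities and impedances
v_{p\,e,o} = \frac{1}{\sqrt{L_{e,o}\,C_{e,o}}}, \qquad
Z_{e,o} = \sqrt{\frac{L_{e,o}}{C_{e,o}}}.
```

    The difference between the even- and odd-mode velocities is what produces the crosstalk-induced distortion in the meander delay-line output waveforms analyzed in the paper.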

  12. A handheld optical fiber parallel acoustic delay line (PADL) probe for photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Cho, Young; Chang, Cheung-Chung; Jeon, Mansik; Kim, Chulhong; Wang, Lihong V.; Zou, Jun

    2014-03-01

    In current photoacoustic tomography (PAT), 1-D or 2-D ultrasound arrays and multi-channel data acquisition (DAQ) electronics are used to detect the photoacoustic signals simultaneously for "real-time" image construction. However, as the number of transducer elements and DAQ channels increases, the construction and operation of the ultrasound receiving system become complex and costly. This situation can be addressed by using parallel acoustic delay lines (PADLs) to create true time delays in multiple PA signal channels. The time-delayed PA signals reach the ultrasound transducer at different times and therefore can be received by one single-element transducer without mixing with each other. In this paper, we report the development of the first miniaturized PADL probe suitable for handheld operation. Fused-silica optical fibers with low acoustic attenuation were used to construct the 16 PADLs with specific time delays. The handheld probe structure was fabricated using a precision laser-micromachining process to provide robust mechanical support and accurate alignment of the PADLs with minimal acoustic distortion and inter-channel coupling. The 16 optical-fiber PADLs were arranged to form one input port and two output ports. Photoacoustic imaging of a black-ink target embedded in an optically scattering phantom was successfully conducted using the handheld PADL probe with two single-element transducers and two DAQ channels (equal to a channel reduction ratio of 8:1). Our results show that the PADL technique and the handheld probe could provide a promising solution for real-time PAT with significantly reduced complexity and cost of the ultrasound receiver system.

  13. The new moon illusion and the role of perspective in the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2015-01-01

    In the new moon illusion, the sun does not appear to be in a direction perpendicular to the boundary between the lit and dark sides of the moon, and aircraft jet trails appear to follow curved paths across the sky. In both cases, lines that are physically straight and parallel to the horizon appear to be curved. These observations prompted us to investigate the neglected question of how we are able to judge the straightness and parallelism of extended lines. To do this, we asked observers to judge the 2-D alignment of three artificial "stars" projected onto the dome of the Saint Petersburg Planetarium that varied in both their elevation and their separation in horizontal azimuth. The results showed that observers make substantial, systematic errors, biasing their judgments away from the veridical great-circle locations and toward equal-elevation settings. These findings further demonstrate that whenever information about the distance of extended lines or isolated points is insufficient, observers tend to assume equidistance, and as a consequence, their straightness judgments are biased toward the angular separation of straight and parallel lines. PMID:25239097

  14. Parallelism at CERN: real-time and off-line applications in the GP-MIMD2 project

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo

    1997-02-01

    A wide range of general-purpose high-energy physics applications, ranging from Monte Carlo simulation to data acquisition, from interactive data analysis to on-line filtering, have been ported, or developed, and run in parallel on CERN's large IBM SP-2 and Meiko CS-2 multi-processor machines. The ESPRIT project GP-MIMD2 has been a catalyst for the interest in parallel computing at CERN. The project provided the 128-processor Meiko CS-2 system that is now successfully integrated in the CERN computing environment. The CERN experiment NA48 has been involved in the GP-MIMD2 project since the beginning. NA48 physicists run, as part of their day-to-day work, simulation and analysis programs parallelized using the Message Passing Interface MPI. The CS-2 is also a vital component of the experiment's data acquisition system and will be used to calibrate in real time the 13,000-channel liquid-krypton calorimeter.

  15. Parallel heat flux and flow acceleration in open field line plasmas with magnetic trapping

    SciTech Connect

    Guo, Zehua; Tang, Xian-Zhu; McDevitt, Chris

    2014-10-15

    The magnetic field strength modulation in a tokamak scrape-off layer (SOL) provides both flux expansion next to the divertor plates and magnetic trapping in a large portion of the SOL. Previously, we have focused on a flux expander with long mean-free-path, motivated by the high temperature and low density edge anticipated for an absorbing boundary enabled by liquid lithium surfaces. Here, the effects of magnetic trapping and a marginal collisionality on parallel heat flux and parallel flow acceleration are examined. The various transport mechanisms are captured by kinetic simulations in a simple but representative mirror-expander geometry. The observed parallel flow acceleration is interpreted and elucidated with a modified Chew-Goldberger-Low model that retains temperature anisotropy and finite collisionality.

  16. Resolving magnetic field line stochasticity and parallel thermal transport in MHD simulations

    SciTech Connect

    Nishimura, Y.; Callen, J.D.; Hegna, C.C.

    1998-12-31

    Heat transport along braided, or chaotic, magnetic field lines is key to understanding the disruptive phase of tokamak operation, both the major disruption and the internal disruption (sawtooth oscillation). Recent sawtooth experiments in the Tokamak Fusion Test Reactor (TFTR) have indicated that magnetic field line stochasticity in the vicinity of the q = 1 inversion radius plays an important role in rapid changes in the magnetic field structure and the resultant thermal transport. In this study, the characteristic Lyapunov exponents and spatial correlations of field-line behavior are calculated to extract the characteristic scale length of the microscopic magnetic field structure (which is important for net radial global transport). These statistical values are used to model the effect of finite thermal transport along magnetic field lines in a physically consistent manner.
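    The Lyapunov-exponent calculation mentioned here can be illustrated on the Chirikov standard map, a common model of a stochastic field-line map (illustrative only; the study works with the actual MHD field-line map, not the standard map):

```python
import math

def standard_map(theta, p, k):
    """One iteration of the Chirikov standard map, a simple model for a
    magnetic field-line map in a stochastic layer."""
    p_new = (p + k * math.sin(theta)) % (2.0 * math.pi)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

def lyapunov_exponent(k, n_iter=20_000, d0=1e-8):
    """Estimate the largest Lyapunov exponent from the growth rate of the
    separation between two nearby field lines, renormalized each step."""
    t1, p1 = 1.0, 1.0
    t2, p2 = t1 + d0, p1
    total = 0.0
    for _ in range(n_iter):
        t1, p1 = standard_map(t1, p1, k)
        t2, p2 = standard_map(t2, p2, k)
        # Shortest separation on the torus (wrap differences into [-pi, pi)).
        dt = (t2 - t1 + math.pi) % (2.0 * math.pi) - math.pi
        dp = (p2 - p1 + math.pi) % (2.0 * math.pi) - math.pi
        d = math.hypot(dt, dp)
        total += math.log(d / d0)
        # Renormalize the separation back to d0 along the current direction.
        t2 = t1 + d0 * dt / d
        p2 = p1 + d0 * dp / d
    return total / n_iter
```

    For strong stochasticity (k well above 1) the exponent approaches ln(k/2), and its inverse gives the characteristic correlation length along the field line of the kind the study extracts.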

  17. Quantitative Profiling of Protein Tyrosine Kinases in Human Cancer Cell Lines by Multiplexed Parallel Reaction Monitoring Assays.

    PubMed

    Kim, Hye-Jung; Lin, De; Lee, Hyoung-Joo; Li, Ming; Liebler, Daniel C

    2016-02-01

    Protein tyrosine kinases (PTKs) play key roles in cellular signal transduction, cell cycle regulation, cell division, and cell differentiation. Dysregulation of PTK-activated pathways, often by receptor overexpression, gene amplification, or genetic mutation, is a causal factor underlying numerous cancers. In this study, we have developed a parallel reaction monitoring-based assay for quantitative profiling of 83 PTKs. The assay detects 308 proteotypic peptides from 54 receptor tyrosine kinases and 29 nonreceptor tyrosine kinases in a single run. Quantitative comparisons were based on the labeled reference peptide method. We implemented the assay in four cell models: 1) a comparison of proliferating versus epidermal growth factor-stimulated A431 cells; 2) a comparison of SW480Null (mutant APC) and SW480APC (APC restored) colon tumor cell lines; 3) a comparison of 10 colorectal cancer cell lines with different genomic abnormalities; and 4) lung cancer cell lines with either susceptibility (11-18) or acquired resistance (11-18R) to the epidermal growth factor receptor tyrosine kinase inhibitor erlotinib. We observed distinct PTK expression changes that were induced by stimuli, genomic features or drug resistance, which were consistent with previous reports. However, most of the measured expression differences were novel observations. For example, acquired resistance to erlotinib in the 11-18 cell model was associated not only with previously reported up-regulation of MET, but also with up-regulation of FLK2 and down-regulation of LYN and PTK7. Immunoblot analyses and shotgun proteomics data were highly consistent with parallel reaction monitoring data. Multiplexed parallel reaction monitoring assays provide a targeted, systems-level profiling approach to evaluate cancer-related proteotypes and adaptations. Data are available through Proteome eXchange Accession PXD002706. PMID:26631510

  18. The proposed planning method as a parallel element to a real service system for dynamic sharing of service lines.

    PubMed

    Klampfer, Saša; Chowdhury, Amor

    2015-07-01

    This paper presents a solution to the bottleneck problem in dynamic sharing or leasing of service capacities. From this perspective, the use of the proposed method as a parallel element in service-capacity sharing is very important, because it minimizes the number of interfaces, and consequently the number of leased lines, by combining two service systems with time-opposite peak loads. We present a new approach, methodology, models and algorithms that solve the problems of dynamic leasing and sharing of service capacities. PMID:25792516

  19. Experimental evaluation of a boron-lined parallel plate proportional counter for use in nuclear safeguards coincidence counting

    NASA Astrophysics Data System (ADS)

    Henzlova, D.; Evans, L. G.; Menlove, H. O.; Swinhoe, M. T.; Marlow, J. B.

    2013-01-01

    Boron-lined proportional technologies are increasingly being considered as a viable option for the near-term replacement of 3He-based technologies for use in international nuclear safeguards neutron detection and coincidence counting applications. In order to determine the applicability and feasibility of any replacement technology for international safeguards, it must be evaluated against performance parameters specific to nuclear safeguards applications. In this paper, we present an experimental evaluation of a boron-lined parallel plate proportional counter developed by Precision Data Technology, Inc. (PDT). The counter performance was evaluated using a high-rate 252Cf spontaneous fission neutron source and a set of 137Cs gamma-ray sources with a dose rate of 450 mR/h at the detector face. The performance data were subsequently compared with an equivalent 3He-based system defined using the Monte Carlo N-particle eXtended (MCNPX) radiation transport code.

  20. Emission-Line Galaxies from the NICMOS/Hubble Space Telescope Grism Parallel Survey

    NASA Astrophysics Data System (ADS)

    McCarthy, Patrick J.; Yan, Lin; Freudling, Wolfram; Teplitz, Harry I.; Malumuth, Eliot M.; Weymann, Ray J.; Malkan, Matthew A.; Fosbury, Robert A. E.; Gardner, Jonathan P.; Storrie-Lombardi, Lisa J.; Thompson, Rodger I.; Williams, Robert E.; Heap, Sara R.

    1999-08-01

    We present the first results of a survey of random fields with the slitless G141 (λc=1.5 μm, Δλ=0.8 μm) grism on the near-IR camera and multiobject spectrometer (NICMOS) on board the Hubble Space Telescope (HST). Approximately 64 arcmin2 have been observed at intermediate and high Galactic latitudes. The 3 σ limiting line and continuum fluxes in each field vary from 7.5×10-17 to 1×10-17 ergs cm-2 s-1, and from H=20 to 22, respectively. Our median and area-weighted 3 σ limiting line fluxes within a 4 pixel aperture are nearly identical at 4.1×10-17 ergs cm-2 s-1 and are 60% deeper than the deepest narrowband imaging surveys from the ground. We have identified 33 emission-line objects and derive their observed wavelengths, fluxes, and equivalent widths. We argue that the most likely line identification is Hα and that the redshift range probed is from 0.75 to 1.9. The 2 σ rest-frame equivalent width limits range from 9 to 130 Å, with an average of 40 Å. The survey probes an effective comoving volume of 105 h50-3 Mpc3 for q0=0.5, and we derive the comoving number density of emission-line galaxies in the range 0.7<z<1.9. The galaxies with detected lines have a median F160W magnitude of 20.4 (Vega scale) and a median Hα luminosity of 2.7×1042 ergs s-1. The implied star formation rates range from 1 to 324 Msolar yr-1, with an average [N II] λλ6583, 6548 corrected rate of 21 Msolar yr-1 for H0=50 km s-1 Mpc-1 and q0=0.5 (34 Msolar yr-1 for q0=0.1).
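The quoted average star formation rate is consistent with the standard Hα calibration. A quick check (the 7.9×10-42 conversion factor is the Kennicutt 1998 calibration, an assumption on our part, since the record does not name which calibration was used):

```python
# Consistency check of the quoted star formation rates (assumes the
# Kennicutt 1998 calibration SFR = 7.9e-42 * L_Halpha; the record itself
# does not state which calibration the authors used).
L_halpha = 2.7e42            # median Halpha luminosity from the record, erg/s
sfr = 7.9e-42 * L_halpha     # Msun / yr
print(round(sfr, 1))         # close to the quoted average of 21 Msun/yr
```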

  1. Parallel Configuration For Fast Superconducting Strip Line Detectors With Very Large Area In Time Of Flight Mass Spectrometry

    SciTech Connect

    Casaburi, A.; Zen, N.; Suzuki, K.; Ohkubo, M.; Ejrnaes, M.; Cristiano, R.; Pagano, S.

    2009-12-16

    We realized a very fast and large Superconducting Strip Line Detector based on a parallel configuration of nanowires. The detector, with size 200×200 μm2, recorded a sub-nanosecond pulse width of 700 ps FWHM (400 ps rise time and 530 ps relaxation time) for lysozyme monomer/multimer molecules accelerated at 175 keV in a Time of Flight Mass Spectrometer. This response is the fastest in the class of superconducting detectors and comparable with the fastest NbN superconducting single photon detector of 10×10 μm2. We succeeded in acquiring mass spectra as the first step for a scale-up to ~mm pixel size for high-throughput MS analysis, while keeping a fast response.

  2. A germ cell determinant reveals parallel pathways for germ line development in Caenorhabditis elegans.

    PubMed

    Mainpal, Rana; Nance, Jeremy; Yanowitz, Judith L

    2015-10-15

    Despite the central importance of germ cells for transmission of genetic material, our understanding of the molecular programs that control primordial germ cell (PGC) specification and differentiation is limited. Here, we present findings that X chromosome NonDisjunction factor-1 (XND-1), known for its role in regulating meiotic crossover formation, is an early determinant of germ cell fates in Caenorhabditis elegans. xnd-1 mutant embryos display a novel 'one PGC' phenotype as a result of G2 cell cycle arrest of the P4 blastomere. Larvae and adults display smaller germ lines and reduced brood size, consistent with a role for XND-1 in germ cell proliferation. Maternal XND-1 proteins are found in the P4 lineage and are exclusively localized to the nucleus in the PGCs, Z2 and Z3. Zygotic XND-1 turns on shortly thereafter, at the ∼300-cell stage, making XND-1 the earliest zygotically expressed gene in worm PGCs. Strikingly, a subset of xnd-1 mutants lack germ cells, a phenotype shared with nos-2, a member of the conserved Nanos family of germline determinants. We generated a nos-2 null allele and show that nos-2; xnd-1 double mutants display synthetic sterility. Further removal of nos-1 leads to almost complete sterility, with the vast majority of animals lacking germ cells. Sterility in xnd-1 mutants is correlated with an increase in transcriptional activation-associated histone modification and aberrant expression of somatic transgenes. Together, these data strongly suggest that xnd-1 defines a new branch for PGC development that functions redundantly with nos-2 and nos-1 to promote germline fates by maintaining transcriptional quiescence and regulating germ cell proliferation. PMID:26395476

  3. Micromachined silicon parallel acoustic delay lines as time-delayed ultrasound detector array for real-time photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Cho, Y.; Chang, C.-C.; Wang, L. V.; Zou, J.

    2016-02-01

    This paper reports the development of a new 16-channel parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT). The PADLs were directly fabricated from single-crystalline silicon substrates using deep reactive ion etching. Compared with other acoustic delay lines (e.g., optical fibers), the micromachined silicon PADLs offer higher acoustic transmission efficiency, smaller form factor, easier assembly, and mass production capability. To demonstrate its real-time photoacoustic imaging capability, the silicon PADL array was interfaced with one single-element ultrasonic transducer followed by one channel of data acquisition electronics to receive 16 channels of photoacoustic signals simultaneously. A PAT image of an optically-absorbing target embedded in an optically-scattering phantom was reconstructed, which matched well with the actual size of the imaged target. Because the silicon PADL array allows a signal-to-channel reduction ratio of 16:1, it could significantly simplify the design and construction of ultrasonic receivers for real-time PAT.
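The 16:1 signal-to-channel reduction works because each delay line shifts its channel's arrival time at the shared transducer into its own time slot. A toy sketch of that multiplex/demultiplex idea (the channel count, slot length and data are hypothetical, not the device's parameters):

```python
# Toy sketch of delay-line multiplexing: each channel reaches the single
# transducer in a distinct, non-overlapping delay slot, so the one
# recorded waveform can be sliced back into per-channel signals.
n_channels = 4                 # the actual device uses 16; 4 keeps the toy small
slot = 5                       # samples per delay slot (hypothetical)
signals = [[float(ch + 1)] * slot for ch in range(n_channels)]

# Multiplex: channel ch arrives offset by ch * slot samples.
recorded = [sample for sig in signals for sample in sig]

# Demultiplex: slice the single record at the known delays.
recovered = [recorded[ch * slot:(ch + 1) * slot] for ch in range(n_channels)]
print(recovered == signals)   # True
```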

  4. Bidirectional buck boost converter

    DOEpatents

    Esser, Albert Andreas Maria

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero.

  5. Bidirectional buck boost converter

    DOEpatents

    Esser, A.A.M.

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero. 20 figs.
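Both patent records above describe the same four-switch topology. As a minimal sketch of its idealized steady-state behavior (assuming lossless continuous-conduction buck-boost operation; this relation is textbook converter theory, not a formula stated in the patent), the voltage gain is D/(1-D) for duty cycle D, so the duty cycle needed for a target output follows directly:

```python
# Ideal continuous-conduction buck-boost relation (lossless assumption):
# |Vout| / Vin = D / (1 - D), hence D = Vout / (Vin + Vout).
def duty_for_gain(v_in, v_out):
    """Duty cycle D giving the target magnitude of v_out from v_in."""
    return v_out / (v_in + v_out)

print(duty_for_gain(12.0, 48.0))  # step-up 12 V -> 48 V: D = 0.8
print(duty_for_gain(48.0, 12.0))  # step-down 48 V -> 12 V: D = 0.2
```

The same relation covers both power-flow directions, which is why a single inductor with two switch pairs can regulate power between sources regardless of their relative voltage levels.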

  6. Calculation of the Potential and Electric Flux Lines for Parallel Plate Capacitors with Symmetrically Placed Equal Lengths by Using the Method of Conformal Mapping

    NASA Astrophysics Data System (ADS)

    Albayrak, Erhan

    2001-05-01

    The classical problem of the parallel-plate capacitor has been investigated by a number of authors, including Love [1], Langton [2] and Lin [3]. In this paper, the exact equipotentials and electric flux lines of two symmetrically placed thin conducting plates are obtained using the Schwarz-Christoffel transformation and the method of conformal mapping. The coordinates x, y in the z-plane corresponding to the constant electric flux lines and equipotential lines are obtained after detailed and cumbersome calculations. The complete field distribution is given by constructing the family of electric flux lines and equipotential lines.
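The finite, symmetrically placed plates treated in the paper require the detailed calculations mentioned above. As a flavor of the conformal-mapping technique, the classic Schwarz-Christoffel result for a semi-infinite parallel-plate capacitor (Maxwell's fringing-field map, not the paper's finite-plate solution) can be sampled directly: the map z = (d/2π)(1 + w + e^w) sends the horizontal lines Im(w) = const of a uniform field to the true fringing-field equipotentials.

```python
import cmath

# Classic semi-infinite parallel-plate (fringing-field) map: straight
# equipotentials of the w-plane become the curved equipotentials of the
# z-plane under z = (d / 2*pi) * (1 + w + exp(w)).
def equipotential(v, d=1.0, us=(-3.0, -2.0, -1.0, 0.0, 1.0)):
    """Sample the equipotential at fraction v in [-1, 1] of the plate
    potential (v = +/-1 are the plates; v = 0 is the midplane)."""
    pts = []
    for u in us:
        w = complex(u, v * cmath.pi)                        # uniform-field line
        z = (d / (2 * cmath.pi)) * (1 + w + cmath.exp(w))   # mapped point
        pts.append((z.real, z.imag))
    return pts

# Symmetry check: the midplane equipotential (v = 0) is the straight line
# y = 0, and v = 1 traces the top plate at height y = d/2.
print(equipotential(0.0)[:2])
```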

  7. Transfer of line radiation in differentially expanding atmospheres. VI The plane parallel atmosphere with expanding and contracting regions

    NASA Technical Reports Server (NTRS)

    Noerdlinger, P. D.

    1981-01-01

    The non-LTE radiative transfer problem for a two-level atom with complete redistribution over a Doppler profile is solved for a plane-parallel slab (overlying a radiating photosphere) that has a velocity field which rises symmetrically from zero at either face to a central maximum. Since the velocity gradient reverses, distant layers of the slab become coupled by radiation that jumps intervening layers. The Feautrier method is used, but an iterative variant is also employed as a check in cases where poorly conditioned matrices are encountered. Approximations are developed to explain some of the principal features. It is found that the source function S tends to have two plateaus with values near (2/3)I0 and (1/3)I0, where I0 is the photospheric continuum incident from below; the larger value lies nearer the photosphere. The upper layers sometimes exhibit a rise in S owing to interconnection by radiation to the base. It is noted that the radiation force is largest at the two faces and the midplane. Some line profiles are found to have unusually steep absorptions at rest frequency because of the low excitation in the uppermost, stationary layers.

  8. New pose-detection method for self-calibrated cameras based on parallel lines and its application in visual control system.

    PubMed

    Xu, De; Li, You Fu; Shen, Yang; Tan, Min

    2006-10-01

    In this paper, a new method is proposed to detect the pose of an object with two cameras. First, the intrinsic parameters of the cameras are self-calibrated with two pairs of parallel lines that are orthogonal. Then, the poses of the cameras relative to the parallel lines are deduced, and the rotational transformation between the two cameras is calculated. With the intrinsic parameters and the relative pose of the two cameras, a method is proposed to obtain the poses of a line, a plane, and a rigid object. Furthermore, a new visual-control method is developed using pose detection rather than three-dimensional reconstruction. Experiments are conducted to verify the effectiveness of the proposed method. PMID:17036816
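Methods of this kind typically start from the vanishing points of the imaged parallel lines. A sketch of that standard projective-geometry building block (the image points are hypothetical and this is not the paper's full algorithm): in homogeneous coordinates the line through two points, and the intersection of two lines, are both cross products, and the images of two parallel 3D lines meet at the vanishing point.

```python
# Standard homogeneous-coordinate toolkit: line through two points and
# intersection of two lines are both cross products.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):      # homogeneous line through two image points
    return cross(p, q)

def intersect(l1, l2):       # homogeneous intersection of two image lines
    return cross(l1, l2)

# Two image lines converging at the point (0, 1) -- their vanishing point
# (hypothetical measurements, given in homogeneous form (x, y, 1)):
l1 = line_through((0.0, 1.0, 1.0), (2.0, 0.0, 1.0))
l2 = line_through((0.0, 1.0, 1.0), (-2.0, 0.0, 1.0))
x, y, w = intersect(l1, l2)
print(x / w, y / w)          # the vanishing point (0, 1)
```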

  9. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  10. Creeping of pinned vortex lines nearly parallel to the superconducting plane of Bi2Sr2CaCu2O8 in the irreversible region

    NASA Astrophysics Data System (ADS)

    Nakaharai, S.; Ishiguro, T.; Watauchi, S.; Shimoyama, J.; Kishio, K.

    2002-12-01

    The study of the irreversible motion of magnetic flux lines nearly parallel to the superconducting (SC) plane of Bi2Sr2CaCu2O8 is presented. On tilting a crystal in a magnetic field nearly parallel to the SC plane, the relaxation behavior of the pinned vortex lines is determined by the creeping of pancake vortices (PVs) formed on the SC sheets. The results are interpreted in terms of the crossing-lattice model with respect to the coexistence of Josephson-like vortices and PVs in a tilted magnetic field. Based on the temperature dependence of the response time, the pinning potential of the PVs, which are connected to the Josephson-like vortices, is evaluated.

  11. Resonance line transfer calculations by doubling thin layers. I - Comparison with other techniques. II - The use of the R-parallel redistribution function. [planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.; Wallace, Lloyd

    1989-01-01

    A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.

  12. Non-cytotoxic copper overload boosts mitochondrial energy metabolism to modulate cell proliferation and differentiation in the human erythroleukemic cell line K562.

    PubMed

    Ruiz, Lina M; Jensen, Erik L; Rossel, Yancing; Puas, German I; Gonzalez-Ibanez, Alvaro M; Bustos, Rodrigo I; Ferrick, David A; Elorza, Alvaro A

    2016-07-01

    Copper is integral to the mitochondrial respiratory complex IV and contributes to proliferation and differentiation, metabolic reprogramming and mitochondrial function. The K562 cell line was exposed to a non-cytotoxic copper overload to evaluate mitochondrial dynamics, function and cell fate. This induced higher rates of mitochondrial turnover, reflected in an increase in mitochondrial fusion and fission events and in the autophagic flux. The appearance of smaller and condensed mitochondria was also observed. Bioenergetic activity increased, with more respiratory complexes, a higher oxygen consumption rate, superoxide production and ATP synthesis, and no decrease in membrane potential. Increased cell proliferation and inhibited differentiation also occurred. Non-cytotoxic copper levels can thus modify mitochondrial metabolism and cell fate, which could be exploited in cancer biology and regenerative medicine. PMID:27094959

  13. Heterologous prime-boost vaccination.

    PubMed

    Lu, Shan

    2009-06-01

    An effective vaccine usually requires more than one immunization in the form of a prime-boost regimen. Traditionally the same vaccine is given multiple times as homologous boosts. New findings suggest that prime-boost can be done with different types of vaccines containing the same antigens. In many cases such heterologous prime-boost is more immunogenic than homologous prime-boost. Heterologous prime-boost represents a new way of immunization and will stimulate better understanding of the immunological basis of vaccines. PMID:19500964

  14. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
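The one-pass trick behind online bagging is to replace bootstrap resampling with per-example Poisson(1) weights: each incoming example is shown to each base model k times, with k drawn from Poisson(1). A minimal sketch with a toy base learner (the data stream and the majority-vote learner are illustrative, not from the paper):

```python
import math
import random
from collections import Counter

def poisson1(rng):
    """Sample k ~ Poisson(lambda = 1) by Knuth's method."""
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

class MajorityLearner:
    """Toy base model: predicts the label it has seen most often."""
    def __init__(self):
        self.counts = Counter()
    def update(self, label):
        self.counts[label] += 1
    def predict(self):
        return self.counts.most_common(1)[0][0] if self.counts else None

rng = random.Random(0)
ensemble = [MajorityLearner() for _ in range(10)]

stream = ["a"] * 9 + ["b"]              # one pass; no examples are stored
for label in stream:
    for model in ensemble:
        for _ in range(poisson1(rng)):  # Poisson(1) replicates ~ bootstrap
            model.update(label)

majority = Counter(m.predict() for m in ensemble).most_common(1)[0][0]
print(majority)                          # the ensemble's majority vote
```

Each base model thus sees an approximate bootstrap replicate of the stream without the algorithm ever holding the training set in memory.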

  15. Boosted apparent horizons

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp

    Boosted black holes play an important role in General Relativity (GR), especially in relation to the binary black hole problem. Solving the Einstein vacuum equations in the strong field regime had long been the holy grail of numerical relativity until the significant breakthroughs made in 2005 and 2006. Numerical relativity plays a crucial role in gravitational wave detection by providing numerically generated gravitational waveforms that help search for actual signatures of gravitational radiation exciting laser interferometric detectors such as LIGO, VIRGO and GEO600 here on Earth. Binary black holes orbit each other in an ever tightening adiabatic inspiral caused by energy loss due to gravitational radiation emission. As the orbit shrinks, the holes speed up and eventually move at relativistic speeds in the vicinity of each other (separated by ~10M or so, where 2M is the Schwarzschild radius). As such, one must abandon the Newtonian notion of a point mass on a circular orbit with tangential velocity and replace it with the concept of black holes, cloaked behind spheroidal event horizons, that become distorted due to strong gravity and further appear distorted because of Lorentz effects from the high orbital velocity. Apparent horizons (AHs) are 2-dimensional boundaries that are trapped surfaces. Conceptually, one can think of them as 'quasi-local' definitions of a black hole horizon. This will be explained in more detail in chapter 2. Apparent horizons are especially important in numerical relativity as they provide a computationally efficient way of describing and locating a black hole horizon. For a stationary spacetime, apparent horizons are 2-dimensional cross-sections of the event horizon, which is itself a 3-dimensional null surface in spacetime. Because an AH is a 2-dimensional cross-section of an event horizon, its area remains invariant under distortions due to Lorentz boosts although its shape changes. This fascinating property of the AH can be

  16. True-time delay line with separate carrier tuning using dual-parallel MZM and stimulated Brillouin scattering-induced slow light.

    PubMed

    Li, Wei; Zhu, Ning Hua; Wang, Li Xian; Wang, Jia Sheng; Liu, Jian Guo; Liu, Yu; Qi, Xiao Qiong; Xie, Liang; Chen, Wei; Wang, Xin; Han, Wei

    2011-06-20

    We experimentally demonstrate a novel tunable true-time delay line with separate carrier tuning, using a dual-parallel Mach-Zehnder modulator and stimulated Brillouin scattering-induced slow light. The phase of the optical carrier can be continuously and precisely controlled simply by adjusting the dc bias of the dual-parallel Mach-Zehnder modulator. In addition, both slow light and single-sideband modulation can be achieved simultaneously in the stimulated Brillouin scattering process with three types of configuration. Finally, the true-time delay technique is clearly verified with a two-tap incoherent microwave photonic filter, as the free spectral range of the filter changes with the delay. PMID:21716468
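The filter-based verification relies on a standard two-tap result: the free spectral range (FSR) of a two-tap incoherent filter is the reciprocal of the differential delay, so a change in delay shows up directly as a change in FSR. A one-line check with hypothetical numbers (not values from the paper):

```python
# Standard two-tap filter relation: |H(f)| ~ |cos(pi * f * dT)|, so the
# free spectral range is FSR = 1 / dT for differential delay dT.
def fsr_hz(delta_t_seconds):
    """Free spectral range of a two-tap filter with differential delay dT."""
    return 1.0 / delta_t_seconds

# A 100 ps differential delay (hypothetical) gives a 10 GHz FSR.
print(fsr_hz(100e-12) / 1e9)
```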

  17. Stability of arsenic peptides in plant extracts: off-line versus on-line parallel elemental and molecular mass spectrometric detection for liquid chromatographic separation.

    PubMed

    Bluemlein, Katharina; Raab, Andrea; Feldmann, Jörg

    2009-01-01

    The instability of metal and metalloid complexes during analytical processes has always been a source of uncertainty in their speciation in plant extracts. Two different speciation protocols were compared for the analysis of arsenic phytochelatin (As(III)PC) complexes in fresh plant material. As the final separation/detection step, both methods used RP-HPLC simultaneously coupled to ICP-MS and ES-MS. However, one method was the often-used off-line approach with two-dimensional separation, i.e. a pre-cleaning step using size-exclusion chromatography with subsequent fraction collection and freeze-drying prior to analysis using RP-HPLC-ICP-MS and/or ES-MS. This approach revealed that less than 2% of the total arsenic was bound to peptides such as phytochelatins in the root extract of an arsenate-exposed Thunbergia alata, whereas the direct on-line method showed that 83% of arsenic was bound to peptides, mainly as As(III)PC(3) and (GS)As(III)PC(2). Key analytical factors were identified which destabilise the As(III)PCs. The low pH of the mobile phase (0.1% formic acid) used in RP-HPLC-ICP-MS/ES-MS stabilises the arsenic peptide complexes in the plant extract as well as the free peptide concentration, as shown by the kinetic disintegration study of the model compound As(III)(GS)(3) at pH 2.2 and 3.8. However, half-lives of only a few hours were determined for the arsenic glutathione complex. Although As(III)PC(3) showed a ten times higher half-life (23 h) in a plant extract, the pre-cleaning step with subsequent fractionation in a mobile phase of pH 5.6 contributes to the destabilisation of the arsenic peptides in the off-line method. Furthermore, it was found that during a freeze-drying process more than 90% of an As(III)PC(3) complex and smaller free peptides such as PC(2) and PC(3) can be lost. Although the two-dimensional off-line method has been used successfully for other metal complexes, it is concluded here that the fractionation and

  18. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    NASA Astrophysics Data System (ADS)

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter-Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  19. Gradient boosting machines, a tutorial.

    PubMed

    Natekin, Alexey; Knoll, Alois

    2013-01-01

    Gradient boosting machines are a family of powerful machine-learning techniques that have shown considerable success in a wide range of practical applications. They are highly customizable to the particular needs of the application, such as being learned with respect to different loss functions. This article gives a tutorial introduction into the methodology of gradient boosting methods with a strong focus on machine learning aspects of modeling. The theoretical information is complemented with descriptive examples and illustrations which cover all the stages of the gradient boosting model design. Considerations on handling the model complexity are discussed. Three practical examples of gradient boosting applications are presented and comprehensively analyzed. PMID:24409142
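The stagewise idea the tutorial covers can be condensed to a few lines for squared loss: fit a simple base learner to the current residuals, add it to the ensemble with shrinkage, and repeat. A generic textbook sketch (one-split "stumps" on 1-D data; not code from the article):

```python
# Minimal gradient boosting for squared loss: each stage fits a one-split
# stump to the residuals; the model is the shrunken sum of all stages.
def fit_stump(xs, rs):
    """Best single-threshold split minimizing squared error on residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, n_stages=50, lr=0.3):
    f0 = sum(ys) / len(ys)                       # constant initial model
    stages = []
    for _ in range(n_stages):
        preds = [f0 + sum(lr * s(x) for s in stages) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stages.append(fit_stump(xs, residuals))  # fit the residuals
    return lambda x: f0 + sum(lr * s(x) for s in stages)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]              # a step function
model = gradient_boost(xs, ys)
print(round(model(0.5), 2), round(model(4.5), 2))  # recovers ~0 and ~1
```

With shrinkage `lr`, each stage removes a fixed fraction of the remaining residual, so the fit converges geometrically on this toy target.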

  20. Gradient boosting machines, a tutorial

    PubMed Central

    Natekin, Alexey; Knoll, Alois

    2013-01-01

    Gradient boosting machines are a family of powerful machine-learning techniques that have shown considerable success in a wide range of practical applications. They are highly customizable to the particular needs of the application, such as being learned with respect to different loss functions. This article gives a tutorial introduction into the methodology of gradient boosting methods with a strong focus on machine learning aspects of modeling. The theoretical information is complemented with descriptive examples and illustrations which cover all the stages of the gradient boosting model design. Considerations on handling the model complexity are discussed. Three practical examples of gradient boosting applications are presented and comprehensively analyzed. PMID:24409142

  1. Collisional Line Mixing in Parallel and Perpendicular Bands of Linear Molecules by a Non-Markovian Approach

    NASA Astrophysics Data System (ADS)

    Buldyreva, Jeanna

    2013-06-01

    Reliable modeling of radiative transfer in planetary atmospheres requires accounting for collisional line mixing effects in the regions of closely spaced vibrotational lines as well as in the spectral wings. Because of the high CPU cost of calculations from ab initio potential energy surfaces (when available), the relaxation matrix describing the influence of collisions is usually built by dynamical scaling laws, such as the Energy-Corrected Sudden (ECS) law. Theoretical approaches currently used for calculation of absorption near the band center are based on the impact approximation (Markovian collisions without memory effects), and wings are modeled by introducing some empirical parameters [1,2]. Operating with the traditional non-symmetric metric in the Liouville space, these approaches need corrections of the ECS-modeled relaxation matrix elements ("relaxation times" and a "renormalization procedure") in order to ensure the fundamental relations of detailed balance and the sum rules. We present an extension to the infrared absorption case of the non-Markovian approach of ECS type previously developed [3] for rototranslational Raman scattering spectra of linear molecules. Owing to the specific choice of symmetrized metric in the Liouville space, the relaxation matrix is corrected for initial bath-molecule correlations and satisfies non-Markovian sum rules and detailed balance. A few standard ECS parameters determined by fitting to experimental linewidths of the isotropic Q-branch enable i) retrieval of these isolated-line parameters for other spectroscopies (IR absorption and anisotropic Raman scattering); ii) reproduction of the experimental intensities of these spectra. Besides including vibrational angular momenta in the IR bending shapes, Coriolis effects are also accounted for. The efficiency of the method is demonstrated on OCS-He and CO2-CO2 spectra up to 300 and 60 atm, respectively. F. Niro, C. Boulet, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 88, 483

  2. Line Mixing in Parallel and Perpendicular Bands of CO2: A Further Test of the Refined Robert-Bonamy Formalism

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Qiancheng; Tipping, R. H.

    2015-01-01

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modeling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modeling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  3. Line mixing in parallel and perpendicular bands of CO2: A further test of the refined Robert-Bonamy formalism.

    PubMed

    Boulet, C; Ma, Q; Tipping, R H

    2015-09-28

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modelling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modelling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model. PMID:26429017

  4. Observation of hole injection boost via two parallel paths in Pentacene thin-film transistors by employing Pentacene: 4,4″-tris(3-methylphenylphenylamino)triphenylamine: MoO3 buffer layer

    SciTech Connect

    Yan, Pingrui; Liu, Ziyang; Liu, Dongyang; Wang, Xuehui; Yue, Shouzhen; Zhao, Yi; Zhang, Shiming

    2014-11-01

Pentacene organic thin-film transistors (OTFTs) were prepared by introducing 4, 4″-tris(3-methylphenylphenylamino) triphenylamine (m-MTDATA): MoO{sub 3}, Pentacene: MoO{sub 3}, and Pentacene: m-MTDATA: MoO{sub 3} as buffer layers. These OTFTs all showed significant performance improvement compared to the reference device. Significantly, we observe that the device employing the Pentacene: m-MTDATA: MoO{sub 3} buffer layer can take advantage both of the charge transfer complexes formed in the m-MTDATA: MoO{sub 3} device and of the suitable energy level alignment existing in the Pentacene: MoO{sub 3} device. These two parallel paths led to a high mobility of 0.72 cm{sup 2}/V s, a low threshold voltage of −13.4 V, and a contact resistance of 0.83 kΩ at V{sub ds} = −100 V. This work enriches the understanding of MoO{sub 3}-doped organic materials for applications in OTFTs.

  5. Boosted Beta Regression

    PubMed Central

    Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas

    2013-01-01

Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706
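The simultaneous estimation and variable selection described above comes from componentwise gradient boosting: in every iteration each covariate is fit to the current residuals separately, and only the best-fitting one is updated. A minimal sketch using squared loss as a stand-in for the beta log-likelihood (all data and names are invented for illustration):

```python
# Componentwise L2-boosting sketch (squared loss stands in for the beta
# log-likelihood used by boosted beta regression). Toy data: y depends
# only on x1, so the irrelevant x2 is never selected.

def centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

x1 = centered(list(range(10)))             # informative covariate
x2 = centered([1, -1] * 5)                 # irrelevant covariate
y = centered([2 * i for i in range(10)])   # y = 2*x1, noise-free toy

X = [x1, x2]
coef = [0.0, 0.0]
nu = 0.1                                    # learning rate
resid = y[:]

for _ in range(100):
    best_j, best_gain, best_slope = 0, -1.0, 0.0
    for j, xj in enumerate(X):
        sxx = sum(v * v for v in xj)
        sxr = sum(v * r for v, r in zip(xj, resid))
        gain = sxr * sxr / sxx              # residual sum-of-squares reduction
        if gain > best_gain:
            best_j, best_gain, best_slope = j, gain, sxr / sxx
    coef[best_j] += nu * best_slope         # update only the winning covariate
    resid = [r - nu * best_slope * v for r, v in zip(resid, X[best_j])]

print(coef)  # coef[0] close to 2, coef[1] exactly 0: x2 was never selected
```

Boosted beta regression replaces the squared loss with the negative beta log-likelihood and fits mean and precision submodels in the same componentwise fashion.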

  6. Analytic boosted boson discrimination

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-01

Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D_2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all-orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  7. An open-source, massively parallel code for non-LTE synthesis and inversion of spectral lines and Zeeman-induced Stokes profiles

    NASA Astrophysics Data System (ADS)

    Socas-Navarro, H.; de la Cruz Rodríguez, J.; Asensio Ramos, A.; Trujillo Bueno, J.; Ruiz Cobo, B.

    2015-05-01

With the advent of a new generation of solar telescopes and instrumentation, interpreting chromospheric observations (in particular, spectropolarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is written entirely in Fortran 90/2003 and complies strictly with the ANSI standards to ensure maximum compatibility and portability. It is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities. The source code is currently hosted at the following repository: https://github.com/hsocasnavarro/NICOLE

  8. Real and virtual image separation in digital in-line holography microscopy by recording two parallel holograms

    NASA Astrophysics Data System (ADS)

    Ling, Hangjian; Katz, Joseph

    2013-11-01

Maintaining high magnification and micron resolution in applications of digital in-line holography microscopy for 3D velocity measurements requires a hologram plane located very close to or even within the sample volume. Separation between overlapping real and virtual images becomes a challenge in such cases. Here, we introduce a simple method based on recording two holograms through the same microscope objective, separated by a short distance from each other. When the same particle fields are reconstructed from the two holograms, the real images overlap, whereas the virtual images are separated by twice the distance between hologram planes. Thus, real and virtual images can be easily distinguished. Due to the elongation of the reconstructed particle in the axial direction, the distance between hologram planes is selected to exceed the elongated traces. This technique has been applied to record 3D traces of thousands of 2 μm particles in a 0.5 × 0.5 × 0.5 mm sample volume using hologram planes separated by 27 μm. Experimental setup, alignment and data analysis procedures, including reconstruction, calibration, particle segmentation and precise particle positioning will be discussed. Sponsored by ONR.
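The separation argument can be checked with simple arithmetic: in in-line reconstruction, the virtual (twin) image is mirrored through the hologram plane, so real images from the two holograms coincide while virtual images split by twice the plane spacing. A sketch under that assumption, with illustrative positions:

```python
# Geometry sketch for the two-hologram method: a hologram plane at z_h
# reconstructs the real image at the particle's true depth z_p and the
# virtual (twin) image mirrored through the plane, at 2*z_h - z_p.
# Positions are illustrative (arbitrary units).

def reconstructed_images(z_particle, z_hologram):
    real = z_particle
    virtual = 2 * z_hologram - z_particle   # mirror image through the plane
    return real, virtual

z_p = 100.0            # particle depth
d = 27.0               # separation between the two hologram planes
r1, v1 = reconstructed_images(z_p, 0.0)
r2, v2 = reconstructed_images(z_p, d)

print(r1 == r2)              # True: real images coincide
print(abs(v1 - v2) == 2 * d) # True: virtual images split by twice the spacing
```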

  9. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

DOE PAGES Beta

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  10. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.
o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.
o How to adapt boosting to regression problems

  11. Ultrarelativistic boost with scalar field

    NASA Astrophysics Data System (ADS)

    Svítek, O.; Tahamtan, T.

    2016-02-01

We present the ultrarelativistic boost of the general global monopole solution which is parametrized by mass and deficit solid angle. The problem is addressed from two different perspectives. In the first one the primary object for performing the boost is the metric tensor while in the second one the energy momentum tensor is used. Since the solution is sourced by a triplet of scalar fields that effectively vanish in the boosting limit we investigate the behavior of a scalar field in a simpler setup. Namely, we perform the boosting study of the spherically symmetric solution with a free scalar field given by Janis, Newman and Winicour. The scalar field is again vanishing in the limit, pointing to a broader pattern of scalar field behavior during an ultrarelativistic boost in highly symmetric situations.

  12. Probing the Sizes of Absorbers: Correlations in the z ~ 3.5 Lyman-alpha Forest Between Parallel Lines of Sight

    NASA Astrophysics Data System (ADS)

    Becker, G.; Sargent, W. L. W.; Rauch, M.

    2003-12-01

Studies of the intergalactic medium along parallel lines of sight towards quasar pairs offer valuable information on the sizes of intervening absorbers, as well as provide the basis for a test of the cosmological constant. We present a study of two high-redshift pairs with moderate separation observed with Keck ESI, Q1422+2309A/Q1424+2255 (z = 3.63, θ = 39'') and Q1439-0034A/B (z = 4.25, θ = 33''). The cross-correlation of transmitted flux in the Lyα forest shows a strong peak at zero velocity lag in both pairs, suggesting that the Lyα absorbers are coherent over scales > 230-300 proper kpc. Two strong C IV systems at z = 3.4, closely separated along the line of sight, appear in Q1439B but not in Q1439A, consistent with the picture of outflowing material from an intervening galaxy. In contrast, a Mg II system at z = 1.68 does appear in both Q1439A and B. This suggests either a single absorber of size > 280 kpc or two separate, clustered absorbers. We additionally examine the impact of spectral characteristics on applying the Alcock-Paczynski test to quasar pairs, finding a strong dependence on resolution.
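The zero-lag peak statistic can be sketched with a toy flux cross-correlation; the series below are synthetic stand-ins for the two sightlines, not the quasar data:

```python
# Toy cross-correlation of transmitted-flux series from two sightlines.
# flux_b is a rescaled copy of flux_a, mimicking coherent absorption;
# the numbers are illustrative only.

def xcorr(a, b, lag):
    # normalized (Pearson) cross-correlation at a given integer lag
    pairs = [(a[i], b[i + lag]) for i in range(len(a)) if 0 <= i + lag < len(b)]
    ma = sum(x for x, _ in pairs) / len(pairs)
    mb = sum(y for _, y in pairs) / len(pairs)
    num = sum((x - ma) * (y - mb) for x, y in pairs)
    da = sum((x - ma) ** 2 for x, _ in pairs) ** 0.5
    db = sum((y - mb) ** 2 for _, y in pairs) ** 0.5
    return num / (da * db)

flux_a = [1, 1, 0.2, 1, 1, 0.1, 1, 0.5, 1, 1, 0.3, 1]   # absorption dips
flux_b = [0.9 * f + 0.05 for f in flux_a]                # coherent sightline

corr = {lag: xcorr(flux_a, flux_b, lag) for lag in range(-3, 4)}
best = max(corr, key=corr.get)
print(best)  # 0: the correlation peaks at zero lag
```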

  13. Lines

    ERIC Educational Resources Information Center

    Mires, Peter B.

    2006-01-01

    National Geography Standards for the middle school years generally stress the teaching of latitude and longitude. There are many creative ways to explain the great grid that encircles our planet, but the author has found that students in his college-level geography courses especially enjoy human-interest stories associated with lines of latitude…

  14. An evaluation of relation between the relative parallelism of occlusal plane to ala-tragal line and variation in the angulation of Po-Na-ANS angle in dentulous subjects: A cephalometric study

    PubMed Central

    Shetty, Sanath; Shenoy, K. Kamalakanth; Ninan, Justin; Mahaseth, Pranay

    2015-01-01

Aims: The aim was to evaluate if any correlation exists between variation in the angulation of the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the different tragal levels of the ear in dentulous subjects. Methodology: A total of 200 subjects were selected for the study. A custom-made occlusal plane analyzer was used to determine the posterior point of the ala-tragal line. A lateral cephalogram was taken for each of the subjects. The points Porion, Nasion, and Anterior Nasal Spine were located and the angle formed between these points was measured. Statistical Analysis Used: Fisher's exact test was used to find the correlation between the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. Results: Statistical analysis showed no significant correlation between the Po-Na-ANS angle and the relative parallelism of the occlusal plane at different tragal levels, with an inferior point on the tragus being the most common. Conclusion: Irrespective of variations in the Po-Na-ANS angle, no correlation exists between the variation in the angulation of the Po-Na-ANS angle and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. Furthermore, in a large number of subjects (54%), the occlusal plane was found parallel to a line joining the inferior border of the ala of the nose and the inferior part of the tragus. PMID:26929506

  15. Long-term effectiveness of initiating non-nucleoside reverse transcriptase inhibitor- versus ritonavir-boosted protease inhibitor-based antiretroviral therapy: implications for first-line therapy choice in resource-limited settings

    PubMed Central

    Lima, Viviane D; Hull, Mark; McVea, David; Chau, William; Harrigan, P Richard; Montaner, Julio SG

    2016-01-01

    Introduction In many resource-limited settings, combination antiretroviral therapy (cART) failure is diagnosed clinically or immunologically. As such, there is a high likelihood that patients may stay on a virologically failing regimen for a substantial period of time. Here, we compared the long-term impact of initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)- versus boosted protease inhibitor (bPI)-based cART in British Columbia (BC), Canada. Methods We followed prospectively 3925 ART-naïve patients who started NNRTIs (N=1963, 50%) or bPIs (N=1962; 50%) from 1 January 2000 until 30 June 2013 in BC. At six months, we assessed whether patients virologically failed therapy (a plasma viral load (pVL) >50 copies/mL), and we stratified them based on the pVL at the time of failure ≤500 versus >500 copies/mL. We then followed these patients for another six months and calculated their probability of achieving subsequent viral suppression (pVL <50 copies/mL twice consecutively) and of developing drug resistance. These probabilities were adjusted for fixed and time-varying factors, including cART adherence. Results At six months, virologic failure rates were 9.5 and 14.3 cases per 100 person-months for NNRTI and bPI initiators, respectively. NNRTI initiators who failed with a pVL ≤500 copies/mL had a 16% higher probability of achieving subsequent suppression at 12 months than bPI initiators (0.81 (25th–75th percentile 0.75–0.83) vs. 0.72 (0.61–0.75)). However, if failing NNRTI initiators had a pVL >500 copies/mL, they had a 20% lower probability of suppressing at 12 months than pVL-matched bPI initiators (0.37 (0.29–0.45) vs. 0.46 (0.38–0.54)). In terms of evolving HIV drug resistance, those who failed on NNRTI performed worse than bPI in all scenarios, especially if they failed with a viral load >500 copies/mL. Conclusions Our results show that patients who virologically failed at six months on NNRTI and continued on the same regimen had a

  16. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
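The vector-quantization half of the strategy can be sketched as a nearest-codebook lookup: each input block is replaced by the index of its closest codebook vector. The codebook and blocks below are illustrative, not from the paper:

```python
# Minimal vector-quantization sketch: encode each block as the index of
# its nearest codebook vector (squared Euclidean distance).
# Codebook and blocks are invented for illustration.

codebook = [(0, 0), (10, 10), (0, 10), (10, 0)]

def quantize(block):
    # index of the nearest codebook entry
    return min(range(len(codebook)),
               key=lambda i: sum((b - c) ** 2
                                 for b, c in zip(block, codebook[i])))

blocks = [(1, 2), (9, 9), (2, 8), (8, 1)]
indices = [quantize(b) for b in blocks]      # compressed representation
decoded = [codebook[i] for i in indices]     # lossy reconstruction
print(indices)  # [0, 1, 2, 3]
```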

  17. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads us to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data that we used in both, as originally supplied and with added label noise - a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
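The "orthogonality" between the next distribution and the current mistake vector is a checkable property of the standard AdaBoost update: after reweighting, exactly half of the distribution's mass sits on the misclassified examples. A sketch with invented toy labels and predictions:

```python
# AdaBoost reweighting step: after the update, the new distribution puts
# exactly half its mass on the examples the current base model got wrong.
# Labels and predictions are toy values.
import math

y = [+1, +1, -1, -1, +1]        # true labels
h = [+1, -1, -1, +1, +1]        # base model predictions
D = [0.2] * 5                   # current distribution (uniform)

eps = sum(d for d, yi, hi in zip(D, y, h) if yi != hi)   # weighted error
alpha = 0.5 * math.log((1 - eps) / eps)                  # model weight

D_new = [d * math.exp(-alpha * yi * hi) for d, yi, hi in zip(D, y, h)]
Z = sum(D_new)                                           # normalizer
D_new = [d / Z for d in D_new]

mass_on_mistakes = sum(d for d, yi, hi in zip(D_new, y, h) if yi != hi)
print(round(mass_on_mistakes, 10))  # 0.5
```

AveBoost's modification, as described above, averages such distributions across all previous models rather than using only the latest one.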

  18. Boosting with Averaged Weight Vectors

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.

  19. A Magnetohydrodynamic Boost for Relativistic Jets

    NASA Technical Reports Server (NTRS)

    Mizuno, Yosuke; Hardee, Philip; Hartmann, Dieter H.; Nishikawa, Ken-Ichi; Zhang, Bing

    2007-01-01

We performed relativistic magnetohydrodynamic simulations of the hydrodynamic boosting mechanism for relativistic jets explored by Aloy & Rezzolla (2006) using the RAISHIN code. Simulation results show that the presence of a magnetic field changes the properties of the shock interface between the tenuous, overpressured jet (V_j^z) flowing tangentially to a dense external medium. We find that magnetic fields can lead to more efficient acceleration of the jet, in comparison to the pure-hydrodynamic case. A "poloidal" magnetic field (B^z), tangent to the interface and parallel to the jet flow, produces both a stronger outward-moving shock and a stronger inward-moving rarefaction wave. This leads to a large velocity component normal to the interface in addition to acceleration tangent to the interface, and the jet is thus accelerated to larger Lorentz factors than those obtained in the pure-hydrodynamic case. Likewise, a strong "toroidal" magnetic field (B^y), tangent to the interface but perpendicular to the jet flow, also leads to stronger acceleration tangent to the shock interface relative to the pure-hydrodynamic case. Thus, the presence and relative orientation of a magnetic field in relativistic jets can significantly modify the hydrodynamic boost mechanism studied by Aloy & Rezzolla (2006).

  20. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  1. Reliable 3-phase PWM boost rectifiers employing a stacked dual boost converter subtopology

    SciTech Connect

    Salmon, J.C.

    1996-05-01

This paper describes circuit topologies for 3-phase pulse-width modulation (PWM) boost rectifiers that operate with a unity fundamental power factor and a low-distortion ac line current. Overlap delays between the switching of the upper and lower devices in a PWM rectifier leg are not critical, and diodes eliminate the possibility of the dc-link capacitor discharging into short circuits and shoot-through fault conditions. The rectifiers are controlled using a stacked dual boost converter cell subtopology model that can be used in two current-control modes. The dual current-control mode shapes two line currents and can achieve current distortion levels below 5%. The single current-control mode shapes one line current and can achieve current distortion levels close to 5% with the rectifier output dc voltage at the standard level associated with a rectified mains voltage. The per-unit current ratings for the switches in the 3-phase PWM switch networks are around 15-20% of the input rms line current, as compared to 71% for a standard 3-phase PWM rectifier. Circuit simulations and experimental results are used to demonstrate the performance and feasibility of the rectifiers described.
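The "boost" in these rectifier topologies refers to the ideal step-up conversion ratio of a boost converter in continuous conduction, V_out = V_in / (1 - D) for duty cycle D. This relation is textbook background rather than a result of the paper; values below are illustrative:

```python
# Ideal continuous-conduction boost-converter conversion ratio.
# Illustrative values, not figures from the paper.

def boost_output(v_in, duty):
    """V_out = V_in / (1 - D) for duty cycle 0 <= D < 1."""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return v_in / (1 - duty)

print(boost_output(100.0, 0.5))   # 200.0
print(boost_output(100.0, 0.75))  # 400.0
```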

  2. A Magnetohydrodynamic Boost for Relativistic Jets

    NASA Technical Reports Server (NTRS)

    Mizuno, Yosuke; Hardee, Philip; Hartmann, Dieter; Nishikawa, Ken-Ichi; Zhang, Bing

    2006-01-01

We have performed relativistic magnetohydrodynamic simulations of the hydrodynamic boosting mechanism for relativistic jets explored by Aloy & Rezzolla (2006) using the RAISHIN code. Simulation results show that the presence of a magnetic field may change the properties of the shock interface between the tenuous, overpressured jet (V_j^z) flowing tangentially to a dense external medium. Magnetic fields can lead to more efficient acceleration of the jet, in comparison to the pure-hydrodynamic case. A poloidal magnetic field (B^z), tangent to the interface and parallel to the jet flow, produces both a stronger outward-moving shock and inward-moving rarefaction wave. This leads to a large velocity component normal to the interface in addition to acceleration tangent to the interface, and the jet is thus accelerated to larger Lorentz factors than those obtained in the pure-hydrodynamic case. In contrast, a strong toroidal magnetic field (B^y), tangent to the interface but perpendicular to the jet flow, also leads to stronger acceleration tangent to the shock interface relative to the pure-hydrodynamic case, but to a lesser extent than found for the poloidal case due to the fact that the velocity component normal to the shock interface is now much smaller. Overall, the acceleration efficiency in the toroidal case is less than that of the poloidal case, but both geometries still result in higher Lorentz factors than the pure-hydrodynamic case. Thus, the presence and relative orientation of a magnetic field in relativistic jets can have a significant influence on the hydrodynamic boost mechanism studied by Aloy & Rezzolla (2006).

  3. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  4. Interferometric resolution boosting for spectrographs

    SciTech Connect

    Erskine, D J; Edelstein, J

    2004-05-25

Externally dispersed interferometry (EDI) is a technique for enhancing the performance of spectrographs for wide-bandwidth high-resolution spectroscopy and Doppler radial velocimetry. By placing a small angle-independent interferometer near the slit of a spectrograph, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum times the sinusoidal fiducial net creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to detectably low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry. Previous demonstrations of ~2.5x resolution boost used an interferometer having a single fixed delay. We report new data indicating ~6x Gaussian resolution boost (140,000 from a spectrograph with 25,000 native resolving power), taken by using multiple exposures at widely different interferometer delays.
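The heterodyning at the heart of EDI follows from the product identity cos(a)cos(b) = 0.5 cos(a-b) + 0.5 cos(a+b): multiplying a high-frequency spectral feature by a sinusoidal fiducial produces a beat (moiré) term at the low difference frequency. A numerical sketch with illustrative frequencies:

```python
# Heterodyne sketch: a 50-cycle feature times a 47-cycle fiducial yields
# a moire component at the difference frequency of 3 cycles, with
# amplitude 0.5. Frequencies are illustrative.
import math

f_feature, f_fiducial = 50.0, 47.0       # cycles per unit x
n = 4000
xs = [i / n for i in range(n)]
product = [math.cos(2 * math.pi * f_feature * x) *
           math.cos(2 * math.pi * f_fiducial * x) for x in xs]

def amplitude_at(signal, f):
    # DFT-style projection onto cos(2*pi*f*x)
    c = sum(s * math.cos(2 * math.pi * f * x) for s, x in zip(signal, xs))
    return 2 * c / len(signal)

print(round(amplitude_at(product, 3.0), 6))  # 0.5 at the difference frequency
```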

  5. Classification of airborne laser scanning data using JointBoost

    NASA Astrophysics Data System (ADS)

    Guo, Bo; Huang, Xianfeng; Zhang, Fan; Sohn, Gunho

    2015-02-01

    The demands for automatic point cloud classification have dramatically increased with the wide-spread use of airborne LiDAR. Existing research has mainly concentrated on a few dominant objects such as terrain, buildings and vegetation. In addition to those key objects, this paper proposes a supervised classification method to identify other types of objects including power-lines and pylons from point clouds using a JointBoost classifier. The parameters for the learning model are estimated with various features computed based on the geometry and echo information of a LiDAR point cloud. In order to overcome the shortcomings stemming from the inclusion of bare ground data before classification, the proposed classifier directly distinguishes terrain using a feature step-off count. Feature selection is conducted using JointBoost to evaluate feature correlations thus improving both classification accuracy and operational efficiency. In this paper, the contextual constraints for objects extracted by graph-cut segmentation are used to optimize the initial classification results obtained by the JointBoost classifier. Our experimental results show that the step-off count significantly contributes to classification. Seventeen effective features are selected for the initial classification results using the JointBoost classifier. Our experiments indicate that the proposed features and method are effective for classification of airborne LiDAR data from complex scenarios.

  6. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  7. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  8. Where boosted significances come from

    NASA Astrophysics Data System (ADS)

    Plehn, Tilman; Schichtel, Peter; Wiegand, Daniel

    2014-03-01

In an era of increasingly advanced experimental analysis techniques it is crucial to understand which phase space regions contribute to a signal extraction from backgrounds. Based on the Neyman-Pearson lemma we compute the maximum significance for a signal extraction as an integral over phase space regions. We then study to what degree boosted Higgs strategies benefit ZH and tt̄H searches and which transverse momenta of the Higgs are most promising. We find that Higgs and top taggers are the appropriate tools, but would profit from a targeted optimization towards smaller transverse momenta. MadMax is available as an add-on to MadGraph 5.
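Why isolating sensitive phase space regions boosts significance can be illustrated with the common Asimov approximation, Z = sqrt(2((s+b)ln(1+s/b) - s)), combining regions in quadrature. This is a standard approximation, not the paper's MadMax calculation, and the yields are invented:

```python
# Binning phase space by signal-to-background ratio never loses expected
# significance: the Asimov Z^2 is convex and homogeneous, so it is
# subadditive under merging regions. Yields below are invented.
import math

def asimov_z(s, b):
    # median expected discovery significance for s signal on b background
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

regions = [(10.0, 100.0),   # low-p_T-like region: poor s/b
           (10.0, 10.0)]    # boosted-like region: good s/b

inclusive = asimov_z(sum(s for s, _ in regions), sum(b for _, b in regions))
binned = math.sqrt(sum(asimov_z(s, b) ** 2 for s, b in regions))

print(binned > inclusive)  # True: the split analysis is more significant
```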

  9. Electric rockets get a boost

    SciTech Connect

    Ashley, S.

    1995-12-01

This article reports that xenon-ion thrusters are expected to replace conventional chemical rockets in many nonlaunch propulsion tasks, such as controlling satellite orbits and sending space probes on long exploratory missions. The space age dawned some four decades ago with the arrival of powerful chemical rockets that could propel vehicles fast enough to escape the grasp of Earth's gravity. Today, chemical rocket engines still provide the only means to boost payloads into orbit and beyond. The less glamorous but equally important job of moving vessels around in space, however, may soon be assumed by a fundamentally different rocket engine technology that has been long in development--electric propulsion.

  10. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

Typical scientific data is represented on a grid, with appropriate interpolation or approximation schemes defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
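The point-line duality this model builds on can be sketched concretely; the axis placement (first axis at x = 0, second at x = 1) and the function names below are illustrative assumptions, not the paper's notation:

```python
# Point-line duality between a 2-D scatterplot and parallel coordinates:
# a data point (a, b) becomes the segment from height a on the first axis
# to height b on the second; a scatterplot line y = m*x + c dualizes to a
# single point where all those segments intersect (for m != 1).
def point_to_segment(a, b):
    """Segment endpoints in parallel coordinates for data point (a, b)."""
    return (0.0, a), (1.0, b)

def line_to_point(m, c):
    """Dual point of the scatterplot line y = m*x + c (requires m != 1)."""
    return (1.0 / (1.0 - m), c / (1.0 - m))

# Every point on the line y = 2x + 1 maps to a segment through the dual point:
x_dual, y_dual = line_to_point(2.0, 1.0)
for a in (0.0, 0.5, 3.0):
    (x0, y0), (x1, y1) = point_to_segment(a, 2 * a + 1)
    # Extend the segment's line to x_dual and check it hits the dual point.
    y_at_dual = y0 + (y1 - y0) * (x_dual - x0) / (x1 - x0)
    assert abs(y_at_dual - y_dual) < 1e-12
```

This intersection property is why a dense cluster of lines in parallel coordinates corresponds to a linear trend in the scatterplot, the structure the continuous density model makes explicit.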

  11. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
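The recursion behind this interpretation can be sketched for a linear smoother; the Gaussian-kernel smoother, bandwidth, and iteration count below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Gaussian-kernel smoother matrix; its eigenvalues lie in (0, 1], so this
# particular boosting iteration converges (the paper exhibits smoothers
# for which the sequence diverges instead).
h = 0.05
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / h**2)
S = K / K.sum(axis=1, keepdims=True)

# L2 Boosting as recursive bias correction: smooth the residuals and add
# the correction back to the current fit.
fit = S @ y
for _ in range(10):
    fit = fit + S @ (y - fit)

# After m total smoothing passes the fit equals (I - (I - S)^m) y,
# which makes the dependence on the spectrum of I - S explicit.
m = 11
equiv = (np.eye(n) - np.linalg.matrix_power(np.eye(n) - S, m)) @ y
assert np.allclose(fit, equiv)
```

With a contractive smoother the iterates approach the raw data, so the stopping rule the authors discuss decides where along that path to halt.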

  12. Proposal of Boost Motor Driver with Electric Double Layer Capacitor

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hirokazu

This paper proposes a boost motor driver with an electric double-layer capacitor (EDLC). The proposed driver has two advantages over conventional boost motor drivers: first, it can reduce the input power peak; second, it can recover almost all regeneration energy. The dynamic performance of the boost voltage and these two advantages are evaluated in simulation, and the results show that the proposed driver performs well.

  13. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
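The pipe-count estimate N = (R/r)^α can be read off numerically; the radii below are made-up example values, not figures from the paper:

```python
def pipes_needed(R, r, laminar=True):
    """Number of small pipes of radius r replacing one pipe of radius R,
    per the estimate N = (R/r)**alpha, with alpha = 4 for laminar
    lubricating water flow and alpha = 19/7 for turbulent flow."""
    alpha = 4.0 if laminar else 19.0 / 7.0
    return (R / r) ** alpha

# Replacing one 0.5 m radius pipe with 0.1 m radius pipes:
print(pipes_needed(0.5, 0.1, laminar=True))    # ~625 pipes (laminar)
print(pipes_needed(0.5, 0.1, laminar=False))   # ~79 pipes (turbulent)
```

The gap between the two exponents shows why the flow regime of the lubricating water matters so much to the economics of a parallel system.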

  14. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  16. Estimate of avoidance maneuver rate for HASTOL tether boost facility

    NASA Astrophysics Data System (ADS)

    Forward, Robert L.

    2002-01-01

The Hypersonic Airplane Space Tether Orbital Launch (HASTOL) architecture uses a hypersonic airplane (or reusable launch vehicle) to carry a payload from the surface of the Earth to 150 km altitude and a speed of Mach 17. The hypersonic airplane makes a rendezvous with the grapple at the tip of a long, rotating, orbiting space tether boost facility, which picks up the payload from the airplane. Release of the payload at the proper point in the tether rotation boosts the payload into a higher orbit, typically a Geosynchronous Transfer Orbit (GTO), with lower orbits and Earth escape as other options. The HASTOL Tether Boost Facility will have a length of 636 km. Its center of mass will be in a 604 km by 890 km equatorial orbit. It is estimated that by the start of operations of the HASTOL Tether Boost Facility in the year 2020, there will be 500 operational spacecraft using the same volume of space as the HASTOL facility. These operational spacecraft would likely be made inoperative by an impact with one of the lines in the multiline HASTOL Hoytether™ and should be avoided. There will also be non-operational spacecraft and large pieces of orbital debris with effective size greater than five meters in diameter that could cut a number of lines in the HASTOL Hoytether™ and should also be avoided. It is estimated, using two different methods and combining them, that the HASTOL facility will need to make avoidance maneuvers about once every four days if the 500 operational spacecraft and large pieces of orbital debris greater than 5 m in diameter were each protected by a 2 km diameter miss-distance protection sphere. If by 2020 the ability to know the positions of operational spacecraft and large pieces of orbital debris improves to allow a 600 m diameter miss-distance protection sphere around each object, then the number of HASTOL facility maneuvers needed drops to one every two weeks.

  17. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING.

    PubMed

    Liu, Meizhu; Vemuri, Baba C

    2011-03-30

Boosting is a versatile machine learning technique with numerous applications, including but not limited to image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost, and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form), which represent the distribution over the training data and the classification error respectively, to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets, including the publicly available OASIS database, a home-grown epilepsy database, and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
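The closed-form distance in question can be sketched for discrete densities (an interpretation of the abstract, not the paper's code): mapping a density p to √p places it on the unit sphere, where the geodesic distance is the arccosine of the inner product.

```python
import numpy as np

def riemannian_distance(p, q):
    """Geodesic (Fisher-Rao-style) distance between square-root densities.

    p and q are nonnegative weight vectors; they are normalized to sum
    to 1, mapped to sqrt(p), sqrt(q) on the unit sphere, and the angle
    between them is returned. Closed form: no iterative solve needed.
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    inner = np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0)  # clip guards rounding
    return np.arccos(inner)

print(riemannian_distance([1, 1, 1, 1], [1, 1, 1, 1]))  # 0.0 for identical densities
```

Because the distance is a single dot product plus an arccosine, using it as a regularizer adds essentially no per-iteration cost, which is the efficiency point the abstract makes.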

  18. Series Connected Buck-Boost Regulator

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G. (Inventor)

    2006-01-01

    A Series Connected Buck-Boost Regulator (SCBBR) that switches only a fraction of the input power, resulting in relatively high efficiencies. The SCBBR has multiple operating modes including a buck, a boost, and a current limiting mode, so that an output voltage of the SCBBR ranges from below the source voltage to above the source voltage.

  19. Bagging, boosting, and C4.5

    SciTech Connect

    Quinlan, J.R.

    1996-12-31

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting: bagging generates replicated bootstrap samples of the data, while boosting adjusts the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees, tested on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater benefit. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classifiers reduces this downside and also leads to slightly better results on most of the datasets considered.
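The two resampling/reweighting schemes contrasted here can be sketched with decision stumps standing in for C4.5 trees; all names and the toy data below are illustrative assumptions, not Quinlan's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_fit(X, y, w):
    """Best single-feature threshold classifier under instance weights w."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= t, sign, -sign)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best

def stump_predict(model, X):
    _, j, t, sign = model
    return np.where(X[:, j] <= t, sign, -sign)

def bagging(X, y, rounds=10):
    """Bagging: replicated bootstrap samples, equal-weight vote."""
    n = len(y)
    models = []
    for _ in range(rounds):
        idx = rng.integers(0, n, size=n)
        w = np.bincount(idx, minlength=n) / n   # bootstrap sample as weights
        models.append(stump_fit(X, y, w))
    return models, [1.0] * rounds

def boosting(X, y, rounds=20):
    """AdaBoost-style boosting: reweight training instances each round."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    models, alphas = [], []
    for _ in range(rounds):
        model = stump_fit(X, y, w)
        err = max(model[0], 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(model, X)
        w = w * np.exp(-alpha * y * pred)   # upweight misclassified instances
        w /= w.sum()
        models.append(model)
        alphas.append(alpha)
    return models, alphas

def vote(models, alphas, X):
    """Combine member predictions by (weighted) voting."""
    return np.sign(sum(a * stump_predict(m, X) for m, a in zip(models, alphas)))

# Toy data: labels set by a diagonal boundary no single stump can match.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
bag = bagging(X, y)
boost = boosting(X, y)
print("bagged accuracy: ", (vote(*bag, X) == y).mean())
print("boosted accuracy:", (vote(*boost, X) == y).mean())
```

The vote-combination tweak the paper proposes operates on the `alphas` weighting above; the sketch keeps the standard weighted vote.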

  20. Parallel induction of tetrahydrobiopterin biosynthesis and indoleamine 2,3-dioxygenase activity in human cells and cell lines by interferon-gamma.

    PubMed Central

    Werner, E R; Werner-Felmayer, G; Fuchs, D; Hausen, A; Reibnegger, G; Wachter, H

    1989-01-01

In all eight tested human cells and cell lines with inducible indoleamine 2,3-dioxygenase (EC 1.13.11.17), tetrahydrobiopterin biosynthesis was activated by interferon-gamma. This was demonstrated by GTP cyclohydrolase I (EC 3.5.4.16) activities and by intracellular neopterin and biopterin concentrations. Pteridine synthesis was influenced by extracellular tryptophan. In T24-cell extracts, submillimolar concentrations of tetrahydrobiopterin stimulated the indoleamine 2,3-dioxygenase reaction. PMID:2511835

  1. GPU-based parallel clustered differential pulse code modulation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Li, Wenze; Kong, Wanqiu

    2015-10-01

Hyperspectral remote sensing technology is widely used in marine remote sensing, geological exploration, and atmospheric and environmental remote sensing. Owing to its rapid development, the resolution of hyperspectral images has improved greatly, and their data size is growing accordingly. To reduce storage and transmission costs, lossless compression of hyperspectral images has become an important research topic. In recent years, a large number of algorithms have been proposed to reduce the redundancy between different spectra. Among them, the most classical and extensible algorithm is Clustered Differential Pulse Code Modulation (C-DPCM). The algorithm has three parts: first, cluster all spectral lines and train linear predictors for each band; second, use these predictors to predict pixels and obtain the residual image by subtracting the predicted image from the original; finally, encode the residual image. However, calculating the predictors is time-consuming. To improve processing speed, we propose a parallel C-DPCM based on CUDA (Compute Unified Device Architecture) on the GPU. General-purpose computing on GPUs has developed greatly in recent years, with GPU capacity improving rapidly as the number of processing units and storage control units increases. CUDA is a parallel computing platform and programming model created by NVIDIA; it gives developers direct access to the virtual instruction set and memory of the parallel computational elements in GPUs. Our core idea is to calculate the predictors in parallel. By adopting global memory, shared memory, and register memory, respectively, we obtain a decent speedup.
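The three C-DPCM steps the abstract lists can be sketched sequentially; the quantile-based clustering, cube sizes, and single-band predictor below are stand-in assumptions (the paper's contribution is parallelizing the predictor fits on the GPU):

```python
import numpy as np

rng = np.random.default_rng(2)
bands, pixels, clusters = 8, 1000, 4
cube = rng.normal(size=(bands, pixels))   # stand-in hyperspectral cube

# Step 1 (crude stand-in for clustering): quantize each pixel's mean
# spectrum into four groups.
means = cube.mean(axis=0)
labels = np.digitize(means, np.quantile(means, [0.25, 0.5, 0.75]))

# Step 2: per band and per cluster, fit a linear predictor from the
# previous band and form the residual image.
residual = np.empty_like(cube)
residual[0] = cube[0]                     # first band is stored as-is
for b in range(1, bands):
    for c in range(clusters):
        m = labels == c
        prev, cur = cube[b - 1, m], cube[b, m]
        # Least-squares predictor: cur ≈ a * prev + bias, per cluster.
        A = np.vstack([prev, np.ones(m.sum())]).T
        coef, *_ = np.linalg.lstsq(A, cur, rcond=None)
        residual[b, m] = cur - A @ coef   # step 3 would entropy-code this

print(residual.shape)   # (8, 1000)
```

The inner band-by-cluster fits are independent of one another, which is exactly why mapping them onto CUDA thread blocks yields the speedup the abstract reports.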

  2. Boost-phase discrimination research

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.; Feiereisen, William J.

    1993-01-01

The final report describes the combined work of the Computational Chemistry and Aerothermodynamics branches within the Thermosciences Division at NASA Ames Research Center directed at understanding the signatures of shock-heated air. Considerable progress was made in determining accurate transition probabilities for the important band systems of NO that account for much of the emission in the ultraviolet region. Research carried out under this project showed that, in order to reproduce the observed radiation from the bow shock region of missiles in their boost phase, it is necessary to include the Burnett terms in the constitutive equation, account for the non-Boltzmann energy distribution, correctly model the NO formation and rotational excitation process, and use accurate transition probabilities for the NO band systems. This work resulted in significant improvements in the computer code NEQAIR, which models both the radiation and fluid dynamics in the shock region.

  3. Advanced Airfoils Boost Helicopter Performance

    NASA Technical Reports Server (NTRS)

    2007-01-01

Carson Helicopters Inc. licensed the Langley RC4 series of airfoils in 1993 to develop a replacement main rotor blade for its Sikorsky S-61 helicopters. The company's fleet of S-61 helicopters has been rebuilt to include Langley's patented airfoil design; the helicopters are now able to carry heavier loads and fly faster and farther, and the main rotor blades have twice the previous service life. In aerial firefighting, the performance-boosting airfoils have helped the U.S. Department of Agriculture's Forest Service control the spread of wildfires. In 2003, Carson Helicopters signed a contract with Ducommun AeroStructures Inc. to manufacture the composite blades for Carson Helicopters to sell.

  4. Speeding up Boosting decision trees training

    NASA Astrophysics Data System (ADS)

    Zheng, Chao; Wei, Zhenzhong

    2015-10-01

Boosting decision trees are fast at test time, but training is too slow to meet the requirements of applications with real-time learning. To overcome this drawback, we propose a fast decision-tree training method that prunes ineffective features in advance, and on this basis we design a fast Boosting decision-tree training algorithm. First, we analyze the structure of each decision-tree node and prove, by derivation, a bound on the classification error of each node. Then, by using this error bound to prune ineffective features at an early stage, we greatly accelerate decision-tree training without affecting the training results at all. Finally, the accelerated training method is integrated into the general Boosting process, forming a fast Boosting decision-tree training algorithm. This algorithm is not a new variant of Boosting; on the contrary, it should be used in conjunction with existing Boosting algorithms to achieve further training acceleration. To test its speedup alone and in combination with other acceleration algorithms, the original AdaBoost and two typical acceleration algorithms, LazyBoost and StochasticBoost, were each combined with it into fast versions, and their classification performance was tested on the Lsis face database of 12788 images. Experimental results reveal that the fast algorithm achieves more than double the training speed without affecting the results of the trained classifier, and that it can be combined with other acceleration algorithms. Key words: Boosting algorithm, decision trees, classifier training, preliminary classification error, face detection

  5. Identification of Phenylbutyrate-Generated Metabolites in Huntington Disease Patients using Parallel LC/EC-array/MS and Off-line Tandem MS

    PubMed Central

    Ebbel, Erika N.; Leymarie, Nancy; Schiavo, Susan; Sharma, Swati; Gevorkian, Sona; Hersch, Steven; Matson, Wayne R.; Costello, Catherine E.

    2013-01-01

Oral sodium phenylbutyrate (SPB) is currently under investigation as a histone deacetylase (HDAC) inhibitor in Huntington disease (HD). Ongoing studies indicate that symptoms related to HD genetic abnormalities decrease with SPB therapy. In a recently reported safety and tolerability study of SPB in HD, we analyzed overall chromatographic patterns from a method that employs gradient liquid chromatography with series electrochemical array, UV, and fluorescence detection (LCECA/UV/F) for measuring SPB and its metabolite phenylacetate (PA). We found that plasma and urine from SPB-treated patients yielded individual-specific patterns of ca. 20 metabolites, which may provide a means for the selection of subjects for extended trials of SPB. The structural identification of these metabolites is of critical importance, since their characterization will facilitate understanding of the mechanisms of drug action and possible side effects. We have now developed an iterative process with LCECA, parallel LCECA/LCMS, and high-performance tandem MS for metabolite characterization. We report here the details of this method and its use for the identification of 10 plasma and urinary metabolites in treated subjects, including indole species in urine that are not themselves metabolites of SPB. This approach thus contributes to understanding metabolic pathways that differ among HD individuals being treated with SPB. PMID:20074541

  6. Pregnancy boosts vaccine-induced Bovine Neonatal Pancytopenia-associated alloantibodies.

    PubMed

    Benedictus, Lindert; Rutten, Victor P M G; Koets, Ad P

    2016-02-17

Although maternal vaccination is generally considered to be safe, the occurrence of Bovine Neonatal Pancytopenia (BNP) in cattle shows that maternal vaccination may pose a risk to the offspring. Pregsure BVD-induced maternal alloantibodies cause BNP in newborn calves. The occurrence of BNP years after the last Pregsure BVD vaccination indicates that alloantibody levels may remain high in dams. Since pregnancy induces alloantibodies, we hypothesized that pregnancy boosts the vaccine-induced alloantibody response. Alloantibody levels in Pregsure BVD-vaccinated dams increased from conception towards the end of gestation and declined after parturition. In parallel, BVDV-antibody levels remained constant, indicating that there is specific boosting of alloantibodies. Since the rise in alloantibodies coincides with pregnancy and other alloantigen sources were excluded, we concluded that fetal alloantigens expressed during pregnancy boost the alloimmune response in the dam. These results help explain why BNP cases occur even years after Pregsure BVD was taken off the market. PMID:26796141

  7. Bagging and boosting negatively correlated neural networks.

    PubMed

    Islam, Md Monirul; Yao, Xin; Shahriar Nirjon, S M Shahriar; Islam, Muhammad Asiful; Murase, Kazuyuki

    2008-06-01

    In this paper, we propose two cooperative ensemble learning algorithms, i.e., NegBagg and NegBoost, for designing neural network (NN) ensembles. The proposed algorithms incrementally train different individual NNs in an ensemble using the negative correlation learning algorithm. Bagging and boosting algorithms are used in NegBagg and NegBoost, respectively, to create different training sets for different NNs in the ensemble. The idea behind using negative correlation learning in conjunction with the bagging/boosting algorithm is to facilitate interaction and cooperation among NNs during their training. Both NegBagg and NegBoost use a constructive approach to automatically determine the number of hidden neurons for NNs. NegBoost also uses the constructive approach to automatically determine the number of NNs for the ensemble. The two algorithms have been tested on a number of benchmark problems in machine learning and NNs, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, satellite, soybean, and waveform problems. The experimental results show that NegBagg and NegBoost require a small number of training epochs to produce compact NN ensembles with good generalization. PMID:18558541

  8. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence

    PubMed Central

    Vadakkan, Kunjumon I.

    2011-01-01

The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation, namely the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation, features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of inter-postsynaptic functional LINKs to meet the expectations of Minsky's K-lines, basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system. PMID:21845180

  9. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    SciTech Connect

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a worldwide search was initiated for detection technologies that could provide a feasible short-term alternative to 3He gas. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter - Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  10. Regular Exercise May Boost Prostate Cancer Survival

    MedlinePlus

    ... nih.gov/medlineplus/news/fullstory_158374.html Regular Exercise May Boost Prostate Cancer Survival Study found that ... HealthDay News) -- Sticking to a moderate or intense exercise regimen may improve a man's odds of surviving ...

  11. Do ADHD Medicines Boost Substance Abuse Risk?

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_159904.html Do ADHD Medicines Boost Substance Abuse Risk? Chances were actually ... that their children who take stimulants to treat attention deficit hyperactivity disorder (ADHD) may be at higher risk for substance ...

  12. Anemia Boosts Stroke Death Risk, Study Finds

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_160476.html Anemia Boosts Stroke Death Risk, Study Finds Blood condition ... 2016 (HealthDay News) -- Older stroke victims suffering from anemia -- a lack of red blood cells -- may have ...

  13. Avoiding Anemia: Boost Your Red Blood Cells

    MedlinePlus

    ... link, please review our exit disclaimer . Subscribe Avoiding Anemia Boost Your Red Blood Cells If you’re ... and sluggish, you might have a condition called anemia. Anemia is a common blood disorder that many ...

  14. Old Drug Boosts Brain's Memory Centers

    MedlinePlus

    ... medlineplus/news/fullstory_159605.html Old Drug Boosts Brain's Memory Centers But more research needed before recommending ... called methylene blue may rev up activity in brain regions involved in short-term memory and attention, ...

  15. Tools to Boost Steam System Efficiency

    SciTech Connect

    2005-05-01

    The Steam System Scoping Tool quickly evaluates your entire steam system operation and spots the areas that are the best opportunities for improvement. The tool suggests a range of ways to save steam energy and boost productivity.

  17. Engineering report: Oxygen boost compressor study

    NASA Technical Reports Server (NTRS)

    Tera, L. S.

    1974-01-01

    An oxygen boost compressor is described which supports a self-contained life support system. A preliminary analysis of the compressor is presented along with performance test results, and recommendations for follow-on efforts.

  18. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-12-31

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  20. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  1. Centaur liquid oxygen boost pump vibration test

    NASA Technical Reports Server (NTRS)

    Tang, H. M.

    1975-01-01

    The Centaur LOX boost pump was subjected to both the simulated Titan Centaur proof flight and confidence demonstration vibration test levels. For each test level, both sinusoidal and random vibration tests were conducted along each of the three orthogonal axes of the pump and turbine assembly. In addition to these tests, low frequency longitudinal vibration tests for both levels were conducted. All tests were successfully completed without damage to the boost pump.

  2. The NICMOS Parallel Observing Program

    NASA Astrophysics Data System (ADS)

    McCarthy, Patrick

    2002-07-01

We propose to manage the default set of pure parallels with NICMOS. Our experience with both our GO NICMOS parallel program and the public parallel NICMOS programs in cycle 7 prepared us to make optimal use of the parallel opportunities. The NICMOS G141 grism remains the most powerful survey tool for Hα emission-line galaxies at cosmologically interesting redshifts. It is particularly well suited to addressing two key uncertainties regarding the global history of star formation: the peak rate of star formation in the relatively unexplored but critical 1 ≤ z ≤ 2 epoch, and the amount of star formation missing from UV continuum-based estimates due to high extinction. Our proposed deep G141 exposures will increase the sample of known Hα emission-line objects at z ~ 1.3 by roughly an order of magnitude. We will also obtain a mix of F110W and F160W images along random sight-lines to examine the space density and morphologies of the reddest galaxies. The nature of the extremely red galaxies remains unclear, and our program of imaging and grism spectroscopy provides unique information regarding both the incidence of obscured starbursts and the build-up of stellar mass at intermediate redshifts. In addition to carrying out the parallel program, we will populate a public database with calibrated spectra and images, and provide limited ground-based optical and near-IR data for the deepest parallel fields.

  3. Philippine campaign boosts child immunizations.

    PubMed

    Manuel-santana, R

    1993-03-01

In 1989, USAID awarded the Philippines a 5-year, US $50 million Child Survival Program targeting improvement in immunization coverage of children, prenatal care coverage for pregnant women, and contraceptive prevalence. Upon successful completion of performance benchmarks at the end of each year, USAID released monies to fund child survival activities for the following year. This program accomplished a major program goal, which was decentralization of health planning: the Philippine Department of Health soon incorporated provincial health planning in its determination of allocation of resources. Social marketing activities contributed greatly to success in achieving the goal of boosting the immunization coverage rate for the six antigens listed under the Expanded Program on Immunization (from 51% to 85% of infants, 1986-1991). In fact, rural health officers in Tarlac Province in Central Luzon went from household to household to talk to mothers about the benefits of immunizing a 1-year-old child, thereby contributing greatly to achieving a 95% full immunization coverage rate by December 1991. Social marketing techniques included modern marketing strategies and multimedia channels. They first proved successful in metro Manila, which at the beginning of the campaign had the lowest immunization rate of all 14 regions. Every Wednesday was designated immunization day, when rural health centers vaccinated the children. Social marketing also successfully publicized oral rehydration therapy (ORT), breast feeding, and tuberculosis control. Another contributing factor to program success in child survival activities was private sector involvement. For example, the Philippine Pediatric Society helped to promote ORT as the preferred treatment for acute diarrhea. Further, the commercial sector distributed packets of oral rehydration salts and even advertised its own ORT product.

  4. Boosted Jets at the LHC

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew

    2015-04-01

    Jets are collimated streams of high-energy particles ubiquitous at any particle collider experiment and serve as proxy for the production of elementary particles at short distances. As the Large Hadron Collider at CERN continues to extend its reach to ever higher energies and luminosities, an increasingly important aspect of any particle physics analysis is the study and identification of jets, electroweak bosons, and top quarks with large Lorentz boosts. In addition to providing a unique insight into potential new physics at the tera-electron volt energy scale, high energy jets are a sensitive probe of emergent phenomena within the Standard Model of particle physics and can teach us an enormous amount about quantum chromodynamics itself. Jet physics is also invaluable for lower-level experimental issues including triggering and background reduction. It is especially important for the removal of pile-up, which is radiation produced by secondary proton collisions that contaminates every hard proton collision event in the ATLAS and CMS experiments at the Large Hadron Collider. In this talk, I will review the myriad ways that jets and jet physics are being exploited at the Large Hadron Collider. This will include a historical discussion of jet algorithms and the requirements that these algorithms must satisfy to be well-defined theoretical objects. I will review how jets are used in searches for new physics and ways in which the substructure of jets is being utilized for discriminating backgrounds from both Standard Model and potential new physics signals. Finally, I will discuss how jets are broadening our knowledge of quantum chromodynamics and how particular measurements performed on jets manifest the universal dynamics of weakly-coupled conformal field theories.

  5. Tracking down hyper-boosted top quarks

    DOE PAGESBeta

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  7. Centrifugal compressor design for electrically assisted boost

    NASA Astrophysics Data System (ADS)

    Yang, M. Y.; Martinez-Botas, R. F.; Zhuge, W. L.; Qureshi, U.; Richards, B.

    2013-12-01

    Electrically assisted boost is a prominent method for overcoming transient lag in turbochargers, and it allows the compressor to remain at an optimized operating condition because it is decoupled from the turbine. A centrifugal compressor for gasoline engine boosting usually operates at a rotational speed beyond the capability of electric motors currently on the market. In this paper a centrifugal compressor with a rotational speed of 120k RPM and a pressure ratio of 2.0 is specially developed for electrically assisted boost. The compressor, comprising the impeller, a vaneless diffuser and the volute, is designed by the meanline method followed by detailed 3D design. CFD is then employed to predict and analyse the performance of the designed compressor. The results show that the pressure ratio and efficiency at the design point are 2.07 and 78%, respectively.
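    The meanline sizing described above can be illustrated with a rough Euler-work estimate. The sketch below is not from the paper: the inlet conditions (`t01`, `cp`, `gamma`) and the assumption that specific work equals U^2 (radial impeller, slip neglected) are illustrative defaults, with the 78% design efficiency taken from the record.

```python
import math

def tip_speed_for_pressure_ratio(pr, t01=298.0, cp=1005.0, gamma=1.4, eta=0.78):
    """Tip speed U (m/s) required for a target total pressure ratio,
    assuming specific work ~= U^2 and isentropic efficiency eta."""
    # isentropic total enthalpy rise for the target pressure ratio
    dh0s = cp * t01 * (pr ** ((gamma - 1.0) / gamma) - 1.0)
    # actual work input = dh0s / eta = U^2
    return math.sqrt(dh0s / eta)

U = tip_speed_for_pressure_ratio(2.0)   # ~290 m/s
omega = 120_000 * 2 * math.pi / 60      # 120k RPM in rad/s
D = 2 * U / omega                       # implied impeller tip diameter, m (~46 mm)
```

Such a back-of-the-envelope estimate only bounds the tip diameter; the actual impeller geometry comes from the meanline and 3D design steps.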

  8. Boost Converters for Gas Electric and Fuel Cell Hybrid Electric Vehicles

    SciTech Connect

    McKeever, JW

    2005-06-16

    Hybrid electric vehicles (HEVs) are driven by at least two prime energy sources, such as an internal combustion engine (ICE) and propulsion battery. For a series HEV configuration, the ICE drives only a generator, which maintains the state-of-charge (SOC) of propulsion and accessory batteries and drives the electric traction motor. For a parallel HEV configuration, the ICE is mechanically connected to directly drive the wheels as well as the generator, which likewise maintains the SOC of propulsion and accessory batteries and drives the electric traction motor. Today the prime energy source is an ICE; tomorrow it will very likely be a fuel cell (FC). Use of the FC eliminates a direct drive capability accentuating the importance of the battery charge and discharge systems. In both systems, the electric traction motor may use the voltage directly from the batteries or from a boost converter that raises the voltage. If low battery voltage is used directly, some special control circuitry, such as dual mode inverter control (DMIC) which adds a small cost, is necessary to drive the electric motor above base speed. If high voltage is chosen for more efficient motor operation or for high speed operation, the propulsion battery voltage must be raised, which would require some type of two-quadrant bidirectional chopper with an additional cost. Two common direct current (dc)-to-dc converters are: (1) the transformer-based boost or buck converter, which inverts a dc voltage, feeds the resulting alternating current (ac) into a transformer to raise or lower the voltage, and rectifies it to complete the conversion; and (2) the inductor-based switch mode boost or buck converter [1]. The switch-mode boost and buck features are discussed in this report as they operate in a bi-directional chopper. A benefit of the transformer-based boost converter is that it isolates the high voltage from the low voltage. Usually the transformer is large, further increasing the cost. 
A useful feature
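    The switch-mode boost and buck converters discussed in the report obey simple ideal steady-state relations in continuous conduction: V_out = V_in/(1 - D) for the boost and V_out = D*V_in for the buck, where D is the switch duty cycle. A minimal sketch (the 200 V battery / 500 V dc-link figures are illustrative, not from the report):

```python
def boost_duty(v_in, v_out):
    """Ideal CCM boost converter: V_out = V_in / (1 - D)  =>  D = 1 - V_in/V_out."""
    if v_out < v_in:
        raise ValueError("a boost converter can only step the voltage up")
    return 1.0 - v_in / v_out

def buck_duty(v_in, v_out):
    """Ideal CCM buck converter: V_out = D * V_in  =>  D = V_out/V_in."""
    return v_out / v_in

# e.g. raising an illustrative 200 V propulsion battery to a 500 V dc link:
d = boost_duty(200.0, 500.0)  # -> 0.6
```

In a bi-directional chopper the same relations apply in each direction: boost mode when discharging the battery into the dc link, buck mode when regenerating back into it.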

  9. Music Might Give Babies' Language Skills a Boost

    MedlinePlus

    ... nlm.nih.gov/medlineplus/news/fullstory_158486.html Music Might Give Babies' Language Skills a Boost Small ... April 25, 2016 (HealthDay News) -- Can listening to music boost your baby's brainpower? Maybe, at least in ...

  11. Augmented Replicative Capacity of the Boosting Antigen Improves the Protective Efficacy of Heterologous Prime-Boost Vaccine Regimens

    PubMed Central

    Penaloza-MacMaster, Pablo; Teigler, Jeffrey E.; Obeng, Rebecca C.; Kang, Zi H.; Provine, Nicholas M.; Parenteau, Lily; Blackmore, Stephen; Ra, Joshua; Borducchi, Erica N.

    2014-01-01

    ABSTRACT Prime-boost immunization regimens have proven efficacious at generating robust immune responses. However, whether the level of replication of the boosting antigen impacts the magnitude and protective efficacy of vaccine-elicited immune responses remains unclear. To evaluate this, we primed mice with replication-defective adenovirus vectors expressing the lymphocytic choriomeningitis virus (LCMV) glycoprotein (GP), followed by boosting with either LCMV Armstrong, which is rapidly controlled, or LCMV CL-13, which leads to a more prolonged exposure to the boosting antigen. Although priming of naive mice with LCMV CL-13 normally results in T cell exhaustion and establishment of chronic infection, boosting with CL-13 resulted in potent recall CD8 T cell responses that were greater than those following boosting with LCMV Armstrong. Furthermore, following the CL-13 boost, a greater number of anamnestic CD8 T cells localized to the lymph nodes, exhibited granzyme B expression, and conferred improved protection against Listeria and vaccinia virus challenges compared with the Armstrong boost. Overall, our findings suggest that the replicative capacity of the boosting antigen influences the protective efficacy afforded by prime-boost vaccine regimens. These findings are relevant for optimizing vaccine candidates and suggest a benefit of robustly replicating vaccine vectors. IMPORTANCE The development of optimal prime-boost vaccine regimens is a high priority for the vaccine development field. In this study, we compared two boosting antigens with different replicative capacities. Boosting with a more highly replicative vector resulted in augmented immune responses and improved protective efficacy. PMID:24648461

  12. Boost symmetry in the Quantum Gravity sector

    SciTech Connect

    Cianfrani, Francesco; Montani, Giovanni

    2008-01-03

    We perform a canonical quantization of gravity in a second-order formulation, taking as configuration variables those describing a 4-bein not adapted to the space-time splitting. We outline that invariance under boost transformations is unaffected by the quantization, whether the Lorentz frame is fixed before quantizing or no gauge fixing is performed at all.

  13. The Attentional Boost Effect with Verbal Materials

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Spataro, Pietro; Picklesimer, Milton

    2014-01-01

    Study stimuli presented at the same time as unrelated targets in a detection task are better remembered than stimuli presented with distractors. This attentional boost effect (ABE) has been found with pictorial (Swallow & Jiang, 2010) and more recently verbal materials (Spataro, Mulligan, & Rossi-Arnaud, 2013). The present experiments…

  14. Cleanouts boost Devonian shale gas flow

    SciTech Connect

    Not Available

    1991-02-04

    Cleaning shale debris from the well bores is an effective way to boost flow rates from old open hole Devonian shale gas wells, research on six West Virginia wells begun in 1985 has shown. Officials involved with the study say the Appalachian basin could see 20 year recoverable gas reserves hiked by 315 bcf if the process is used on a wide scale.

  15. Schools Enlisting Defense Industry to Boost STEM

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Defense contractors Northrop Grumman Corp. and Lockheed Martin Corp. are joining forces in an innovative partnership to develop high-tech simulations to boost STEM--or science, technology, engineering, and mathematics--education in the Baltimore County schools. The Baltimore County partnership includes the local operations of two major military…

  16. The Attentional Boost Effect and Context Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro

    2016-01-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…

  17. Weight-Loss Surgery May Boost Survival

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_159166.html Weight-Loss Surgery May Boost Survival Overall death risk dropped ... 3, 2016 THURSDAY, June 2, 2016 (HealthDay News) -- Weight-loss surgery might significantly lower obese people's risk of ...

  18. Committee approves bill to boost NIH funding.

    PubMed

    2015-08-01

    A U.S. House of Representatives committee approved the 21st Century Cures Act. If passed by Congress, the bill would boost funding for the NIH and FDA and introduce new strategies for accelerating the approval of drugs and devices. PMID:26116105

  19. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts relating to parallel processing.

  20. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License version 3 No. of lines in distributed program, including test data, etc.: 54975 No. of bytes in distributed program, including test data, etc.: 974015 Distribution format: tar.gz Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes RAM: 10 MB-10 GB per process Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4] Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and

  1. Occurrence of perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) in N.E. Spanish surface waters and their removal in a drinking water treatment plant that combines conventional and advanced treatments in parallel lines.

    PubMed

    Flores, Cintia; Ventura, Francesc; Martin-Alonso, Jordi; Caixach, Josep

    2013-09-01

    Perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) are two emerging contaminants that have been detected in all environmental compartments. However, while most of the studies in the literature deal with their presence or removal in wastewater treatment, few of them are devoted to their detection in treated drinking water and fate during drinking water treatment. In this study, analyses of PFOS and PFOA have been carried out in river water samples and in the different stages of a drinking water treatment plant (DWTP) which has recently improved its conventional treatment process by adding ultrafiltration and reverse osmosis in a parallel treatment line. Conventional and advanced treatments have been studied in several pilot plants and in the DWTP, which offers the opportunity to compare both treatments operating simultaneously. From the results obtained, neither preoxidation, sand filtration, nor ozonation, removed both perfluorinated compounds. As advanced treatments, reverse osmosis has proved more effective than reverse electrodialysis to remove PFOA and PFOS in the different configurations of pilot plants assayed. Granular activated carbon with an average elimination efficiency of 64±11% and 45±19% for PFOS and PFOA, respectively and especially reverse osmosis, which was able to remove ≥99% of both compounds, were the sole effective treatment steps. Trace levels of PFOS (3.0-21 ng/L) and PFOA (<4.2-5.5 ng/L) detected in treated drinking water were significantly lowered in comparison to those measured in preceding years. These concentrations represent overall removal efficiencies of 89±22% for PFOA and 86±7% for PFOS. PMID:23764674
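    The removal efficiencies quoted in the record follow directly from comparing influent and effluent concentrations. A minimal sketch; the ng/L values below are illustrative numbers within the PFOS range reported, not measurements from the study:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal across a treatment step: 100 * (1 - C_out / C_in)."""
    return 100.0 * (1.0 - c_out / c_in)

# illustrative: river-water influent of 21 ng/L reduced to 3 ng/L in treated water
eff = removal_efficiency(21.0, 3.0)  # ~85.7 % removal
```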

  2. Conformal pure radiation with parallel rays

    NASA Astrophysics Data System (ADS)

    Leistner, Thomas; Nurowski, Paweł

    2012-03-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves.

  3. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  4. FloatBoost learning and statistical face detection.

    PubMed

    Li, Stan Z; Zhang, ZhenQiu

    2004-09-01

    A novel learning procedure, called FloatBoost, is proposed for learning a boosted classifier for achieving the minimum error rate. FloatBoost learning uses a backtrack mechanism after each iteration of AdaBoost learning to minimize the error rate directly, rather than minimizing an exponential function of the margin as in the traditional AdaBoost algorithms. A second contribution of the paper is a novel statistical model for learning best weak classifiers using a stagewise approximation of the posterior probability. These novel techniques lead to a classifier which requires fewer weak classifiers than AdaBoost yet achieves lower error rates in both training and testing, as demonstrated by extensive experiments. Applied to face detection, the FloatBoost learning method, together with a proposed detector pyramid architecture, leads to the first real-time multiview face detection system reported. PMID:15742888
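    The add-then-backtrack idea behind FloatBoost can be sketched on toy one-dimensional data with decision stumps. This is a simplified illustration, not the paper's implementation: it omits the stagewise posterior-probability model for weak learners, does not recompute sample weights after a deletion, and backtracks by deleting any stump whose removal strictly lowers the training error rate, keeping the best ensemble seen so far.

```python
import math

def stump(x, thr, pol):
    """Decision stump: predict `pol` (+1/-1) when x < thr, else -pol."""
    return pol if x < thr else -pol

def ensemble_error(H, xs, ys):
    """Fraction of points misclassified by sign(sum alpha_m * h_m(x))."""
    bad = 0
    for x, y in zip(xs, ys):
        s = sum(a * stump(x, t, p) for a, t, p in H)
        bad += (1 if s >= 0 else -1) != y
    return bad / len(xs)

def floatboost(xs, ys, rounds=6):
    n = len(xs)
    w = [1.0 / n] * n
    H, best_H, best_err = [], None, 1.0
    thresholds = [x + 0.5 for x in sorted(set(xs))]
    for _ in range(rounds):
        # AdaBoost step: pick the stump with minimum weighted error.
        err, thr, pol = min(
            (sum(wi for wi, x, y in zip(w, xs, ys) if stump(x, t, p) != y), t, p)
            for t in thresholds for p in (1, -1))
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * math.log((1.0 - err) / err)
        H.append((alpha, thr, pol))
        # re-weight the samples and normalise
        w = [wi * math.exp(-alpha * y * stump(x, thr, pol))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
        # Backtrack: delete any stump whose removal strictly lowers the
        # ensemble's training error rate (weights are NOT recomputed here,
        # a simplification relative to the published algorithm).
        shrunk = True
        while shrunk and len(H) > 1:
            shrunk = False
            cur = ensemble_error(H, xs, ys)
            for i in range(len(H)):
                trial = H[:i] + H[i + 1:]
                if ensemble_error(trial, xs, ys) < cur:
                    H, shrunk = trial, True
                    break
        # keep the lowest-error ensemble seen so far
        e = ensemble_error(H, xs, ys)
        if e < best_err:
            best_H, best_err = list(H), e
    return best_H

# toy data that no single stump can classify perfectly
xs, ys = [0, 1, 2, 3, 4, 5], [1, 1, -1, -1, 1, 1]
H = floatboost(xs, ys)
```

On this data the best single stump misclassifies two of six points, while the boosted-and-backtracked ensemble does at least as well by construction.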

  5. Bioactive Molecule Prediction Using Extreme Gradient Boosting.

    PubMed

    Babajide Mustapha, Ismail; Saeed, Faisal

    2016-01-01

    Following the explosive growth in chemical and biological data, the shift from traditional methods of drug discovery to computer-aided means has made data mining and machine learning methods integral parts of today's drug discovery process. In this paper, extreme gradient boosting (Xgboost), which is an ensemble of Classification and Regression Tree (CART) and a variant of the Gradient Boosting Machine, was investigated for the prediction of biological activity based on quantitative description of the compound's molecular structure. Seven datasets, well known in the literature were used in this paper and experimental results show that Xgboost can outperform machine learning algorithms like Random Forest (RF), Support Vector Machines (LSVM), Radial Basis Function Neural Network (RBFN) and Naïve Bayes (NB) for the prediction of biological activities. In addition to its ability to detect minority activity classes in highly imbalanced datasets, it showed remarkable performance on both high and low diversity datasets. PMID:27483216
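    Xgboost itself layers regularization, shrinkage schedules and second-order gradients on top of the basic gradient boosting machine, but the core idea — each new tree fits the residuals of the current ensemble — can be sketched with regression stumps and squared loss. The toy data and learning rate below are illustrative, not from the paper.

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump (threshold, left_mean, right_mean) by SSE."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]

def gradient_boost(xs, ys, rounds=10, lr=0.5):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    base = sum(ys) / len(ys)          # initial constant prediction
    stumps, preds = [], [base] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        thr, lm, rm = fit_stump(xs, residuals)
        stumps.append((thr, lm, rm))
        preds = [p + lr * (lm if x <= thr else rm)
                 for p, x in zip(preds, xs)]
    return base, stumps

def predict(model, x, lr=0.5):
    base, stumps = model
    return base + sum(lr * (lm if x <= thr else rm) for thr, lm, rm in stumps)

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1, 1, 1, 1, 5, 5, 5, 5]
model = gradient_boost(xs, ys)
```

With shrinkage (`lr=0.5`) the residuals halve each round, so a handful of stumps is enough to fit this step function closely.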

  6. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(nlogn) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  7. Voltage-Boosting Driver For Switching Regulator

    NASA Technical Reports Server (NTRS)

    Trump, Ronald C.

    1990-01-01

    Driver circuit assures availability of 10- to 15-V gate-to-source voltage needed to turn on n-channel metal oxide/semiconductor field-effect transistor (MOSFET) acting as switch in switching voltage regulator. Includes voltage-boosting circuit efficiently providing gate voltage 10 to 15 V above supply voltage. Contains no exotic parts and does not require additional power supply. Consists of NAND gate and dual voltage booster operating in conjunction with pulse-width modulator part of regulator.

  8. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm to reconstruct a high quality image from a single low resolution image is described. The difficulty in single-image super-resolution is that the generic image priors resident in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. The larger weights are applied to the higher frequency details while the low frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements. It can be seen that the proposed edge boosting algorithm demonstrates high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
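    The role of local variance in choosing enhancement weights can be sketched very simply: flat regions get a small weight, high-variance (edge) regions a large one. This is a much-simplified stand-in for the paper's priority map; the window size, threshold and weight values below are illustrative assumptions.

```python
def local_variance(img, i, j, radius=1):
    """Variance of the pixel neighbourhood around (i, j), clamped at the borders."""
    vals = [img[r][c]
            for r in range(max(0, i - radius), min(len(img), i + radius + 1))
            for c in range(max(0, j - radius), min(len(img[0]), j + radius + 1))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def boost_weights(img, low=0.5, high=2.0, thresh=100.0):
    """Large enhancement weight on high-variance (edge) pixels, small on flat areas."""
    return [[high if local_variance(img, i, j) > thresh else low
             for j in range(len(img[0]))]
            for i in range(len(img))]

# flat region on the left, step edge in the middle:
img = [[10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200]]
w = boost_weights(img)
```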

  9. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how parallel FORTH is implemented on the MPP.

  10. Boosting family income to promote child development.

    PubMed

    Duncan, Greg J; Magnuson, Katherine; Votruba-Drzal, Elizabeth

    2014-01-01

    Families who live in poverty face disadvantages that can hinder their children's development in many ways, write Greg Duncan, Katherine Magnuson, and Elizabeth Votruba-Drzal. As they struggle to get by economically, and as they cope with substandard housing, unsafe neighborhoods, and inadequate schools, poor families experience more stress in their daily lives than more affluent families do, with a host of psychological and developmental consequences. Poor families also lack the resources to invest in things like high-quality child care and enriched learning experiences that give more affluent children a leg up. Often, poor parents also lack the time that wealthier parents have to invest in their children, because poor parents are more likely to be raising children alone or to work nonstandard hours and have inflexible work schedules. Can increasing poor parents' incomes, independent of any other sort of assistance, help their children succeed in school and in life? The theoretical case is strong, and Duncan, Magnuson, and Votruba-Drzal find solid evidence that the answer is yes--children from poor families that see a boost in income do better in school and complete more years of schooling, for example. But if boosting poor parents' incomes can help their children, a crucial question remains: Does it matter when in a child's life the additional income appears? Developmental neurobiology strongly suggests that increased income should have the greatest effect during children's early years, when their brains and other systems are developing rapidly, though we need more evidence to prove this conclusively. The authors offer examples of how policy makers could incorporate the findings they present to create more effective programs for families living in poverty. And they conclude with a warning: if a boost in income can help poor children, then a drop in income--for example, through cuts to social safety net programs like food stamps--can surely harm them. PMID:25518705

  11. Experimental Research in Boost Driver with EDLCs

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hirokazu

    The supply used in servo systems tends to have a high voltage in order to reduce loss and improve the response of motor drives. We propose a new boost motor driver that comprises EDLCs. The proposed driver has a simple structure, wherein the EDLCs are connected in series to the supply, and comprises a charge circuit to charge the EDLCs. The proposed driver has three advantages over conventional boost drivers. The first advantage is that the driver can easily attain a stable boost voltage. The second advantage is that the driver can reduce input power peaks. In a servo system, the input power peaks become greater than the rated power in order to accelerate the motor rapidly. This implies that the equipment that supplies power to servo systems must have sufficient power capacity to satisfy the power peaks. The proposed driver can suppress the increase of the power capacity of supply facilities. The third advantage is that the driver can store almost all of the regenerative energy. Conventional drivers have a braking resistor to suppress the increase in the DC link voltage. This causes a considerable reduction in the efficiency. The proposed driver is more efficient than conventional drivers. In this study, the experimental results confirmed the effectiveness of the proposed driver and showed that the drive performance of the proposed driver is the same as that of a conventional driver. Furthermore, it was confirmed that the results of the simulation of a model of the EDLC module, whose capacitance is dependent on the frequency, correspond well with the experimental results.
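    The energy bookkeeping behind a series-connected EDLC module follows from C = C_cell/n for n identical cells in series and E = ½·C·(V_max² − V_min²) for the usable energy over a voltage swing. The cell capacitance, cell count and voltage swing below are illustrative values, not from the paper.

```python
def series_module_capacitance(cell_c, n_cells):
    """Capacitance of n identical EDLC cells connected in series."""
    return cell_c / n_cells

def usable_energy(c, v_max, v_min):
    """Recoverable energy (J) when the module swings between v_max and v_min."""
    return 0.5 * c * (v_max ** 2 - v_min ** 2)

# illustrative: 100 F cells, 20 in series, discharged from 50 V down to 25 V
c_mod = series_module_capacitance(100.0, 20)  # 5 F module
e = usable_energy(c_mod, 50.0, 25.0)          # 4687.5 J available for a power peak
```

Note that three quarters of the stored energy is recovered over a 2:1 voltage swing, since energy scales with V²; this is why such modules can absorb acceleration peaks and regenerative braking energy without a braking resistor.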

  12. Boost matrix converters in clean energy systems

    NASA Astrophysics Data System (ADS)

    Karaman, Ekrem

    This dissertation describes an investigation of novel power electronic converters, based on the ultra-sparse matrix topology and characterized by the minimum number of semiconductor switches. The Z-source, Quasi Z-source, Series Z-source and Switched-inductor Z-source networks were originally proposed for boosting the output voltage of power electronic inverters. These ideas were extended here on three-phase to three-phase and three-phase to single-phase indirect matrix converters. For the three-phase to three-phase matrix converters, the Z-source networks are placed between the three-switch input rectifier stage and the output six-switch inverter stage. A brief shoot-through state produces the voltage boost. An optimal pulse width modulation technique was developed to achieve high boosting capability and minimum switching losses in the converter. For the three-phase to single-phase matrix converters, those networks are placed similarly. For control purposes, a new modulation technique has been developed. As an example application, the proposed converters constitute a viable alternative to the existing solutions in residential wind-energy systems, where a low-voltage variable-speed generator feeds power to the higher-voltage fixed-frequency grid. Comprehensive analytical derivations and simulation results were carried out to investigate the operation of the proposed converters. Performance of the proposed converters was then compared between each other as well as with conventional converters. The operation of the converters was experimentally validated using a laboratory prototype.
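    For the Z-source network family described above, the voltage boost produced by the brief shoot-through state follows the standard relation B = 1/(1 − 2·D0), where D0 < 0.5 is the shoot-through duty ratio. A minimal sketch of the basic Z-source case (the 25% example is illustrative; the series and switched-inductor variants in the dissertation have different boost expressions):

```python
def z_source_boost(d0):
    """Basic Z-source network boost factor B = 1 / (1 - 2*D0), valid for 0 <= D0 < 0.5."""
    if not 0.0 <= d0 < 0.5:
        raise ValueError("shoot-through duty ratio must satisfy 0 <= D0 < 0.5")
    return 1.0 / (1.0 - 2.0 * d0)

# e.g. a 25% shoot-through interval doubles the dc-link voltage:
B = z_source_boost(0.25)  # -> 2.0
```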

  13. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  14. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  15. A Composite PWM Control Strategy for Boost Converter

    NASA Astrophysics Data System (ADS)

    Qingfeng, Liu; Zhaoxia, Leng; Jinkun, Sun; Huamin, Wang

    In order to improve the control performance of the boost converter under large signal disturbances, a composite PWM control strategy for a boost converter operating in continuous conduction mode (CCM) is proposed in this paper. The parasitic losses of the boost converter are analyzed, and a loss compensation strategy is adopted to design a feed-forward tracker for the converter. The composite PWM controller consists of this tracker and a PID controller. Simulation and experimental results validate the control strategy presented in this paper.
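The composite structure described above (a feed-forward tracker combined with a PID controller) can be sketched as follows. The feed-forward term comes from the ideal CCM steady-state relation; the gains, time step, and measured voltage are illustrative assumptions, not the paper's design:

```python
# Composite duty-cycle command for a boost converter in CCM:
# feed-forward from the ideal relation V_out = V_in / (1 - D), plus PID trim.

def feedforward_duty(v_in, v_ref):
    """Ideal CCM boost relation solved for the duty cycle D."""
    return 1.0 - v_in / v_ref

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

v_in, v_ref = 28.0, 42.0                      # illustrative operating point
pid = PID(kp=0.01, ki=1.0, kd=0.0, dt=1e-4)   # illustrative gains
v_out = 40.0                                  # hypothetical measured output
duty = feedforward_duty(v_in, v_ref) + pid.step(v_ref - v_out)
duty = min(max(duty, 0.0), 0.95)              # clamp to a safe PWM range
```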

  16. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  17. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  18. Parallel execution and scriptability in micromagnetic simulations

    NASA Astrophysics Data System (ADS)

    Fischbacher, Thomas; Franchin, Matteo; Bordignon, Giuliano; Knittel, Andreas; Fangohr, Hans

    2009-04-01

    We demonstrate the feasibility of an "encapsulated parallelism" approach toward micromagnetic simulations that combines offering a high degree of flexibility to the user with the efficient utilization of parallel computing resources. While parallelization is obviously desirable to address the high numerical effort required for realistic micromagnetic simulations using the now widely available multiprocessor systems (including desktop multicore CPUs and computing clusters), conventional approaches toward parallelization impose strong restrictions on the structure of programs: numerical operations have to be executed across all processors in a synchronized fashion. This means that, from the user's perspective, either the structure of the entire simulation is rigidly defined from the beginning and cannot be adjusted easily, or making modifications to the computation sequence requires advanced knowledge in parallel programming. We explain how this dilemma is resolved in the NMAG simulation package in such a way that the user can, without any additional effort, exploit both the computational power of multiple CPUs and the flexibility to tailor execution sequences for specific problems: simulation scripts written for single-processor machines can just as well be executed on parallel machines and behave in precisely the same way, up to increased speed. We provide a simple, instructive magnetic resonance simulation example that demonstrates the use of both custom execution sequences and parallelism at the same time. Furthermore, we show that this strategy of encapsulating parallelism even allows simulations controlled by interactive commands given at a command-line interface to benefit from speed gains through parallel execution.

  19. A multiview boosting approach to tissue segmentation

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Xu, Sheng; Pinto, Peter A.; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter L.; Wood, Bradford J.

    2014-04-01

    Digitized histopathology images have great potential for improving or facilitating current assessment tools in cancer pathology. In order to develop accurate and robust automated methods, the precise segmentation of histologic objects such as epithelium, stroma, and nuclei is necessary, in the hope of extracting information not otherwise obvious to the subjective eye. Here, we propose a multiview boosting approach to segment histologic objects in prostate tissue. Tissue specimen images are first represented at different scales using a Gaussian kernel and converted into several forms such as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen. The method attempts to integrate the information from multiple scales (or views). 18 prostate tissue specimens from 4 patients were employed to evaluate the new method. The method was trained on 11 tissue specimens including 75,832 epithelial and 103,453 stroma pixels and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens. The technique showed 96.7% accuracy, and, as summarized in a receiver operating characteristic (ROC) plot, an area under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984) was achieved.

  20. Centaur boost pump turbine icing investigation

    NASA Technical Reports Server (NTRS)

    Rollbuhler, R. J.

    1976-01-01

    An investigation was conducted to determine if ice formation in the Centaur vehicle liquid oxygen boost pump turbine could prevent rotation of the pump and whether or not this phenomenon could have been the failure mechanism for the Titan/Centaur vehicle TC-1. The investigation consisted of a series of tests done in the LeRC Space Power Chamber Facility to evaluate evaporative cooling behavior patterns in a turbine as a function of the quantity of water trapped in the turbine and as a function of the vehicle ascent pressure profile. It was found that evaporative freezing of water in the turbine housing, due to rapid depressurization within the turbine during vehicle ascent, could result in the formation of ice that would block the turbine and prevent rotation of the boost pump. But for such icing conditions to exist it would be necessary to have significant quantities of water in the turbine and/or its components, and the turbine housing temperature would have to be colder than 40 F at vehicle liftoff.

  1. Low temperature operation of a boost converter

    SciTech Connect

    Moss, B.S.; Boudreaux, R.R.; Nelms, R.M.

    1996-12-31

    The development of satellite power systems capable of operating at low temperatures on the order of 77 K would reduce the heating system required on deep space vehicles. The power supplies in the satellite power system must be capable of operating at these temperatures. This paper presents the results of a study into the operation of a boost converter at temperatures close to 77 K. The boost converter is designed to supply an output voltage and power of 42 V and 50 W from a 28 V input source. The entire system, except the 28 V source, is placed in the environmental chamber. This is important because the system does not require any manual adjustments to maintain a constant output voltage with a high efficiency. The constant 42 V output of this converter is a benefit of the application of a CMOS microcontroller in the feedback path. The switch duty cycle is adjusted by the microcontroller to maintain a constant output voltage. The efficiency of the system varied less than 1% over the temperature range of 22 C to -184 C and was approximately 94.2% when the temperature was -184 C.
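The reported operating point can be cross-checked against the ideal CCM boost relations. A rough sketch using only the figures quoted above (ideal lossless duty cycle, plus the input current implied by the stated efficiency); this is an estimate, not the paper's measurement:

```python
# Operating point implied by the abstract's figures (ideal CCM relations).

v_in, v_out, p_out, eff = 28.0, 42.0, 50.0, 0.942

duty = 1.0 - v_in / v_out       # ideal CCM: V_out = V_in / (1 - D)
i_in = p_out / (eff * v_in)     # input current at the stated efficiency

print(f"duty ~ {duty:.3f}, input current ~ {i_in:.2f} A")
# duty ~ 0.333, input current ~ 1.90 A
```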

  2. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.
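The two stages named above (source-domain clustering, then validation-driven sample selection) can be sketched with a toy learner. Everything below is an illustrative stand-in: the nearest-centroid classifier, the greedy accept-if-not-worse rule, and all data are assumptions, not the paper's actual DAB learners:

```python
import numpy as np

def nearest_centroid_acc(train_x, train_y, val_x, val_y):
    """Accuracy of a toy nearest-centroid classifier on a validation set."""
    cents = {c: train_x[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    labels = np.array(sorted(cents))
    dists = np.stack([np.linalg.norm(val_x - cents[c], axis=1) for c in labels])
    return np.mean(labels[dists.argmin(axis=0)] == val_y)

def select_source_clusters(clusters, target_x, target_y, val_x, val_y):
    """Greedily add source-domain clusters that do not hurt validation accuracy."""
    train_x, train_y = target_x, target_y
    best = nearest_centroid_acc(train_x, train_y, val_x, val_y)
    for cx, cy in clusters:
        cand_x = np.vstack([train_x, cx])
        cand_y = np.concatenate([train_y, cy])
        acc = nearest_centroid_acc(cand_x, cand_y, val_x, val_y)
        if acc >= best:                      # keep clusters that help (or tie)
            train_x, train_y, best = cand_x, cand_y, acc
    return train_x, train_y, best

# Tiny synthetic target domain plus one candidate source cluster:
target_x = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.1], [0.9, 0.9]])
target_y = np.array([0, 1, 0, 1])
val_x, val_y = target_x, target_y
clusters = [(np.array([[0.05, 0.0], [0.95, 1.0]]), np.array([0, 1]))]
tx, ty, best = select_source_clusters(clusters, target_x, target_y, val_x, val_y)
```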

  3. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. 
Computer: Any PC or
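GASPRNG's own C/CUDA API is not reproduced here. As an illustration of the requirement it serves (reproducible, statistically independent per-worker streams), NumPy's `SeedSequence` mechanism solves the same problem on the CPU side:

```python
# Independent, reproducible per-worker random streams via SeedSequence.spawn.

import numpy as np

root = np.random.SeedSequence(12345)
children = root.spawn(4)                  # one child seed per worker/GPU
rngs = [np.random.default_rng(s) for s in children]

# Each worker draws from its own stream; streams do not overlap.
draws = [rng.random(3) for rng in rngs]
```

Spawning is deterministic: rebuilding the same root seed and spawning again reproduces the same child streams, which mirrors the "identical streams" guarantee the abstract highlights for GASPRNG vs. SPRNG.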

  4. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
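Of the three decompositions named above, the spatial decomposition is the easiest to sketch: each rank owns a slab of the simulation box and the atoms currently inside it. A minimal illustration (real MD codes additionally exchange ghost/halo atoms for interactions across slab boundaries):

```python
# Spatial decomposition sketch: map atoms to ranks by slabs along x.

def assign_to_slabs(positions_x, box_length, n_ranks):
    """Return, for each atom's x coordinate, the rank owning that slab."""
    slab = box_length / n_ranks
    # Clamp so an atom exactly at the box edge stays on the last rank.
    return [min(int(x / slab), n_ranks - 1) for x in positions_x]

owners = assign_to_slabs([0.1, 2.5, 7.9, 9.99], box_length=10.0, n_ranks=4)
print(owners)  # [0, 1, 3, 3]
```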

  5. Parallel Climate Analysis Toolkit (ParCAT)

    Energy Science and Technology Software Center (ESTSC)

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sum/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated in scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.

  6. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.

  7. Template matching on parallel architectures

    SciTech Connect

    Sher

    1985-07-01

    Many important problems in computer vision can be characterized as template-matching problems on edge images. Some examples are circle detection and line detection. Two techniques for template matching are the Hough transform and correlation. There are two algorithms for correlation: a shift-and-add-based technique and a Fourier-transform-based technique. The most efficient algorithm of these three varies depending on the size of the template and the structure of the image. On different parallel architectures, the choice of algorithm for a specific problem is different. This paper describes two parallel architectures, the WARP and the Butterfly, and explains why and how the criterion for choosing among the algorithms differs between the two machines.
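The shift-and-add and Fourier-transform correlation algorithms contrasted above compute the same quantity; which is cheaper depends on template size. A 1-D NumPy illustration of their equivalence (illustrative data, not the paper's):

```python
# Direct (shift-and-add) vs. FFT-based cross-correlation give identical results.

import numpy as np

def correlate_direct(image, template):
    n, m = len(image), len(template)
    return np.array([np.dot(image[i:i + m], template)
                     for i in range(n - m + 1)])

def correlate_fft(image, template):
    n, m = len(image), len(template)
    size = n + m - 1                       # zero-pad to avoid circular wrap
    spec = np.fft.rfft(image, size) * np.conj(np.fft.rfft(template, size))
    full = np.fft.irfft(spec, size)
    return full[:n - m + 1]                # keep the fully-overlapping part

img = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0])
tpl = np.array([1.0, 2.0, 1.0])
assert np.allclose(correlate_direct(img, tpl), correlate_fft(img, tpl))
```

Direct correlation costs O(n·m) while the FFT route costs O(n log n), so the FFT wins for large templates and loses for very small ones, which is the trade-off the abstract refers to.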

  8. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
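The final question posed above, the probability that a line's flow falls below its specified minimum, can be sketched with a Monte Carlo estimate. The normal flow distribution and its parameters below are illustrative assumptions, not values from the report:

```python
# Monte Carlo sketch: P(line flow < required minimum) under flow uncertainty.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Hypothetical line flow: mean 12.0, std 1.5 (arbitrary units).
flows = rng.normal(loc=12.0, scale=1.5, size=n_samples)
q_min = 10.0                       # hypothetical specified minimum flow

p_fail = np.mean(flows < q_min)
print(f"P(flow < minimum) ~ {p_fail:.3f}")
```

For these assumed parameters the exact answer is the normal tail probability P(Z < (10 - 12)/1.5), roughly 0.09, so the estimate gives a quick sanity check on the sampling.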

  9. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  10. REBoost: probabilistic resampling for boosted pedestrian detection

    NASA Astrophysics Data System (ADS)

    Lai, Shiming; Liu, Yu; Zhang, Maojun; Theobald, Barry-John

    2011-12-01

    Cascaded object detectors have demonstrated great success in fast object detection, where image regions can quickly be rejected using a cascade of increasingly complex rejectors/detectors. Although such cascaded detectors typically are fast and require minimal computation, they usually require iterative training, where classifiers are retrained to optimize rejection thresholds after testing on a validation set. We propose a cascaded object detector that uses probabilistic resampling for boosting reweighting, which has the advantage that only a single training step is required. Decision thresholds can be tuned on a validation set without the need for classifier retraining. Empirical results on a pedestrian detection task demonstrate that this reweighting results in a strong classifier that quickly rejects image regions and offers higher accuracy than other competing approaches.
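The probabilistic resampling idea above can be sketched in an AdaBoost-style round: instead of passing weights to the learner, the next round's training set is drawn with probability proportional to the updated weights. This is an illustrative stand-in, not the REBoost cascade itself:

```python
# One boosting round with reweighting followed by probabilistic resampling.

import numpy as np

def resample_round(x, y, predictions, weights, rng):
    """AdaBoost-style update for binary labels y in {-1, +1}, then resample."""
    err = np.clip(np.sum(weights[predictions != y]), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)          # this learner's vote
    weights = weights * np.exp(-alpha * y * predictions)
    weights /= weights.sum()
    idx = rng.choice(len(x), size=len(x), p=weights)  # sample by weight
    return x[idx], y[idx], weights, alpha

rng = np.random.default_rng(1)
x = np.arange(8.0)
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])
pred = np.array([1, 1, 1, -1, -1, -1, -1, -1])     # one mistake, at index 3
w = np.full(8, 1 / 8)
x2, y2, w2, alpha = resample_round(x, y, pred, w, rng)
# The misclassified sample (index 3) now carries the largest weight,
# so it is the most likely to be drawn into the next round's training set.
```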

  11. Boosted X Waves in Nonlinear Optical Systems

    SciTech Connect

    Arevalo, Edward

    2010-01-15

    X waves are spatiotemporal optical waves with intriguing superluminal and subluminal characteristics. Here we theoretically show that, for a given initial carrier frequency of the system, localized waves with genuine superluminal or subluminal group velocity can emerge from initial X waves in nonlinear optical systems with normal group velocity dispersion. Moreover, we show that this temporal behavior depends on the wave detuning from the carrier frequency of the system and not on the particular X-wave biconical form. A spatial counterpart of this behavior is also found when initial X waves are boosted in the plane transverse to the direction of propagation, so a fully spatiotemporal motion of localized waves can be observed.

  12. Boosting jet power in black hole spacetimes

    PubMed Central

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W.; Liebling, Steven L.; Motl, Patrick M.; Garrett, Travis

    2011-01-01

    The extraction of rotational energy from a spinning black hole via the Blandford–Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux. PMID:21768341

  13. Boosting jet power in black hole spacetimes

    NASA Astrophysics Data System (ADS)

    Neilsen, D.; Lehner, L.; Palenzuela, C.; Hirschmann, E. W.; Liebling, S. L.; Motl, P. M.; Garrett, T.

    2011-08-01

    The extraction of rotational energy from a spinning black hole via the Blandford-Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux.

  14. Boosted top quarks and jet structure

    NASA Astrophysics Data System (ADS)

    Schätzel, Sebastian

    2015-09-01

    The Large Hadron Collider is the first particle accelerator that provides high enough energy to produce large numbers of boosted top quarks. The decay products of these top quarks are confined to a cone in the top quark flight direction and can be clustered into a single jet. Top quark reconstruction then amounts to analysing the structure of the jet and looking for subjets that are kinematically compatible with top quark decay. Many techniques have been developed in this context to identify top quarks in a large background of non-top jets. This article reviews the results obtained using data recorded in the years 2010-2012 by the experiments ATLAS and CMS. Studies of Standard Model top quark production and searches for new massive particles that decay to top quarks are presented.

  15. Glucose Starvation Boosts Entamoeba histolytica Virulence

    PubMed Central

    Tovy, Ayala; Hertz, Rivka; Siman-Tov, Rama; Syan, Sylvie; Faust, Daniela; Guillen, Nancy; Ankri, Serge

    2011-01-01

    The unicellular parasite, Entamoeba histolytica, is exposed to numerous adverse conditions, such as nutrient deprivation, during its life cycle stages in the human host. In the present study, we examined whether the parasite's virulence could be influenced by glucose starvation (GS). The migratory behaviour of the parasite and its capability to kill mammalian cells and to lyse erythrocytes is strongly enhanced following GS. In order to gain insights into the mechanism underlying the boosting effects of GS on virulence, we analyzed differences in protein expression levels in control and glucose-starved trophozoites by quantitative proteomic analysis. We observed that upstream regulatory element 3-binding protein (URE3-BP), a transcription factor that modulates E. histolytica virulence, and the lysine-rich protein 1 (KRiP1), which is induced during liver abscess development, are upregulated by GS. We also analyzed E. histolytica membrane fractions and noticed that the Gal/GalNAc lectin light subunit LgL1 is up-regulated by GS. Surprisingly, amoebapore A (Ap-A) and cysteine proteinase A5 (CP-A5), two important E. histolytica virulence factors, were strongly down-regulated by GS. While the boosting effect of GS on E. histolytica virulence was conserved in strains silenced for Ap-A and CP-A5, it was lost in LgL1 and in KRiP1 down-regulated strains. These data emphasize the unexpected role of GS in the modulation of E. histolytica virulence and the involvement of KRiP1 and Lgl1 in this phenomenon. PMID:21829737

  16. The attentional boost effect and context memory.

    PubMed

    Mulligan, Neil W; Smith, S Adam; Spataro, Pietro

    2016-04-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors: the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory, but it is unclear whether context memory is likewise affected. Some accounts suggest enhanced perceptual encoding or associative binding, predicting an ABE on context memory, whereas other evidence suggests a more abstract, amodal basis for the effect. In Experiment 1, context memory was assessed in terms of an intramodal perceptual detail, the font and color of the study word. Experiment 2 examined context memory cross-modally, assessing memory for the modality (visual or auditory) of the study word. Experiments 3 and 4 assessed context memory with list discrimination, in which 2 study lists are presented and participants must later remember which list (if either) a test word came from. In all experiments, item (recognition) memory was also assessed and consistently displayed a robust ABE. In contrast, the attentional-boost manipulation did not enhance context memory, whether defined in terms of visual details, study modality, or list membership. There was some evidence that the mode of responding on the detection task (a motoric response as opposed to covert counting of targets) may impact context memory, but there was no evidence of an effect of target detection per se. In sum, the ABE did not occur in context memory with verbal materials. PMID:26348201

  17. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
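The SENSE-type unfolding described above reduces, at acceleration R=2, to solving a small linear system per aliased pixel: two coils see two overlapped true pixels through their sensitivity profiles. A noiseless 1-D toy, with synthetic sensitivities as assumptions:

```python
# Toy SENSE reconstruction at R=2: unfold each aliased pixel pair by
# solving a 2x2 system built from the coil sensitivities.

import numpy as np

n = 8
truth = np.arange(1.0, n + 1)           # "true" 1-D image
s1 = np.linspace(1.0, 0.2, n)           # coil 1 sensitivity (synthetic)
s2 = np.linspace(0.2, 1.0, n)           # coil 2 sensitivity (synthetic)

half = n // 2
# R=2 undersampling folds pixel x onto pixel x + n/2 in each coil image:
a1 = s1[:half] * truth[:half] + s1[half:] * truth[half:]
a2 = s2[:half] * truth[:half] + s2[half:] * truth[half:]

recon = np.empty(n)
for x in range(half):
    S = np.array([[s1[x], s1[x + half]],
                  [s2[x], s2[x + half]]])
    rho = np.linalg.solve(S, np.array([a1[x], a2[x]]))
    recon[x], recon[x + half] = rho

assert np.allclose(recon, truth)        # exact in the noiseless case
```

With noise and more coils, the per-pixel system is solved in the least-squares sense, and its conditioning (the g-factor) governs the noise amplification mentioned in discussions of parallel-imaging artifacts.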

  18. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  19. Development of cassava periclinal chimera may boost production.

    PubMed

    Bomfim, N; Nassar, N M A

    2014-01-01

    Plant periclinal chimeras are genotypic mosaics arranged concentrically. Attempts to produce them by combining different species have been made, but practical results have not been achieved. We report for the second time the development of a very productive interspecific periclinal chimera in cassava. It has very large edible roots, up to 14 kg per plant at one year old, compared to 2-3 kg in common varieties. The epidermal tissue formed was from Manihot esculenta cultivar UnB 032, and the subepidermal and internal tissue from the wild species Manihot fortalezensis. We determined the origin of the tissues by meiotic and mitotic chromosome counts, plant anatomy, and morphology. Epidermal features displayed useful traits for deducing tissue origin: cell shape and size, trichome density, and stomatal length. Chimera roots had a wholly tuberous and edible constitution, with a smaller starch granule size and similar distribution compared to cassava. The root size enlargement might have been due to an epigenetic effect. These results suggest a new line of improved crops based on the development of interspecific chimeras composed of different combinations of wild and cultivated species. It promises to boost cassava production through exceptional root enlargement. PMID:24615046

  20. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  1. First results of the Los Alamos polyphase boost converter-modulator

    SciTech Connect

    Doss, James D.; Gribble, R. F.; Lynch, M. T.; Rees, D. E.; Tallerico, P. J.; Reass, W. A.

    2001-01-01

    This paper describes the first full-scale electrical test results of the Los Alamos polyphase boost converter-modulator being developed for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. The converter-modulator provides 140 kV, 1.2 ms, 60 Hz pulses to a 5 MW, 805 MHz klystron. The system, which has 1 MW average power, derives its +/- 1250 volt DC bus link voltages from a standard 3-phase utility 13.8 kV to 2100 volt transformer. An SCR pre-regulator provides a soft-start function in addition to correction of line and load variations, from no-load to full-load. Energy storage is provided by low-inductance self-clearing metallized hazy polypropylene traction capacitors. Each of the 3-phase H-bridge Insulated Gate Bipolar Transistor (IGBT) Pulse-Width Modulation (PWM) drivers is resonated with the amorphous nanocrystalline boost transformer and associated peaking circuits to provide zero-voltage-switching characteristics for the IGBTs. This design feature minimizes IGBT switching losses. By PWM of individual IGBT conduction angles, output pulse regulation with adaptive feedforward and feedback techniques is used to improve the klystron voltage pulse shape. In addition to the first operational results, this paper will discuss the relevant design techniques associated with the boost converter-modulator topology.

  2. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
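    The abstract names the scheduling problems but not the algorithms themselves. As background, here is a minimal sequential sketch of one of them -- minimizing the number of tardy jobs on a single machine -- using the classic Moore-Hodgson rule; the paper's contribution is parallel shared-memory versions, which this sketch does not reproduce.

```python
import heapq

def min_tardy_jobs(jobs):
    """Moore-Hodgson rule: sequence jobs on one machine so the number of
    tardy jobs is minimized. `jobs` is a list of (processing_time, due_date)
    pairs. Returns (on_time_schedule, tardy_jobs)."""
    heap = []                        # max-heap on processing time (negated)
    tardy, finish = [], 0
    for p, d in sorted(jobs, key=lambda j: j[1]):   # earliest due date first
        heapq.heappush(heap, (-p, p, d))
        finish += p
        if finish > d:               # deadline missed: evict the longest job
            _, p_max, d_max = heapq.heappop(heap)
            finish -= p_max
            tardy.append((p_max, d_max))
    on_time = sorted(((p, d) for _, p, d in heap), key=lambda j: j[1])
    return on_time, tardy
```

    For example, for jobs (2,3), (1,2), (4,5), (3,6) the rule schedules three jobs on time and declares the length-4 job tardy.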

  3. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
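    The abstract describes decomposed parallel sieves without giving details. As an illustrative sketch only (not the paper's hypercube implementation), a segmented variant computes seed primes up to sqrt(n) serially and lets each worker mark composites in its own block; Python processes stand in for the ensemble's processing elements, and all function names are made up for this example.

```python
from math import isqrt
from multiprocessing import Pool

def base_primes(limit):
    """Serial sieve up to `limit`; every worker reuses these seed primes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [p for p in range(2, limit + 1) if is_prime[p]]

def sieve_block(args):
    """Mark composites inside one block [lo, hi) -- the per-processor task."""
    lo, hi, seeds = args
    mark = bytearray([1]) * (hi - lo)
    for p in seeds:
        start = max(p * p, ((lo + p - 1) // p) * p)   # first multiple in block
        mark[start - lo : hi - lo : p] = bytearray(len(mark[start - lo : hi - lo : p]))
    return [lo + i for i, keep in enumerate(mark) if keep]

def parallel_sieve(n, workers=4):
    """Find all primes <= n by sieving `workers` blocks concurrently."""
    seeds = base_primes(isqrt(n))
    step = (n - 1) // workers + 1
    blocks = [(lo, min(lo + step, n + 1), seeds) for lo in range(2, n + 1, step)]
    with Pool(workers) as pool:
        return sorted(p for chunk in pool.map(sieve_block, blocks) for p in chunk)
```

    With a fixed problem size, adding workers shrinks each block, which mirrors the fixed-size speedup experiment in the abstract.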

  4. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. 39% access time improvement, 11% energy reduction, 32 kbit 1-read/1-write 2-port static random-access memory using two-stage read boost and write-boost after read sensing scheme

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi

    2016-04-01

    We propose novel circuit techniques for 1 clock (1CLK) 1 read/1 write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. Two-stage read boost (TSR-BST) and write word line boost (WWL-BST) after read sensing schemes have been proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed using a 40 nm CMOS process.

  6. Linked-View Parallel Coordinate Plot Renderer

    Energy Science and Technology Software Center (ESTSC)

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  7. Novel Control for Voltage Boosted Matrix Converter based Wind Energy Conversion System with Practicality

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod; Joshi, Raghuveer Raj; Yadav, Dinesh Kumar; Garg, Rahul Kumar

    2016-06-01

    This paper presents the implementation and investigation of a novel voltage-boosted matrix converter (MC) based permanent magnet wind energy conversion system (WECS). An on-line tuned adaptive fuzzy control algorithm, cooperating with the reversed MC, is proposed to yield maximum energy. The control system is implemented on a dSPACE DS1104 real-time board. Feasibility of the proposed system has been experimentally verified using a laboratory 1.2 kW prototype of the WECS under steady-state and dynamic conditions.

  8. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  9. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  10. Exploiting tRNAs to Boost Virulence

    PubMed Central

    Albers, Suki; Czech, Andreas

    2016-01-01

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage—either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5′- and 3′-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification. PMID:26797637

  11. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  12. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  13. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    PubMed

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. 
The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm, with negligible differences in accuracy.
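    The description above is enough to sketch the idea in code. The following Python sketch implements squared-error boosting with regression stumps in which each round draws a random subset of markers (columns), as "random boosting" is described here; the stump fitting, candidate thresholds, and shrinkage value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_boost(X, y, n_rounds=50, shrinkage=0.1, m_frac=0.2, rng=None):
    """L2 gradient boosting with regression stumps, where each round sees
    only a random fraction `m_frac` of the markers -- the "random boosting"
    idea. Returns the training predictions and the fitted stumps."""
    rng = np.random.default_rng(rng)
    pred = np.full(len(y), y.mean())          # start from the phenotype mean
    learners = []
    for _ in range(n_rounds):
        resid = y - pred                      # gradient of squared loss
        cols = rng.choice(X.shape[1], max(1, int(m_frac * X.shape[1])),
                          replace=False)
        best = None
        for j in cols:                        # fit a stump on the subset only
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                lmean, rmean = resid[left].mean(), resid[~left].mean()
                sse = ((resid - np.where(left, lmean, rmean)) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, lmean, rmean)
        _, j, t, lmean, rmean = best
        pred += shrinkage * np.where(X[:, j] <= t, lmean, rmean)
        learners.append((j, t, lmean, rmean))
    return pred, learners
```

    Because each weak learner scans only a marker subset, the per-round cost drops roughly in proportion to `m_frac`, which is the source of the speedup the abstract reports.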

  14. Severe Obesity May Boost Infection Risk After Heart Surgery

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_159143.html Severe Obesity May Boost Infection Risk After Heart Surgery Excess ... new study suggests. The researchers found that severe obesity was linked to much higher odds of developing ...

  15. Do Hospital ICUs Raise Costs without Boosting Survival?

    MedlinePlus

    ... news/fullstory_160334.html Do Hospital ICUs Raise Costs Without Boosting Survival? Study finds common medical conditions ... hospital deaths, use of invasive procedures and hospital costs, their findings showed that ICU admission rates ranged ...

  16. High-temperature alloys: Single-crystal performance boost

    NASA Astrophysics Data System (ADS)

    Schütze, Michael

    2016-08-01

    Titanium aluminide alloys are lightweight and have attractive properties for high-temperature applications. A new growth method that enables single-crystal production now boosts their mechanical performance.

  17. Inducing Labor May Not Boost C-Section Risk

    MedlinePlus

    ... fullstory_157560.html Inducing Labor May Not Boost C-Section Risk Study also found that prompting delivery ... they were at no greater risk of a C-section -- or any other negative effects for themselves ...

  18. Zika's Delivery Via Mosquito Bite May Boost Its Effect

    MedlinePlus

    ... nlm.nih.gov/medlineplus/news/fullstory_159484.html Zika's Delivery Via Mosquito Bite May Boost Its Effect ... The inflammation caused by a mosquito bite helps Zika and other viruses spread through the body more ...

  19. Healthy Fats in Mediterranean Diet Won't Boost Weight

    MedlinePlus

    ... Fats in Mediterranean Diet Won't Boost Weight Vegetable oils, nuts can be a part of a healthful ... health benefits and includes healthy fats, such as vegetable oils, fish and nuts," Estruch explained in a journal ...

  20. 49. Interior of launch support building, buck boost transformer at ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    49. Interior of launch support building, buck boost transformer at center, view towards southwest - Ellsworth Air Force Base, Delta Flight, Launch Facility, On County Road T512, south of Exit 116 off I-90, Interior, Jackson County, SD

  1. Omega-3 Fish Oil Supplements Might Boost Antidepressants' Effects

    MedlinePlus

    ... gov/medlineplus/news/fullstory_158505.html Omega-3 Fish Oil Supplements Might Boost Antidepressants' Effects Data from ... TUESDAY, April 26, 2016 (HealthDay News) -- Omega-3 fish oil supplements may improve the effectiveness of antidepressants, ...

  2. A Little Excess Weight May Boost Colon Cancer Survival

    MedlinePlus

    ... 158930.html A Little Excess Weight May Boost Colon Cancer Survival Researchers saw an effect, but experts ... a surprise, a new study found that overweight colon cancer patients tended to have better survival than ...

  3. Could Slight Brain Zap During Sleep Boost Memory?

    MedlinePlus

    ... medlineplus.gov/news/fullstory_160135.html Could Slight Brain Zap During Sleep Boost Memory? Small study says ... HealthDay News) -- Stimulating a targeted area of the brain with small doses of weak electricity while you ...

  6. Testosterone Therapy May Boost Older Men's Sex Lives

    MedlinePlus

    ... 159622.html Testosterone Therapy May Boost Older Men's Sex Lives Gel hormone treatment led to improved libido ... experienced a moderate but significant improvement in their sex drive, sexual activity and erectile function compared to ...

  7. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur

    2015-10-01

    Boosting is a classification method which has been proven useful in non-satellite image processing but is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm in a new land cover classification scenario based on utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively from data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in short computational time and an overall accuracy of 94.4%, compared with 94.0% obtained by using AdaBoost.M1 with trees and 93.8% achieved using Random Forest. The influence of a manipulation of the final threshold of the strong classifier on classification results is reported.
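    A minimal sketch of the scheme described above: AdaBoost.M1 over one-feature threshold classifiers with labels +1/-1. Candidate thresholds are simply scanned here, whereas the paper derives them adaptively from data statistics; all names are illustrative.

```python
import numpy as np

def adaboost_m1(X, y, n_rounds=20):
    """AdaBoost.M1 with simple threshold weak learners. y must be +1/-1.
    Returns a list of (alpha, feature, threshold, sign) weak classifiers."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                # uniform sample weights to start
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                 # scan all single-feature stumps
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    h = sign * np.where(X[:, j] <= t, 1, -1)
                    err = w[h != y].sum()  # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        if err >= 0.5:                     # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        h = sign * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * h)        # boost weight of misclassified pixels
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the weak threshold classifiers."""
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

    Shifting the final decision threshold (the `score >= 0` cut) is the "manipulation of the final threshold of the strong classifier" whose influence the abstract reports.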

  8. Weight Loss Surgery May Boost Good Cholesterol in Obese Boys

    MedlinePlus

    ... Loss Surgery May Boost Good Cholesterol in Obese Boys Small study showed surgery also improved protective effects ... Weight loss surgery could help severely obese teenage boys reduce their risk for heart disease by increasing ...

  10. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  11. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  12. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  13. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  14. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Three implementation approaches, taking advantage of partitioning algorithms and a global shared-memory architecture, are also presented.

  15. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  16. Information geometry of U-Boost and Bregman divergence.

    PubMed

    Murata, Noboru; Takenouchi, Takashi; Kanamori, Takafumi; Eguchi, Shinto

    2004-07-01

    We aim at an extension of AdaBoost to U-Boost, in the paradigm to build a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method in the framework of information geometry extended to the space of the finite measures over a label set. We propose two versions of U-Boost learning algorithms by taking account of whether the domain is restricted to the space of probability functions. In the sequential step, we observe that the two adjacent and the initial classifiers are associated with a right triangle in the scale via the Bregman divergence, called the Pythagorean relation. This leads to a mild convergence property of the U-Boost algorithm as seen in the expectation-maximization algorithm. Statistical discussions for consistency and robustness elucidate the properties of the U-Boost methods based on a stochastic assumption for training data. PMID:15165397
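    The two central objects of the abstract can be written out. For a convex generator U, the Bregman divergence and the "Pythagorean relation" take their standard forms; this is a sketch of the general definitions only -- the paper works in the extended space of finite measures over a label set, whose technicalities are omitted -- and the choice U = exp recovers the exponential-loss geometry underlying AdaBoost.

```latex
% Bregman divergence generated by a convex function U (standard form):
D_U(p, q) \;=\; U(p) - U(q) - \nabla U(q)^{\top}(p - q) \;\ge\; 0 .

% Generalized Pythagorean relation: if q is the Bregman projection of p
% onto an affine set containing r, the divergence decomposes exactly:
D_U(r, p) \;=\; D_U(r, q) + D_U(q, p) .
```

    The exact decomposition is what gives each boosting step a clean accounting of remaining divergence, which underlies the convergence property mentioned in the abstract.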

  17. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than line relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. The parallel implementation of a V-cycle multiple semi-coarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers is addressed. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. A mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited is described. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  18. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching Yuen

    1991-01-01

    A new Lagrangian formulation of the Euler equations is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.

  19. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  20. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
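    The two-level special case of multigrid reduction in time can be sketched compactly. The Python below implements the classic parareal iteration for y' = lam*y, to which a two-level MGRIT cycle with F-relaxation reduces; the backward-Euler propagators and all parameter names are illustrative assumptions, not this package's API.

```python
import numpy as np

def parareal(lam, y0, T, n_coarse, n_fine_sub, n_iter):
    """Two-level parallel-in-time iteration for y' = lam * y, y(0) = y0.

    A cheap serial coarse propagator G corrects expensive fine propagations
    F that could run concurrently over the time slices; here both are
    backward-Euler steps and everything runs serially for clarity."""
    dt_c = T / n_coarse
    dt_f = dt_c / n_fine_sub

    def G(y):                       # one coarse backward-Euler step
        return y / (1 - lam * dt_c)

    def F(y):                       # fine propagation across one slice
        for _ in range(n_fine_sub):
            y = y / (1 - lam * dt_f)
        return y

    U = np.empty(n_coarse + 1)
    U[0] = y0
    for i in range(n_coarse):       # initial serial coarse sweep
        U[i + 1] = G(U[i])
    for _ in range(n_iter):
        Fu = [F(U[i]) for i in range(n_coarse)]   # the parallelizable work
        Gu_old = [G(U[i]) for i in range(n_coarse)]
        for i in range(n_coarse):   # serial coarse correction sweep
            U[i + 1] = G(U[i]) + Fu[i] - Gu_old[i]
    return U
```

    After k iterations the first k time points match the fine serial solution exactly, so the method trades extra (parallel) fine work for a shorter serial critical path, which is the speedup mechanism the abstract describes.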

  1. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively; a plurality of optical delay elements providing n parallel delayed input optical sampling signals; n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals; and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  2. Effects of Nasal Corticosteroids on Boosts of Systemic Allergen-Specific IgE Production Induced by Nasal Allergen Exposure

    PubMed Central

    Egger, Cornelia; Lupinek, Christian; Ristl, Robin; Lemell, Patrick; Horak, Friedrich; Zieglmayer, Petra; Spitzauer, Susanne; Valenta, Rudolf; Niederberger, Verena

    2015-01-01

    Background: Allergen exposure via the respiratory tract and in particular via the nasal mucosa boosts systemic allergen-specific IgE production. Intranasal corticosteroids (INCS) represent a first-line treatment of allergic rhinitis, but their effects on this boost of allergen-specific IgE production are unclear. Aim: Here we aimed to determine in a double-blind, placebo-controlled study whether therapeutic doses of an INCS preparation, i.e., nasal fluticasone propionate, have effects on boosts of allergen-specific IgE following nasal allergen exposure. Methods: Subjects (n = 48) suffering from grass and birch pollen allergy were treated with daily fluticasone propionate or placebo nasal spray for four weeks. After two weeks of treatment, subjects underwent nasal provocation with either birch pollen allergen Bet v 1 or grass pollen allergen Phl p 5. Bet v 1 and Phl p 5-specific IgE, IgG1–4, IgM and IgA levels were measured in serum samples obtained at the time of provocation and one, two, four, six and eight weeks thereafter. Results: Four weeks after provocation, nasal allergen provocation had induced a median increase to 141.1% of serum IgE levels to the allergens used for provocation but not to control allergens. There were no significant differences regarding the boosts of allergen-specific IgE between INCS- and placebo-treated subjects. Conclusion: The application of fluticasone propionate had no significant effects on the boosts of systemic allergen-specific IgE production following nasal allergen exposure. Trial Registration: http://clinicaltrials.gov/ NCT00755066 PMID:25705889

  3. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  4. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  5. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
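The tension between Amdahl's fixed-size argument and the Sandia scaled-size results can be reproduced with two one-line formulas; the 0.1% serial fraction below is a hypothetical figure chosen for illustration only:

```python
def amdahl_speedup(serial_fraction, n):
    # fixed-size problem: the serial part caps the speedup as n grows
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson_speedup(serial_fraction, n):
    # scaled problem: parallel work grows with n, the serial part stays fixed
    return serial_fraction + (1.0 - serial_fraction) * n

n = 1024
s = 0.001  # hypothetical 0.1% serial fraction
print(round(amdahl_speedup(s, n)))     # 506: "over 500", as in the fixed-size runs
print(round(gustafson_speedup(s, n)))  # 1023: near-ideal, as in the scalable runs
```

The same machine thus looks either disappointing or near-ideal depending on whether the problem size is held fixed or scaled with the processor count.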

  6. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  7. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; in the cases of problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  8. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  9. Maximizing boosted top identification by minimizing N-subjettiness

    NASA Astrophysics Data System (ADS)

    Thaler, Jesse; van Tilburg, Ken

    2012-02-01

    N-subjettiness is a jet shape designed to identify boosted hadronic objects such as top quarks. Given N subjet axes within a jet, N-subjettiness sums the angular distances of jet constituents to their nearest subjet axis. Here, we generalize and improve on N-subjettiness by minimizing over all possible subjet directions, using a new variant of the k-means clustering algorithm. On boosted top benchmark samples from the BOOST2010 workshop, we demonstrate that a simple cut on the 3-subjettiness to 2-subjettiness ratio yields 20% (50%) tagging efficiency for a 0.23% (4.1%) fake rate, making N-subjettiness a highly effective boosted top tagger. N-subjettiness can be modified by adjusting an angular weighting exponent, and we find that the jet broadening measure is preferred for boosted top searches. We also explore multivariate techniques, and show that additional improvements are possible using a modified Fisher discriminant. Finally, we briefly mention how our minimization procedure can be extended to the entire event, allowing the event shape N-jettiness to act as a fixed-N cone jet algorithm.
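A stripped-down version of the jet shape might look as follows (β = 1 angular weighting, and fixed rather than minimized axes — the axis minimization being precisely the paper's refinement; the toy jet is invented for illustration):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # angular distance in the (eta, phi) plane, with phi wrap-around
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def tau_N(constituents, axes, R0=1.0):
    """N-subjettiness with beta = 1: pT-weighted distance of each
    constituent to its nearest subjet axis, normalized by R0 * sum(pT)."""
    num = sum(pt * min(delta_r(eta, phi, ax_eta, ax_phi)
                       for ax_eta, ax_phi in axes)
              for pt, eta, phi in constituents)
    return num / (R0 * sum(pt for pt, _, _ in constituents))

# toy two-prong jet: with axes on the prongs, tau_2 << tau_1,
# so a cut on the ratio tau_2 / tau_1 flags two-prong substructure
jet = [(100.0, 0.0, 0.0), (80.0, 0.4, 0.3)]   # (pT, eta, phi) triples
tau1 = tau_N(jet, [(0.2, 0.15)])
tau2 = tau_N(jet, [(0.0, 0.0), (0.4, 0.3)])
print(tau2 / tau1)  # small ratio indicates the jet has two subjets
```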

  10. DNA prime-protein boost using subtype consensus Env was effective in eliciting neutralizing antibody responses against subtype BC HIV-1 viruses circulating in China

    PubMed Central

    Zhang, Mingshun; Zhang, Lu; Zhang, Chunhua; Hong, Kunxue; Shao, Yiming; Huang, Zuhu; Wang, Shixia; Lu, Shan

    2012-01-01

    Previously, we have shown that DNA prime-protein boost is effective in eliciting neutralizing antibodies (NAb) against randomly selected HIV-1 isolates. Given the genetic diversity of HIV-1 viruses and the unique predominant subtypes in different geographic regions, it is critical to test the DNA prime-protein boost approach against circulating viral isolates in key HIV endemic areas. In the current study, the same DNA prime-protein boost vaccine was used as in previous studies to investigate the induction of NAb responses against HIV-1 clade BC, a major subtype circulating in China. A codon optimized gp120-BC DNA vaccine, based on the consensus envelope (Env) antigen sequence of clade BC, was constructed and a stable CHO cell line expressing the same consensus BC gp120 protein was produced. The immunogenicity of this consensus gp120-BC was examined in New Zealand White rabbits by either DNA prime-protein boost or protein alone vaccination approaches. High levels of Env-specific antibody responses were elicited by both approaches. However, DNA prime-protein boost but not the protein alone immune sera contained significant levels of NAb against pseudotyped viruses expressing HIV-1 BC Env antigens. Furthermore, high frequencies of CD4 binding site-targeted antibodies were found in the DNA prime- protein boost rabbit sera indicating that the positive NAb may be the result of antibodies against conformationally sensitive epitopes on HIV-1 Env. The findings support that DNA prime-protein boost was effective in eliciting NAb against a key HIV-1 virus subtype in China. This result may lead to the development of regional HIV vaccines through this approach. PMID:23111170

  11. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  12. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention NAS's future plans for the NPB.

  13. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  14. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  15. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (ESTSC)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  16. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  17. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing machines) dominates modern computing, although another subset (neural networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to perform truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  18. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
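For reference, this is the direct O(N^2) sum that the fast transform accelerates, shown in one dimension; the paper's plane-wave translations, octrees, and parallelization are not reproduced here:

```python
import math

def direct_gauss_sum(sources, weights, targets, h):
    """Direct evaluation of G(y) = sum_i w_i * exp(-|y - x_i|^2 / h^2).
    Cost is O(len(sources) * len(targets)) -- the quadratic baseline
    the fast Gauss transform reduces to near-linear time."""
    return [sum(w * math.exp(-((y - x) ** 2) / (h * h))
                for x, w in zip(sources, weights))
            for y in targets]

vals = direct_gauss_sum([0.0, 1.0], [1.0, 2.0], [0.0], h=1.0)
print(vals)  # [1 + 2*exp(-1)] ≈ [1.7357588823428847]
```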

  19. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  20. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
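The abstract does not spell out the symmetrizing transformation, but one standard route uses view-factor reciprocity A_i F_ij = A_j F_ji: scaling the radiosity system (I - RF)B = E by the diagonal matrix A R^(-1) yields the symmetric matrix A R^(-1) - S, where S_ij = A_i F_ij. A two-patch sketch under that assumption (all numbers invented):

```python
def solve2(M, b):
    # Cramer's rule for a 2x2 linear system
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

# two patches: areas, a symmetric "exchange area" S01 = A0*F01 = A1*F10
# (reciprocity), reflectivities rho, and emission E
A = [2.0, 1.0]
S01 = 0.3
F = [[0.0, S01 / A[0]], [S01 / A[1], 0.0]]
rho = [0.5, 0.4]
E = [1.0, 0.0]

# standard nonsymmetric radiosity system: (I - R F) B = E
M_std = [[1.0 - rho[0] * F[0][0], -rho[0] * F[0][1]],
         [-rho[1] * F[1][0], 1.0 - rho[1] * F[1][1]]]
B_std = solve2(M_std, E)

# scaled system (A R^-1 - S) B = A R^-1 E: the matrix is symmetric
M_sym = [[A[0] / rho[0], -S01], [-S01, A[1] / rho[1]]]
b_sym = [A[0] / rho[0] * E[0], A[1] / rho[1] * E[1]]
B_sym = solve2(M_sym, b_sym)

print(B_std, B_sym)  # identical radiosities from both formulations
```

Symmetry matters because it admits solvers (e.g. conjugate gradients) and theory unavailable for the general nonsymmetric form.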

  1. A Programmable Preprocessor for Parallelizing Fortran-90

    SciTech Connect

    Rosing, Matthew; Yabusaki, Steven B.

    1999-07-01

    A programmable preprocessor that generates portable and efficient parallel Fortran-90 code has been successfully used in the development of a variety of environmental transport simulators for the Department of Energy. The tool provides the basic functionality of a traditional preprocessor where directives are embedded in a serial Fortran program and interpreted by the preprocessor to produce parallel Fortran code with MPI calls. The unique aspect of this work is that the user can make additions to, or modify, these directives. The directives reside in a preprocessor library and changes to this library can range from small changes to customize an existing library, to larger changes for porting a library, to completely replacing the library. The preprocessor is programmed with a library of directives written in a C-like language, called DL, that has added support for manipulating Fortran code fragments. The primary benefits to the user are twofold: It is fairly easy for any user to generate efficient, parallel code from Fortran-90 with embedded directives, and the long term viability of the user's software is guaranteed. This is because the source code will always run on a serial machine (the directives are transparent to standard Fortran compilers), and the preprocessor library can be modified to work with different hardware and software environments. A 4000 line preprocessor library has been written and used to parallelize roughly 50,000 lines of groundwater modeling code. The programs have been ported to a wide range of parallel architectures. Performance of these programs is similar to programs explicitly written for a parallel machine. Binaries of the preprocessor core, as well as the preprocessor library source code used in our groundwater modeling codes are currently available.

  2. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  3. Improve online boosting algorithm from self-learning cascade classifier

    NASA Astrophysics Data System (ADS)

    Luo, Dapeng; Sang, Nong; Huang, Rui; Tong, Xiaojun

    2010-04-01

    Online boosting algorithms have been used in many vision-related applications, such as object detection. However, in order to obtain good detection results, a large number of weak classifiers must be combined into a strong classifier, and those weak classifiers must be updated and improved online, so training and detection speed is inevitably reduced. This paper proposes a novel online boosting based learning method, called the self-learning cascade classifier. A cascade decision strategy is integrated with the online boosting procedure. The resulting system contains a sufficient number of weak classifiers while keeping computation cost low. The cascade structure is learned and updated online, and its complexity can be increased adaptively when the detection task is more difficult. Moreover, most new samples are labeled automatically by tracking, which greatly reduces the labeling effort. We present experimental results that demonstrate the efficiency and high detection rate of the method.

  4. (In)Direct detection of boosted dark matter

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2016-05-01

    We present a new multi-component dark matter model with a novel experimental signature that mimics neutral current interactions at neutrino detectors. In our model, the dark matter is composed of two particles, a heavier dominant component that annihilates to produce a boosted lighter component that we refer to as boosted dark matter. The lighter component is relativistic and scatters off electrons in neutrino experiments to produce Cherenkov light. This model combines the indirect detection of the dominant component with the direct detection of the boosted dark matter. Directionality can be used to distinguish the dark matter signal from the atmospheric neutrino background. We discuss the viable region of parameter space in current and future experiments.
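The "boosted" kinematics follow from two-body annihilation at rest: each light particle carries energy equal to the heavy mass, so its Lorentz factor is simply the mass ratio. The masses below are hypothetical and chosen only for illustration:

```python
import math

def lorentz_gamma(m_heavy, m_light):
    # A A -> B B annihilation at rest: E_B = m_A, so gamma = E_B / m_B = m_A / m_B
    # (natural units, masses in the same units)
    return m_heavy / m_light

def beta(gamma):
    # speed as a fraction of c
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

g = lorentz_gamma(20.0, 0.02)  # hypothetical masses in GeV
print(g, beta(g))  # gamma = 1000: the light component arrives highly relativistic
```

A large mass hierarchy thus makes the lighter component relativistic enough to produce Cherenkov light when it scatters off electrons.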

  5. On modified boosting algorithm for geographic data applications

    NASA Astrophysics Data System (ADS)

    Iwanowski, Michal; Mulawka, Jan

    2015-09-01

    Boosting algorithms constitute one of the essential tools in modern machine learning, one of their primary applications being the improvement of classifier accuracy in supervised learning. The most widespread realization of boosting, known as AdaBoost, is based upon the concept of building a complex predictive model out of a group of simple base models. We present an approach for local assessment of base model accuracy and their improved weighting that captures inhomogeneity present in real-life datasets, in particular those that contain geographic information. Conducted experiments show improvement in the classification accuracy and F-scores of the modified algorithm; however, more experimentation is required to confirm the exact scope of these improvements.
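The paper's local weighting scheme is not specified in the abstract, but the AdaBoost baseline it modifies can be sketched in a few lines, using 1-D threshold stumps as the base models (all names and data illustrative):

```python
import math

def stump(x, t, pol):
    # base model: a threshold rule returning +1 or -1
    return pol if x >= t else -pol

def adaboost(xs, ys, rounds):
    """Plain AdaBoost: reweight the examples each round, greedily pick the
    stump with lowest weighted error, and weight it by 0.5*ln((1-err)/err)."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, pol = min(
            (sum(wi for wi, x, y in zip(w, xs, ys) if stump(x, tt, pp) != y), tt, pp)
            for tt in sorted(set(xs)) for pp in (1, -1))
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        model.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * y * stump(x, t, pol))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]           # renormalize the sample weights
    return model

def predict(model, x):
    return 1 if sum(a * stump(x, t, p) for a, t, p in model) >= 0 else -1

# a labeling no single stump can fit, learned in three boosting rounds
xs, ys = [0, 1, 2, 3], [-1, 1, 1, -1]
model = adaboost(xs, ys, 3)
print([predict(model, x) for x in xs])  # [-1, 1, 1, -1]
```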

  6. Boosted Fast Flux Loop Alternative Cooling Assessment

    SciTech Connect

    Glen R. Longhurst; Donna Post Guillen; James R. Parry; Douglas L. Porter; Bruce W. Wallace

    2007-08-01

    The Gas Test Loop (GTL) Project was instituted to develop the means for conducting fast neutron irradiation tests in a domestic radiation facility. It made use of booster fuel to achieve the high neutron flux, a hafnium thermal neutron absorber to attain the high fast-to-thermal flux ratio, a mixed gas temperature control system for maintaining experiment temperatures, and a compressed gas cooling system to remove heat from the experiment capsules and the hafnium thermal neutron absorber. This GTL system was determined to provide a fast (E > 0.1 MeV) flux greater than 1.0E+15 n/cm2-s with a fast-to-thermal flux ratio in the vicinity of 40. However, the estimated system acquisition cost from earlier studies was deemed to be high. That cost was strongly influenced by the compressed gas cooling system for experiment heat removal. Designers were challenged to find a less expensive way to achieve the required cooling. This report documents the results of the investigation leading to an alternatively cooled configuration, referred to now as the Boosted Fast Flux Loop (BFFL). This configuration relies on a composite material comprised of hafnium aluminide (Al3Hf) in an aluminum matrix to transfer heat from the experiment to pressurized water cooling channels while at the same time providing absorption of thermal neutrons. Investigations into the performance this configuration might achieve showed that it should perform at least as well as its gas-cooled predecessor. Physics calculations indicated that the fast neutron flux averaged over the central 40 cm (16 inches) relative to ATR core mid-plane in irradiation spaces would be about 1.04E+15 n/cm2-s. The fast-to-thermal flux ratio would be in excess of 40. Further, the particular configuration of cooling channels was relatively unimportant compared with the total amount of water in the apparatus in determining performance. Thermal analyses conducted on a candidate configuration showed the design of the water coolant and

  7. The Lateral Decubitus Breast Boost: Description, Rationale, and Efficacy

    SciTech Connect

    Ludwig, Michelle S.; McNeese, Marsha D.; Buchholz, Thomas A.; Perkins, George H.; Strom, Eric A.

    2010-01-15

    Purpose: To describe and evaluate the modified lateral decubitus boost, a breast irradiation technique. Patients are repositioned and resimulated for electron boost to minimize the necessary depth for the electron beam and optimize target volume coverage. Methods and Materials: A total of 2,606 patients were treated with post-lumpectomy radiation at our institution between January 1, 2000, and February 1, 2008. Of these, 231 patients underwent resimulation in the lateral decubitus position with electron boost. Distance from skin to the maximal depth of target volume was measured in both the original and boost plans. Age, body mass index (BMI), boost electron energy, and skin reaction were evaluated. Results: Resimulation in the lateral decubitus position reduced the distance from skin to maximal target volume depth in all patients. Average depth reduction by repositioning was 2.12 cm, allowing for an average electron energy reduction of approximately 7 MeV. Mean skin entrance dose was reduced from about 90% to about 85% (p < 0.001). Only 14 patients (6%) experienced moist desquamation in the boost field at the end of treatment. Average BMI of these patients was 30.4 (range, 17.8-50.7). BMI greater than 30 was associated with more depth reduction by repositioning and increased risk of moist desquamation. Conclusions: The lateral decubitus position allows for a decrease in the distance from the skin to the target volume depth, improving electron coverage of the tumor bed while reducing skin entrance dose. This is a well-tolerated regimen for a patient population with a high BMI or deep tumor location.
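The quoted ~7 MeV average energy reduction per 2.12 cm of depth is consistent with a common clinical rule of thumb of roughly 3.3 MeV of electron beam energy per centimeter of therapeutic depth in tissue; the abstract does not state which rule the authors used, so this is only a plausibility check:

```python
MEV_PER_CM = 3.3           # assumed rule of thumb: therapeutic depth ~ E / 3.3 cm
depth_reduction_cm = 2.12  # average depth reduction reported above
energy_reduction_mev = MEV_PER_CM * depth_reduction_cm
print(round(energy_reduction_mev, 1))  # 7.0 MeV, matching the reported average
```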

  8. 2001 BUDGET: Research Gets Hefty Boost in 2001 Defense Budget.

    PubMed

    Malakoff, D

    2000-09-01

    Next year's $289 billion defense budget, which President Bill Clinton signed last month, includes big boosts for a host of science programs, from endangered species research to developing laser weapons. And with the two major presidential candidates pledging further boosts, the Pentagon's portfolio is attracting increasing attention from the life sciences community as well. But some analysts worry that Congress and the Pentagon may be shortchanging long-term, high-risk research in favor of projects with a more certain payoff. PMID:17811142

  9. Boosted Objects: A Probe of Beyond the Standard Model Physics

    SciTech Connect

    Abdesselam, A.; Bergeaas Kuutmann, E.; Bitenc, U.; Brooijmans, G.; Butterworth, J.; Bruckman de Renstrom, P.; Buarque Franzosi, D.; Buckingham, R.; Chapleau, B.; Dasgupta, M.; Davison, A.; Dolen, J.; Ellis, S.; Fassi, F.; Ferrando, J.; Frandsen, M.T.; Frost, J.; Gadfort, T.; Glover, N.; Haas, A.; Halkiadakis, E.; et al.

    2012-06-12

    We present the report of the hadronic working group of the BOOST2010 workshop held at the University of Oxford in June 2010. The first part contains a review of the potential of hadronic decays of highly boosted particles as an aid for discovery at the LHC and a discussion of the status of tools developed to meet the challenge of reconstructing and isolating these topologies. In the second part, we present new results comparing the performance of jet grooming techniques and top tagging algorithms on a common set of benchmark channels. We also study the sensitivity of jet substructure observables to the uncertainties in Monte Carlo predictions.

  10. A methodology for boost-glide transport technology planning

    NASA Technical Reports Server (NTRS)

    Repic, E. M.; Olson, G. A.; Milliken, R. J.

    1974-01-01

    A systematic procedure is presented by which the relative economic value of technology factors affecting design, configuration, and operation of boost-glide transport can be evaluated. Use of the methodology results in identification of first-order economic gains potentially achievable by projected advances in each of the definable, hypersonic technologies. Starting with a baseline vehicle, the formulas, procedures and forms which are integral parts of this methodology are developed. A demonstration of the methodology is presented for one specific boost-glide system.

  11. Complexified boost invariance and holographic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Gubser, Steven S.; van der Schee, Wilke

    2015-01-01

    At strong coupling holographic studies have shown that heavy ion collisions do not obey normal boost invariance. Here we study a modified boost invariance through a complex shift in time, and show that this leads to surprisingly good agreement with numerical holographic computations. When including perturbations the agreement becomes even better, both in the hydrodynamic and the far-from-equilibrium regime. One of the main advantages is an analytic formulation of the stress-energy tensor of the longitudinal dynamics of holographic heavy ion collisions.

  12. Digital parallel-to-series pulse-train converter

    NASA Technical Reports Server (NTRS)

    Hussey, J.

    1971-01-01

    Circuit converts number represented as two level signal on n-bit lines to series of pulses on one of two lines, depending on sign of number. Converter accepts parallel binary input data and produces number of output pulses equal to number represented by input data.
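
The conversion described in this brief can be sketched in software. The function below is our own illustration, not the circuit's actual logic: it assumes the n-bit word is MSB-first two's complement (the brief does not specify the encoding) and reports how many pulses go out on which of the two lines.

```python
def parallel_to_pulses(bits):
    """Software analogue of the converter: interpret an n-bit MSB-first
    two's-complement parallel word and emit |value| pulses on either the
    'plus' or the 'minus' line, depending on the sign of the number."""
    n = len(bits)
    value = int("".join(str(b) for b in bits), 2)
    if bits[0] == 1:          # sign bit set: two's-complement negative
        value -= 1 << n
    line = "minus" if value < 0 else "plus"
    return line, abs(value)   # number of output pulses on that line

# a 4-bit word 1011 represents -5, so five pulses appear on the minus line
```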

  13. 10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND SOUTH PIER. LOOKING SOUTHEAST. - Route 31 Bridge, New Jersey Route 31, crossing disused main line of Central Railroad of New Jersey (C.R.R.N.J.) (New Jersey Transit's Raritan Valley Line), Hampton, Hunterdon County, NJ

  14. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT
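
The frame-length arithmetic behind the claimed speedup can be checked directly: the plasma column Lorentz-contracts by gamma_b, while the co-propagating laser pulse stretches by (1 + beta_b) * gamma_b, so a lab-frame length mismatch of ~10^4 closes for a boost gamma of a few tens. The numbers below are illustrative, not the dissertation's actual simulation parameters.

```python
import math

def boosted_lengths(L_plasma, L_laser, gamma_b):
    """Plasma length contracts by gamma_b; a laser pulse co-propagating with
    the boost direction is stretched by (1 + beta_b) * gamma_b."""
    beta_b = math.sqrt(1.0 - 1.0 / gamma_b**2)
    return L_plasma / gamma_b, (1.0 + beta_b) * gamma_b * L_laser

# lab frame: plasma ~1e4 times longer than the pulse (the mismatch above)
Lp, Ll = boosted_lengths(1.0, 1.0e-4, 70.0)
# boosted frame: Lp/Ll is close to 1, i.e. the two lengths are comparable
```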

  16. EARLY CHILDHOOD INVESTMENTS SUBSTANTIALLY BOOST ADULT HEALTH

    PubMed Central

    Campbell, Frances; Conti, Gabriella; Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-01-01

    High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report the long-term health impacts of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143, while only 126 among the treated. One in four males in the control group is affected by metabolic syndrome, while none in the treatment group is. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health. PMID:24675955
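
The permutation-test idea used in the study can be illustrated with a minimal sketch of a two-sample mean-difference test (the study's actual inference, with attrition and multiple-hypothesis adjustments, is considerably more involved; all names here are ours).

```python
import random

def permutation_test(treated, control, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means:
    the p-value is the fraction of random label shuffles whose mean
    difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    k = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# for small samples this approximates the exact test by random sampling
# of permutations rather than full enumeration
```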

  17. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
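
The overlap-and-save method underlying these architectures can be modeled with NumPy. This is a software sketch only (the report concerns VLSI hardware); the function name, the `fft_len` parameter, and the block bookkeeping are our illustration of the standard technique.

```python
import numpy as np

def overlap_save(x, h, fft_len=64):
    """Filter a long signal x with FIR taps h using fft_len-point
    DFT/IDFT blocks (overlap-and-save)."""
    M = len(h)
    step = fft_len - (M - 1)            # new samples consumed per block
    H = np.fft.rfft(h, fft_len)         # subfilter frequency response
    # prime with M-1 zeros, pad the tail so the last block is full
    x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(step)])
    out = []
    for start in range(0, len(x), step):
        block = x_pad[start:start + fft_len]
        if len(block) < fft_len:
            block = np.pad(block, (0, fft_len - len(block)))
        y = np.fft.irfft(np.fft.rfft(block) * H, fft_len)
        out.append(y[M - 1:])           # discard the M-1 aliased samples
    return np.concatenate(out)[:len(x)]

# agrees with direct convolution:
# np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```

Note how the DFT size is set by the desired block rate, not by the filter order: longer filters simply consume more of each block as overlap.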

  18. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  19. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  20. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
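
The two-phase scheme in the claim can be sketched serially for a hypothetical 1-D grid (the patent covers general grids and real parallel processors; the geometry and names here are purely illustrative). Objects are intervals, and each of the n "processors" is just a loop iteration.

```python
def populate_grid(objects, n, grid_min=0.0, grid_max=1.0):
    """Two-phase grid population sketch: objects are (lo, hi) extents on a
    1-D axis; the grid is split into n equal portions, one per processor."""
    width = (grid_max - grid_min) / n
    membership = [[] for _ in range(n)]
    # phase 1: determine which portions at least partially bound each object
    for obj in objects:
        lo, hi = obj
        first = max(0, int((lo - grid_min) / width))
        last = min(n - 1, int((hi - grid_min) / width))
        for p in range(first, last + 1):
            membership[p].append(obj)
    # phase 2: each "processor" populates its own portion from the lists
    # built in phase 1 (here the lists already are the populated portions)
    return membership

# an object straddling a portion boundary is recorded in both portions
```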

  1. Seeing in parallel

    SciTech Connect

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  2. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  3. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  4. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  5. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  6. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  7. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  8. Adaptive parallel logic networks

    SciTech Connect

    Martinez, T.R.; Vidal, J.J.

    1988-02-01

    This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems). Intended applications include adaptive logic devices, robotics, process control, system malfunction management, and in general, applications of logic reasoning. ASOCS combines massive parallelism with self-organization to attain a distributed mechanism for adaptation. The ASOCS approach is based on an adaptive network composed of many simple computing elements (nodes) which operate in a combinational and asynchronous fashion. Problem specification (programming) is obtained by presenting to the system if-then rules expressed as Boolean conjunctions. New rules are added incrementally. In the current model, when conflicts occur, precedence is given to the most recent inputs. With each rule, desired network response is simply presented to the system, following which the network adjusts itself to maintain consistency and parsimony of representation. Data processing and adaptation form two separate phases of operation. During processing, the network acts as a parallel hardware circuit. Control of the adaptive process is distributed among the network nodes and efficiently exploits parallelism.

  9. Introductory Remarks to Cosmic Background Parallel Sessions.

    NASA Astrophysics Data System (ADS)

    Burigana, Carlo; de Bernardis, Paolo; Masi, Silvia; Norgaard-Nielsen, Hans Ulrik

    2015-01-01

    These are promising times for the study of cosmic microwave background and foregrounds. While, at the date of this meeting, WMAP is close to release its final maps and products, Planck early and intermediate results have been presented with the first release of the compact source catalog, and the presentation of the first cosmological products is approaching. This parallel session is focussed on the astrophysical sky as seen by Planck and other observatories, and on their scientific exploitation, regarding diffuse emissions, sources, galaxy clusters, cosmic infrared background, as well as on critical issues coming from systematic effects and data analysis, in the view of fundamental physics and cosmology perspectives. At the same time, a new generation of CMB anisotropy and polarization experiments is currently operated using large arrays of detectors, boosting the sensitivity and resolution of the surveys to unprecedented levels. Mainstream projects are observations of the polarization of the CMB, looking for the inflationary B-modes at large and intermediate angular scales, fine-scale measurements of the Sunyaev-Zel'dovich effect in clusters of galaxies, and the precise measure of CMB spectrum.

  10. Lock-in-detection-free line-scan stimulated Raman scattering microscopy for near video-rate Raman imaging.

    PubMed

    Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2016-09-01

    We report on the development of a unique lock-in-detection-free line-scan stimulated Raman scattering microscopy technique based on a linear detector with a large full well capacity controlled by a field-programmable gate array (FPGA) for near video-rate Raman imaging. With the use of parallel excitation and detection scheme, the line-scan SRS imaging at 20 frames per second can be acquired with a ∼5-fold lower excitation power density, compared to conventional point-scan SRS imaging. The rapid data communication between the FPGA and the linear detector allows a high line-scanning rate to boost the SRS imaging speed without the need for lock-in detection. We demonstrate this lock-in-detection-free line-scan SRS imaging technique using the 0.5 μm polystyrene and 1.0 μm poly(methyl methacrylate) beads mixed in water, as well as living gastric cancer cells. PMID:27607947

  11. Gentle Nearest Neighbors Boosting over Proper Scoring Rules.

    PubMed

    Nock, Richard; Ali, Wafa Bel Haj; D'Ambrosio, Roberto; Nielsen, Frank; Barlaud, Michel

    2015-01-01

    Tailoring nearest neighbors algorithms to boosting is an important problem. Recent papers study an approach, UNN, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the UNN algorithm, at the possible expense of the algorithm's convergence and performance. In this paper, we propose a lightweight Newton-Raphson alternative optimizing proper scoring rules from a very broad set, and establish formal convergence rates under the boosting framework that compete with those known for UNN. To the best of our knowledge, no such boosting-compliant convergence rates were previously known in the popular Gentle Adaboost's lineage. We provide experiments on a dozen domains, including Caltech and SUN computer vision databases, comparing our approach to major families including support vector machines, (Ada)boosting and stochastic gradient descent. They support three major conclusions: (i) GNNB significantly outperforms UNN, in terms of convergence rate and quality of the outputs, (ii) GNNB performs on par with or better than computationally intensive large margin approaches, (iii) on large domains that rule out those latter approaches for computational reasons, GNNB provides a simple and competitive contender to stochastic gradient descent. Experiments include a divide-and-conquer improvement of GNNB exploiting the link with proper scoring rules optimization. PMID:26353210

  12. Testosterone Therapy May Boost Older Men's Sex Lives

    MedlinePlus

    Testosterone Therapy May Boost Older Men's Sex Lives. Gel hormone treatment led to improved libido ... (https://www.nlm.nih.gov/medlineplus/news/fullstory_159622.html)

  13. Boost compensator for use with internal combustion engine with supercharger

    SciTech Connect

    Asami, T.

    1988-04-12

    A boost compensator for controlling the position of a control rack of a fuel injection pump to supply fuel to an internal combustion engine with a supercharger in response to a boost pressure to be applied to the engine is described. The control rack is movable in a first direction increasing an amount of fuel to be supplied by the fuel injection pump to the engine and in a second direction, opposite to the first direction, decreasing the amount of fuel. The boost compensator comprises: a push rod disposed for forward and rearward movement in response to the boost pressure; a main lever disposed for angular movement about a first pivot; an auxiliary lever disposed for angular movement about a second pivot; return spring means associated with the first portion of the auxiliary lever for resiliently biasing same in one direction about the second pivot; and abutment means mounted on the second portion of the auxiliary lever and engageable with the second portion of the main lever.

  14. Could Weight-Loss Surgery Boost Odds of Preemie Birth?

    MedlinePlus

    Could Weight-Loss Surgery Boost Odds of Preemie Birth? Monitoring is ... (HealthDay News) -- Mothers-to-be who've had weight-loss surgery may have increased odds for premature delivery, ... (https://medlineplus.gov/news/fullstory_160596.html)

  15. Predicting protein structural class with AdaBoost Learner.

    PubMed

    Niu, Bing; Cai, Yu-Dong; Lu, Wen-Cong; Li, Guo-Zheng; Chou, Kuo-Chen

    2006-01-01

    The structural class is an important feature in characterizing the overall topological folding type of a protein or the domains therein. Prediction of protein structural classification has attracted the attention and efforts from many investigators. In this paper a novel predictor, the AdaBoost Learner, was introduced to deal with this problem. The essence of the AdaBoost Learner is that a combination of many 'weak' learning algorithms, each performing just slightly better than a random guessing algorithm, will generate a 'strong' learning algorithm. Demonstration through jackknife cross-validation on two working datasets constructed by previous investigators indicated that AdaBoost outperformed other predictors such as SVM (support vector machine), a powerful algorithm widely used in the biological literature. It has not escaped our notice that AdaBoost may hold a high potential for improving the quality in predicting the other protein features as well, such as subcellular location and receptor type, among many others. Or at the very least, it will play a complementary role to many of the existing algorithms in this regard. PMID:16800803
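
The "many weak learners make a strong one" idea at the heart of AdaBoost can be shown with a minimal sketch over decision stumps (a generic textbook formulation, not the paper's exact predictor or features).

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    """Minimal AdaBoost: y in {-1,+1}; the weak learner is the best
    single-feature threshold stump under the current sample weights."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):                      # search all stumps
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        if err >= 0.5:
            break                               # no better than chance
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)
```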

  16. Boosting NAD+ for the prevention and treatment of liver cancer

    PubMed Central

    Djouder, Nabil

    2015-01-01

    Hepatocellular carcinoma (HCC) is the third leading cause of cancer death worldwide yet has limited therapeutic options. We recently demonstrated that inhibition of de novo nicotinamide adenine dinucleotide (NAD+) synthesis is responsible for DNA damage, thereby initiating hepatocarcinogenesis. We propose that boosting NAD+ levels might be used as a prophylactic or therapeutic approach in HCC. PMID:27308492

  17. Boost IORT in Breast Cancer: Body of Evidence

    PubMed Central

    Sedlmayer, Felix; Reitsamer, Roland; Fussl, Christoph; Ziegler, Ingrid; Deutschmann, Heinz; Kopp, Peter

    2014-01-01

    The term IORT (intraoperative radiotherapy) is currently used for various techniques that show decisive differences in dose delivery. The largest evidence for boost IORT preceding whole breast irradiation (WBI) originates from intraoperative electron treatments with single doses around 10 Gy, providing outstandingly low local recurrence rates in any risk constellation also at long term analyses. Compared to other boost methods, an intraoperative treatment has evident advantages as follows. Precision. Direct visualisation of the tumour bed during surgery guarantees an accurate dose delivery. This fact has additionally gained importance in times of primary reconstruction techniques after lumpectomy to optimise cosmetic outcome. IORT is performed before breast tissue is mobilised for plastic purposes. Cosmesis. As a consequence of direct tissue exposure without distension by hematoma/seroma, IORT allows for small treatment volumes and complete skin sparing, both having a positive effect on late tissue tolerance and, hence, cosmetic appearance. Patient Comfort. Boost IORT marginally prolongs the surgical procedure, while significantly shortening postoperative radiotherapy. Its combination with a 3-week hypofractionated external beam radiotherapy to the whole breast (WBI) is presently tested in the HIOB trial (hypofractionated WBI preceded by IORT electron boost), a prospective multicenter trial of the International Society of Intraoperative Radiotherapy (ISIORT). PMID:25258684

  18. Jet Boost Pumps For The Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Meng, Sen Y.

    1991-01-01

    Brief report proposes use of jet boost pumps in conjunction with main pumps supplying liquid hydrogen and liquid oxygen to main engine of Space Shuttle. Main part of pump has no moving parts. Benefits include increased reliability, simplified ducts, and decreased weight.

  19. Boosting NAD(+) for the prevention and treatment of liver cancer.

    PubMed

    Djouder, Nabil

    2015-01-01

    Hepatocellular carcinoma (HCC) is the third leading cause of cancer death worldwide yet has limited therapeutic options. We recently demonstrated that inhibition of de novo nicotinamide adenine dinucleotide (NAD(+)) synthesis is responsible for DNA damage, thereby initiating hepatocarcinogenesis. We propose that boosting NAD(+) levels might be used as a prophylactic or therapeutic approach in HCC. PMID:27308492

  20. Balance-Boosting Footwear Tips for Older People

    MedlinePlus

    ... Balance in all aspects of life is a good ... mental equilibrium isn't the only kind of balance that's important in life. Good physical balance can ...

  1. Synthetic aperture radar automatic target recognition using adaptive boosting

    NASA Astrophysics Data System (ADS)

    Sun, Yijun; Liu, Zhipeng; Todorovic, Sinisa; Li, Jian

    2005-05-01

    We propose a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the MSTAR public release database. First, each image chip is pre-processed by extracting fine and raw feature sets, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) net as the base learner. Since the RBF net is a binary classifier, we decompose our multiclass problem into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF net for each binary problem into a code word, which is then "decoded" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature.
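
    The ECOC decomposition described above can be sketched with off-the-shelf components. The snippet below is a minimal illustration, not the authors' system: it substitutes a default stump-based AdaBoost for the RBF-net base learner and synthetic data for the MSTAR image chips.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier

# Synthetic 3-class data standing in for the three MSTAR vehicle classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

# ECOC turns the 3-class task into a set of binary problems; each binary
# problem is solved by AdaBoost (the paper's base learner is an RBF net;
# a stump-based AdaBoost is used here purely for illustration).
clf = OutputCodeClassifier(AdaBoostClassifier(n_estimators=50, random_state=0),
                           code_size=2.0, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    `code_size` controls how many binary classifiers make up the code word per class; decoding assigns each sample to the class whose code word is nearest the vector of binary outputs.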

  2. Boosting Teachers' Self-Esteem: A Dropout Prevention Strategy.

    ERIC Educational Resources Information Center

    Ruben, Ann Moliver

    Good teachers leave teaching not because pay is low but because of poor working conditions and too little recognition. Since students can be strongly affected by teachers, teachers who feel negatively about themselves can adversely affect students. A five-evening workshop was developed in Dade County, Florida to boost teachers' self-esteem and to…

  3. Repetitive peptide boosting progressively enhances functional memory CTLs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Induction of functional memory CTLs holds promise for fighting critical infectious diseases through vaccination, but so far, no effective regime has been identified. We show here that memory CTLs can be enhanced progressively to high levels by repetitive intravenous boosting with peptide and adjuvan...

  4. Elf Atochem boosts production of CFC substitutes

    SciTech Connect

    Not Available

    1992-05-01

    To carve out a larger share of the market for acceptable chlorofluorocarbon substitutes, Elf Atochem (Paris) is expanding its production of HFC-134a, HCFC-141b and HCFC-142b in the U.S. and in France. This paper reports that the company is putting the finishing touches on a plant at its Pierre-Benite (France) facility, to bring 9,000 m.t./yr (19.8 million lb) of HFC-134a capacity on-line by September. Construction is scheduled to begin next year at the company's Calvert City, Ky., plant, where a 15,000-m.t./yr (33-million-lb) unit for HFC-134a will come onstream by 1995.

  5. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
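
    As a toy illustration of the shooting idea (not the guidance problems in the study), the sketch below solves a two-point boundary value problem by multiple shooting: the interval is split into segments whose integrations are mutually independent, and hence parallelizable, while a root finder enforces the matching and boundary conditions. The example problem and all names are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Toy BVP: y'' = -y with y(0) = 0, y(pi/2) = 1; exact solution y = sin(t).
def rhs(t, s):
    return [s[1], -s[0]]

t0, tmid, tf = 0.0, np.pi / 4, np.pi / 2

def residual(z):
    v0, ym, vm = z  # unknown initial slope and mid-point state
    # The two segment integrations are independent -> parallelizable.
    seg1 = solve_ivp(rhs, (t0, tmid), [0.0, v0], rtol=1e-10, atol=1e-12)
    seg2 = solve_ivp(rhs, (tmid, tf), [ym, vm], rtol=1e-10, atol=1e-12)
    return [seg1.y[0, -1] - ym,    # continuity of y at tmid
            seg1.y[1, -1] - vm,    # continuity of y' at tmid
            seg2.y[0, -1] - 1.0]   # boundary condition at tf

v0, ym, vm = fsolve(residual, [0.5, 0.5, 0.5])
print(v0)  # converges to 1.0, since y = sin(t) implies y'(0) = 1
```

    With more segments, each residual evaluation integrates all segments concurrently, which is where the reported multiprocessor speedup comes from.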

  6. Boosted Lopinavir– Versus Boosted Atazanavir–Containing Regimens and Immunologic, Virologic, and Clinical Outcomes: A Prospective Study of HIV-Infected Individuals in High-Income Countries

    PubMed Central

    2015-01-01

    Background. Current clinical guidelines consider regimens consisting of either ritonavir-boosted atazanavir or ritonavir-boosted lopinavir and a nucleoside reverse transcriptase inhibitor (NRTI) backbone among their recommended and alternative first-line antiretroviral regimens. However, these guidelines are based on limited evidence from randomized clinical trials and clinical experience. Methods. We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004–2013. Antiretroviral therapy–naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the ‘intention-to-treat’ effect for atazanavir vs lopinavir regimens on each of the outcomes. Results. A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53–.91) for death, 0.67 (95% CI, .55–.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84–.99) for virologic failure at 12 months. The mean 12-month increase in CD4 count was 8.15 (95% CI, −.13 to 16.43) cells/µL higher in the atazanavir group. Estimates differed by NRTI backbone. Conclusions. Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens. PMID:25567330

  7. Benefit of Radiation Boost After Whole-Breast Radiotherapy

    SciTech Connect

    Livi, Lorenzo; Borghesi, Simona; Saieva, Calogero; Fambrini, Massimiliano; Iannalfi, Alberto; Greto, Daniela; Paiar, Fabiola; Scoccianti, Silvia; Simontacchi, Gabriele; Bianchi, Simonetta; Cataliotti, Luigi; Biti, Giampaolo

    2009-11-15

    Purpose: To determine whether a boost to the tumor bed after breast-conserving surgery (BCS) and radiotherapy (RT) to the whole breast affects local control and disease-free survival. Methods and Materials: A total of 1,138 patients with pT1 to pT2 breast cancer underwent adjuvant RT at the University of Florence. We analyzed only patients with a minimum follow-up of 1 year (range, 1-20 years) and negative surgical margins. The median age of the patient population was 52.0 years (±7.9 years). The breast cancer relapse incidence probability was estimated by the Kaplan-Meier method, and differences between patient subgroups were compared by the log-rank test. Cox regression models were used to evaluate the risk of breast cancer relapse. Results: On univariate survival analysis, boost to the tumor bed reduced breast cancer recurrence (p < 0.0001). Age and tamoxifen also significantly reduced breast cancer relapse (p = 0.01 and p = 0.014, respectively). On multivariate analysis, the boost and middle age (45-60 years) were found to be inversely related to breast cancer relapse (hazard ratio [HR], 0.27; 95% confidence interval [95% CI], 0.14-0.52, and HR, 0.61; 95% CI, 0.37-0.99, respectively). The effect of the boost was more evident in younger patients (HR, 0.15; 95% CI, 0.03-0.66 for patients <45 years of age; and HR, 0.31; 95% CI, 0.13-0.71 for patients 45-60 years) on multivariate analyses stratified by age, although it was not a significant predictor in women older than 60 years. Conclusion: Our results suggest that boost to the tumor bed reduces breast cancer relapse and is more effective in younger patients.
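
    The Kaplan-Meier method used in this study has a compact product-limit form: at each event time, the survival estimate is multiplied by (1 - d/n), where d relapses occur among the n patients still at risk. A from-scratch sketch (illustrative only, not the study's software):

```python
def kaplan_meier(times, events):
    """Product-limit estimator of relapse-free survival.
    times: follow-up times; events: 1 = relapse observed, 0 = censored.
    Returns a list of (event_time, survival_probability) steps."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        j, d = i, 0
        while j < n and data[j][0] == t:
            d += data[j][1]  # count relapses at this time
            j += 1
        at_risk = n - i      # patients not yet relapsed or censored
        if d > 0:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        i = j
    return curve

# e.g. relapses at t=1, 2, 4; censoring at t=3, 5
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
```

    Censored patients drop out of the at-risk count without stepping the curve down, which is what distinguishes the estimator from a naive fraction surviving.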

  8. Self-boosting vaccines and their implications for herd immunity

    PubMed Central

    Arinaminpathy, Nimalan; Lavine, Jennie S.; Grenfell, Bryan T.

    2012-01-01

    Advances in vaccine technology over the past two centuries have facilitated far-reaching impact in the control of many infections, and today’s emerging vaccines could likewise open new opportunities in the control of several diseases. Here we consider the potential, population-level effects of a particular class of emerging vaccines that use specific viral vectors to establish long-term, intermittent antigen presentation within a vaccinated host: in essence, “self-boosting” vaccines. In particular, we use mathematical models to explore the potential role of such vaccines in situations where current immunization raises only relatively short-lived protection. Vaccination programs in such cases are generally limited in their ability to raise lasting herd immunity. Moreover, in certain cases mass vaccination can have the counterproductive effect of allowing an increase in severe disease, through reducing opportunities for immunity to be boosted through natural exposure to infection. Such dynamics have been proposed, for example, in relation to pertussis and varicella-zoster virus. In this context we show how self-boosting vaccines could open qualitatively new opportunities, for example by broadening the effective duration of herd immunity that can be achieved with currently used immunogens. At intermediate rates of self-boosting, these vaccines also alleviate the potential counterproductive effects of mass vaccination, through compensating for losses in natural boosting. Importantly, however, we also show how sufficiently high boosting rates may introduce a new regime of unintended consequences, wherein the unvaccinated bear an increased disease burden. Finally, we discuss important caveats and data needs arising from this work. PMID:23169630
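
    As a purely illustrative sketch of the kind of compartmental model such analyses use (the specific form and rates below are assumptions, not the authors' model), consider an SIRS-type system with two immune stages, in which self-boosting returns partially waned individuals to full protection:

```python
from scipy.integrate import solve_ivp

# Assumed illustrative SIRS-type model: immunity wanes through two stages
# R1 -> R2 -> S; "self-boosting" at rate nu returns R2 to full protection R1.
beta, gamma = 0.5, 0.2        # transmission and recovery rates (assumed)
w1, w2, nu = 0.05, 0.05, 0.1  # waning rates and self-boosting rate (assumed)

def rhs(t, y):
    S, I, R1, R2 = y
    return [-beta * S * I + w2 * R2,
            beta * S * I - gamma * I,
            gamma * I - w1 * R1 + nu * R2,
            w1 * R1 - (w2 + nu) * R2]

sol = solve_ivp(rhs, (0, 200), [0.99, 0.01, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])  # long-run compartment fractions; they sum to 1
```

    Raising `nu` lengthens the effective duration of immunity, which is the qualitative mechanism by which self-boosting vaccines could extend herd immunity.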

  9. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  10. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
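
    The underlying rule is 1/R_total = Σ 1/R_i, so whole-number totals arise for pairs such as 3 Ω ∥ 6 Ω = 2 Ω and 4 Ω ∥ 12 Ω = 3 Ω. A short sketch (names and resistor range are illustrative) that generates such table entries exactly, using rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/R_i)."""
    return 1 / sum(Fraction(1, r) for r in rs)

# Pairs of whole-ohm resistors (1..24 ohms, an assumed range) whose
# parallel combination is also a whole number of ohms.
whole = [(a, b, parallel(a, b))
         for a, b in combinations_with_replacement(range(1, 25), 2)
         if parallel(a, b).denominator == 1]
print(whole[:5])
```

    Using `Fraction` avoids floating-point round-off, so "whole number" is an exact test rather than a tolerance check.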

  11. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  12. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  13. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power boost and power-operated control... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  14. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  15. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  16. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power boost and power-operated control... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... flight and landing in the event of— (1) Any single failure in the power portion of the system; or (2)...

  17. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Power boost and power-operated control system. 29.695 Section 29.695 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or...

  18. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Power boost and power-operated control system. 27.695 Section 27.695 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or...

  19. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  20. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
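
    The compositing step can be illustrated independently of the CM-5/T3D implementation: each processor renders a partial image with a depth buffer, and partial images are merged by keeping, per pixel, the sample nearest the viewer. A minimal NumPy sketch (the data layout is an assumption, not the paper's code):

```python
import numpy as np

def composite(partials):
    """Merge partial renderings: per pixel, keep the sample nearest the
    viewer (smallest depth). partials is a list of (color, depth) pairs."""
    color = partials[0][0].copy()
    depth = partials[0][1].copy()
    for c, d in partials[1:]:
        nearer = d < depth
        color[nearer] = c[nearer]
        depth[nearer] = d[nearer]
    return color, depth

# Two 2x2 partial images whose nearest pixels interleave.
p1 = (np.full((2, 2), 1.0), np.array([[0.5, 2.0], [2.0, 0.5]]))
p2 = (np.full((2, 2), 2.0), np.array([[2.0, 0.5], [0.5, 2.0]]))
color, depth = composite([p1, p2])
```

    In practice the merge is organized as a tree (e.g. binary swap) rather than a sequential fold, so the compositing itself also runs in parallel.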

  1. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (XML file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program performs the plug-in checkouts in parallel. After parsing the feature, a checkout request is issued for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
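
    The parse-then-fan-out pattern described above can be sketched generically. This is not PEPC's code: the feature format, the `checkout` stand-in, and the pool size are all assumptions.

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Hypothetical feature description; real Eclipse feature.xml files
# carry additional attributes (versions, download sizes, etc.).
FEATURE_XML = '<feature><plugin id="org.example.a"/><plugin id="org.example.b"/></feature>'

def checkout(plugin_id):
    # Stand-in for a real VCS checkout (network-bound, hence the thread pool).
    return "checked out " + plugin_id

# Parse the feature, then fan the checkouts out over a configurable pool.
plugin_ids = [p.get("id") for p in ET.fromstring(FEATURE_XML).iter("plugin")]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(checkout, plugin_ids))
print(results)
```

    Because checkouts are I/O-bound, threads (rather than processes) suffice to saturate the network link, mirroring the bandwidth-saturation effect the abstract reports.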

  2. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  3. Fastpath Speculative Parallelization

    NASA Astrophysics Data System (ADS)

    Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

    We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

  4. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analysis testbed, a blade-stiffened aluminum panel with a circular cutout, and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  5. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
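
    For context, a single step of standard serial kMC, the baseline that the synchronous parallel algorithm generalizes, can be sketched as follows (names are illustrative):

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One serial n-fold-way kMC step: select event i with probability
    rate_i / total, then advance the clock by an exponentially
    distributed increment with mean 1 / total."""
    total = sum(rates)
    r = rng() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng()) / total  # 1 - u keeps the argument in (0, 1]
    return i, dt
```

    The parallel formulation's difficulty is that every processor must advance this clock consistently; the paper's contribution is doing so with perfect time synchronicity rather than rollbacks.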

  6. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  7. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  8. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
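
    The second study's method can be illustrated in serial form: the preconditioned conjugate gradient iteration, here with a point-Jacobi (diagonal) preconditioner for simplicity. This sketch is not the ITPACKV code, and the paper's variant uses line Jacobi rather than point Jacobi.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, maxiter=500):
    """Conjugate gradient preconditioned by M = diag(A).
    M_inv_diag holds the entries of M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system: the 1-D Laplacian (tridiagonal 2, -1).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

    The matrix-vector products and the diagonal preconditioner application are the operations that vectorize and parallelize well, which is why this kernel suits machines like the X-MP4.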

  9. Stereotactic Body Radiotherapy: A Promising Treatment Option for the Boost of Oropharyngeal Cancers Not Suitable for Brachytherapy: A Single-Institutional Experience

    SciTech Connect

    Al-Mamgani, Abrahim; Tans, Lisa; Teguh, David N.; Rooij, Peter van; Zwijnenburg, Ellen M.; Levendag, Peter C.

    2012-03-15

    Purpose: To prospectively assess the outcome and toxicity of frameless stereotactic body radiotherapy (SBRT) as a treatment option for boosting primary oropharyngeal cancers (OPC) in patients not suitable for the standard brachytherapy boost (BTB). Methods and Materials: Between 2005 and 2010, 51 patients with Stage I to IV biopsy-proven OPC who were not suitable for BTB received boosts by means of SBRT (3 times 5.5 Gy, prescribed to the 80% isodose line) after 46 Gy of IMRT to the primary tumor and neck (when indicated). Endpoints of the study were local control (LC), disease-free survival (DFS), overall survival (OS), and acute and late toxicity. Results: After a median follow-up of 18 months (range, 6-65 months), the 2-year actuarial rates of LC, DFS, and OS were 86%, 80%, and 82%, respectively, and the 3-year rates were 70%, 66%, and 54%, respectively. The treatment was well tolerated, as there were no treatment breaks and no Grade 4 or 5 toxicity reported, either acute or chronic. The overall 2-year cumulative incidence of Grade ≥2 late toxicity was 28%. Of the patients with no evidence of disease at 2 years (n = 20), only 1 patient was still feeding tube dependent and 2 patients had Grade 3 xerostomia. Conclusions: To our knowledge, this study is the first report of patients with primary OPC who received boosts by means of SBRT. Patients with OPC who are not suitable for the standard BTB can safely and effectively receive boosts by SBRT. With this radiation technique, an excellent outcome was achieved. Furthermore, the SBRT boost did not have a negative impact with regard to acute and late side effects.

  10. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  11. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  12. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

  13. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  14. Xyce parallel electronic simulator : reference guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures including single processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  15. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilation. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  16. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  17. Gain Purchasing Power the Newfangled Way--On-Line.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    1999-01-01

    Examines how San Diego State University uses computers to cut purchasing costs and boost efficiency and whether their solution can work for other business-to-business needs. How the school developed the totally self-sustaining, on-line and on-time purchasing system is discussed, including solutions to start-up problems. (GR)

  18. Spacecraft boost and abort guidance and control systems requirement study, boost dynamics and control analysis study. Exhibit A: Boost dynamics and control analysis

    NASA Technical Reports Server (NTRS)

    Williams, F. E.; Price, J. B.; Lemon, R. S.

    1972-01-01

    The simulation developments for use in dynamics and control analysis during boost from liftoff to orbit insertion are reported. Also included are wind response studies of the NR-GD 161B/B9T delta wing booster/delta wing orbiter configuration, the MSC 036B/280 inch solid rocket motor configuration, the MSC 040A/LOX-propane liquid injection TVC configuration, the MSC 040C/dual solid rocket motor configuration, and the MSC 049/solid rocket motor configuration. All of the latest math models (rigid and flexible body) developed for the MSC/GD Space Shuttle Functional Simulator are included.

  19. Shortened Intervals during Heterologous Boosting Preserve Memory CD8 T Cell Function but Compromise Longevity.

    PubMed

    Thompson, Emily A; Beura, Lalit K; Nelson, Christine E; Anderson, Kristin G; Vezys, Vaiva

    2016-04-01

    Developing vaccine strategies to generate high numbers of Ag-specific CD8 T cells may be necessary for protection against recalcitrant pathogens. Heterologous prime-boost-boost immunization has been shown to result in large quantities of functional memory CD8 T cells with protective capacities and long-term stability. Completing the serial immunization steps for heterologous prime-boost-boost can be lengthy, leaving the host vulnerable for an extensive period of time during the vaccination process. We show in this study that shortening the intervals between boosting events to 2 wk results in high numbers of functional and protective Ag-specific CD8 T cells. This protection is comparable to that achieved with long-term boosting intervals. Short-boosted Ag-specific CD8 T cells display a canonical memory T cell signature associated with long-lived memory and have identical proliferative potential to long-boosted T cells. Both populations robustly respond to antigenic re-exposure. Despite this, short-boosted Ag-specific CD8 T cells continue to contract gradually over time, which correlates to metabolic differences between short- and long-boosted CD8 T cells at early memory time points. Our studies indicate that shortening the interval between boosts can yield abundant, functional Ag-specific CD8 T cells that are poised for immediate protection; however, this is at the expense of forming stable long-term memory. PMID:26903479

  20. LINE-ABOVE-GROUND ATTENUATOR

    DOEpatents

    Wilds, R.B.; Ames, J.R.

    1957-09-24

    The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of line-above-ground-plane type transmission line; a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line; and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control, which heretofore has been difficult to achieve.

  1. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching-Yuen

    1992-01-01

    This paper adopts a new Lagrangian formulation of the Euler equations for the calculation of two-dimensional supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, we have achieved better than six times speed-up on an 8192-processor CM-2 over a single processor of a CRAY-2.

  2. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  3. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
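    The serial (product of matrices) versus parallel (sum of matrices) architectures described above can be sketched numerically with Jones calculus. The following is a minimal illustration only; the polarizer matrices, branch weights, and input beam are invented for the example and are not from the paper.

```python
# Sketch of serial vs. parallel polarization architectures (illustrative).
# Jones matrices are 2x2 nested lists; Jones vectors are 2-element lists.

def mat_vec(m, v):
    """Apply a 2x2 Jones matrix to a Jones vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def serial(stages, v):
    """Serial architecture: each element transforms the whole beam in
    turn, so the net operator is a *product* of matrices."""
    for m in stages:
        v = mat_vec(m, v)
    return v

def parallel(branches, weights, v):
    """Parallel architecture: split the beam, intensity-modulate each
    spatially separated branch, then coherently recombine.  The net
    operator is a weighted *sum* of the branch matrices."""
    out = [0.0, 0.0]
    for w, m in zip(weights, branches):
        bv = mat_vec(m, v)
        out[0] += w * bv[0]
        out[1] += w * bv[1]
    return out

H = [[1, 0], [0, 0]]   # horizontal polarizer
V = [[0, 0], [0, 1]]   # vertical polarizer
beam = [1.0, 1.0]      # diagonally polarized input (unnormalized)

# Varying the branch weights steers the output state continuously.
state = parallel([H, V], [0.25, 0.75], beam)
```

    Here a serial chain of H followed by V extinguishes the beam entirely, while the parallel sum synthesizes an intermediate state set purely by the modulation weights, which is the extra freedom the sum-of-matrices architecture buys.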

  4. Unified Parallel Software

    Energy Science and Technology Software Center (ESTSC)

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  5. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  6. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  7. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
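    Cyclic odd-even reduction, the algorithm favored above for structured systems, eliminates the odd-indexed unknowns level by level; every elimination within a level is independent of the others, which is what suits array machines like the ILLIAC 4. The following is a generic textbook sketch for systems of size 2^m - 1, not the paper's implementation.

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system T x = d by cyclic odd-even reduction.
    a: sub-diagonal (a[0] == 0), b: diagonal, c: super-diagonal
    (c[-1] == 0).  Assumes len(b) == 2**m - 1 and a well-conditioned
    (e.g. diagonally dominant) system.  Within each level, the inner
    loop iterations are independent and could run in parallel."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    strides = []
    stride = 1
    while stride < n:                     # forward elimination levels
        for i in range(2 * stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            al = -a[i] / b[lo]            # eliminate coupling to x[lo]
            ga = -c[i] / b[hi]            # eliminate coupling to x[hi]
            b[i] += al * c[lo] + ga * a[hi]
            d[i] += al * d[lo] + ga * d[hi]
            a[i] = al * a[lo]             # now couples to x[i - 2*stride]
            c[i] = ga * c[hi]             # now couples to x[i + 2*stride]
        strides.append(stride)
        stride *= 2
    x = [0.0] * n
    for stride in reversed(strides):      # back substitution levels
        for i in range(stride - 1, n, 2 * stride):
            xl = x[i - stride] if i - stride >= 0 else 0.0
            xr = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
    return x

# 3-unknown example: 2x0 - x1 = 1, -x0 + 2x1 - x2 = 0, -x1 + 2x2 = 1
x = cyclic_reduction([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
                     [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
# → [1.0, 1.0, 1.0]
```

    Each reduction level halves the number of active equations, so the method finishes in O(log n) parallel steps at the cost of roughly 2.7x the arithmetic of the serial Thomas algorithm, the trade-off the comparison above weighs.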

  8. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  9. Cell Lines

    PubMed Central

    Cherbas, Lucy; Gong, Lei

    2014-01-01

    We review the properties and uses of cell lines in Drosophila research, emphasizing the variety of lines, the large body of genomic and transcriptional data available for many of the lines, and the variety of ways the lines have been used to provide tools for and insights into the developmental, molecular, and cell biology of Drosophila and mammals. PMID:24434506

  10. High Temperature Boost (HTB) Power Processing Unit (PPU) Formulation Study

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Bradley, Arthur T.; Iannello, Christopher J.; Carr, Gregory A.; Mohammad, Mojarradi M.; Hunter, Don J.; DelCastillo, Linda; Stell, Christopher B.

    2013-01-01

    This technical memorandum is to summarize the Formulation Study conducted during fiscal year 2012 on the High Temperature Boost (HTB) Power Processing Unit (PPU). The effort is authorized and supported by the Game Changing Technology Division, NASA Office of the Chief Technologist. NASA center participation during the formulation includes LaRC, KSC and JPL. The Formulation Study continues into fiscal year 2013. The formulation study has focused on the power processing unit. The team has proposed a modular, power scalable, and new technology enabled High Temperature Boost (HTB) PPU, which has 5-10X improvement in PPU specific power/mass and over 30% in-space solar electric system mass saving.

  11. IMM tracking of a theater ballistic missile during boost phase

    NASA Astrophysics Data System (ADS)

    Hutchins, Robert G.; San Jose, Anthony

    1998-09-01

    Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interacting multiple model (IMM) tracker. In addition, a variation on the IMM that includes prior knowledge of a booster cutoff parameter is examined. Simulated data are used to compare the algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show a significant advantage in tracking through the model transition represented by booster cutoff.
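    Of the three tracker families compared, the alpha-beta(-gamma) tracker is the simplest: a fixed-gain predict/correct loop over position and velocity. The sketch below uses an invented constant-velocity scenario and illustrative gains; it is not the paper's tracker configuration.

```python
def alpha_beta_track(measurements, dt, alpha, beta, x0=0.0, v0=0.0):
    """Fixed-gain alpha-beta tracker: predict position with a
    constant-velocity model, then correct position (gain alpha) and
    velocity (gain beta) using the innovation."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        xp = x + v * dt           # predicted position
        r = z - xp                # innovation (measurement residual)
        x = xp + alpha * r        # position correction
        v = v + (beta / dt) * r   # velocity correction
        estimates.append(x)
    return estimates

# Noiseless constant-velocity track: the filter locks on after a
# brief transient from its cold start.
truth = [2.0 * k for k in range(1, 51)]
est = alpha_beta_track(truth, dt=1.0, alpha=0.5, beta=0.3)
```

    A gamma term for acceleration extends this to boosting targets; the IMM goes further by running several such motion models in parallel and mixing their outputs according to each model's likelihood, which is why it rides out the booster-cutoff transition.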

  12. Boosting bonsai trees for handwritten/printed text discrimination

    NASA Astrophysics Data System (ADS)

    Ricquebourg, Yann; Raymond, Christian; Poirriez, Baptiste; Lemaitre, Aurélie; Coüasnon, Bertrand

    2013-12-01

    Boosting over decision stumps has proved its efficiency in natural language processing, essentially with symbolic features, and its good properties (fast; few, non-critical parameters; not sensitive to over-fitting) could be of great interest in the numeric world of pixel images. In this article we investigated the use of boosting over small decision trees, in image classification processing, for the discrimination of handwritten/printed text. We then conducted experiments comparing it to the usual SVM-based classification, revealing convincing results with very close performance, but with faster predictions and behavior far less like a black box. These promising results encourage the use of this classifier in more complex recognition tasks such as multiclass problems.
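    AdaBoost over decision stumps, the baseline on which the article's small-tree ("bonsai") variant builds, can be sketched generically. The 1-D features, labels, and round count below are invented toy data; this is the textbook algorithm, not the authors' implementation.

```python
import math

def stump_predict(threshold, polarity, x):
    """Depth-1 decision: +polarity if x >= threshold, else -polarity."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds):
    """Textbook AdaBoost: each round, pick the stump with the lowest
    weighted error, weight it by its accuracy, and up-weight the
    examples it misclassified."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):                 # candidate thresholds
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-12)                     # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * y * stump_predict(t, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]                  # renormalize
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D data: negatives below 3, positives at 3 and above.
xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, -1, 1, 1, 1]
ensemble = train_adaboost(xs, ys, rounds=3)
```

    Swapping the weak learner from a stump to a depth-2 or depth-3 tree changes only the inner search; the reweighting loop, which is where boosting's robustness to over-fitting comes from, stays identical.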

  13. Investigating light NMSSM pseudoscalar states with boosted ditau tagging

    NASA Astrophysics Data System (ADS)

    Conte, Eric; Fuks, Benjamin; Guo, Jun; Li, Jinmian; Williams, Anthony G.

    2016-05-01

    We study a class of realizations of the Next-to-Minimal Supersymmetric Standard Model that is motivated by dark matter and Higgs data, and in which the lightest pseudoscalar Higgs boson mass is smaller than twice the bottom quark mass and greater than twice the tau lepton mass. In such scenarios, the lightest pseudoscalar Higgs boson can be copiously produced at the LHC from the decay of heavier superpartners and will dominantly further decay into a pair of tau leptons that is generally boosted. We make use of a boosted object tagging technique designed to tag such a ditau jet, and estimate the sensitivity of the LHC to the considered supersymmetric scenarios with 20 to 50 fb-1 of proton-proton collisions at a center-of-mass energy of 13 TeV.

  14. Externally Dispersed Interferometry for Resolution Boosting and Doppler Velocimetry

    SciTech Connect

    Erskine, D J

    2003-12-01

    Externally dispersed interferometry (EDI) is a rapidly advancing technique for wide bandwidth spectroscopy and radial velocimetry. By placing a small angle-independent interferometer near the slit of an existing spectrograph system, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducials creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry (under a Doppler shift the entire moiré pattern shifts in phase). A demonstration of ~2x resolution boosting (100,000 from 50,000) on the Lick Observatory echelle spectrograph is shown. Preliminary data indicating a ~8x resolution boost (170,000 from 20,000) using multiple delays have been taken on a linear grating spectrograph.
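    The heterodyning described above is easy to reproduce numerically: multiplying a fine spectral feature by a sinusoidal fiducial of nearby frequency creates a moiré beat at the difference frequency, low enough to survive instrument blur. The frequencies and amplitudes below are arbitrary illustration values, not instrument parameters.

```python
import math

N = 1000                         # samples across the spectral window
f_feature, f_fiducial = 40, 37   # cycles per window (illustrative)

# A fine spectral feature and the interferometer's sinusoidal fiducials.
spectrum = [1 + 0.5 * math.cos(2 * math.pi * f_feature * n / N)
            for n in range(N)]
fiducial = [1 + math.cos(2 * math.pi * f_fiducial * n / N)
            for n in range(N)]

# The detector records the product; the cross term 0.5*cos(A)*cos(B)
# contains 0.25*cos(A - B): a 3-cycle moiré beat.
product = [s * f for s, f in zip(spectrum, fiducial)]

def dft_mag(x, k):
    """Amplitude of the k-cycle Fourier component of x."""
    re = sum(v * math.cos(2 * math.pi * k * n / N) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / N) for n, v in enumerate(x))
    return 2 * math.sqrt(re * re + im * im) / N
```

    The original spectrum has no power at 3 cycles, while the product carries the 0.25-amplitude beat there; a spectrograph whose blur erases the 40-cycle feature still passes the 3-cycle beat, which is the resolution-boosting mechanism.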

  15. Prime-Boost Immunization Strategies against Chikungunya Virus

    PubMed Central

    Lum, Fok-Moon; Kümmerer, Beate M.; Lulla, Aleksei; Lulla, Valeria; García-Arriaza, Juan; Fazakerley, John K.; Roques, Pierre; Le Grand, Roger; Merits, Andres; Ng, Lisa F. P.; Esteban, Mariano

    2014-01-01

    ABSTRACT Chikungunya virus (CHIKV) is a reemerging mosquito-borne alphavirus that causes debilitating arthralgia in humans. Here we describe the development and testing of novel DNA replicon and protein CHIKV vaccine candidates and evaluate their abilities to induce antigen-specific immune responses against CHIKV. We also describe homologous and heterologous prime-boost immunization strategies using novel and previously developed CHIKV vaccine candidates. Immunogenicity and efficacy were studied in a mouse model of CHIKV infection and showed that the DNA replicon and protein antigen were potent vaccine candidates, particularly when used for priming and boosting, respectively. Several prime-boost immunization strategies eliciting unmatched humoral and cellular immune responses were identified. Further characterization by antibody epitope mapping revealed differences in the qualitative immune responses induced by the different vaccine candidates and immunization strategies. Most vaccine modalities resulted in complete protection against wild-type CHIKV infection; however, we did identify circumstances under which certain immunization regimens may lead to enhancement of inflammation upon challenge. These results should help guide the design of CHIKV vaccine studies and will form the basis for further preclinical and clinical evaluation of these vaccine candidates. IMPORTANCE As of today, there is no licensed vaccine to prevent CHIKV infection. In considering potential new vaccine candidates, a vaccine that could raise long-term protective immunity after a single immunization would be preferable. While humoral immunity seems to be central for protection against CHIKV infection, we do not yet fully understand the correlates of protection. Therefore, in the absence of a functional vaccine, there is a need to evaluate a number of different candidates, assessing their merits when they are used either in a single immunization or in a homologous or heterologous prime-boost

  16. Consistent Holographic Description of Boost-Invariant Plasma

    SciTech Connect

    Heller, Michal P.; Surowka, Piotr; Loganayagam, R.; Spalinski, Michal; Vazquez, Samuel E.

    2009-01-30

    Prior attempts to construct the gravity dual of boost-invariant flow of N=4 supersymmetric Yang-Mills gauge theory plasma suffered from apparent curvature singularities in the late-time expansion. This Letter shows how these problems can be resolved by a different choice of expansion parameter. The calculations presented correctly reproduce the plasma energy-momentum tensor within the framework of second-order viscous hydrodynamics.

  17. (In)direct detection of boosted dark matter

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2014-10-01

    We initiate the study of novel thermal dark matter (DM) scenarios where present-day annihilation of DM in the galactic center produces boosted stable particles in the dark sector. These stable particles are typically a subdominant DM component, but because they are produced with a large Lorentz boost in this process, they can be detected in large volume terrestrial experiments via neutral-current-like interactions with electrons or nuclei. This novel DM signal thus combines the production mechanism associated with indirect detection experiments (i.e. galactic DM annihilation) with the detection mechanism associated with direct detection experiments (i.e. DM scattering off terrestrial targets). Such processes are generically present in multi-component DM scenarios or those with non-minimal DM stabilization symmetries. As a proof of concept, we present a model of two-component thermal relic DM, where the dominant heavy DM species has no tree-level interactions with the standard model and thus largely evades direct and indirect DM bounds. Instead, its thermal relic abundance is set by annihilation into a subdominant lighter DM species, and the latter can be detected in the boosted channel via the same annihilation process occurring today. Especially for dark sector masses in the 10 MeV-10 GeV range, the most promising signals are electron scattering events pointing toward the galactic center. These can be detected in experiments designed for neutrino physics or proton decay, in particular Super-K and its upgrade Hyper-K, as well as the PINGU/MICA extensions of IceCube. This boosted DM phenomenon highlights the distinctive signatures possible from non-minimal dark sectors.

  18. Dark matter conversion as a source of boost factor

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Peng; Wu, Yue-Liang; Zhou, Yu-Feng

    2012-09-01

    In interacting multi-component dark matter (DM) models, the interactions between the DM components can convert relatively heavy DM components into lighter ones at late times after thermal decoupling. As a consequence, the relic density of the lightest DM component can be greatly enhanced at late times, providing an alternative source of the boost factor required to explain the positron and electron excesses reported by recent DM indirect search experiments.

  19. (In)direct detection of boosted dark matter

    SciTech Connect

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse E-mail: cuiyo@umd.edu E-mail: jthaler@mit.edu

    2014-10-01

    We initiate the study of novel thermal dark matter (DM) scenarios where present-day annihilation of DM in the galactic center produces boosted stable particles in the dark sector. These stable particles are typically a subdominant DM component, but because they are produced with a large Lorentz boost in this process, they can be detected in large volume terrestrial experiments via neutral-current-like interactions with electrons or nuclei. This novel DM signal thus combines the production mechanism associated with indirect detection experiments (i.e. galactic DM annihilation) with the detection mechanism associated with direct detection experiments (i.e. DM scattering off terrestrial targets). Such processes are generically present in multi-component DM scenarios or those with non-minimal DM stabilization symmetries. As a proof of concept, we present a model of two-component thermal relic DM, where the dominant heavy DM species has no tree-level interactions with the standard model and thus largely evades direct and indirect DM bounds. Instead, its thermal relic abundance is set by annihilation into a subdominant lighter DM species, and the latter can be detected in the boosted channel via the same annihilation process occurring today. Especially for dark sector masses in the 10 MeV–10 GeV range, the most promising signals are electron scattering events pointing toward the galactic center. These can be detected in experiments designed for neutrino physics or proton decay, in particular Super-K and its upgrade Hyper-K, as well as the PINGU/MICA extensions of IceCube. This boosted DM phenomenon highlights the distinctive signatures possible from non-minimal dark sectors.

  20. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.
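    The manager/worker style used by OVERFLOW/PVM can be sketched with a generic task queue: a manager enqueues grid zones, and workers pull zones, compute, and report back. The sketch below uses Python threads as stand-ins for PVM worker processes, and relax_zone is a placeholder for one solver sweep; none of it is OVERFLOW code.

```python
import queue
import threading

def relax_zone(zone):
    """Placeholder for one relaxation sweep over a grid zone; here it
    just returns the zone mean."""
    return sum(zone) / len(zone)

def manager_worker(zones, n_workers=4):
    """Manager enqueues zones; each worker repeatedly pulls a zone,
    processes it, and records the result until the queue drains.
    Load balancing is automatic: fast workers simply take more zones."""
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()
    for i, zone in enumerate(zones):
        tasks.put((i, zone))

    def worker():
        while True:
            try:
                i, zone = tasks.get_nowait()
            except queue.Empty:
                return                    # no work left: worker exits
            r = relax_zone(zone)
            with lock:
                results[i] = r            # report back to the manager

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[i] for i in range(len(zones))]

means = manager_worker([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0],
                        [7.0, 8.0, 9.0]], n_workers=2)
# → [2.0, 5.0, 8.0]
```

    The SPMD alternative taken by OVERFLOW/MPI would instead assign every rank a fixed subset of zones up front and exchange boundary data between sweeps, trading the manager's dynamic load balancing for lower coordination overhead.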

  1. Boosted dark matter signals uplifted with self-interaction

    NASA Astrophysics Data System (ADS)

    Kong, Kyoungchul; Mohlabeng, Gopolang; Park, Jong-Chul

    2015-04-01

    We explore detection prospects of a non-standard dark sector in the context of boosted dark matter. We focus on a scenario with two dark matter particles of a large mass difference, where the heavier candidate is secluded and interacts with the standard model particles only at loops, escaping existing direct and indirect detection bounds. Yet its pair annihilation in the galactic center or in the Sun may produce boosted stable particles, which could be detected as visible Cherenkov light in large volume neutrino detectors. In such models with multiple candidates, self-interaction of dark matter particles is naturally utilized in the assisted freeze-out mechanism and is corroborated by various cosmological studies such as N-body simulations of structure formation, observations of dwarf galaxies, and the small scale problem. We show that self-interaction of the secluded (heavier) dark matter greatly enhances the capture rate in the Sun and results in promising signals at current and future experiments. We perform a detailed analysis of the boosted dark matter events for Super-Kamiokande, Hyper-Kamiokande and PINGU, including notable effects such as evaporation due to self-interaction and energy loss in the Sun.

  2. Chagas parasite detection in blood images using AdaBoost.

    PubMed

    Uc-Cetina, Víctor; Brito-Loeza, Carlos; Ruiz-Piña, Hugo

    2015-01-01

    Chagas disease is a potentially life-threatening illness caused by the protozoan parasite Trypanosoma cruzi. Visual detection of the parasite through microscopic inspection is a tedious and time-consuming task. In this paper, we provide an AdaBoost learning solution to the task of Chagas parasite detection in blood images. We give details of the algorithm and our experimental setup. With this method, we obtain a sensitivity of 100% and a specificity of 93.25%. A ROC comparison with the method most commonly used for the detection of malaria parasites, based on support vector machines (SVM), is also provided. Our experimental work shows mainly two things: (1) Chagas parasites can be detected automatically using machine learning methods with high accuracy, and (2) AdaBoost + SVM provides better overall detection performance than AdaBoost or SVMs alone. These results are the best known so far for the problem of automatic detection of Chagas parasites using machine learning, computer vision, and image processing methods. PMID:25861375
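
    The abstract's AdaBoost + SVM pipeline is not reproduced here, but the boosting step it builds on can be sketched with scikit-learn. The feature vectors below are synthetic, hypothetical stand-ins for real blood-image patch descriptors.

```python
# Hedged sketch: AdaBoost over decision stumps, standing in for the
# parasite-vs-background classifier described in the abstract.
# The features here are synthetic placeholders, not image descriptors.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(200, 8))   # "parasite" patches
X_neg = rng.normal(-1.0, 1.0, size=(200, 8))  # "background" patches
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 200 + [0] * 200)

# The default weak learner is a depth-1 decision tree (a stump).
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X, y)
print(clf.score(X, y))
```

    In the paper's setting, the two classes would come from labeled image patches rather than Gaussian draws; the boosting machinery is unchanged.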

  3. Plasma Boosted Hydrogen Generation for Vehicle Pollution Reduction

    NASA Astrophysics Data System (ADS)

    Cohn, Daniel R.

    1999-11-01

    Plasma-boosted hydrogen generators could improve the environmental quality of vehicles through onboard production of hydrogen (Bromberg L, Cohn DR, Rabinovich A, Surma JE, Virden J. Compact plasmatron boosted hydrogen generation for vehicular applications. Int J Hydrogen Energy 1999;24). Plasma-based devices can provide a rapid-response, compact means of converting a wide range of fuels into hydrogen-rich gas. In spark ignition engines, this could facilitate an order-of-magnitude reduction in NOx generation over the entire driving cycle. Hydrogen-rich gas might also be employed to reduce pollution in diesel engine vehicles, and there may be applications to fuel cell and turbine vehicles as well. In addition, plasma-boosted hydrogen generation might facilitate the use of biomass-derived oils by onboard conversion into hydrogen-rich gas; use of such oils could lead to a net reduction in CO2 production. Plasma-based devices facilitate hydrogen production from partial oxidation of hydrocarbon fuels by providing additional enthalpy, reactive species, and mixing. Experimental studies of hydrogen production from compact plasma-based devices will be discussed.

  4. Stereotactic Body Radiation Therapy Boost in Locally Advanced Pancreatic Cancer

    SciTech Connect

    Seo, Young Seok; Kim, Mi-Sook; Yoo, Sung Yul; Cho, Chul Koo; Yang, Kwang Mo; Yoo, Hyung Jun; Choi, Chul Won; Lee, Dong Han; Kim, Jin; Kim, Min Suk; Kang, Hye Jin; Kim, YoungHan

    2009-12-01

    Purpose: To investigate the clinical application of a stereotactic body radiation therapy (SBRT) boost in locally advanced pancreatic cancer patients with a focus on local efficacy and toxicity. Methods and Materials: We retrospectively reviewed 30 patients with locally advanced and nonmetastatic pancreatic cancer who had been treated between 2004 and 2006. Follow-up duration ranged from 4 to 41 months (median, 14.5 months). A total dose of 40 Gy was delivered in 20 fractions using a conventional three-field technique, and then a single fraction of 14, 15, 16, or 17 Gy SBRT was administered as a boost without a break. Twenty-one patients received chemotherapy. Overall and local progression-free survival were calculated and prognostic factors were evaluated. Results: One-year overall survival and local progression-free survival rates were 60.0% and 70.2%, respectively. One patient (3%) developed Grade 4 toxicity. Carbohydrate antigen 19-9 response was found to be an independent prognostic factor for survival. Conclusions: Our findings indicate that a SBRT boost provides a safe means of increasing radiation dose. Based on the results of this study, we recommend that a well controlled Phase II study be conducted on locally advanced pancreatic cancer.

  5. Boosting target tracking using particle filter with flow control

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Chan, Moses W.

    2013-05-01

    Target detection and tracking with passive infrared (IR) sensors can be challenging due to significant degradation and corruption of target signature by atmospheric transmission and clutter effects. This paper summarizes our efforts in phenomenology modeling of boosting targets with IR sensors, and developing algorithms for tracking targets in the presence of background clutter. On the phenomenology modeling side, the clutter images are generated using a high fidelity end-to-end simulation testbed. It models atmospheric transmission, structured clutter and solar reflections to create realistic background images. The dynamics and intensity of a boosting target are modeled and injected onto the background scene. Pixel level images are then generated with respect to the sensor characteristics. On the tracking analysis side, a particle filter for tracking targets in a sequence of clutter images is developed. The particle filter is augmented with a mechanism to control particle flow. Specifically, velocity feedback is used to constrain and control the particles. The performance of the developed "adaptive" particle filter is verified with tracking of a boosting target in the presence of clutter and occlusion.
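
    The paper's particle-flow control is specific to that work; as an illustrative sketch, the code below runs a generic bootstrap (SIR) particle filter on a 1-D accelerating target, with a crude velocity clamp standing in (hypothetically) for the velocity-feedback constraint on the particles.

```python
# Hedged sketch of a bootstrap (SIR) particle filter tracking a 1-D
# accelerating ("boosting") target. The velocity clamp is a simplified,
# hypothetical stand-in for the paper's velocity-feedback flow control.
import numpy as np

rng = np.random.default_rng(1)
T, N, dt = 50, 500, 1.0

# Ground truth: constant acceleration a = 0.2, pos(t) = 0.5 * a * t^2.
truth_pos = 0.5 * 0.2 * (dt * np.arange(T)) ** 2

particles = np.zeros((N, 2))          # state: [position, velocity]
est = []
for t in range(T):
    # Propagate with a constant-velocity model plus process noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, 0.5, N)
    particles[:, 1] += rng.normal(0, 0.2, N)
    # "Flow control": constrain particle velocities to a plausible band.
    np.clip(particles[:, 1], 0.0, 20.0, out=particles[:, 1])
    # Noisy position measurement (clutter modeled as measurement noise).
    z = truth_pos[t] + rng.normal(0, 1.0)
    w = np.exp(-0.5 * ((particles[:, 0] - z) / 2.0) ** 2) + 1e-12
    w /= w.sum()
    est.append(w @ particles[:, 0])
    # Resample proportionally to the weights.
    particles = particles[rng.choice(N, size=N, p=w)]

print(abs(est[-1] - truth_pos[-1]))
```

    The clamp plays the role described in the abstract: it keeps particles from drifting into implausible dynamics, which matters most under clutter and occlusion.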

  6. Lorentz boost and non-Gaussianity in multifield DBI inflation

    SciTech Connect

    Mizuno, Shuntaro; Arroja, Frederico; Tanaka, Takahiro; Koyama, Kazuya

    2009-07-15

    We show that higher-order actions for cosmological perturbations in the multifield Dirac-Born-Infeld (DBI) inflation model are obtained by a Lorentz boost from the rest frame of the brane to the frame where the brane is moving. We confirm that this simple method provides the same third- and fourth-order actions at leading order in slow roll and in the small sound speed limit as those obtained by the usual Arnowitt-Deser-Misner formalism. As an application, we compute the leading order connected four-point function of the primordial curvature perturbation coming from the intrinsic fourth-order contact interaction in the multifield DBI-inflation model. At third order, the interaction Hamiltonian arises purely by the boost from the second-order action in the rest frame of the brane. The boost acts on the adiabatic and entropy modes in the same way, thus there exists a symmetry between the adiabatic and entropy modes. But at fourth order this symmetry is broken due to the intrinsic fourth-order action in the rest frame and the difference between the Lagrangian and the interaction Hamiltonian. Therefore, contrary to the three-point function, the momentum dependence of the purely adiabatic component and the components including the entropic contributions are different in the four-point function. This suggests that the trispectrum can distinguish the multifield DBI-inflation model from the single field DBI-inflation model.

  7. Chagas Parasite Detection in Blood Images Using AdaBoost

    PubMed Central

    Uc-Cetina, Víctor; Brito-Loeza, Carlos; Ruiz-Piña, Hugo

    2015-01-01

    Chagas disease is a potentially life-threatening illness caused by the protozoan parasite Trypanosoma cruzi. Visual detection of the parasite through microscopic inspection is a tedious and time-consuming task. In this paper, we provide an AdaBoost learning solution to the task of Chagas parasite detection in blood images. We give details of the algorithm and our experimental setup. With this method, we obtain a sensitivity of 100% and a specificity of 93.25%. A ROC comparison with the method most commonly used for the detection of malaria parasites, based on support vector machines (SVM), is also provided. Our experimental work shows mainly two things: (1) Chagas parasites can be detected automatically using machine learning methods with high accuracy, and (2) AdaBoost + SVM provides better overall detection performance than AdaBoost or SVMs alone. These results are the best known so far for the problem of automatic detection of Chagas parasites using machine learning, computer vision, and image processing methods. PMID:25861375

  8. Notch-Boosted Domain Wall Propagation in Magnetic Nanowires

    NASA Astrophysics Data System (ADS)

    Wang, Xiang Rong; Yuan, Hauiyang

    Magnetic domain wall (DW) motion along a nanowire underpins many proposals of spintronic devices. High DW propagation velocity is obviously important because it determines the device speed. Thus it is interesting to search for effective control knobs of DW dynamics. We report a counter-intuitive finding that notches in an otherwise homogeneous magnetic nanowire can boost current-induced domain wall (DW) propagation. DW motion in notch-modulated wires can be classified into three phases: 1) A DW is pinned around a notch when the current density is below the depinning current density. 2) DW propagation velocity above the depinning current density is boosted by notches when non-adiabatic spin-transfer torque strength is smaller than the Gilbert damping constant. The boost can be many-fold. 3) DW propagation velocity is hindered when non-adiabatic spin-transfer torque strength is larger than the Gilbert damping constant. This work was supported by Hong Kong GRF Grants (Nos. 163011151 and 605413) and the Grant from NNSF of China (No. 11374249).

  9. Modeling of laser wakefield acceleration in Lorentz boosted frame using EM-PIC code with spectral solver

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng; Xu, Xinlu; Decyk, Viktor K.; An, Weiming; Vieira, Jorge; Tsung, Frank S.; Fonseca, Ricardo A.; Lu, Wei; Silva, Luis O.; Mori, Warren B.

    2014-06-01

    Simulating laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which the plasma drifts towards the laser with velocity vb, can speed up the simulation by factors of γb²(1 + βb)². In these simulations the relativistically drifting plasma inevitably induces a high-frequency numerical instability that contaminates the physics of interest. Various approaches have been proposed to mitigate this instability. One approach is to solve Maxwell's equations in Fourier space (a spectral solver), as this has been shown to suppress the fastest growing modes of the instability in simple test problems when combined with simple low-pass, "ring", or "shell" filters in Fourier space. We describe the development of a fully parallelized, multi-dimensional, particle-in-cell code that uses a spectral solver to solve Maxwell's equations and that includes the ability to launch a laser using a moving antenna. This new EM-PIC code is called UPIC-EMMA and is based on components of the UCLA PIC framework (UPIC). We show that by using UPIC-EMMA, LWFA simulations in boosted frames with arbitrary γb can be conducted without the numerical instability. We also compare the results of a few LWFA cases for several values of γb, including lab frame simulations using OSIRIS, an EM-PIC code with a finite-difference time domain (FDTD) Maxwell solver. These comparisons include cases in both linear and nonlinear regimes. We also investigate issues associated with numerical dispersion in lab and boosted frame simulations and between FDTD and spectral solvers.
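
    A minimal 1-D illustration of why spectral and FDTD solvers behave differently: an FFT-based derivative is exact to roundoff for resolved modes, while a second-order centered-difference stencil shows the grid dispersion that feeds such instabilities. This is an illustrative toy, not the UPIC-EMMA field solve.

```python
# Compare a spectral (FFT) derivative with an FDTD-style centered
# difference on a single Fourier mode, exposing grid dispersion.
import numpy as np

N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
k0 = 9                      # a fairly high mode, where dispersion bites
f = np.sin(k0 * x)

# Spectral derivative: multiply by ik in Fourier space.
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
df_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Second-order centered difference (the FDTD-style stencil).
dx = L / N
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

exact = k0 * np.cos(k0 * x)
print(np.max(np.abs(df_spec - exact)))  # roundoff level
print(np.max(np.abs(df_fd - exact)))    # visibly larger, O((k0*dx)^2)
```

    For a drifting plasma, the offending modes are exactly the poorly resolved high-k ones, which is why a spectral solve plus a Fourier-space filter can suppress the instability.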

  10. A parallel cholinergic brainstem pathway for enhancing locomotor drive

    PubMed Central

    Smetana, Roy; Juvin, Laurent; Dubuc, Réjean; Alford, Simon

    2010-01-01

    The brainstem locomotor system is believed to be organized serially from the mesencephalic locomotor region (MLR) to reticulospinal neurons, which in turn, project to locomotor neurons in the spinal cord. In contrast, we now identify in lampreys, brainstem muscarinoceptive neurons receiving parallel inputs from the MLR and projecting back to reticulospinal cells to amplify and extend durations of locomotor output. These cells respond to muscarine with extended periods of excitation, receive direct muscarinic excitation from the MLR, and project glutamatergic excitation to reticulospinal neurons. Targeted block of muscarine receptors over these neurons profoundly reduces MLR-induced excitation of reticulospinal neurons and markedly slows MLR-evoked locomotion. Their presence forces us to rethink the organization of supraspinal locomotor control, to include a sustained feedforward loop that boosts locomotor output. PMID:20473293

  11. Perception of straightness and parallelism with minimal distance information.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2016-07-01

    The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface. PMID:27025213

  12. PMESH: A parallel mesh generator

    SciTech Connect

    Hardin, D.D.

    1994-10-21

    The Parallel Mesh Generation (PMESH) Project is a joint LDRD effort by A Division and Engineering to develop a unique mesh generation system that can construct large calculational meshes (of up to 10^9 elements) on massively parallel computers. Such a capability will remove a critical roadblock to unleashing the power of massively parallel processors (MPPs) for physical analysis. PMESH will support a variety of LLNL 3-D physics codes in the areas of electromagnetics, structural mechanics, thermal analysis, and hydrodynamics.

  13. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give a more connective view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data quantities have become very large. This can slow the algorithm down, or make it impossible to obtain a result within limited memory or time. To solve this problem, our research proposed a parallelized dilate algorithm for remote sensing images based on MPI and OpenMP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
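
    The paper's MPI implementation is not shown here; as a sketch of the same domain-decomposition idea, the code below splits the image into row strips with one-row halos and dilates the strips concurrently. Threads are used for simplicity, where the abstract's method distributes strips across MPI processes.

```python
# Hedged sketch: strip-decomposed binary dilation with halo rows,
# run concurrently. Illustrative only; not the paper's MPI code.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dilate3x3(img):
    """Binary dilation by a 3x3 structuring element via shifted maxima."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def dilate_strip(img, lo, hi):
    # One halo row on each side so the strip's borders dilate correctly.
    a, b = max(lo - 1, 0), min(hi + 1, img.shape[0])
    return dilate3x3(img[a:b])[lo - a:lo - a + (hi - lo)]

def parallel_dilate(img, nworkers=4):
    bounds = np.linspace(0, img.shape[0], nworkers + 1, dtype=int)
    with ThreadPoolExecutor(nworkers) as ex:
        strips = ex.map(lambda t: dilate_strip(img, *t),
                        zip(bounds[:-1], bounds[1:]))
        return np.vstack(list(strips))

img = (np.random.default_rng(2).random((64, 64)) > 0.9).astype(np.uint8)
print(np.array_equal(parallel_dilate(img), dilate3x3(img)))
```

    The halo exchange is the key detail: without the extra border rows, pixels at strip boundaries would miss neighbors from the adjacent strip.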

  14. Parallel processing in the mammalian retina.

    PubMed

    Wässle, Heinz

    2004-10-01

    Our eyes send different 'images' of the outside world to the brain - an image of contours (line drawing), a colour image (watercolour painting) or an image of moving objects (movie). This is commonly referred to as parallel processing, and starts as early as the first synapse of the retina, the cone pedicle. Here, the molecular composition of the transmitter receptors of the postsynaptic neurons defines which images are transferred to the inner retina. Within the second synaptic layer - the inner plexiform layer - circuits that involve complex inhibitory and excitatory interactions represent filters that select 'what the eye tells the brain'. PMID:15378035

  15. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  16. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.

  17. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  18. Choosing how to boost waxy crude line flow depends on oil's qualities

    SciTech Connect

    El-Emam, N.A.; Bayoumi, A.W.A.; El-Gamal, I.M.; Abu-Zied, A. )

    1993-04-26

    In studies of methods for improving pipeline flow of waxy crudes, dilution by either gasoline or kerosene proved more effective than heating or the addition of flow improving chemicals. Gasoline proved the most effective diluent in the tests of three Egyptian crude oils. Another method - heating - also led to a significant improvement in the rheological properties of the tested crudes. Additionally, a computer program was developed that calculates the pressure-loss reduction resulting from the improved flow properties of the tested crudes regardless of the method used. This program led to the conclusion that dilution by gasoline is the best technique for one type of waxy crude represented by GPY-3, while heating is the best method for the other two crude types represented by M-96 and Khalda. Final selection of the suitable technique for a specific crude, however, must be decided not only on the basis of such technical considerations as are presented here, but also on the economics for each case. The paper describes the techniques considered: the Casson equation, dilution, heating, chemical treatment, and the effect on pressure loss.

  19. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

  20. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  1. Fast AdaBoost-Based Face Detection System on a Dynamically Coarse Grain Reconfigurable Architecture

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Zhang, Jinguo; Zhu, Min; Yang, Jun; Shi, Longxing

    An AdaBoost-based face detection system is proposed, on a Coarse Grain Reconfigurable Architecture (CGRA) named “REMUS-II”. Our work is distinguished from previous ones in three aspects. First, a new hardware-software partition method is proposed and the whole face detection system is divided into several parallel tasks implemented on two Reconfigurable Processing Units (RPU) and one micro Processor Unit (µPU) according to their relationships. These tasks communicate with each other by a mailbox mechanism. Second, a strong classifier is treated as the smallest phase of the detection system, and every phase must be executed by these tasks in order. A phase of the Haar classifier is dynamically mapped onto a Reconfigurable Cell Array (RCA) only when needed; this is quite different from traditional Field Programmable Gate Array (FPGA) methods in which all the classifiers are fabricated statically. Third, optimized data and configuration word pre-fetch mechanisms are employed to improve whole-system performance. Implementation results show that our approach at a 200MHz clock rate can process up to 17 frames per second on VGA size images, and the detection rate is over 95%. Our system consumes 194mW, and the die size of the fabricated chip is 23mm2 using TSMC 65nm standard cell based technology. To the best of our knowledge, this work is the first implementation of the cascade Haar classifier algorithm on a dynamically reconfigurable CGRA platform presented in the literature.

  2. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].

  3. Measure Lines

    ERIC Educational Resources Information Center

    Crissman, Sally

    2011-01-01

    One tool for enhancing students' work with data in the science classroom is the measure line. As a coteacher and curriculum developer for The Inquiry Project, the author has seen how measure lines--a number line in which the numbers refer to units of measure--help students not only represent data but also analyze it in ways that generate…

  4. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
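
    For reference, the per-line kernel that the pipelined variants distribute across processors is the serial Thomas algorithm for tridiagonal systems. A minimal sketch:

```python
# Serial Thomas algorithm: forward elimination, then back substitution.
# This is the kernel each processor applies to its portion of the lines.
import numpy as np

def thomas(a, b, c, d):
    """Solve Ax = d for tridiagonal A: a = sub-, b = main, c = super-diagonal.
    a[0] and c[-1] are unused."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a diagonally dominant system.
n = 8
rng = np.random.default_rng(3)
a = rng.random(n); b = 4.0 + rng.random(n); c = rng.random(n); d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```

    The forward and backward sweeps are the two data-dependent passes whose ordering across processors the pipelined algorithms rearrange.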

  5. Parallel execution model for Prolog

    SciTech Connect

    Fagin, B.S.

    1987-01-01

    One candidate language for parallel symbolic computing is Prolog. Numerous ways for executing Prolog in parallel have been proposed, but current efforts suffer from several deficiencies. Many cannot support fundamental types of concurrency in Prolog. Other models are of purely theoretical interest, ignoring implementation costs. Detailed simulation studies of execution models are scarce; at present little is known about the costs and benefits of executing Prolog in parallel. In this thesis, a new parallel execution model for Prolog is presented: the PPP model or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

  6. Reordering computations for parallel execution

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1985-01-01

    The computations are reordered in the SOR algorithm to maintain the same asymptotic rate of convergence as the rowwise ordering to obtain parallelism at different levels. A parallel program is written to illustrate these ideas and actual machines for implementation of this program are discussed.
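
    One classic reordering of this kind is red-black ordering: points of one color have only opposite-color neighbors, so each half-sweep can update its entire color in parallel. The vectorized sketch below (for a 2-D Poisson problem) illustrates the idea; it is not necessarily the specific ordering studied in the report.

```python
# Hedged sketch: red-black SOR for -laplace(u) = f on a uniform grid
# with zero Dirichlet boundaries. Each half-sweep updates one color
# at once -- the parallelism that the reordering exposes.
import numpy as np

def sor_redblack(u, f, h, omega=1.5, iters=200):
    i, j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                       np.arange(1, u.shape[1] - 1), indexing='ij')
    for _ in range(iters):
        for color in (0, 1):
            mask = (i + j) % 2 == color
            # Gauss-Seidel value from the four (opposite-color) neighbors.
            gs = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                         + u[1:-1, 2:] + u[1:-1, :-2]
                         + h * h * f[1:-1, 1:-1])
            inner = u[1:-1, 1:-1]
            inner[mask] = (1 - omega) * inner[mask] + omega * gs[mask]
    return u

n = 17
h = 1.0 / (n - 1)
u = np.zeros((n, n))
f = np.ones((n, n))
sor_redblack(u, f, h)
res = (4 * u[1:-1, 1:-1] - u[2:, 1:-1] - u[:-2, 1:-1]
       - u[1:-1, 2:] - u[1:-1, :-2]) / h**2 - f[1:-1, 1:-1]
print(np.max(np.abs(res)))
```

    Because the update pattern differs from the rowwise sweep only in ordering, the asymptotic convergence rate is preserved, which is the point the abstract makes.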

  7. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  8. ParCAT: Parallel Climate Analysis Toolkit

    SciTech Connect

    Smith, Brian E.; Steed, Chad A.; Shipman, Galen M.; Ricciuto, Daniel M.; Thornton, Peter E.; Wehner, Michael; Williams, Dean N.

    2013-01-01

Climate science is employing increasingly complex models and simulations to analyze the past and predict the future of Earth's climate. This growth in complexity is creating a widening gap between the data being produced and the ability to analyze the datasets. Parallel computing tools are necessary to analyze, compare, and interpret the simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools to efficiently use parallel computing techniques to make analysis of these datasets manageable. The toolkit provides the ability to compute spatio-temporal means, differences between runs or differences between averages of runs, and histograms of the values in a data set. ParCAT is implemented as a command-line utility written in C. This allows for easy integration in other tools and allows for use in scripts. This also makes it possible to run ParCAT on many platforms from laptops to supercomputers. ParCAT outputs NetCDF files so it is compatible with existing utilities such as Panoply and UV-CDAT. This paper describes ParCAT and presents results from some example runs on the Titan system at ORNL.
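Means and histograms parallelise by the usual split-and-reduce pattern: each task computes partial sums and a local histogram on its slice, and the partial results combine exactly. A small Python illustration of that pattern (not ParCAT's actual C code; helper names are hypothetical):

```python
import numpy as np

def partial_stats(chunk, bins, vrange):
    """What each worker would compute on its slice of a dataset:
    a running sum/count for the mean and a local histogram."""
    hist, _ = np.histogram(chunk, bins=bins, range=vrange)
    return chunk.sum(), chunk.size, hist

def combine(parts):
    """Reduction step: partial sums, counts, and histograms combine
    exactly, so the parallel mean and histogram equal their serial
    counterparts."""
    total = sum(p[0] for p in parts)
    count = sum(p[1] for p in parts)
    hist = sum(p[2] for p in parts)
    return total / count, hist
```

Because every partial result is a fixed-size object, the reduction cost is independent of the dataset size, which is what makes this pattern scale from laptops to supercomputers.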

  9. Vertical Bloch line memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Stadler, Henry L. (Inventor); Wu, Jiin-chuan (Inventor)

    1995-01-01

    A new read gate design for the vertical Bloch line (VBL) memory is disclosed which offers larger operating margin than the existing read gate designs. In the existing read gate designs, a current is applied to all the stripes. The stripes that contain a VBL pair are chopped, while the stripes that do not contain a VBL pair are not chopped. The information is then detected by inspecting the presence or absence of the bubble. The margin of the chopping current amplitude is very small, and sometimes non-existent. A new method of reading Vertical Bloch Line memory is also disclosed. Instead of using the wall chirality to separate the two binary states, the spatial deflection of the stripe head is used. Also disclosed herein is a compact memory which uses vertical Bloch line (VBL) memory technology for providing data storage. A three-dimensional arrangement in the form of stacks of VBL memory layers is used to achieve high volumetric storage density. High data transfer rate is achieved by operating all the layers in parallel. Using Hall effect sensing, and optical sensing via the Faraday effect to access the data from within the three-dimensional packages, an even higher data transfer rate can be achieved due to parallel operation within each layer.

  10. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  11. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
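The scalability caveat shows up in the reduction step: unlike fixed-size statistical moments, merged contingency tables grow with the number of distinct category pairs, so communication volume is data-dependent. A toy sketch of the table build and merge (hypothetical helper names, not the VTK/Titan API):

```python
from collections import Counter

def contingency(pairs):
    """Count joint occurrences of (x, y) category pairs on one task's data."""
    return Counter(pairs)

def merge(tables):
    """Parallel reduction of per-task tables. The merged table's size grows
    with the number of distinct categories observed, which is what prevents
    the optimal speed-up attained by fixed-size moment reductions."""
    out = Counter()
    for t in tables:
        out.update(t)
    return out
```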

  12. How Citation Boosts Promote Scientific Paradigm Shifts and Nobel Prizes

    PubMed Central

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-01-01

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the “boosting effect” of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying “boost factor” is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. PMID:21573229

  13. How citation boosts promote scientific paradigm shifts and nobel prizes.

    PubMed

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-01-01

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. PMID:21573229

  14. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. The linear speedups and high efficiencies are shown for solving a demonstrated problem on Sequent Symmetry S81 parallel computing system.

  15. Final Technical Report for the BOOST2013 Workshop. Hosted by the University of Arizona

    SciTech Connect

    Johns, Kenneth

    2015-02-20

    BOOST 2013 was the 5th International Joint Theory/Experiment Workshop on Phenomenology, Reconstruction and Searches for Boosted Objects in High Energy Hadron Collisions. It was locally organized and hosted by the Experimental High Energy Physics Group at the University of Arizona and held at Flagstaff, Arizona on August 12-16, 2013. The workshop provided a forum for theorists and experimentalists to present and discuss the latest findings related to the reconstruction of boosted objects in high energy hadron collisions and their use in searches for new physics. This report gives the outcomes of the BOOST 2013 Workshop.

  16. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  17. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
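The master/worker arrangement described above can be illustrated with a toy scheduler: the master queues grid blocks and each worker claims the next available one, which is what lets a well-balanced block distribution approach the potential speedup. A threaded Python sketch (illustrative only; NPARC itself uses explicit message passing via libraries such as PVM):

```python
from queue import Queue, Empty
from threading import Thread

def master_worker(blocks, nworkers, work):
    """Toy master/worker scheme: the master fills a queue with block ids;
    each worker repeatedly claims a block and processes it, so faster
    workers naturally take more blocks (simple dynamic load balancing)."""
    q = Queue()
    for b in blocks:
        q.put(b)
    results = {}
    def worker():
        while True:
            try:
                b = q.get_nowait()
            except Empty:
                return
            results[b] = work(b)
    threads = [Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The same structure explains the reblocking result in the abstract: with too few (or very uneven) blocks, some workers idle while others finish their assignments, capping the achievable speedup.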

  18. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.
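Of the techniques named above, parallel prefix is the most self-contained to illustrate: the up-sweep and down-sweep of a work-efficient scan each take O(log n) phases whose inner updates are mutually independent, so each phase could run concurrently. A sequential Python rendering of that parallel structure (illustrative, not code from the thesis):

```python
def prefix_sums(xs):
    """Work-efficient parallel-prefix sketch (Blelloch scan). Each pass of
    the inner loops touches disjoint elements, so in a parallel setting
    every pass is one concurrent step; there are O(log n) passes total.
    Returns the exclusive prefix sums of xs."""
    n = len(xs)
    assert n and n & (n - 1) == 0, "power-of-two length, for simplicity"
    a = list(xs)
    d = 1
    while d < n:                      # up-sweep: build a tree of partial sums
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    a[n - 1] = 0                      # down-sweep: convert to exclusive scan
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a
```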

  19. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  20. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
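The key idea behind barrel-sort can be sketched in a few lines: each processor owns a contiguous key interval (a "barrel"), keys are routed to their owner in one communication phase, and local sorts then make the concatenation globally sorted. A serial Python sketch of the partitioning (the function name and the even key-range split are assumptions for illustration, not details from the paper):

```python
def barrel_partition(keys, nproc, kmax):
    """Barrel-sort idea in miniature: route each key in [0, kmax) to the
    processor owning its key interval, sort each barrel locally, and
    concatenate. The routing is the single message-passing phase; the
    local sorts are embarrassingly parallel."""
    width = (kmax + nproc) // nproc          # even split of the key range
    barrels = [[] for _ in range(nproc)]
    for k in keys:
        barrels[min(k // width, nproc - 1)].append(k)
    return [x for b in barrels for x in sorted(b)]
```

In a real distributed run each barrel lives on its own processor, so one bulk exchange replaces the many small messages that make fine-grained sorts expensive on machines with high message-passing overhead.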

  1. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
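The rsync-style comparison works block by block: checksum each block of the new checkpoint against the template and keep only blocks whose checksums differ. A minimal sketch of that delta scheme (SHA-256 and the helper names are illustrative choices, not from the patent):

```python
import hashlib

def changed_blocks(data, template, block=4):
    """Split checkpoint data into fixed-size blocks, compare each block's
    checksum with the template checkpoint, and return only the
    (index, bytes) pairs that actually need to be transmitted/stored."""
    def chunks(b):
        return [b[i:i + block] for i in range(0, len(b), block)]
    tmpl = [hashlib.sha256(c).digest() for c in chunks(template)]
    deltas = []
    for i, c in enumerate(chunks(data)):
        if i >= len(tmpl) or hashlib.sha256(c).digest() != tmpl[i]:
            deltas.append((i, c))
    return deltas

def restore(template, deltas, block=4):
    """Rebuild a node's checkpoint from the shared template plus its deltas."""
    buf = bytearray(template)
    for i, c in deltas:
        buf[i * block:i * block + len(c)] = c
    return bytes(buf)
```

When most nodes' state differs little from the broadcast template, the deltas are a small fraction of the full checkpoint, which is the source of the claimed reduction in transmitted and stored data.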

  2. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  3. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.

  4. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.

  5. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
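The core of the method, inverse iteration on shifted systems with the whole block of iterates re-orthogonalised by Modified Gram-Schmidt at every step, can be sketched compactly. A dense toy version follows (the paper targets tridiagonal matrices with specialised solvers; the names and the small perturbation of the shifts are illustrative):

```python
import numpy as np

def mgs(V):
    """Modified Gram-Schmidt: orthonormalise the columns of V."""
    V = V.copy()
    for j in range(V.shape[1]):
        for k in range(j):
            V[:, j] -= (V[:, k] @ V[:, j]) * V[:, k]
        V[:, j] /= np.linalg.norm(V[:, j])
    return V

def inverse_iteration(T, shifts, iters=50):
    """One iterate per (slightly perturbed) eigenvalue shift; the whole
    block is re-orthogonalised by MGS each step, the strategy the paper
    applies to unconverged iterates in eigenvalue clusters."""
    n = T.shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, len(shifts)))
    for _ in range(iters):
        for j, s in enumerate(shifts):
            V[:, j] = np.linalg.solve(T - s * np.eye(n), V[:, j])
        V = mgs(V)
    return V
```

Reorthogonalising the unconverged block keeps every solve independent within a step, which is what makes the scheme more parallelizable than waiting for converged eigenvectors to orthogonalise against.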

  6. Jet substructures of boosted polarized hadronic top quarks

    NASA Astrophysics Data System (ADS)

    Kitadono, Yoshio; Li, Hsiang-nan

    2016-03-01

We study jet substructures of a boosted polarized top quark, which undergoes the hadronic decay t → b u d̄, in the perturbative QCD framework, focusing on the energy profile and the differential energy profile. These substructures are factorized into the convolution of a hard top-quark decay kernel with a bottom-quark jet function and a W-boson jet function, where the latter is further factorized into the convolution of a hard W-boson decay kernel with two light-quark jet functions. Computing the hard kernels to leading order in QCD and including the resummation effect in the jet functions, we show that the differential jet energy profile is a useful observable for differentiating the helicity of a boosted hadronic top quark: a right-handed top jet exhibits quick descent of the differential energy profile with the inner test cone radius r, which is attributed to the V-A structure of the weak interaction and the dead-cone effect associated with the W-boson jet. The above helicity differentiation may help reveal the chiral structure of physics beyond the standard model at high energies.

  7. Boosting the Light: X-ray Physics in Confinement

    ScienceCinema

    Röhlsberger, Ralf [HASYLAB/DESY]

    2010-01-08

Remarkable effects are observed if light is confined to dimensions comparable to the wavelength of the light. The lifetime of atomic resonances excited by the radiation is strongly reduced in photonic traps, such as cavities or waveguides. Moreover, one observes an anomalous boost of the intensity scattered from the resonant atoms. These phenomena result from the strong enhancement of the photonic density of states in such geometries. Many of these effects are currently being explored in the regime of visible light due to their relevance for optical information processing. It is thus appealing to study these phenomena also for much shorter wavelengths. This talk illuminates recent experiments where synchrotron x-rays were trapped in planar waveguides to resonantly excite atoms (57Fe nuclei) embedded in them. In fact, one observes that the radiative decay of these excited atoms is strongly accelerated. The temporal acceleration of the decay goes along with a strong boost of the radiation coherently scattered from the confined atoms. This can be exploited to obtain a high signal-to-noise ratio from tiny quantities of material, leading to manifold applications in the investigation of nanostructured materials. One application is the use of ultrathin probe layers to image the internal structure of magnetic layer systems.

  8. Hyperdynamics boost factor achievable with an ideal bias potential

    DOE PAGESBeta

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  9. Hyperdynamics boost factor achievable with an ideal bias potential

    SciTech Connect

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  10. Playing tag with ANN: boosted top identification with pattern recognition

    NASA Astrophysics Data System (ADS)

    Almeida, Leandro G.; Backović, Mihailo; Cliche, Mathieu; Lee, Seung J.; Perelstein, Maxim

    2015-07-01

Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a "digital image" of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with pT in the 1100-1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms.

  11. Binarization With Boosting and Oversampling for Multiclass Classification.

    PubMed

    Sen, Ayon; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-05-01

Using a set of binary classifiers to solve multiclass classification problems has been a popular approach over the years. The decision boundaries learnt by binary classifiers (also called base classifiers) are much simpler than those learnt by multiclass classifiers. This paper proposes a new classification framework, termed binarization with boosting and oversampling (BBO), for efficiently solving multiclass classification problems. The new framework is devised based on the one-versus-all (OVA) binarization technique. Unlike most previous work, BBO employs boosting for solving the hard-to-learn instances and oversampling for handling the class-imbalance problem arising due to OVA binarization. These two features make BBO different from other existing works. Our new framework has been tested extensively on several multiclass supervised and semi-supervised classification problems using several different base classifiers, including neural networks, C4.5, k-nearest neighbor, repeated incremental pruning to produce error reduction, support vector machine, random forest, and learning with local and global consistency. Experimental results show that BBO can exhibit better performance compared to its counterparts on supervised and semi-supervised classification problems. PMID:25955858
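The OVA decomposition underlying BBO trains one binary learner per class and labels a query by the learner with the strongest response; oversampling the minority "one" side counters the imbalance OVA creates. A toy sketch with a nearest-centroid base learner (boosting of hard instances is omitted for brevity; all names are illustrative, not the authors' code):

```python
import numpy as np

class NearestCentroid:
    """Tiny binary base learner; higher score = more confidently class 1."""
    def fit(self, X, y):
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        return self
    def decision(self, X):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(axis=-1)
        return d[:, 0] - d[:, 1]

def ova_fit_predict(X, y, Xq):
    """One-versus-all: one binary problem per class, with the minority
    positive side naively oversampled to offset the imbalance that the
    OVA split creates; queries take the argmax over learner scores."""
    classes = np.unique(y)
    scores = []
    for c in classes:
        yb = (y == c).astype(int)
        pos, neg = X[yb == 1], X[yb == 0]
        reps = max(1, len(neg) // max(1, len(pos)))   # naive oversampling ratio
        Xb = np.vstack([neg] + [pos] * reps)
        yb2 = np.array([0] * len(neg) + [1] * (len(pos) * reps))
        scores.append(NearestCentroid().fit(Xb, yb2).decision(Xq))
    return classes[np.argmax(np.array(scores), axis=0)]
```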

  12. Hyperdynamics boost factor achievable with an ideal bias potential

    NASA Astrophysics Data System (ADS)

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-01

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Finally, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  13. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an ADA-boost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.

  14. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small-sized arrays on the conventional computer before attempting to run on a parallel system.

  15. Adaptive optics parallel near-confocal scanning ophthalmoscopy.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2016-08-15

    We present an adaptive optics parallel near-confocal scanning ophthalmoscope (AOPCSO) using a digital micromirror device (DMD). The imaging light is modulated to be a line of point sources by the DMD, illuminating the retina simultaneously. By using a high-speed line camera to acquire the image and using adaptive optics to compensate the ocular wave aberration, the AOPCSO can image the living human eye with cellular level resolution at the frame rate of 100 Hz. AOPCSO has been demonstrated with improved spatial resolution in imaging of the living human retina compared with adaptive optics line scan ophthalmoscopy. PMID:27519106

  16. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  17. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  18. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
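The series-parallel task-graph part of the model admits a compact toy evaluator (a deterministic sketch; the paper's method additionally models queuing contention at service centers, which this omits):

```python
def sp_time(node):
    """Execution time of a series-parallel task graph.

    A node is either a leaf task time, ('seq', [children]) composed
    in series (times add), or ('par', [children]) composed in
    parallel (the composite finishes when the slowest branch does)."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    times = [sp_time(c) for c in children]
    return sum(times) if kind == 'seq' else max(times)

# Sequential setup, then two parallel branches, then a join task
graph = ('seq', [2, ('par', [5, 3]), 1])
print(sp_time(graph))  # 2 + max(5, 3) + 1 = 8
```

Replacing the fixed leaf times with queuing-model estimates of service and waiting time is, roughly, what lets the full method predict utilization as well as elapsed time.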

  19. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    This method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated in the form of a model of a neurolike data-processing scheme [1-5]. The approach has a number of advantages compared with other methods of forming neurolike media (for example, the known methods of forming artificial neural networks). Its main advantage is the use of the multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, which makes it possible to exploit known natural features of the organization of computations: the topographic nature of mapping, simultaneity (parallelism) of signal operation, the layered structure of the cortex, the rough hierarchy of the cortex, and a spatially correlated, time-dependent mechanism of perception and training [5].

  20. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
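The combination rules behind the demonstration reduce to two one-liners (a plain arithmetic sketch):

```python
def series(resistances):
    """Resistances in series add directly: R = R1 + R2 + ..."""
    return sum(resistances)

def parallel(resistances):
    """In parallel, conductances (1/R) add: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

print(series([100, 200]))    # 300: longer "straw path", more resistance
print(parallel([100, 100]))  # 50.0: extra path halves the resistance
```

The straw analogy maps directly: airflow resistance grows when straws are joined end to end (series) and shrinks when one breathes through several straws at once (parallel).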

  1. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  2. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.

  3. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
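The parallel prefix-sums primitive that the reduction relies on can be simulated sequentially; the Hillis-Steele doubling scan below performs O(log n) rounds, each of which would be a single parallel step on a PRAM (a generic sketch, not the authors' construction):

```python
def prefix_sums(xs):
    """Inclusive scan via Hillis-Steele doubling.

    Each of the ~log2(n) rounds combines every element with the one
    'shift' positions to its left; on a PRAM all n combinations in a
    round happen simultaneously, here they are simulated in order."""
    xs = list(xs)
    shift = 1
    while shift < len(xs):
        xs = [x + (xs[i - shift] if i >= shift else 0)
              for i, x in enumerate(xs)]
        shift *= 2
    return xs

print(prefix_sums([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

For decoding, scanning codeword lengths this way yields every codeword's starting offset in the message, which is what lets the P processors split the work evenly.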

  4. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  5. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  6. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying future to a function causes it to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.

  7. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.

  8. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. PMID:12240679
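Speed-up and parallel efficiency as used in this abstract follow the standard definitions (illustrative numbers, not figures from the paper):

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Speed-up per processor; 1.0 is ideal linear scaling."""
    return speedup(t_serial, t_parallel) / processors

# Illustrative: a search taking 100 h serially, 8 h on 16 slave nodes
print(speedup(100, 8))               # 12.5
print(efficiency(100, 8, 16))        # ~0.78: still efficient
```

The abstract's findings translate directly into these terms: branch swapping keeps efficiency high up to 16 slaves, after which added processors raise the denominator without reducing the runtime.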

  9. Parallel heat transport in integrable and chaotic magnetic fields

    SciTech Connect

    Castillo-Negrete, D. del; Chacon, L.

    2012-05-15

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field), χ_∥, and the perpendicular, χ_⊥, conductivities (χ_∥/χ_⊥ may exceed 10^10 in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.

  10. Parallel Vegetation Stripe Formation Through Hydrologic Interactions

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Stieglitz, M.; Engel, V.; Turk, G.

    2009-12-01

    vegetation. With time, the patches aggregate and spread laterally in the direction of the downhill flow. To enhance understanding of fundamental processes that govern parallel stripe formation, we employ advanced visualization techniques to improve simulation: Line Integral Convolution for flow visualization and Voronoi Tesselation Algorithm for tracer visualization. We have applied the model to examine ecosystems that are characterized by parallel stripes such as the S&R system in the Everglades (See Engel et al, session H64).

  11. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system. 27.695 Section 27.695 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated control system is used, an alternate system must be immediately available that allows continued...

  12. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system. 29.695 Section 29.695 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated control system is used, an alternate system must be immediately available that allows continued...

  13. Enhanced algorithm performance for land cover classification from remotely sensed data using bagging and boosting

    USGS Publications Warehouse

    Chan, J.C.-W.; Huang, C.; DeFries, R.

    2001-01-01

    Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.
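Bagging as studied here can be sketched with a deliberately unstable weak learner, a one-dimensional threshold stump, trained on bootstrap resamples and combined by majority vote (a toy illustration, not the authors' land-cover pipeline):

```python
import random

def train_stump(data):
    """Best 1-D threshold classifier on (x, label) pairs; labels are +/-1.

    Threshold learners are unstable: small changes in the training
    sample can move the split, which is exactly what bagging exploits."""
    best = (len(data) + 1, 0.0, 1)
    for t, _ in data:
        for sign in (1, -1):
            err = sum(1 for x, y in data
                      if (sign if x > t else -sign) != y)
            if err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign if x > t else -sign

def bagged(data, rounds, rng):
    """Bagging: train each stump on a bootstrap resample, majority-vote."""
    stumps = [train_stump([rng.choice(data) for _ in data])
              for _ in range(rounds)]
    return lambda x: 1 if sum(s(x) for s in stumps) >= 0 else -1

rng = random.Random(0)
data = [(0, -1), (1, -1), (2, 1), (3, 1)]
clf = bagged(data, rounds=11, rng=rng)
print(clf(0), clf(3))
```

Averaging over resamples smooths out the stump's instability; a stable learner would produce nearly identical stumps on every resample, which is why the cited theory predicts no gain from bagging it.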

  14. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  15. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  16. Cracks and Lines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    6 June 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) picture shows an odd area of the south polar region that has sets of fine, nearly parallel lines running from the northeast (upper right) toward southwest (lower left) and a darker, wider set of cracks with a major trend running almost perpendicular to the finer lines. The appearance of these features is enhanced by seasonal frost. Dark areas have no frost, bright areas still have frozen carbon dioxide ice. In summer, the ice would be gone and the cracks and lines less obvious when viewed from orbit. Although unknown, wind might be responsible for forming the fine set of lines, and perhaps freeze-thaw cycles of ground ice or structural deformation would have contributed to formation of the wider cracks. The image is located near 85.0oS, 324.0oW, and covers an area about 1.5 km (nearly 1 mi) across. The scene is illuminated by sunlight from the upper left.

  17. The economics of parallel trade.

    PubMed

    Danzon, P M

    1998-03-01

    The potential for parallel trade in the European Union (EU) has grown with the accession of low price countries and the harmonisation of registration requirements. Parallel trade implies a conflict between the principle of autonomy of member states to set their own pharmaceutical prices, the principle of free trade and the industrial policy goal of promoting innovative research and development (R&D). Parallel trade in pharmaceuticals does not yield the normal efficiency gains from trade because countries achieve low pharmaceutical prices by aggressive regulation, not through superior efficiency. In fact, parallel trade reduces economic welfare by undermining price differentials between markets. Pharmaceutical R&D is a global joint cost of serving all consumers worldwide; it accounts for roughly 30% of total costs. Optimal (welfare maximising) pricing to cover joint costs (Ramsey pricing) requires setting different prices in different markets, based on inverse demand elasticities. By contrast, parallel trade and regulation based on international price comparisons tend to force price convergence across markets. In response, manufacturers attempt to set a uniform 'euro' price. The primary losers from 'euro' pricing will be consumers in low income countries who will face higher prices or loss of access to new drugs. In the long run, even higher income countries are likely to be worse off with uniform prices, because fewer drugs will be developed. One policy option to preserve price differentials is to exempt on-patent products from parallel trade. An alternative is confidential contracting between individual manufacturers and governments to provide country-specific ex post discounts from the single 'euro' wholesale price, similar to rebates used by managed care in the US. This would preserve differentials in transactions prices even if parallel trade forces convergence of wholesale prices. PMID:10178655
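The Ramsey (inverse-elasticity) rule cited in the abstract sets higher markups in markets with less elastic demand; a stylized two-market sketch with assumed constant elasticities:

```python
def ramsey_price(marginal_cost, elasticity, k):
    """Inverse-elasticity rule: markup (p - mc)/p = k / elasticity.

    k in (0, 1] scales all markups so that joint costs (here, R&D)
    are just covered; k = 1 reproduces simple monopoly pricing."""
    return marginal_cost / (1 - k / elasticity)

mc = 10.0  # hypothetical per-unit production cost
print(ramsey_price(mc, elasticity=4.0, k=0.8))  # elastic market: 12.5
print(ramsey_price(mc, elasticity=1.6, k=0.8))  # inelastic market: 20.0
```

Parallel trade arbitrages away exactly this price gap, forcing both markets toward one price; the abstract's argument is that the resulting uniform price excludes the elastic (low-income) market rather than splitting the difference.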

  18. Investigation of the Centaur boost pump overspeed condition at main engine shutdown on the Titan Centaur TC-2 flight

    NASA Technical Reports Server (NTRS)

    Baud, K. W.

    1975-01-01

    An investigation was conducted to evaluate a potential boost pump overspeed condition which could exist on the Titan/Centaur launch vehicle after main engine shut-off. Preliminary analyses indicated that the acceleration imparted to the unloaded boost pump-turbine assembly, caused by purging residual hydrogen peroxide from the turbine supply lines, could result in a pump-turbine overspeed. Previous test experience indicated that turbine damage occurs at speeds in excess of 75,000 rpm. Detailed theoretical analyses, in conjunction with pump tests, were conducted to establish the maximum pump-turbine speed at main engine shut-off. The analyses predicted a maximum speed of 68,000 rpm. Testing showed the pump-turbine speed to be 66,700 rpm in the overspeed condition. Inasmuch as both the analysis and tests showed the overspeed to be sufficiently less than the speed at which damage could occur, it was concluded that no corrective action would be required for the launch vehicle.

  19. Metabolic engineering of resveratrol and other longevity boosting compounds.

    SciTech Connect

    Wang, Y; Chen, H; Yu, O

    2010-09-16

    Resveratrol, a compound commonly found in red wine, has attracted much attention recently. It is a diphenolic natural product accumulated in grapes and a few other species under stress conditions. It possesses a special ability to increase the life span of eukaryotic organisms, ranging from yeast, to fruit fly, to obese mouse. The demand for resveratrol as a food and nutrition supplement has increased significantly in recent years. Extensive work has been carried out to increase the production of resveratrol in plants and microbes. In this review, we will discuss the biosynthetic pathway of resveratrol and engineering methods to heterologously express the pathway in various organisms. We will outline the shortcomings and limitations of common engineering efforts. We will also discuss briefly the features and engineering challenges of other longevity boosting compounds.

  20. Boosting magnetic reconnection by viscosity and thermal conduction

    NASA Astrophysics Data System (ADS)

    Minoshima, Takashi; Miyoshi, Takahiro; Imada, Shinsuke

    2016-07-01

    Nonlinear evolution of magnetic reconnection is investigated by means of magnetohydrodynamic simulations including uniform resistivity, uniform viscosity, and anisotropic thermal conduction. When viscosity exceeds resistivity (the magnetic Prandtl number Pr_m > 1), the viscous dissipation dominates outflow dynamics and leads to a decrease in the plasma density inside a current sheet. The low-density current sheet supports the excitation of the vortex. The thickness of the vortex is broader than that of the current for Pr_m > 1. The broader vortex flow more efficiently carries the upstream magnetic flux toward the reconnection region and, consequently, boosts the reconnection. The reconnection rate increases with viscosity provided that thermal conduction is fast enough to take away the thermal energy increased by the viscous dissipation (the fluid Prandtl number Pr < 1). The result suggests the need to control the Prandtl numbers in reconnection models, in contrast to the conventional resistive model.
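The two dimensionless controls in the abstract are ratios of diffusivities; under the standard definitions (with illustrative values):

```python
def magnetic_prandtl(viscosity, resistivity):
    """Pr_m = nu / eta: kinematic viscosity over magnetic diffusivity."""
    return viscosity / resistivity

def fluid_prandtl(viscosity, thermal_diffusivity):
    """Pr = nu / kappa: kinematic viscosity over thermal diffusivity."""
    return viscosity / thermal_diffusivity

# Illustrative diffusivities (arbitrary units), chosen to land in the
# regime the abstract identifies as reconnection-boosting
nu, eta, kappa = 2.0, 1.0, 10.0
print(magnetic_prandtl(nu, eta))  # 2.0  (> 1: viscosity beats resistivity)
print(fluid_prandtl(nu, kappa))   # 0.2  (< 1: conduction removes the heat)
```

The boosted-reconnection regime reported here is thus Pr_m > 1 together with Pr < 1: viscous momentum diffusion outpaces resistive diffusion while thermal conduction outpaces both.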

  1. Boosting standard order sets utilization through clinical decision support.

    PubMed

    Li, Haomin; Zhang, Yinsheng; Cheng, Haixia; Lu, Xudong; Duan, Huilong

    2013-01-01

    Well-designed standard order sets have the potential to integrate and coordinate care by communicating best practices through multiple disciplines, levels of care, and services. However, several challenges have limited the benefits expected from standard order sets. To boost standard order set utilization, a problem-oriented knowledge delivery solution was proposed in this study to facilitate access to standard order sets and evaluation of their treatment effect. In this solution, standard order sets were created along with diagnostic rule sets which can trigger a CDS-based reminder to help clinicians quickly discover hidden clinical problems and corresponding standard order sets during ordering. Those rule sets also provide indicators for targeted evaluation of standard order sets during treatment. A prototype system was developed based on this solution and will be presented at Medinfo 2013. PMID:23920727

  2. Writing about testing worries boosts exam performance in the classroom.

    PubMed

    Ramirez, Gerardo; Beilock, Sian L

    2011-01-14

    Two laboratory and two randomized field experiments tested a psychological intervention designed to improve students' scores on high-stakes exams and to increase our understanding of why pressure-filled exam situations undermine some students' performance. We expected that sitting for an important exam leads to worries about the situation and its consequences that undermine test performance. We tested whether having students write down their thoughts about an upcoming test could improve test performance. The intervention, a brief expressive writing assignment that occurred immediately before taking an important test, significantly improved students' exam scores, especially for students habitually anxious about test taking. Simply writing about one's worries before a high-stakes exam can boost test scores. PMID:21233387

  3. Usefulness of effective field theory for boosted Higgs production

    SciTech Connect

    Dawson, S.; Lewis, I. M.; Zeng, Mao

    2015-04-07

    The Higgs + jet channel at the LHC is sensitive to the effects of new physics both in the total rate and in the transverse momentum distribution at high pT. We examine the production process using an effective field theory (EFT) language and discuss the possibility of determining the nature of the underlying high-scale physics from boosted Higgs production. The effects of heavy color triplet scalars and top partner fermions with TeV-scale masses are considered as examples, and Higgs-gluon couplings of dimension-5 and dimension-7 are included in the EFT. As a byproduct of our study, we examine the region of validity of the EFT. Dimension-7 contributions in realistic new physics models give effects in the high-pT tail of the Higgs signal which are so tiny that they are likely to be unobservable.

  4. A mechatronic power boosting design for piezoelectric generators

    SciTech Connect

    Liu, Haili; Liang, Junrui; Ge, Cong

    2015-10-05

    It was shown that the piezoelectric power generation can be boosted by using the synchronized switch power conditioning circuits. This letter reports a self-powered and self-sensing mechatronic design in substitute of the auxiliary electronics towards a compact and universal synchronized switch solution. The design criteria are derived based on the conceptual waveforms and a two-degree-of-freedom analytical model. Experimental result shows that, compared to the standard bridge rectifier interface, the mechatronic design leads to an extra 111% increase of generated power from the prototyped piezoelectric generator under the same deflection magnitude excitation. The proposed design has introduced a valuable physical insight of electromechanical synergy towards the improvement of piezoelectric power generation.

  5. An update on Shankhpushpi, a cognition-boosting Ayurvedic medicine.

    PubMed

    Sethiya, Neeraj Kumar; Nahata, Alok; Mishra, Sri Hari; Dixit, Vinod Kumar

    2009-11-01

    Shankhpushpi is an Ayurvedic drug used for its action on the central nervous system, especially for boosting memory and improving intellect. Information gathered from Ayurvedic and other Sanskrit literature revealed the existence of four different plant species under the name Shankhpushpi, used in various Ayurvedic prescriptions described in ancient texts, singly or in combination with other herbs. The sources comprise the entire herbs of the following species: Convulvulus pluricaulis Choisy. (Convulvulaceae), Evolvulus alsinoides Linn. (Convulvulaceae), Clitoria ternatea Linn. (Papilionaceae) and Canscora decussata Schult. (Gentianaceae). A review of the available scientific information on the pharmacognostical characteristics, chemical constituents, pharmacological activities, and preclinical and clinical applications of the controversial sources of Shankhpushpi is presented. It may provide parameters of differentiation and permit appreciation of the variability of drug action when different botanical sources are used. PMID:19912732

  6. A mechatronic power boosting design for piezoelectric generators

    NASA Astrophysics Data System (ADS)

    Liu, Haili; Liang, Junrui; Ge, Cong

    2015-10-01

    It was shown that piezoelectric power generation can be boosted by using synchronized-switch power conditioning circuits. This letter reports a self-powered and self-sensing mechatronic design as a substitute for the auxiliary electronics, towards a compact and universal synchronized-switch solution. The design criteria are derived based on the conceptual waveforms and a two-degree-of-freedom analytical model. Experimental results show that, compared to the standard bridge rectifier interface, the mechatronic design leads to an extra 111% increase of generated power from the prototyped piezoelectric generator under the same deflection-magnitude excitation. The proposed design introduces a valuable physical insight, electromechanical synergy, for improving piezoelectric power generation.

  7. Measuring Intuition: Nonconscious Emotional Information Boosts Decision Accuracy and Confidence.

    PubMed

    Lufityanto, Galang; Donkin, Chris; Pearson, Joel

    2016-05-01

    The long-held popular notion of intuition has garnered much attention both academically and popularly. Although most people agree that there is such a phenomenon as intuition, involving emotionally charged, rapid, unconscious processes, little compelling evidence supports this notion. Here, we introduce a technique in which subliminal emotional information is presented to subjects while they make fully conscious sensory decisions. Our behavioral and physiological data, along with evidence-accumulator models, show that nonconscious emotional information can boost accuracy and confidence in a concurrent emotion-free decision task, while also speeding up response times. Moreover, these effects were contingent on the specific predictive arrangement of the nonconscious emotional valence and motion direction in the decisional stimulus. A model that simultaneously accumulates evidence from both physiological skin conductance and conscious decisional information provides an accurate description of the data. These findings support the notion that nonconscious emotions can bias concurrent nonemotional behavior-a process of intuition. PMID:27052557

  8. Buck-Buck-Boost Regulator (B3R)

    NASA Astrophysics Data System (ADS)

    Mourra, Olivier; Fernandez, Arturo; Landstroem, Sven; Tonicello, Ferdinando

    2011-10-01

    In a satellite, the main function of a Power Conditioning Unit (PCU) is to manage the energy coming from several power sources (usually solar arrays and a battery) and to deliver it continuously to the users in an appropriate form during the overall mission. The objective of this paper is to present an electronic switching DC-DC converter called the Buck-Buck-Boost Regulator (B3R) that could be used as a modular and recurrent solution in a PCU for regulated or unregulated 28 V satellite power bus classes. The power conversion stages of the B3R topology are first described. Theoretical equations and practical tests then illustrate how the converter operates in terms of power conversion, control-loop performance, and efficiency. The paper finally provides some examples of single-point-failure-tolerant implementation using the B3R.
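
    The duty-cycle relations behind such a converter can be illustrated with the ideal continuous-conduction-mode conversion ratio of a generic cascaded (non-inverting) buck-boost stage. This is a textbook relation assumed for illustration, not the B3R's actual topology or control law, and the duty-cycle values below are invented.

```python
def noninverting_buck_boost_vout(vin, d_buck, d_boost):
    """Ideal CCM conversion ratio of a cascaded (non-inverting) buck-boost
    stage: Vout = Vin * d_buck / (1 - d_boost).

    A generic textbook relation, not the B3R's specific design."""
    return vin * d_buck / (1.0 - d_boost)

# Regulating a 28 V bus from a varying solar-array voltage (illustrative duties):
# buck mode when the input is above the bus, boost mode when it is below.
print(round(noninverting_buck_boost_vout(35.0, d_buck=0.8, d_boost=0.0), 1))    # 28.0
print(round(noninverting_buck_boost_vout(22.0, d_buck=1.0, d_boost=0.215), 1))  # 28.0
```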

  9. Boosting thermoelectric efficiency using time-dependent control

    PubMed Central

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to the load of a device to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement; importantly, a significant amount of the power supplied by the control is utilised to convert the wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated with existing devices, provides a beneficial scheme to boost thermoelectric efficiency. PMID:26464021

  10. Defined three-dimensional microenvironments boost induction of pluripotency.

    PubMed

    Caiazzo, Massimiliano; Okawa, Yuya; Ranga, Adrian; Piersigilli, Alessandra; Tabata, Yoji; Lutolf, Matthias P

    2016-03-01

    Since the discovery of induced pluripotent stem cells (iPSCs), numerous approaches have been explored to improve the original protocol, which is based on a two-dimensional (2D) cell-culture system. Surprisingly, nothing is known about the effect of a more biologically faithful 3D environment on somatic-cell reprogramming. Here, we report a systematic analysis of how reprogramming of somatic cells occurs within engineered 3D extracellular matrices. By modulating microenvironmental stiffness, degradability and biochemical composition, we have identified a previously unknown role for biophysical effectors in the promotion of iPSC generation. We find that the physical cell confinement imposed by the 3D microenvironment boosts reprogramming through an accelerated mesenchymal-to-epithelial transition and increased epigenetic remodelling. We conclude that 3D microenvironmental signals act synergistically with reprogramming transcription factors to increase somatic plasticity. PMID:26752655

  11. Metabolic engineering of resveratrol and other longevity boosting compounds.

    PubMed

    Wang, Yechun; Chen, Hui; Yu, Oliver

    2010-01-01

    Resveratrol, a compound commonly found in red wine, has attracted much attention recently. It is a diphenolic natural product accumulated in grapes and a few other species under stress conditions. It possesses a special ability to increase the life span of eukaryotic organisms, ranging from yeast to fruit flies to obese mice. The demand for resveratrol as a food and nutrition supplement has increased significantly in recent years. Extensive work has been carried out to increase the production of resveratrol in plants and microbes. In this review, we will discuss the biosynthetic pathway of resveratrol and engineering methods to heterologously express the pathway in various organisms. We will outline the shortcomings and limitations of common engineering efforts. We will also briefly discuss the features and engineering challenges of other longevity-boosting compounds. PMID:20848556

  12. Mutual boosting of the saturation scales in colliding nuclei

    NASA Astrophysics Data System (ADS)

    Kopeliovich, B. Z.; Pirner, H. J.; Potashnikova, I. K.; Schmidt, Iván

    2011-03-01

    Saturation of small-x gluons in a nucleus, which has the form of transverse momentum broadening of projectile gluons in pA collisions in the nuclear rest frame, leads to a modification of the parton distribution functions in the beam compared with pp collisions. The DGLAP-driven gluon distribution turns out to be suppressed at large x, but significantly enhanced at x ≪ 1. This is a high-twist effect. In the case of nucleus-nucleus collisions, all participating nucleons on both sides get enriched in gluon density at small x, which leads to a further boosting of the saturation scale. We derive reciprocity equations for the saturation scales corresponding to a collision of two nuclei. The solution of these equations for central collisions of two heavy nuclei demonstrates a significant enhancement, up to several times, of the saturation scale Q_{sA}^2 in AA compared with pA collisions.

  13. Defined three-dimensional microenvironments boost induction of pluripotency

    NASA Astrophysics Data System (ADS)

    Caiazzo, Massimiliano; Okawa, Yuya; Ranga, Adrian; Piersigilli, Alessandra; Tabata, Yoji; Lutolf, Matthias P.

    2016-03-01

    Since the discovery of induced pluripotent stem cells (iPSCs), numerous approaches have been explored to improve the original protocol, which is based on a two-dimensional (2D) cell-culture system. Surprisingly, nothing is known about the effect of a more biologically faithful 3D environment on somatic-cell reprogramming. Here, we report a systematic analysis of how reprogramming of somatic cells occurs within engineered 3D extracellular matrices. By modulating microenvironmental stiffness, degradability and biochemical composition, we have identified a previously unknown role for biophysical effectors in the promotion of iPSC generation. We find that the physical cell confinement imposed by the 3D microenvironment boosts reprogramming through an accelerated mesenchymal-to-epithelial transition and increased epigenetic remodelling. We conclude that 3D microenvironmental signals act synergistically with reprogramming transcription factors to increase somatic plasticity.

  14. Instance transfer learning with multisource dynamic TrAdaBoost.

    PubMed

    Zhang, Qian; Li, Haigang; Zhang, Yong; Li, Ming

    2014-01-01

    Since transfer learning can employ knowledge from related domains to help the learning task in the current target domain, it shows, compared with traditional learning, the advantages of reduced learning cost and improved learning efficiency. Focused on the situation in which sample data from the transfer source domain and the target domain have similar distributions, an instance transfer learning method based on multisource dynamic TrAdaBoost is proposed in this paper. In this method, knowledge from multiple source domains is used to avoid negative transfer; furthermore, the information that is conducive to the target task learning is obtained to train candidate classifiers. The theoretical analysis suggests that adding the dynamic factor improves the algorithm's ability to shift weight from source to target instances, and that its classification effectiveness is better than single-source transfer. Finally, experimental results show that the proposed algorithm has higher classification accuracy. PMID:25152906
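
    The flavor of the underlying weight updates can be sketched as follows. This is a minimal single-source illustration of the classic TrAdaBoost-style rules (misclassified source instances are decayed, misclassified target instances are boosted), not the paper's multisource dynamic variant; all names and numbers are illustrative.

```python
import math

def tradaboost_step(w_src, w_tgt, miss_src, miss_tgt, err_t, n_src, rounds):
    """One TrAdaBoost-style round: down-weight misclassified source
    instances, up-weight misclassified target instances.

    err_t is the weighted error on the target domain (assumed < 0.5)."""
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n_src) / rounds))
    beta_tgt = err_t / (1.0 - err_t)
    new_src = [w * (beta_src if m else 1.0) for w, m in zip(w_src, miss_src)]
    new_tgt = [w * (beta_tgt ** (-1.0 if m else 0.0)) for w, m in zip(w_tgt, miss_tgt)]
    return new_src, new_tgt

# Toy round: 4 source and 2 target instances, uniform weights per domain.
w_src, w_tgt = [0.25] * 4, [0.5] * 2
w_src, w_tgt = tradaboost_step(w_src, w_tgt,
                               miss_src=[True, False, False, True],
                               miss_tgt=[False, True],
                               err_t=0.2, n_src=4, rounds=10)
# The misclassified target instance grows (0.5 -> 2.0); misclassified
# source instances shrink below 0.25.
```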

  15. Boosting thermoelectric efficiency using time-dependent control.

    PubMed

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of the power delivered to the load of a device to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints, set by the Onsager reciprocal relation and the second law of thermodynamics, that severely bottleneck thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement; importantly, a significant amount of the power supplied by the control is utilised to convert the wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated with existing devices, provides a beneficial scheme to boost thermoelectric efficiency. PMID:26464021

  16. Boosting association rule mining in large datasets via Gibbs sampling.

    PubMed

    Qian, Guoqi; Rao, Calyampudi Radhakrishna; Sun, Xiaoying; Wu, Yuehua

    2016-05-01

    Current algorithms for association rule mining from transaction data are mostly deterministic and enumerative. They can be computationally intractable even for mining a dataset containing just a few hundred transaction items, if no action is taken to constrain the search space. In this paper, we develop a Gibbs-sampling-induced stochastic search procedure to randomly sample association rules from the itemset space, and perform rule mining from the reduced transaction dataset generated by the sample. Also a general rule importance measure is proposed to direct the stochastic search so that, as a result of the randomly generated association rules constituting an ergodic Markov chain, the overall most important rules in the itemset space can be uncovered from the reduced dataset with probability 1 in the limit. In the simulation study and a real genomic data example, we show how to boost association rule mining by an integrated use of the stochastic search and the Apriori algorithm. PMID:27091963
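
    A toy version of the idea, stochastic search over the itemset space guided by an importance score, can be sketched as below. Plain support is used as a stand-in for the paper's general rule-importance measure, the sampler is a simple Gibbs-style sweep rather than the authors' exact scheme, and the transaction data are invented.

```python
import random

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    if not itemset:
        return 0.0
    return sum(itemset <= t for t in transactions) / len(transactions)

def gibbs_itemset_search(transactions, items, sweeps=200, seed=0):
    """Gibbs-style sweep: resample each item's in/out state conditionally,
    with inclusion probability proportional to the resulting support.
    Tracks the best-supported itemset of size >= 2 seen along the chain."""
    rng = random.Random(seed)
    state, best, best_sup = set(), set(), 0.0
    for _ in range(sweeps):
        for item in items:
            s_in = support(state | {item}, transactions)
            s_out = support(state - {item}, transactions)
            p_in = s_in / (s_in + s_out) if (s_in + s_out) else 0.0
            if rng.random() < p_in:
                state.add(item)
            else:
                state.discard(item)
            s = support(state, transactions)
            if len(state) >= 2 and s > best_sup:
                best, best_sup = set(state), s
    return best, best_sup

# Toy data: {a, b} co-occurs in 3 of 4 transactions.
transactions = [frozenset("ab"), frozenset("abc"), frozenset("ab"), frozenset("c")]
best, best_sup = gibbs_itemset_search(transactions, ["a", "b", "c"])
print(sorted(best), best_sup)
```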

  17. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of its iterates and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  18. Bounded Parallel-Batch Scheduling on Unrelated Parallel Machines

    NASA Astrophysics Data System (ADS)

    Miao, Cuixia; Zhang, Yuzhong; Wang, Chengfei

    In this paper, we consider the bounded parallel-batch scheduling problem on unrelated parallel machines. Problems R_m|B|F are NP-hard for any objective function F. For this reason, we discuss the special case with p_{ij} = p_i for i = 1, 2, ⋯, m and j = 1, 2, ⋯, n. We give optimal algorithms for the general scheduling objectives of minimizing the total weighted completion time, the makespan, and the number of tardy jobs. We also design pseudo-polynomial-time algorithms for the case with a rejection penalty, minimizing the makespan and the total weighted completion time plus the total penalty of the rejected jobs, respectively.

  19. High-dose simultaneously integrated breast boost using intensity-modulated radiotherapy and inverse optimization

    SciTech Connect

    Hurkmans, Coen W. . E-mail: coen.hurkmans@cze.nl; Meijer, Gert J.; Vliet-Vroegindeweij, Corine van; Cassee, Jorien

    2006-11-01

    Purpose: Recently a Phase III randomized trial has started comparing a boost of 16 Gy as part of whole-breast irradiation to a high boost of 26 Gy in young women. Our main aim was to develop an efficient simultaneously integrated boost (SIB) technique for the high-dose arm of the trial. Methods and Materials: Treatment planning was performed for 5 left-sided and 5 right-sided tumors. A tangential field intensity-modulated radiotherapy technique added to a sequentially planned 3-field boost (SEQ) was compared with a simultaneously planned technique (SIB) using inverse optimization. Normalized total dose (NTD)-corrected dose volume histogram parameters were calculated and compared. Results: The intended NTD was produced by 31 fractions of 1.66 Gy to the whole breast and 2.38 Gy to the boost volume. The average volume of the PTV-breast and PTV-boost receiving more than 95% of the prescribed dose was 97% or more for both techniques. Also, the mean lung dose and mean heart dose did not differ much between the techniques, with on average 3.5 Gy and 2.6 Gy for the SEQ and 3.8 Gy and 2.6 Gy for the SIB, respectively. However, the SIB resulted in a significantly more conformal irradiation of the PTV-boost. The volume of the PTV-breast, excluding the PTV-boost, receiving a dose higher than 95% of the boost dose could be reduced considerably using the SIB as compared with the SEQ from 129 cc (range, 48-262 cc) to 58 cc (range, 30-102 cc). Conclusions: A high-dose simultaneously integrated breast boost technique has been developed. The unwanted excessive dose to the breast was significantly reduced.

  20. Ductal Carcinoma in Situ-The Influence of the Radiotherapy Boost on Local Control

    SciTech Connect

    Wong, Philip; Lambert, Christine; Agnihotram, Ramanakumar V.; David, Marc; Duclos, Marie; Freeman, Carolyn R.

    2012-02-01

    Purpose: Local recurrence (LR) of ductal carcinoma in situ (DCIS) is reduced by whole-breast irradiation after breast-conserving surgery (BCS). However, the benefit of adding a radiotherapy boost to the surgical cavity for DCIS is unclear. We sought to determine the impact of the boost on LR in patients with DCIS treated at the McGill University Health Centre. Methods and Materials: A total of 220 consecutive cases of DCIS treated with BCS and radiotherapy between January 2000 and December 2006 were reviewed. Of the patients, 36% received a radiotherapy boost to the surgical cavity. Median follow-up was 46 months for the boost and no-boost groups. Kaplan-Meier survival analyses and Cox regression analyses were performed. Results: Compared with the no-boost group, patients in the boost group more frequently had positive and <0.1-cm margins (48% vs. 8%) (p < 0.0001) and more frequently were in higher-risk categories as defined by the Van Nuys Prognostic (VNP) index (p = 0.006). Despite being at higher risk for LR, none (0/79) of the patients who received a boost experienced LR, whereas 8 of 141 patients who did not receive a boost experienced an in-breast LR (log-rank p = 0.03). Univariate analysis of prognostic factors (age, tumor size, margin status, histological grade, necrosis, and VNP risk category) revealed only the presence of necrosis to significantly correlate with LR (log-rank p = 0.003). The whole-breast irradiation dose and fractionation schedule did not affect LR rate. Conclusions: Our results suggest that the use of a radiotherapy boost improves local control in DCIS and may outweigh the poor prognostic effect of necrosis.

  1. Boosting feature selection for Neural Network based regression.

    PubMed

    Bailly, Kevin; Milgram, Maurice

    2009-01-01

    The head pose estimation problem is well known to be a challenging task in computer vision and is a useful tool for several applications involving human-computer interaction. This problem can be stated as a regression one where the input is an image and the output is the pan and tilt angles. Finding the optimal regression is a hard problem because of the high dimensionality of the input (the number of image pixels) and the large variety of morphologies and illumination conditions. We propose a new method combining a boosting strategy for feature selection and a neural network for the regression. Potential features are a very large set of Haar-like wavelets, which are well known to be adapted to face image processing. To achieve the feature selection, a new Fuzzy Functional Criterion (FFC) is introduced which is able to evaluate the link between a feature and the output without any estimation of the joint probability density function, as in Mutual Information. The boosting strategy uses this criterion at each step: features are evaluated by the FFC using weights on examples computed from the error produced by the neural network trained at the previous step. Tests are carried out on the commonly used Pointing 04 database and compared with three state-of-the-art methods. We also evaluate the accuracy of the estimation on FacePix, a database with a high angular resolution. Our method compares favorably with a Convolutional Neural Network, which is well known to incorporate feature extraction in its first layers. PMID:19616404
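
    The boosting-for-feature-selection loop can be sketched generically as below. A weighted correlation score stands in for the paper's Fuzzy Functional Criterion, and the weight update is a crude per-example residual proxy rather than the authors' neural-network error; the data and names are invented.

```python
def weighted_corr(xs, ys, ws):
    """Weighted Pearson correlation, used here as a stand-in scoring criterion."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    vx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    vy = sum(w * (y - my) ** 2 for w, y in zip(ws, ys))
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

def boosted_feature_selection(features, target, n_select):
    """Greedy boosting-style selection: at each step pick the feature that
    best explains the weighted target, then re-weight examples toward
    the poorly fit ones (crude residual proxy for the regressor's error)."""
    n = len(target)
    weights = [1.0 / n] * n
    chosen = []
    for _ in range(n_select):
        scores = {name: abs(weighted_corr(col, target, weights))
                  for name, col in features.items() if name not in chosen}
        best = max(scores, key=scores.get)
        chosen.append(best)
        resid = [abs(y - x) for x, y in zip(features[best], target)]
        z = sum(resid)
        if z > 0:
            weights = [r / z for r in resid]
    return chosen

# Toy data: f1 matches the target exactly; noise is constant.
features = {"f1": [0.0, 1.0, 2.0, 3.0],
            "f2": [3.0, 1.0, 0.0, 2.0],
            "noise": [1.0, 1.0, 1.0, 1.0]}
target = [0.0, 1.0, 2.0, 3.0]
print(boosted_feature_selection(features, target, 2))  # ['f1', 'f2']
```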

  2. Boosted di-boson from a mixed heavy stop

    SciTech Connect

    Ghosh, Diptimoy

    2013-12-01

    The lighter mass eigenstate ($\widetilde{t}_1$) of the two top squarks, the scalar superpartners of the top quark, is extremely difficult to discover if it is almost degenerate with the lightest neutralino ($\widetilde{\chi}_1^0$), the lightest and stable supersymmetric particle in R-parity conserving supersymmetry. The current experimental bound on the $\widetilde{t}_1$ mass in this scenario stands at only around 200 GeV. For such a light $\widetilde{t}_1$, the heavier top squark ($\widetilde{t}_2$) can also be around the TeV scale. Moreover, the high value of the Higgs ($h$) mass prefers the left- and right-handed top squarks to be highly mixed, allowing the possibility of a considerable branching ratio for $\widetilde{t}_2 \to \widetilde{t}_1 h$ and $\widetilde{t}_2 \to \widetilde{t}_1 Z$. In this paper, we explore the above possibility together with the pair production of $\widetilde{t}_2 \widetilde{t}_2^*$ giving rise to the spectacular di-boson + missing transverse energy final state. For an approximately 1 TeV $\widetilde{t}_2$ and a few hundred GeV $\widetilde{t}_1$, the final-state particles can be moderately boosted, which encourages us to propose a novel search strategy employing the jet substructure technique to tag the boosted $h$ and $Z$. The reconstruction of the $h$ and $Z$ momenta also allows us to construct the stransverse mass $M_{T2}$, providing an additional efficient handle to fight the backgrounds. We show that a 4-5$\sigma$ signal can be observed at the 14 TeV LHC for $\sim$ 1 TeV $\widetilde{t}_2$ with 100 fb$^{-1}$ integrated luminosity.

  3. Esophageal Cancer Dose Escalation Using a Simultaneous Integrated Boost Technique

    SciTech Connect

    Welsh, James; Palmer, Matthew B.; Ajani, Jaffer A.; Liao Zhongxing; Swisher, Steven G.; Hofstetter, Wayne L.; Allen, Pamela K.; Settle, Steven H.; Gomez, Daniel; Likhacheva, Anna; Cox, James D.; Komaki, Ritsuko

    2012-01-01

    Purpose: We previously showed that 75% of radiation therapy (RT) failures in patients with unresectable esophageal cancer are in the gross tumor volume (GTV). We performed a planning study to evaluate if a simultaneous integrated boost (SIB) technique could selectively deliver a boost dose of radiation to the GTV in patients with esophageal cancer. Methods and Materials: Treatment plans were generated using four different approaches (two-dimensional conformal radiotherapy [2D-CRT] to 50.4 Gy, 2D-CRT to 64.8 Gy, intensity-modulated RT [IMRT] to 50.4 Gy, and SIB-IMRT to 64.8 Gy) and optimized for 10 patients with distal esophageal cancer. All plans were constructed to deliver the target dose in 28 fractions using heterogeneity corrections. Isodose distributions were evaluated for target coverage and normal tissue exposure. Results: The 50.4 Gy IMRT plan was associated with significant reductions in mean cardiac, pulmonary, and hepatic doses relative to the 50.4 Gy 2D-CRT plan. The 64.8 Gy SIB-IMRT plan produced a 28% increase in GTV dose and comparable normal tissue doses as the 50.4 Gy IMRT plan; compared with the 50.4 Gy 2D-CRT plan, the 64.8 Gy SIB-IMRT produced significant dose reductions to all critical structures (heart, lung, liver, and spinal cord). Conclusions: The use of SIB-IMRT allowed us to selectively increase the dose to the GTV, the area at highest risk of failure, while simultaneously reducing the dose to the normal heart, lung, and liver. Clinical implications warrant systematic evaluation.

  4. Efficient identification of boosted semileptonic top quarks at the LHC

    NASA Astrophysics Data System (ADS)

    Rehermann, Keith; Tweedie, Brock

    2011-03-01

    Top quarks produced in multi-TeV processes will have large Lorentz boosts, and their decay products will be highly collimated. In semileptonic decay modes, this often leads to the merging of the b-jet and the hard lepton according to standard event reconstructions, which can complicate new physics searches. Here we explore ways of efficiently recovering this signal in the muon channel at the LHC. We perform a particle-level study of events with muons produced inside of boosted tops, as well as in generic QCD jets and from W-strahlung off of hard quarks. We characterize the discriminating power of cuts previously explored in the literature, as well as two new ones. We find a particularly powerful isolation variable which can potentially reject light QCD jets with hard embedded muons at the 10³ level while retaining 80-90% of the tops. This can also be fruitfully combined with other cuts for O(1) greater discrimination. For W-strahlung, a simple pT-scaled maximum ΔR cut performs comparably to a highly idealized top-mass reconstruction, rejecting an O(1) fraction of the background with percent-scale loss of signal. Using these results, we suggest a set of well-motivated baseline cuts for any physics analysis involving semileptonic top quarks at TeV-scale momenta, using neither b-tagging nor missing energy as discriminators. We demonstrate the utility of our cuts in searching for resonances in the tt̄ invariant mass spectrum. For example, our results suggest that 100 fb⁻¹ of data from a 14 TeV LHC could be used to discover a warped KK gluon up to 4.5 TeV or higher.
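
    As an illustration of a pT-scaled maximum ΔR cut: a boosted top's decay products fall within a cone that shrinks roughly like 1/pT, so one can demand the muon lie within ΔR_max ∝ 1/pT of the b-jet. The constant chosen below (2·m_top) and the event values are assumptions for the sketch, not the paper's tuned cut.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation in the detector (eta, phi) plane, with phi wrapping."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_pt_scaled_dr_cut(mu, jet, top_pt, k=2.0 * 173.0):
    """pT-scaled maximum DeltaR cut: require the muon within
    DeltaR_max ~ k / pT of the b-jet, as expected for a collimated top
    decay. k = 2*m_top in GeV is an illustrative choice."""
    dr_max = k / top_pt
    return delta_r(mu["eta"], mu["phi"], jet["eta"], jet["phi"]) < dr_max

# A ~1 TeV top: decay products within DeltaR ~ 0.35 of each other.
mu = {"eta": 0.10, "phi": 1.00}
jet = {"eta": 0.25, "phi": 1.20}
print(passes_pt_scaled_dr_cut(mu, jet, top_pt=1000.0))  # True
print(passes_pt_scaled_dr_cut(mu, jet, top_pt=3000.0))  # False: cone too narrow
```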

  5. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPNs) was studied. It was recognized that complex system development tools often transform system descriptions into TPNs or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPNs be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of automatically parallelizing TPNs for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold: it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool; and methods were developed for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  6. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, providing the necessary prerequisites for multithreaded processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increments in AT's beam-tracking functions. Extrapolating from these results, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well-understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
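
    The quoted scaling numbers are consistent with the standard definitions speedup = T_serial / T_parallel and efficiency = speedup / n_workers: a ~380% speed increase on four cores corresponds to a speedup of ~3.8 and hence ~95% efficiency. A quick check (with invented timings) in Python:

```python
def parallel_efficiency(serial_time, parallel_time, n_workers):
    """Return (speedup, efficiency); efficiency 1.0 is ideal linear scaling."""
    speedup = serial_time / parallel_time
    return speedup, speedup / n_workers

# The quoted quad-core numbers: ~380% speed => speedup ~3.8 on 4 cores.
speedup, eff = parallel_efficiency(serial_time=60.0,
                                   parallel_time=60.0 / 3.8,
                                   n_workers=4)
print(round(speedup, 2), round(eff, 2))  # 3.8 0.95
```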

  7. Line-on-Line Coincidence: A New Type of Epitaxy Found in Organic-Organic Heterolayers

    NASA Astrophysics Data System (ADS)

    Mannsfeld, Stefan C.; Leo, Karl; Fritz, Torsten

    2005-02-01

    We propose a new type of epitaxy, line-on-line coincidence (LOL), which explains the ordering in the organic-organic heterolayer system PTCDA on HBC on graphite. LOL epitaxy is similar to point-on-line coincidence (POL) in the sense that all overlayer molecules lie on parallel, equally spaced lines. The key difference from POL is that these lines are not restricted to primitive lattice lines of the substrate lattice. Potential energy calculations demonstrate that this new type of epitaxy is indeed characterized by a minimum in the overlayer-substrate interaction potential.

  8. A voltage regulator system with dynamic bandwidth boosting for passive UHF RFID transponders

    NASA Astrophysics Data System (ADS)

    Jinpeng, Shen; Xin'an, Wang; Shan, Liu; Shoucheng, Li; Zhengkun, Ruan

    2013-10-01

    This paper presents a voltage regulator system for passive UHF RFID transponders, which contains a rectifier, a limiter, and a regulator. The rectifier harvests power by rectifying the incoming RF energy. Because of the huge variation of the rectified voltage, a limiter at the rectifier output is used to clamp it. In this paper, the design of a limiter circuit is discussed in detail; it can provide a stable limiting voltage with low sensitivity to temperature variation and process dispersion. The key aspect of the voltage regulator system is the dynamic bandwidth boosting in the regulator. By sensing the excess current that is bypassed in the limiter during periods of excess energy, the bias current, and hence the bandwidth, of the regulator is increased, so the output supply voltage can recover quickly from line transients during the transition from periods of no RF energy to a full blast of RF energy. This voltage regulator system is implemented in a 0.18 μm CMOS process.

  9. Substrate oscillations boost recombinant protein release from Escherichia coli.

    PubMed

    Jazini, Mohammadhadi; Herwig, Christoph

    2014-05-01

Intracellular production of recombinant proteins in prokaryotes necessitates subsequent disruption of cells for protein recovery. Since the cell disruption and subsequent purification steps contribute substantially to the total production cost, scalable tools for protein release into the extracellular space are of utmost importance. Although there are several ways of enhancing protein release, changing culture conditions is a simple and scalable approach compared to, for example, molecular cell design. This contribution aimed at quantitatively studying process technological means to boost release of a periplasmic recombinant protein (alkaline phosphatase) from E. coli. Quantitative analysis of protein in independent bioreactor runs demonstrated that a defined oscillatory feeding profile improved protein release by about 60% compared to the conventional constant feeding rate. The process technology comprised an oscillatory post-induction feed profile with a 4-min cycle. The feed rate was oscillated triangularly between a maximum (1.3-fold the maximum feed rate reached at the end of the fed-batch phase) and a minimum (45% of that maximum). The significant improvement indicates the potential to maximize the production rate, and this oscillatory feed profile can easily be scaled to industrial processes. Moreover, quantitative analysis of the primary metabolism revealed that the carbon dioxide yield can be used to identify the preferred feeding profile. The approach is therefore in line with the process analytical technology initiative for science-based process understanding in process development and process control strategies. PMID:24114459
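    As a rough sketch (not the authors' code), the triangular feed profile described in this abstract — oscillating between 45% and 130% of the maximum feed rate with a 4-min cycle — could be generated as follows; the function name and argument layout are invented for illustration:

```python
def oscillating_feed(t_min, f_max, period=4.0, hi=1.3, lo=0.45):
    """Triangular feed-rate profile: rises from lo*f_max to hi*f_max and back
    once per `period` minutes (values taken from the abstract above)."""
    phase = (t_min % period) / period     # position within one cycle, 0..1
    tri = 1.0 - abs(2.0 * phase - 1.0)    # triangle wave: 0 -> 1 -> 0
    return (lo + (hi - lo) * tri) * f_max
```

    For a maximum feed rate of 10 units, the profile starts each cycle at 4.5 units, peaks at 13.0 units mid-cycle, and returns to 4.5 units.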

  10. A parallel pipelined dataflow trigger processor

    SciTech Connect

    Lee, C.; Miller, G.; Kaplan, D.M.; Sa, J. ); Hsiung, Y.B. ); Carey, T.; Jeppesen, R. )

    1991-04-01

This paper describes a parallel pipelined dataflow trigger processor used in Fermilab E789, an experiment to study low-multiplicity decays of particles containing b or c quarks. The processor consists of an upstream vertex processor and a downstream track processor. The algorithms that reconstruct the postulated particle paths and calculate particle origins are implemented via interconnected function-specific hardware modules. The algorithm is directly dependent upon the organization of the modules, the specific arrangement of the inter-module cabling, and on-board memory data. The processor indicates the presence of at least one interesting particle pair in the current event by asserting Read on its Read/Skip output. The Read assertion is then used as a trigger to capture all of the event's data for subsequent extensive off-line analysis.

  11. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides IT infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with an experimental evaluation and case studies of its applications. The evaluation results demonstrate both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in applications of the telecommunication industry.

  12. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

We present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. Several serial Voronoi tessellation codes exist, but no open source, parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
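    As a hedged illustration of one idea in this abstract — slab domain decomposition with ghost particles and periodic boundaries, so that cells near task edges can be computed consistently by every task — here is a minimal 1-D sketch in Python. PARAVT itself uses MPI and Qhull; all names and the input format below are invented:

```python
def slab_decompose(xs, n_tasks, ghost=0.05):
    """Assign particle positions in [0, 1) to n_tasks equal slabs; each task
    also receives 'ghost' copies of particles lying within `ghost` of its
    slab boundaries, with periodic wrap-around, so Voronoi cells at slab
    edges come out identical on neighboring tasks."""
    def pdist(a, b):                      # periodic point distance on [0, 1)
        d = abs(a - b)
        return min(d, 1.0 - d)

    width = 1.0 / n_tasks
    tasks = [{"own": [], "ghost": []} for _ in range(n_tasks)]
    for x in xs:
        home = int(x / width) % n_tasks
        tasks[home]["own"].append(x)
        for t in range(n_tasks):
            if t == home:
                continue
            # distance from x to the nearest boundary of slab t
            d = min(pdist(x, t * width), pdist(x, (t + 1) * width))
            if d <= ghost:
                tasks[t]["ghost"].append(x)
    return tasks
```

    With two tasks, a particle at 0.99 becomes a ghost of the slab [0, 0.5) through the periodic wrap, which is the "consistent boundary computation" the abstract refers to.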

  13. Massively parallel MRI detector arrays

    NASA Astrophysics Data System (ADS)

    Keil, Boris; Wald, Lawrence L.

    2013-04-01

Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays.

  14. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  15. Massively parallel MRI detector arrays.

    PubMed

    Keil, Boris; Wald, Lawrence L

    2013-04-01

Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  16. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted at applications which require very fast polygon rendering for extremely large sets of polygons, such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  17. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  18. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
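    The tree of sub-grid blocks described above (a quad-tree in 2-D, an oct-tree in 3-D) can be sketched in a few lines. This is a toy illustration only — PARAMESH is Fortran 90 and also handles guard cells, parallel distribution, and 1-D/3-D meshes — and all names below are invented:

```python
class Block:
    """One node of a PARAMESH-style quad-tree of logically Cartesian blocks."""
    def __init__(self, x, y, size, level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level):
        """Recursively split into 4 half-size children wherever the
        user-supplied criterion demands more resolution."""
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2.0
            self.children = [Block(self.x + i * h, self.y + j * h, h, self.level + 1)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine(needs_refinement, max_level)

    def leaves(self):
        """The leaf blocks, i.e. the grid actually used for computation."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

    Refining the corner of a unit square down two levels, for example, yields three coarse leaf blocks plus four fine ones, with resolution concentrated where the criterion demanded it.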

  19. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels; it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  20. Comparison of line-by-line and molecular band IR modeling of high altitude missile plume

    NASA Astrophysics Data System (ADS)

    Beier, Kurt; Lindermeir, Erwin

    2007-05-01

This paper deals with the quantitative comparison of modeling IR radiation emitted from the plume of a tactical ballistic missile (TBM) in the boost phase with two different model approaches. IR spectra of the missile plume are calculated both with the high spectral resolution line-by-line radiation model FASCODE 3 (fast atmospheric signature code) using the databases HITEMP or HITRAN and with the low spectral resolution IR radiation model NIRATAM (NATO InfraRed Air TArget Model) using a molecular band model technique. The influence of the atmosphere on the IR spectra as viewed from a spaceborne sensor is taken into account. The results show that using an elaborate line-by-line radiation model can improve the accuracy of the computed IR signature compared to the simpler but faster molecular band model technique used in NIRATAM.

  1. Anti-parallel and Component Reconnection at the Magnetopause

    NASA Astrophysics Data System (ADS)

    Trattner, K. J.; Mulcock, J. S.; Petrinec, S. M.; Fuselier, S. A.

    2007-05-01

Reconnection at the magnetopause is clearly the dominant mechanism by which magnetic fields in different regions change topology to create open magnetic field lines that allow energy and momentum to flow into the magnetosphere. Observations and data analysis methods have reached sufficient maturity to address one of the major outstanding questions about magnetic reconnection: the location of the reconnection site. There are two scenarios discussed in the literature, a) anti-parallel reconnection, where shear angles between the magnetospheric field and the IMF are near 180 degrees, and b) component reconnection, where shear angles are as low as 50 degrees. One popular component reconnection model is the tilted neutral line model. Both reconnection scenarios have a profound impact on the location of the X-line and plasma transfer into the magnetosphere. We have analyzed 3D plasma measurements observed by the Polar satellite in the northern hemisphere cusp region during southward IMF conditions. These 3D plasma measurements are used to estimate the distance to the reconnection line by using the low-velocity cutoff technique for precipitating and mirrored magnetosheath populations in the cusp. The calculated distances are subsequently traced back along geomagnetic field lines to the expected reconnection sites at the magnetopause. The Polar survey of northern cusp passes reveals that both reconnection scenarios occur at the magnetopause. The IMF clock angle appears to be the dominant parameter in causing either the anti-parallel or the tilted X-line reconnection scenario.

  2. Real-time photodisplacement imaging using parallel excitation and parallel heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Nakata, Toshihiko; Ninomiya, Takanori

    2005-05-01

A parallel photodisplacement technique that achieves real-time imaging of subsurface structures is presented. In this technique, a linear region of photothermal displacement is excited by a line-focused intensity-modulated laser beam and detected with a parallel heterodyne interferometer using a charge-coupled device linear image sensor as a detector. Because of integration and sampling effects of the sensor, the interference light is spatiotemporally multiplexed. To extract the spatially resolved photodisplacement component from the sensor signal, a scheme of phase-shifting light integration combined with a Fourier analysis technique is developed for parallel interferometry. The frequencies of several control signals, including the heterodyne beat signal, modulation signal, and sensor gate signal, are optimized so as to eliminate undesirable components, allowing only the displacement component to be extracted. Two-dimensional subsurface lattice defects in silicon are clearly imaged at a remarkable speed of only 0.26 s for an area of 256×256 pixels. Thus, the proposed technique allows for real-time imaging more than 10 000 times faster than conventional photoacoustic microscopy.

  3. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  4. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
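    The abstract does not spell out its algorithms, but the underlying mapping problem — assigning a chain of m modules to n processors as contiguous groups so as to minimize the bottleneck (the largest per-processor load) — can be illustrated with the standard binary-search-plus-greedy formulation. This sketch is not the paper's method; names and the integer-load assumption are ours:

```python
def min_bottleneck(loads, n_procs):
    """Partition a chain of module loads into at most n_procs contiguous
    groups, minimizing the maximum group sum (the pipeline bottleneck).
    Binary search over the answer with a greedy feasibility check;
    integer loads assumed."""
    def feasible(cap):
        groups, current = 1, 0
        for w in loads:
            if current + w > cap:       # start a new group on the next processor
                groups += 1
                current = w
            else:
                current += w
        return groups <= n_procs

    lo, hi = max(loads), sum(loads)     # bottleneck is bounded by these
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

    For example, mapping modules with loads [7, 2, 5, 10, 8] onto two processors gives the split [7, 2, 5] | [10, 8] with bottleneck 18.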

  5. CyberKnife Boost for Patients with Cervical Cancer Unable to Undergo Brachytherapy

    PubMed Central

    Haas, Jonathan Andrew; Witten, Matthew R.; Clancey, Owen; Episcopia, Karen; Accordino, Diane; Chalas, Eva

    2012-01-01

    Standard radiation therapy for patients undergoing primary chemosensitized radiation for carcinomas of the cervix usually consists of external beam radiation followed by an intracavitary brachytherapy boost. On occasion, the brachytherapy boost cannot be performed due to unfavorable anatomy or because of coexisting medical conditions. We examined the safety and efficacy of using CyberKnife stereotactic body radiotherapy (SBRT) as a boost to the cervix after external beam radiation in those patients unable to have brachytherapy to give a more effective dose to the cervix than with conventional external beam radiation alone. Six consecutive patients with anatomic or medical conditions precluding a tandem and ovoid boost were treated with combined external beam radiation and CyberKnife boost to the cervix. Five patients received 45 Gy to the pelvis with serial intensity-modulated radiation therapy boost to the uterus and cervix to a dose of 61.2 Gy. These five patients received an SBRT boost to the cervix to a dose of 20 Gy in five fractions of 4 Gy each. One patient was treated to the pelvis to a dose of 45 Gy with an external beam boost to the uterus and cervix to a dose of 50.4 Gy. This patient received an SBRT boost to the cervix to a dose of 19.5 Gy in three fractions of 6.5 Gy. Five percent volumes of the bladder and rectum were kept to ≤75 Gy in all patients (i.e., V75 Gy ≤ 5%). All of the patients remain locally controlled with no evidence of disease following treatment. Grade 1 diarrhea occurred in 4/6 patients during the conventional external beam radiation. There has been no grade 3 or 4 rectal or bladder toxicity. There were no toxicities observed following SBRT boost. At a median follow-up of 14 months, CyberKnife radiosurgical boost is well tolerated and efficacious in providing a boost to patients with cervix cancer who are unable to undergo brachytherapy boost. Further follow-up is required to see if these results remain

  6. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-12-01

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.
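    The scheme described above — whole gangs put to sleep and awakened together, with quanta scaled by priority — can be simulated in a few lines. This is a toy sketch, not the TC2000 implementation; the data format and names are invented:

```python
from collections import deque

def gang_schedule(gangs, total_time, base_quantum=10):
    """Toy round-robin gang scheduler: all processes of a gang are woken
    together for one quantum (scaled by the gang's priority share), then
    the whole gang sleeps again. Returns (gang, start, length) slices."""
    queue = deque(dict(g) for g in gangs)   # copy so callers keep their input
    timeline, t = [], 0
    while t < total_time and queue:
        gang = queue.popleft()
        quantum = base_quantum * gang["priority"]
        run = min(quantum, gang["remaining"], total_time - t)
        timeline.append((gang["name"], t, run))
        gang["remaining"] -= run
        t += run
        if gang["remaining"] > 0:
            queue.append(gang)              # back of the queue for another turn
    return timeline
```

    A fair-share accountant, as mentioned in the abstract, would additionally adjust each gang's priority based on CPU time already consumed.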

  7. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  8. ITER LHe Plants Parallel Operation

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we will describe the present status of the ITER LHe plants with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  9. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  10. Parallelization of the SIR code

    NASA Astrophysics Data System (ADS)

    Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčak, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. the magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code to operate directly on large data sets and to invert the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly to extensive data sets. In addition, we included the possibility to use different initial model atmospheres for the inversion, which enhances the quality of the results.

  11. Dynamically reconfigurable optical interconnect architecture for parallel multiprocessor systems

    NASA Astrophysics Data System (ADS)

    Girard, Mary M.; Husbands, Charles R.; Antoszewska, Reza

    1991-12-01

The progress in parallel processing technology in recent years has resulted in increased requirements to process large amounts of data in real time. The massively parallel architectures proposed for these applications require the use of a high speed interconnect system to achieve processor-to-processor connectivity without incurring excessive delays. The characteristics of optical components permit high speed operation, while the nonconductive nature of the optical medium eliminates ground loop and transmission line problems normally associated with a conductive medium. The MITRE Corp. is evaluating an optical wavelength division multiple access interconnect network design to improve interconnectivity within parallel processor systems and to allow reconfigurability of processor communication paths. This paper describes the architecture and control of the network and highlights results from an 8-channel multiprocessor prototype with an effective throughput of 3.2 Gigabits per second (Gbps).

  12. Measures of effectiveness for BMD mid-course tracking on MIMD massively parallel computers

    SciTech Connect

    VanDyke, J.P.; Tomkins, J.L.; Furnish, M.D.

    1995-05-01

    The TRC code, a mid-course tracking code for ballistic missiles, has previously been implemented on a 1024-processor MIMD (Multiple Instruction -- Multiple Data) massively parallel computer. Measures of Effectiveness (MOE) for this algorithm have been developed for this computing environment. The MOE code is run in parallel with the TRC code. Particularly useful MOEs include the number of missed objects (real objects for which the TRC algorithm did not construct a track); of ghost tracks (tracks not corresponding to a real object); of redundant tracks (multiple tracks corresponding to a single real object); and of unresolved objects (multiple objects corresponding to a single track). All of these are expressed as a function of time, and tend to maximize during the time in which real objects are spawned (multiple reentry vehicles per post-boost vehicle). As well, it is possible to measure the track-truth separation as a function of time. A set of calculations is presented illustrating these MOEs as a function of time for a case with 99 post-boost vehicles, each of which spawns 9 reentry vehicles.
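    The four MOEs defined above reduce to simple counting once each track's matched objects are known. As a hedged illustration (the input format and names below are invented, not the MOE code's interface):

```python
def tracking_moes(objects, track_matches):
    """Measures of effectiveness for a tracker, in the spirit of the
    abstract above.
    objects: set of true object ids.
    track_matches: one set of matched object ids per track
    (empty set = ghost track; more than one id = unresolved track)."""
    times_tracked = {}
    for matches in track_matches:
        for obj in matches:
            times_tracked[obj] = times_tracked.get(obj, 0) + 1
    return {
        "missed":     sum(1 for o in objects if o not in times_tracked),
        "ghost":      sum(1 for m in track_matches if not m),
        "redundant":  sum(n - 1 for n in times_tracked.values() if n > 1),
        "unresolved": sum(1 for m in track_matches if len(m) > 1),
    }
```

    Evaluating these counts at each time step, as the abstract describes, shows them peaking while reentry vehicles are being spawned.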

  13. Flux-line-lattice stability and dynamics

    NASA Astrophysics Data System (ADS)

    Glyde, H. R.; Moleko, L. K.; Findeisen, P.

    1992-02-01

The mechanical stability of a flux-line lattice (FLL) having parameters appropriate for the high-Tc superconductors is determined using the self-consistent phonon theory of lattice dynamics. Nearly parallel flux lines (FL's) are assumed and FL pinning is neglected. The FLL becomes unstable when a phonon frequency goes to zero. At instability the rms vibrational amplitude diverges and the FL's can no longer be localized. In Bi2Sr2CaCu2O8, the instability line as a function of temperature and magnetic field lies below but in reasonable agreement with the observed irreversibility line. In YBa2Cu3O7, it lies significantly below. The present instability line is a reliable upper bound to the FLL melting line. Identifying instability with melting, we find the Lindemann criterion of melting does not hold. However, the present instability lines and the melting lines obtained by Houghton et al. are found to have similar shape.

  14. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
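    The password-checker example above parallelizes naturally by splitting the dictionary across workers. A minimal modern sketch (not the paper's code, which predates `concurrent.futures`; SHA-256 stands in here for the crypt-style hash of the original example):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def crack(target_hash, dictionary, workers=4):
    """Check a hashed password against a word list, splitting the dictionary
    across worker threads. Returns the matching word, or None."""
    def scan(words):
        for w in words:
            if hashlib.sha256(w.encode()).hexdigest() == target_hash:
                return w
        return None

    chunks = [dictionary[i::workers] for i in range(workers)]   # round-robin split
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(scan, chunks):
            if hit is not None:
                return hit
    return None
```

    On a cluster, the same split-scan-gather shape would be expressed with MPI-style message passing, which is exactly the coordination problem the paper's scripting extensions address.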

  15. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  16. Coupled parallel waveguide semiconductor laser

    NASA Technical Reports Server (NTRS)

    Katz, J.; Kapon, E.; Lindsey, C.; Rav-Noy, Z.; Margalit, S.; Yariv, A.; Mukai, S.

    1984-01-01

The operation of a new type of tunable laser, in which two separately controlled individual lasers are placed vertically in parallel, has been demonstrated. One of the cavities (the 'control' cavity) is operated below threshold and assists the longitudinal mode selection and tuning of the other laser. With a minor modification, the same device can operate as an independent two-wavelength laser source.

  17. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the attainable speedup. The I/O problem is addressed by considering the role of files in parallel systems, and the notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes and exploit parallelism in the I/O system to improve performance; they can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementation strategies using multiple storage devices are suggested. Problem areas are also identified and discussed.
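The core idea, a logical file whose blocks live on multiple storage devices so that processes can access them concurrently, can be illustrated with a toy striping scheme. Everything here (block size, directory-per-device layout, file names) is invented for the sketch and is not the paper's actual design.

```python
# Toy illustration of file striping: blocks of a logical file are
# stored round-robin across several "devices" (plain directories),
# and stripes on different devices can be fetched concurrently.
import os
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4  # bytes per stripe block (tiny, for illustration only)

def write_striped(data, devices):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    for n, block in enumerate(blocks):
        dev = devices[n % len(devices)]  # round-robin placement
        os.makedirs(dev, exist_ok=True)
        with open(os.path.join(dev, f"block{n}"), "wb") as f:
            f.write(block)
    return len(blocks)

def read_striped(nblocks, devices):
    def fetch(n):
        with open(os.path.join(devices[n % len(devices)], f"block{n}"), "rb") as f:
            return f.read()
    # map() preserves block order even though reads overlap in time.
    with ThreadPoolExecutor(max_workers=len(devices)) as ex:
        return b"".join(ex.map(fetch, range(nblocks)))

if __name__ == "__main__":
    devs = ["/tmp/stripe_dev0", "/tmp/stripe_dev1"]
    n = write_striped(b"parallel file striping demo", devs)
    print(read_striped(n, devs))
```

A sequential program can still read such a file block by block, which mirrors the paper's point that parallel files remain usable conventionally.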

  18. Impact of the Radiation Boost on Outcomes After Breast-Conserving Surgery and Radiation

    SciTech Connect

    Murphy, Colin; Anderson, Penny R.; Li Tianyu; Bleicher, Richard J.; Sigurdson, Elin R.; Goldstein, Lori J.; Swaby, Ramona; Denlinger, Crystal; Dushkin, Holly; Nicolaou, Nicos; Freedman, Gary M.

    2011-09-01

    Purpose: We examined the impact of radiation tumor bed boost parameters in early-stage breast cancer on local control and cosmetic outcomes. Methods and Materials: A total of 3,186 women underwent postlumpectomy whole-breast radiation with a tumor bed boost for Tis to T2 breast cancer from 1970 to 2008. Boost parameters analyzed included size, energy, dose, and technique. Endpoints were local control, cosmesis, and fibrosis. The Kaplan-Meier method was used to estimate actuarial incidence, and a Cox proportional hazard model was used to determine independent predictors of outcomes on multivariate analysis (MVA). The median follow-up was 78 months (range, 1-305 months). Results: The crude cosmetic results were excellent in 54%, good in 41%, and fair/poor in 5% of patients. The 10-year estimate of an excellent cosmesis was 66%. On MVA, independent predictors for excellent cosmesis were use of electron boost, lower electron energy, adjuvant systemic therapy, and whole-breast IMRT. Fibrosis was reported in 8.4% of patients. The actuarial incidence of fibrosis was 11% at 5 years and 17% at 10 years. On MVA, independent predictors of fibrosis were larger cup size and higher boost energy. The 10-year actuarial local failure was 6.3%. There was no significant difference in local control by boost method, cut-out size, dose, or energy. Conclusions: The likelihood of excellent cosmesis and the risk of fibrosis are associated with boost technique, electron energy, and cup size. However, because of the high local control and rare incidence of fair/poor cosmesis with a boost, the anatomy of the patient and tumor cavity should ultimately determine the necessary boost parameters.

  19. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

    Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
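The data-decomposition pattern the abstract describes, splitting an image into strips, processing each strip on a different worker, and reassembling the result, can be sketched as follows. This is not the Toolkit's code: Python's `multiprocessing` stands in for MPI message passing, the image is a plain list of rows, and the per-pixel operation is an arbitrary 8-bit inversion chosen for illustration.

```python
# Hedged sketch of strip-based data decomposition for an image filter.
from multiprocessing import Pool

def invert_strip(strip):
    # A simple embarrassingly parallel per-pixel operation.
    return [[255 - px for px in row] for row in strip]

def parallel_filter(image, nworkers=4):
    rows = len(image)
    strip_h = max(1, rows // nworkers)
    # Decompose the image into horizontal strips, one unit of work each.
    strips = [image[i:i + strip_h] for i in range(0, rows, strip_h)]
    with Pool(nworkers) as pool:
        processed = pool.map(invert_strip, strips)
    # Reassemble the processed strips in their original order.
    return [row for strip in processed for row in strip]

if __name__ == "__main__":
    img = [[0, 128, 255], [10, 20, 30], [40, 50, 60], [70, 80, 90]]
    print(parallel_filter(img, 2))
```

Filters with spatial support (e.g. convolutions) would need each strip padded with ghost rows from its neighbors, which is where the load balancing and communication issues the paper studies come in.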

  20. Retroperitoneal Sarcoma (RPS) High Risk Gross Tumor Volume Boost (HR GTV Boost) Contour Delineation Agreement Among NRG Sarcoma Radiation and Surgical Oncologists

    PubMed Central

    Baldini, Elizabeth H.; Bosch, Walter; Kane, John M.; Abrams, Ross A.; Salerno, Kilian E.; Deville, Curtiland; Raut, Chandrajit P.; Petersen, Ivy A.; Chen, Yen-Lin; Mullen, John T.; Millikan, Keith W.; Karakousis, Giorgos; Kendrick, Michael L.; DeLaney, Thomas F.; Wang, Dian

    2015-01-01

    Purpose Curative intent management of retroperitoneal sarcoma (RPS) requires gross total resection. Preoperative radiotherapy (RT) often is used as an adjuvant to surgery, but recurrence rates remain high. To enhance RT efficacy with acceptable tolerance, there is interest in delivering “boost doses” of RT to high-risk areas of gross tumor volume (HR GTV) judged to be at risk for positive resection margins. We sought to evaluate variability in HR GTV boost target volume delineation among collaborating sarcoma radiation and surgical oncologist teams. Methods Radiation planning CT scans for three cases of RPS were distributed to seven paired radiation and surgical oncologist teams at six institutions. Teams contoured HR GTV boost volumes for each case. Analysis of contour agreement was performed using the simultaneous truth and performance level estimation (STAPLE) algorithm and kappa statistics. Results HR GTV boost volume contour agreement between the seven teams was “substantial” or “moderate” for all cases. Agreement was best on the torso wall posteriorly (abutting the posterior chest and abdominal wall) and medially (abutting the ipsilateral para-vertebral space and great vessels). Contours varied more significantly abutting visceral organs due to differing surgical opinions regarding planned partial organ resection. Conclusions Agreement of RPS HR GTV boost volumes between sarcoma radiation and surgical oncologist teams was substantial to moderate. Differences were most striking in regions abutting visceral organs, highlighting the importance of collaboration between the radiation and surgical oncologist for “individualized” target delineation on the basis of areas deemed at risk and planned resection. PMID:26018727