Science.gov

Sample records for lines boost parallel

  1. Learning and Parallelization Boost Constraint Search

    ERIC Educational Resources Information Center

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  2. Development of a high speed parallel hybrid boost bearing

    NASA Technical Reports Server (NTRS)

    Winn, L. W.; Eusepi, M. W.

    1973-01-01

    The analysis, design, and testing of the hybrid boost bearing are discussed. The hybrid boost bearing consists of a fluid film bearing coupled in parallel with a rolling element bearing. This coupling arrangement makes use of the inherent advantages of both the fluid film and rolling element bearing and at the same time minimizes their disadvantages and limitations. The analytical optimization studies that lead to the final fluid film bearing design are reported. The bearing consisted of a centrifugally-pressurized planar fluid film thrust bearing with oil feed through the shaft center. An analysis of the test ball bearing is also presented. The experimental determination of the hybrid bearing characteristics obtained on the basis of individual bearing component tests and a combined hybrid bearing assembly is discussed and compared to the analytically determined performance characteristics.

  3. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

Computer vision is now widely used in daily life, and reliable information extraction depends on camera calibration. Traditional calibration methods are often impractical because accurate coordinates of reference control points are unavailable. In this article, we present a camera calibration algorithm that determines the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are common in ordinary photographs, so both parameter sets can be recovered directly from everyday images. Specifically, we use two pairs of parallel lines to compute the vanishing points; if the two pairs are mutually perpendicular, the resulting vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters then follow from a Cholesky factorization of the IAC matrix. Furthermore, the line joining a vanishing point to the camera's optical center is parallel to the corresponding lines in the scene plane; from this property we recover the extrinsic parameters R and T. Both the simulation and the experimental results meet our expectations.
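The Cholesky step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the intrinsic matrix K is hypothetical, and the IAC is fabricated from it (rather than estimated from five views) under the standard convention ω = K⁻ᵀK⁻¹.

```python
import numpy as np

# Hypothetical intrinsics (focal lengths, skew, principal point);
# in the paper the IAC would come from >= 5 views of perpendicular
# parallel-line pairs, here we fabricate it for illustration.
K = np.array([[800.0, 0.5, 320.0],
              [0.0, 780.0, 240.0],
              [0.0,   0.0,   1.0]])
omega = np.linalg.inv(K @ K.T)     # image of the absolute conic, w = K^-T K^-1

# Cholesky factorization of the IAC: omega = L L^T with L = K^-T,
# so the intrinsics are recovered as inv(L)^T, normalized to K[2,2] = 1.
L = np.linalg.cholesky(omega)
K_rec = np.linalg.inv(L).T
K_rec /= K_rec[2, 2]
```

Because K is upper triangular with a positive diagonal, K⁻ᵀ is exactly the lower-triangular Cholesky factor of ω, which is why the recovery is unique.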

  4. On-line inverse multiple instance boosting for classifier grids

    PubMed Central

    Sternig, Sabine; Roth, Peter M.; Bischof, Horst

    2012-01-01

Classifier grids have been shown to be a suitable choice for object detection from static cameras. By applying a single classifier per image location, the classifier's complexity can be reduced and more specific, and thus more accurate, classifiers can be estimated. In addition, by using an on-line learner, a highly adaptive but stable detection system can be obtained. Even though long-term stability has been demonstrated, such systems still suffer from short-term drifting if an object does not move for a long period of time. The goal of this work is to overcome this problem and thus to increase recall while preserving accuracy. In particular, we adapt ideas from multiple instance learning (MIL) for on-line boosting. In contrast to standard MIL approaches, which assume an ambiguity on the positive samples, we apply this concept to the negative samples: inverse multiple instance learning. By introducing temporal bags consisting of background images operating on different time scales, we can ensure that each bag contains at least one sample with a negative label, satisfying the theoretical requirements. The experimental results demonstrate superior classification results in the presence of non-moving objects. PMID:22556453
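The temporal-bag idea can be illustrated with a minimal sketch. The function name and bagging policy below (one bag per time scale, each holding the most recent background samples) are assumptions for illustration, not the authors' implementation.

```python
def make_temporal_bags(samples, scales):
    """Group per-frame background samples into bags, one bag per time
    scale; a bag for scale s holds the last s samples, so a longer
    scale is more likely to contain a truly negative background."""
    bags = []
    for s in scales:
        if len(samples) >= s:
            bags.append(samples[-s:])   # most recent s background samples
    return bags

# toy usage: background patches indexed by frame number 0..9
samples = list(range(10))
bags = make_temporal_bags(samples, scales=[1, 3, 8])
```

A short scale reacts quickly to scene changes, while a long scale guards against a temporarily stationary object contaminating every bag.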

  5. Parallel acoustic delay lines for photoacoustic tomography

    PubMed Central

    Yapici, Murat Kaya; Kim, Chulhong; Chang, Cheng-Chung; Jeon, Mansik; Guo, Zijian; Cai, Xin

    2012-01-01

Achieving real-time photoacoustic (PA) tomography typically requires multi-element ultrasound transducer arrays and their associated multiple data acquisition (DAQ) electronics to receive PA waves simultaneously. We report the first demonstration of a photoacoustic tomography (PAT) system using optical fiber-based parallel acoustic delay lines (PADLs). By employing PADLs to introduce specific time delays, the PA signals (on the order of a few microseconds) can be forced to arrive at the ultrasonic transducers at different times. As a result, time-delayed PA signals in multiple channels can ultimately be received and processed in a serial manner with a single-element transducer, followed by single-channel DAQ electronics. Our results show that an optically absorbing target in an optically scattering medium can be photoacoustically imaged using the newly developed PADL-based PAT system. Potentially, this approach could be adopted to significantly reduce the complexity and cost of ultrasonic array receiver systems. PMID:23139043

  6. Experimental verification of internal parameter in magnetically coupled boost used as PV optimizer in parallel association

    NASA Astrophysics Data System (ADS)

    Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel

    2017-02-01

This paper presents experimental results aimed at verifying a formula for computing the duty cycle under pulse-width-modulation control of a DC-DC converter designed and built in the laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module to supply grid inverters directly. The duty-cycle formula is checked first by identifying an internal parameter, the auto-transformer ratio, and then by verifying the stability of the operating point on the photovoltaic-module side. Consideration of the nature of the generator source and of the load connected to the converter suggests additional experiments to decide whether the auto-transformer ratio can be used with a fixed value or must instead be adapted. The effects of load variations on converter behavior and the impact of possible shading of the photovoltaic module are also discussed, with the aim of designing robust control laws that, in the case of parallel association, compensate for unwanted effects due to output-voltage coupling.

  7. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.

  8. VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN QUARRY AREA, FACING NORTHEAST - Granite Hill Plantation, Quarry No. 2, South side of State Route 16, 1.3 miles northeast east of Sparta, Sparta, Hancock County, GA

  9. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  10. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  11. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  12. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  13. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  14. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

Versatile, standardized, pulse-modulated, nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series switching buck regulator, (2) the pulse-modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and in control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  15. Self-adaptive asymmetric on-line boosting for detecting anatomical structures

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Tajbakhsh, Nima; Xue, Wenzhe; Liang, Jianming

    2012-03-01

In this paper, we propose a self-adaptive, asymmetric on-line boosting (SAAOB) method for detecting anatomical structures in CT pulmonary angiography (CTPA). SAAOB is novel in that it exploits a new asymmetric loss criterion with self-adaptability according to the ratio of exposed positive and negative samples, and in that it has an advanced rule for updating a sample's importance weight that takes into account both the classification result and the sample's label. Our method is evaluated by detecting three distinct thoracic structures, the carina, the pulmonary trunk and the aortic arch, in both balanced and imbalanced conditions.

  16. Parallel line analysis: multifunctional software for the biomedical sciences

    NASA Technical Reports Server (NTRS)

    Swank, P. R.; Lewis, M. L.; Damron, K. L.; Morrison, D. R.

    1990-01-01

    An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time.

  17. Sequential organogenesis sets two parallel sensory lines in medaka

    PubMed Central

    Seleit, Ali; Krämer, Isabel; Ambrosio, Elizabeth; Dross, Nicolas; Engel, Ulrike

    2017-01-01

    Animal organs are typically formed during embryogenesis by following one specific developmental programme. Here, we report that neuromast organs are generated by two distinct and sequential programmes that result in parallel sensory lines in medaka embryos. A ventral posterior lateral line (pLL) is composed of neuromasts deposited by collectively migrating cells whereas a midline pLL is formed by individually migrating cells. Despite the variable number of neuromasts among embryos, the sequential programmes that we describe here fix an invariable ratio between ventral and midline neuromasts. Mechanistically, we show that the formation of both types of neuromasts depends on the chemokine receptor genes cxcr4b and cxcr7b, illustrating how common molecules can mediate different morphogenetic processes. Altogether, we reveal a self-organising feature of the lateral line system that ensures a proper distribution of sensory organs along the body axis. PMID:28087632

  18. Harmonic resonance on parallel high voltage transmission lines

    SciTech Connect

    Harries, J.R.; Randall, J.L.

    1997-01-01

    The Bonneville Power Administration (BPA) has received complaints of telephone interference over a wide area of northwestern Washington State for several years. However, until 1995 investigations had proved inconclusive as either the source of the harmonics or the operating conditions changed whenever investigators arrived. The 2,100 Hz interference had been noticed at several optically isolated telephone exchanges. The area of complaint corresponded to electric service areas near the transmission line corridors of the BPA Custer-Monroe 500-kV lines. High 2,100 Hz field strength was measured near the 500-kV lines and also under lower voltage lines served from stations along the transmission line corridor. Tests and studies made with the Alternative Transients Program version of the Electromagnetic Transients Program (EMTP) were able to define the phenomena and isolate the source. Harmonic resonance has been observed, measured and modeled on parallel 500-kV lines that are about one wavelength at 2,100 Hz, the 35th harmonic. A seemingly small harmonic injection at one location on the system causes significant problems some distance away such as telephone interference.

  19. Parallel field line and stream line tracing algorithms for space physics applications

    NASA Astrophysics Data System (ADS)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

Field line and stream line tracing is required in various space physics applications, such as the coupling of global magnetosphere and inner magnetosphere models, the coupling of solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady-state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process that is difficult to parallelize, especially when the data describing the vector field are distributed over a large number of processors. We designed algorithms for these applications that scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, its position and other information are stored in a buffer. Periodically the processors exchange buffers and continue integrating field lines until they reach a boundary, at which point the results are sent back to the originating processor. Efficiency is achieved by careful phasing of computation and communication. In the third algorithm the results of a steady-state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all of the grid and vector field data, and the stream lines are integrated in parallel. If a stream line enters a block that has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be increased.
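The core step of the second algorithm, integrating a field line until it leaves the local block, can be sketched as below. This is a minimal serial midpoint-rule tracer on a toy uniform field; in the real parallel code the exit position would be placed in the exchange buffer for the neighboring processor.

```python
import numpy as np

def trace_field_line(field, x0, bounds, step=0.01, max_steps=10000):
    """Midpoint (RK2) field-line integration inside one block; stops
    when the line leaves `bounds`, which is where a parallel tracer
    would hand the position to a neighbor via the exchange buffer."""
    lo, hi = bounds
    x = np.asarray(x0, float)
    path = [x.copy()]
    for _ in range(max_steps):
        v = field(x)
        n = np.linalg.norm(v)
        if n == 0:                         # null point: stop tracing
            break
        xm = x + 0.5 * step * v / n        # midpoint predictor
        vm = field(xm)
        x = x + step * vm / np.linalg.norm(vm)
        if np.any(x < lo) or np.any(x > hi):   # left the local block
            break
        path.append(x.copy())
    return np.array(path)

# toy example: uniform field along x inside the unit square
line = trace_field_line(lambda x: np.array([1.0, 0.0]),
                        x0=[0.0, 0.5],
                        bounds=(np.array([0.0, 0.0]),
                                np.array([1.0, 1.0])))
```

The unit-speed normalization makes `step` an arc-length increment, so the line advances uniformly regardless of field magnitude.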

  20. On-line near infrared monitoring of glycerol-boosted anaerobic digestion processes: evaluation of process analytical technologies.

    PubMed

    Holm-Nielsen, Jens Bo; Lomborg, Carina Juel; Oleskowicz-Popiel, Piotr; Esbensen, Kim H

    2008-02-01

A study of NIR as a tool for process monitoring of thermophilic anaerobic digestion boosted by glycerol has been carried out, aiming at developing simple and robust Process Analytical Technology modalities for on-line surveillance in full-scale biogas plants. Three 5 L laboratory fermenters equipped with an on-line NIR sensor and special sampling stations were used as the basis for chemometric multivariate calibration. NIR characterisation using Transflexive Embedded Near Infra-Red Sensor (TENIRS) equipment, integrated into an external recirculation loop on the fermentation reactors, allows representative sampling of the highly heterogeneous fermentation bio-slurries. Glycerol is an important by-product of the growing European bio-diesel production. Glycerol addition can boost biogas yields if it does not exceed a limiting concentration of 5-7 g L(-1) inside the fermenter; a further increase can cause strong imbalance in the anaerobic digestion process. A secondary objective was to evaluate the effect of glycerol addition in a spiking experiment that introduced increasing organic overloading, monitored via volatile fatty acid (VFA) levels. High correlation between on-line NIR determinations of glycerol and VFA contents has been documented. Chemometric regression models (PLS) between glycerol and NIR spectra needed no outlier removal, and only one PLS component was required. Test-set validation resulted in excellent measures of prediction performance: precision r(2) = 0.96 and accuracy 1.04 (slope of predicted versus reference). Similar prediction statistics for acetic acid, iso-butanoic acid and total VFA prove that process NIR spectroscopy is able to quantify all pertinent levels of both volatile fatty acids and glycerol.
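The chemometric step, a one-component PLS calibration between spectra and analyte concentration, can be sketched on synthetic data. The "spectra" and band shape below are fabricated stand-ins for TENIRS measurements, and the single-component NIPALS-style fit is a textbook simplification, not the authors' model.

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1 calibration (NIPALS-style sketch)."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    w = Xc.T @ yc
    w /= np.linalg.norm(w)            # spectral weight vector
    t = Xc @ w                        # latent scores
    q = (t @ yc) / (t @ t)            # y-loading
    return xm, ym, w * q              # means + regression vector

def pls1_predict(model, X):
    xm, ym, b = model
    return (X - xm) @ b + ym

rng = np.random.default_rng(0)
conc = rng.uniform(0, 7, 60)          # stand-in "glycerol", g/L
band = np.exp(-0.5 * ((np.arange(100) - 40) / 6.0) ** 2)  # one NIR-like band
X = np.outer(conc, band) + 0.01 * rng.standard_normal((60, 100))

model = pls1_fit(X[:40], conc[:40])   # calibration set
pred = pls1_predict(model, X[40:])    # independent test set
r2 = 1 - np.sum((pred - conc[40:]) ** 2) / np.sum(
    (conc[40:] - conc[40:].mean()) ** 2)
```

On this rank-one synthetic problem one latent variable captures essentially all the variance, mirroring the abstract's observation that a single PLS component sufficed for glycerol.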

  1. Parallel line raster eliminates ambiguities in reading timing of pulses less than 500 microseconds apart

    NASA Technical Reports Server (NTRS)

    Horne, A. P.

    1966-01-01

    Parallel horizontal line raster is used for precision timing of events occurring less than 500 microseconds apart for observation of hypervelocity phenomena. The raster uses a staircase vertical deflection and eliminates ambiguities in reading timing of pulses close to the end of each line.

  2. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids local optimal solutions. This article considers jointly these two decision problems for mass production lines with serial-parallel workplaces. This study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  3. Study of electric fields parallel to the magnetic lines of force using artificially injected energetic electrons

    NASA Technical Reports Server (NTRS)

    Wilhelm, K.; Bernstein, W.; Whalen, B. A.

    1980-01-01

    Electron beam experiments using rocket-borne instrumentation will be discussed. The observations indicate that reflections of energetic electrons may occur at possible electric field configurations parallel to the direction of the magnetic lines of force in an altitude range of several thousand kilometers above the ionosphere.

  4. Designing linings of mutually influencing parallel shallow circular tunnels under seismic effects of earthquake

    NASA Astrophysics Data System (ADS)

    Sammal, A. S.; Antsiferov, S. V.; Deev, P. V.

    2016-09-01

The paper deals with the seismic design of parallel shallow tunnel linings, based on identifying the most unfavorable lining stress states under long longitudinal and shear seismic waves propagating through the tunnel cross section in different directions and combinations. For this purpose, the sums and differences of the normal tangential stresses on the lining's internal outline caused by waves of different types are examined for extrema with respect to the angle of incidence. The method allows analytic plotting of a curve illustrating the structure's stresses. The paper gives an example of a design calculation.

  5. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and heavy resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resources, speed and robustness.
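The voting scheme with angle-level parallelism can be sketched in software. The chunked theta loop below stands in for the FPGA's parallel angle units (here processed sequentially), and all sizes and the toy edge map are illustrative assumptions.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Straight-line Hough voting in (theta, rho) space; the theta
    axis is split into independent chunks, mirroring angle-level
    parallelism: each chunk's votes touch disjoint accumulator rows."""
    pts = np.asarray(points, float)
    if rho_max is None:
        rho_max = np.hypot(*pts.max(0)) + 1
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), int)
    for chunk in np.array_split(np.arange(n_theta), 4):  # 4 "workers"
        th = thetas[chunk]
        # rho = x cos(theta) + y sin(theta) for every point/angle pair
        rho = pts[:, 0, None] * np.cos(th) + pts[:, 1, None] * np.sin(th)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        for j, c in enumerate(chunk):
            acc[c] += np.bincount(idx[:, j], minlength=n_rho)
    return acc, thetas, rho_max

# toy edge map: 50 points on the vertical line x = 10
pts = [(10, y) for y in range(50)]
acc, thetas, rho_max = hough_lines(pts)
peak = np.unravel_index(acc.argmax(), acc.shape)   # strongest (theta, rho) cell
```

Because each angle owns its own accumulator row, the per-chunk votes never collide, which is exactly what makes the angle dimension safe to parallelize in hardware.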

  6. ACS Grism Parallel Survey of Emission-Line Galaxies at Redshift z ≲ 7

    NASA Astrophysics Data System (ADS)

    Yan, Lin

    2002-07-01

We propose an ACS grism parallel survey to search for emission-line galaxies toward 50 random lines of sight over the redshift interval 0 < z ≲ 7. We request ACS parallel observations of duration more than one orbit at high galactic latitude to identify 300 Hα emission-line galaxies at 0.2 ≲ z ≲ 0.5, 720 [O II] λ3727 emission-line galaxies at 0.3 ≲ z ≲ 1.68, and ≳ 1000 Lyα emission-line galaxies at 3 ≲ z ≲ 7 with total emission-line flux f ≳ 2 × 10^-17 ergs s^-1 cm^-2 over 578 arcmin^2. We will obtain direct images with the F814W and F606W filters and dispersed images with the WFC/G800L grism at each position. The direct images will serve to provide a zeroth-order model both for wavelength calibration of the extracted 1D spectra and for determining extraction apertures of the corresponding dispersed images. The primary scientific objectives are as follows: (1) We will establish a uniform sample of Hα and [O II] emission-line galaxies at z < 1.7 in order to obtain accurate measurements of co-moving star formation rate density versus redshift over this redshift range. (2) We will study the spatial and statistical distribution of star formation rate intensity in individual galaxies using the spatially resolved emission-line morphology in the grism images. And (3) we will study the high-redshift universe using Lyα-emitting galaxies identified at z ≲ 7 in the survey. The data will be available to the community immediately as they are obtained.

  7. High-speed, digitally refocused retinal imaging with line-field parallel swept source OCT

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-03-01

MHz OCT mitigates the undesired influence of motion artifacts during retinal assessment, but in state-of-the-art point-scanning OCT this comes at the price of increased system complexity. By changing the paradigm from scanning to parallel OCT for in vivo retinal imaging, the three-dimensional (3D) acquisition time is reduced without a trade-off between speed, sensitivity and technological requirements. Furthermore, the intrinsic phase stability allows digital refocusing methods to be applied, increasing the in-focus imaging depth range. Line-field parallel interferometric imaging (LPSI) utilizes a commercially available swept source, a single-axis galvo scanner and a line-scan camera to record 3D data at up to a 1 MHz A-scan rate. Besides line-focus illumination and parallel detection, we mitigate the need for high-speed sensor and laser technology by holographic full-range imaging, which allows the imaging speed to be increased by low sampling of the optical spectrum. High B-scan rates of up to 1 kHz further allow the implementation of label-free optical angiography in 3D by calculating the inter-B-scan speckle variance. We achieve a detection sensitivity of 93.5 (96.5) dB at an equivalent A-scan rate of 1 (0.6) MHz and present 3D in vivo retinal structural and functional imaging utilizing digital refocusing. Our results demonstrate for the first time competitive imaging sensitivity, resolution and speed with a parallel OCT modality. LPSI is in fact currently the fastest OCT device applied to retinal imaging operating in a central wavelength window around 800 nm with a detection sensitivity higher than 93.5 dB.

  8. A robust real-time laser measurement method based on noncoding parallel multi-line

    NASA Astrophysics Data System (ADS)

    Zhang, Chenbo; Cui, Haihua; Yin, Wei; Yang, Liu

    2016-11-01

Single-line scanning is the main method in traditional 3D hand-held laser scanning; however, its reconstruction speed is slow and its cumulative error is large. We therefore propose a method to reconstruct the 3D profile by parallel multi-line 3D hand-held laser scanning. First, we process the two images containing the multi-line laser stripes captured by the binocular cameras so that the laser stripe centers can be extracted accurately. Then we use stereo vision principles, the epipolar constraint and the laser-plane constraint to match the laser stripes of the left and right images correctly and reconstruct them quickly. Our experimental results prove the feasibility of this method, which improves the scanning speed and greatly increases the scanned area.

  9. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  10. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
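The three-phase scheme described in the claims can be sketched as a hop-count simulation on a small 3-D mesh. The function, mesh dimensions and root coordinate below are arbitrary choices for illustration, not the patented implementation.

```python
def line_plane_broadcast(dims, root):
    """Three-phase broadcast on a 3-D mesh (line-plane sketch):
    the root sends along dimension 0, every node on that line relays
    along dimension 1, then every node in that plane relays along
    dimension 2. Returns the phase at which each node gets the message."""
    X, Y, Z = dims
    recv = {}
    # phase 1: along the x-axis line through the root
    for x in range(X):
        recv[(x, root[1], root[2])] = 1
    # phase 2: each line node relays along its own y-axis
    for x in range(X):
        for y in range(Y):
            recv.setdefault((x, y, root[2]), 2)
    # phase 3: each plane node relays along its own z-axis
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                recv.setdefault((x, y, z), 3)
    return recv

recv = line_plane_broadcast((4, 3, 2), root=(0, 0, 0))
```

The appeal of the scheme is that every node is reached in at most three phases, with each phase consisting of many independent one-dimensional sends that the torus network can overlap.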

  11. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  12. A computer program based on parallel line assay for analysis of skin tests.

    PubMed

    Martín, S; Cuesta, P; Rico, P; Cortés, C

    1997-01-01

    A computer program for the analysis of differences or changes in skin sensitivity has been developed. It is based on parallel line assay, and its main features are its ability to conduct a validation process which ensures that the data from skin tests conform to the conditions imposed by the analysis which is carried out (regression, parallelism, etc.), the estimation of the difference or change in skin sensitivity, and the determination of the 95% and 99% confidence intervals of this estimation. This program is capable of managing data from independent groups, as well as paired data, and it may be applied to the comparison of allergen extracts, with the aim of determining their biologic activity, as well as to the analysis of changes in skin sensitivity appearing as a consequence of treatment such as immunotherapy.
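The potency estimate at the heart of a parallel line assay can be sketched as follows, assuming the classical ANCOVA common-slope formula for log relative potency; the validation checks the program performs (regression significance, parallelism, confidence limits) are omitted here.

```python
import numpy as np

def parallel_line_potency(xs, ys, xt, yt):
    """Parallel-line assay sketch: fit a common slope to standard
    (xs, ys) and test (xt, yt) log-dose/response data, then estimate
    the log potency ratio as the horizontal shift between the lines."""
    xs, ys, xt, yt = map(np.asarray, (xs, ys, xt, yt))
    # pooled within-preparation slope (ANCOVA common slope)
    sxy = ((xs - xs.mean()) @ (ys - ys.mean()) +
           (xt - xt.mean()) @ (yt - yt.mean()))
    sxx = ((xs - xs.mean()) @ (xs - xs.mean()) +
           (xt - xt.mean()) @ (xt - xt.mean()))
    b = sxy / sxx
    # log relative potency: M = (mean(yt) - mean(ys)) / b - (mean(xt) - mean(xs))
    log_ratio = (yt.mean() - ys.mean()) / b - (xt.mean() - xs.mean())
    return b, log_ratio

# toy data: the test preparation is twice as potent as the standard
xs = np.array([0.0, 1.0, 2.0])            # log10 doses, standard
xt = np.array([0.0, 1.0, 2.0])            # log10 doses, test
ys = 2.0 * xs                             # standard responses, slope 2
yt = 2.0 * (xt + np.log10(2))             # test responses, shifted left
b, log_ratio = parallel_line_potency(xs, ys, xt, yt)
```

Exponentiating the log ratio (here base 10, matching the log-dose scale) gives the potency ratio itself.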

  13. Acceleration on stretched meshes with line-implicit LU-SGS in parallel implementation

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-02-01

    The implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver is combined with the line-implicit technique to improve convergence on the very anisotropic grids necessary for resolving boundary layers. The computational fluid dynamics code used is Edge, a Navier-Stokes flow solver for unstructured grids based on a dual grid and edge-based formulation. Multigrid acceleration is applied with the intention of accelerating the convergence to steady state. LU-SGS works in parallel and gives better linear scaling with respect to the number of processors than the explicit scheme. The ordering techniques investigated have shown that node numbering does influence the convergence and that the orderings from Delaunay and advancing-front generation were among the best tested. 2D Reynolds-averaged Navier-Stokes computations have clearly shown the strong efficiency of our novel line-implicit LU-SGS approach, which is four times faster than implicit LU-SGS and line-implicit Runge-Kutta. Implicit LU-SGS for Euler and line-implicit LU-SGS for Reynolds-averaged Navier-Stokes are at least twice as fast as explicit and line-implicit Runge-Kutta, respectively, for 2D and 3D cases. For 3D Reynolds-averaged Navier-Stokes, multigrid did not accelerate the convergence and therefore may not be needed.
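The symmetric Gauss-Seidel relaxation at the heart of an LU-SGS solver can be illustrated with a minimal sketch (a plain forward/backward matrix sweep on a small diagonally dominant system, not Edge's edge-based, line-implicit implementation; the matrix is invented):

```python
# Illustrative symmetric Gauss-Seidel (SGS) iteration: one forward
# (lower-triangular) sweep followed by one backward (upper-triangular) sweep.
def sgs_solve(A, b, sweeps=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        # Forward sweep: update unknowns in increasing index order ...
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        # ... then backward sweep in decreasing index order.
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
x = sgs_solve(A, [5.0, 6.0, 5.0])
print([round(v, 6) for v in x])  # converges to [1.0, 1.0, 1.0]
```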

  14. Line-field parallel swept source MHz OCT for structural and functional retinal imaging.

    PubMed

    Fechtig, Daniel J; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M; Drexler, Wolfgang; Leitgeb, Rainer A

    2015-03-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation.

  15. Line-field parallel swept source MHz OCT for structural and functional retinal imaging

    PubMed Central

    Fechtig, Daniel J.; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-01-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation. PMID:25798298

  16. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ginner, Laurin; Fechtig, Daniel J.; Schmoll, Tilman; Wurster, Lara M.; Pircher, Michael; Leitgeb, Rainer A.; Drexler, Wolfgang

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution and 3D acquisition rates to resolve individual photoreceptor cells. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line-focus is formed via a cylindrical lens and the back-propagated light from sample and reference arm is detected by a 2D array after passing a diffraction grating. The spot size of the line-focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results as compared to respective point scanning modalities and facilitates utilizing software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  17. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Ginner, Laurin; Kumar, Abhishek; Pircher, Michael; Schmoll, Tilman; Wurster, Lara M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution and 3D acquisition rates to resolve individual photoreceptor cells. The phase stability of the system is manifested by the high phase-correlation across the lateral FOV on the level of individual photoreceptors. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line-focus is formed via a cylindrical lens and the back-propagated light from sample and reference arm is detected by a 2D array after passing a diffraction grating. The spot size of the line-focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results as compared to respective point scanning modalities and facilitates utilizing software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  18. Oxygen boost pump study

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An oxygen boost pump is described which can be used to charge the high-pressure oxygen tank in the extravehicular activity equipment from spacecraft supply. The only interface with the spacecraft is the 6.205×10⁶ Pa supply line. The breadboard study results and oxygen tank survey are summarized and the results of the flight-type prototype design and analysis are presented.

  19. Visual globes, celestial spheres, and the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Rogers, Cassandra

    2009-01-01

    Helmholtz's famous distorted chessboard pattern has been used to make the point that perception of the straightness of peripherally viewed lines is not always veridical. Helmholtz showed that the curved lines of his chessboard pattern appear to be straight when viewed from a critical distance and he argued that, at this distance, the contours stimulated particular 'direction circles' in the field of fixation. We measured the magnitude of the distortion of peripherally viewed contours, and found that the straightness of elongated contours is indeed misperceived in the direction reported by Helmholtz, but that the magnitude of the effect varies with viewing conditions. On the basis of theoretical considerations, we conclude that there cannot, in principle, be particular retinal loci ('loci' is used here in the sense of an arc or an extended set of points that provide a basis for judging collinearity) to underpin our judgments of the straightness and parallelity of peripheral contours, because such judgments also require information about the 3-D surface upon which the contours are located. Moreover, we show experimentally that the contours in the real world that are judged to be straight and parallel can stimulate quite different retinal loci, depending on the shape of the 3-D surface upon which they are drawn.

  20. An on-line learning tracking of non-rigid target combining multiple-instance boosting and level set

    NASA Astrophysics Data System (ADS)

    Chen, Mingming; Cai, Jingju

    2013-10-01

    Visual tracking algorithms based on online boosting generally use a rectangular bounding box to represent the position of the target, while actually the shape of the target is always irregular. This will cause the classifier to learn the features of the non-target parts in the rectangle region, thereby the performance of the classifier is reduced, and drift would happen. To avoid the limitations of the bounding-box, we propose a novel tracking-by-detection algorithm involving the level set segmentation, which ensures the classifier only learn the features of the real target area in the tracking box. Because the shape of the target only changes a little between two adjacent frames and the current level set algorithm can avoid the re-initialization of the signed distance function, it only takes a few iterations to converge to the position of the target contour in the next frame. We also make some improvement on the level set energy function so that the zero level set would have less possible to converge to the false contour. In addition, we use gradient boost to improve the original multi-instance learning (MIL) algorithm like the WMILtracker, which greatly speed up the tracker. Our algorithm outperforms the original MILtracker both on speed and precision. Compared with the WMILtracker, our algorithm runs at a almost same speed, but we can avoid the drift caused by background learning, so the precision is better.

  1. Parametric analysis of hollow conductor parallel and coaxial transmission lines for high frequency space power distribution

    NASA Technical Reports Server (NTRS)

    Jeffries, K. S.; Renz, D. D.

    1984-01-01

    A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high-power (100 kW or more) space missions. Large-diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
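The textbook per-unit-length formulas for a coaxial line give a feel for the quantities tabulated above (this is the standard thin-shield, vacuum-dielectric result, not the paper's hollow-conductor derivation; the radii below are invented):

```python
import math

# Standard coaxial-line parameters per unit length for inner radius a and
# outer (shield) radius b, assuming a vacuum dielectric.
MU0 = 4e-7 * math.pi        # permeability of free space, H/m
EPS0 = 8.854187817e-12      # permittivity of free space, F/m

def coax_per_meter(a, b):
    L = MU0 / (2 * math.pi) * math.log(b / a)   # inductance, H/m
    C = 2 * math.pi * EPS0 / math.log(b / a)    # capacitance, F/m
    return L, C

L, C = coax_per_meter(a=2.5e-3, b=5.0e-3)
print(f"L = {L*1e9:.1f} nH/m, C = {C*1e12:.1f} pF/m")
```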

  2. A new cascaded control strategy for paralleled line-interactive UPS with LCL filter

    NASA Astrophysics Data System (ADS)

    Zhang, X. Y.; Zhang, X. H.; Li, L.; Luo, F.; Zhang, Y. S.

    2016-08-01

    Traditional uninterruptible power supplies (UPS) find it difficult to meet the output voltage quality and grid-side power quality requirements at the same time, and usually have disadvantages such as multi-stage conversion, complex structure, or harmonic current pollution of the utility grid. A three-phase three-level paralleled line-interactive UPS with an LCL filter is presented in this paper. It can achieve output voltage quality and grid-side power quality control simultaneously with only a single-conversion power stage, but the multi-objective control strategy design is difficult. Based on a detailed analysis of the circuit structure and operation mechanism, a new cascaded control strategy for the power, voltage, and current is proposed. An outer current control loop based on resonant control theory is designed to ensure the grid-side power quality. An inner voltage control loop based on capacitance voltage and capacitance current feedback is designed to ensure the output voltage quality and avoid the resonance peak of the LCL filter. An improved repetitive controller is added to reduce the distortion of the output voltage. The setting of the controller parameters is discussed in detail. A 100 kVA UPS prototype is built and experiments under unbalanced resistive load and nonlinear load are carried out. Theoretical analysis and experimental results show the effectiveness of the control strategy. The paralleled line-interactive UPS not only maintains a constant three-phase balanced output voltage, but also has comprehensive power quality management functions, with three-phase balanced grid active power input, low THD of output voltage and grid current, and reactive power compensation. The UPS is a green, friendly load to the utility.

  3. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

    To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach using acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals. This makes the PA signals reach a single-element transducer at different times. As a result, they can be properly received by single-channel DAQ electronics. However, due to their small diameter and fragility, using optical fiber as acoustic delay lines poses a number of challenges in the design, construction and packaging of the PADLs, thereby limiting their performances and use in real imaging applications. In this paper, we report the development of new silicon PADLs, which are directly made from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs were assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced with a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches well with the imaging target. Therefore, the silicon PADL array can provide a 16× reduction in the ultrasound DAQ channels for real-time PAT.
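The delay-line multiplexing idea can be sketched as follows: distinct per-channel delays stagger simultaneous PA signals so a single-channel DAQ can separate them by arrival time (a toy model; the channel count and delay values are invented, not the device's parameters):

```python
# Toy PADL model: channels that fire simultaneously arrive at the single
# transducer at different, known times and can be demultiplexed afterwards.
def multiplex(emit_times, delays):
    """Merge per-channel events into one time-ordered single-DAQ stream."""
    return sorted((t + d, ch) for ch, (t, d) in enumerate(zip(emit_times, delays)))

def demultiplex(stream, delays):
    """Recover each channel's emission time by subtracting its known delay."""
    return {ch: arrival - delays[ch] for arrival, ch in stream}

emit_times = [0.0, 0.0, 0.0, 0.0]   # all channels fire at once
delays = [1.0, 2.0, 3.0, 4.0]       # distinct acoustic delays per line (a.u.)
recovered = demultiplex(multiplex(emit_times, delays), delays)
print(recovered)  # every channel maps back to emission time 0.0
```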

  4. Parallel secretion of pancreastatin and somatostatin from human pancreastatin producing cell line (QGP-1N).

    PubMed

    Funakoshi, A; Tateishi, K; Kitayama, N; Jimi, A; Matsuoka, Y; Kono, A

    1993-05-01

    In this investigation we studied pancreastatin (PST) secretion from a human PST-producing cell line (QGP-1N) in response to various secretagogues. Immunocytochemical study revealed the immunoreactivity of PST and somatostatin (SMT) in the same cells of a monolayer culture. A Ki-ras DNA point mutation on codon 12 was found. Carbachol stimulated secretion of PST and SMT and intracellular Ca2+ mobilization in the range of 10⁻⁶-10⁻⁴ M. The secretion and Ca2+ mobilization were inhibited by atropine, a muscarinic receptor antagonist. Phorbol ester and calcium ionophore (A23187) stimulated secretion of PST and SMT. The removal of extracellular calcium suppressed both secretions throughout stimulation with 10⁻⁵ M carbachol. Fluoride, a well-known activator of guanine nucleotide binding (G) protein, stimulated intracellular Ca2+ mobilization and secretion of PST and SMT in a dose-dependent manner in the range of 5-40 mM. Also, 10⁻⁵ M carbachol and 20 mM fluoride stimulated inositol 1,4,5-trisphosphate production. However, cholecystokinin and gastrin-releasing peptide did not stimulate Ca2+ mobilization or secretion of the two peptides. These results suggest that secretion of PST and SMT from QGP-1N cells is regulated mainly by acetylcholine in a parallel fashion through muscarinic receptors coupled to the activation of polyphosphoinositide breakdown by a G-protein, and that increases in intracellular Ca2+ and protein kinase C play an important role in stimulus-secretion coupling.

  5. Non-parallel stability analysis of three-dimensional boundary layers along an infinite attachment line

    NASA Astrophysics Data System (ADS)

    Itoh, Nobutake

    2000-09-01

    Instability of a non-parallel similar-boundary-layer flow to small and wavy disturbances is governed by partial differential equations with respect to the non-dimensional vertical coordinate ζ and the local Reynolds number R1 based on the chordwise velocity of the external stream and a boundary-layer thickness. In the particular case of swept Hiemenz flow, the equations admit a series solution expanded in inverse powers of R1², and then are decomposed into an infinite sequence of ordinary differential systems, with the leading one posing an eigenvalue problem to determine the first approximation to the complex dispersion relation. Numerical estimation of the series solution indicates a much lower critical Reynolds number for the so-called oblique-wave instability than the classical value Rc=583 of the spanwise-traveling Tollmien-Schlichting instability. Extension of the formulation to general Falkner-Skan-Cooke boundary layers is proposed in the form of a double power series with respect to 1/R1² and a small parameter ɛ denoting the difference of the Falkner-Skan parameter m from the attachment-line value m=1.

  6. Extraction of loess shoulder-line based on the parallel GVF snake model in the loess hilly area of China

    NASA Astrophysics Data System (ADS)

    Song, Xiaodong; Tang, Guoan; Li, Fayuan; Jiang, Ling; Zhou, Yi; Qian, Kejian

    2013-03-01

    Loess shoulder-lines are the most critical terrain feature for representing and modeling the landforms of the Loess Plateau of China. Existing algorithms usually fail to obtain a continuous shoulder-line because of complicated surfaces, DEM quality, and algorithm limitations. This paper proposes a new method in which a gradient vector flow (GVF) snake model is employed to generate an integrated contour that can connect the discontinuous fragments of a shoulder-line. Moreover, a new criterion for the selection of initial seeds is created for the snake model, which takes the value of median smoothing of the local neighborhood regions. By doing this, we can extract the adjacent boundary of loess positive-negative terrains from the shoulder-line zones, which builds a basis for finding the real shoulder-lines by gradient vector flow. However, the computational burden of this method remains heavy for large DEM datasets. In this study, a parallel computing scheme on a cluster for automatic shoulder-line extraction is proposed and implemented with a parallel GVF snake model. After analyzing the principle of the method, the paper develops an effective parallel algorithm integrating both single program multiple data (SPMD) and master/slave (M/S) programming modes. Based on domain decomposition of the DEM data, each partition is decomposed regularly and calculated simultaneously. The experimental results on different DEM datasets indicate that parallel programming can achieve the main objective of distinctly reducing execution time without losing accuracy compared with the sequential model. The hybrid algorithm in this study achieves a mean shoulder-line offset of 15.8 m, a quite satisfactory result in both accuracy and efficiency compared with published extraction methods.

  7. The new moon illusion and the role of perspective in the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2015-01-01

    In the new moon illusion, the sun does not appear to be in a direction perpendicular to the boundary between the lit and dark sides of the moon, and aircraft jet trails appear to follow curved paths across the sky. In both cases, lines that are physically straight and parallel to the horizon appear to be curved. These observations prompted us to investigate the neglected question of how we are able to judge the straightness and parallelism of extended lines. To do this, we asked observers to judge the 2-D alignment of three artificial "stars" projected onto the dome of the Saint Petersburg Planetarium that varied in both their elevation and their separation in horizontal azimuth. The results showed that observers make substantial, systematic errors, biasing their judgments away from the veridical great-circle locations and toward equal-elevation settings. These findings further demonstrate that whenever information about the distance of extended lines or isolated points is insufficient, observers tend to assume equidistance, and as a consequence, their straightness judgments are biased toward the angular separation of straight and parallel lines.
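The great-circle criterion underlying such alignment judgments can be sketched numerically: three sky directions are aligned on a great circle exactly when their unit vectors are coplanar, i.e. their scalar triple product vanishes (the azimuth/elevation values below are invented for illustration):

```python
import math

# Convert (azimuth, elevation) in degrees to a unit direction vector.
def unit(az, el):
    a, e = math.radians(az), math.radians(el)
    return (math.cos(e) * math.cos(a), math.cos(e) * math.sin(a), math.sin(e))

# Scalar triple product u . (v x w); zero means the three directions are
# coplanar with the observer, i.e. they lie on a common great circle.
def triple(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# Equal-elevation settings (a small circle at 45 deg) are NOT great-circle aligned:
equal_el = triple(unit(-30, 45), unit(0, 45), unit(30, 45))
# Points on the horizon (elevation 0) lie on a genuine great circle:
horizon = triple(unit(-30, 0), unit(0, 0), unit(30, 0))
print(abs(horizon) < 1e-12, abs(equal_el) > 1e-3)  # True True
```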

  8. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures

    PubMed Central

    Guan, Jungang; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Mattausch, Hans Jürgen

    2017-01-01

    The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT for usage in actual applications are computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), for computing the strength of each line by a voting mechanism is mapped on a 1-dimensional array with regular increments of θ. Then, this Hough space is divided into a number of parallel parts. The computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are therefore executable in parallel. In addition, a synchronized initialization for the Hough space further increases the speed of straight-line detection, so that XGA video processing becomes possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In the application of road-lane detection, the average processing speed of this HT implementation is 5.4 ms per XGA-frame at 200 MHz working frequency. PMID:28146101
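The parallelized voting idea can be sketched in software: the θ range of the (ρ, θ) accumulator is split into disjoint chunks and each worker votes only in its own chunk, so no shared-accumulator synchronization is needed (a Python-thread sketch with invented parameters, not the FPGA architecture):

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Each worker accumulates Hough votes for its own disjoint set of theta indices.
def vote_chunk(edge_pixels, thetas, rho_max):
    acc = {}
    for x, y in edge_pixels:
        for ti, theta in thetas:
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -rho_max <= rho <= rho_max:
                acc[(rho, ti)] = acc.get((rho, ti), 0) + 1
    return acc

def hough_strongest_line(edge_pixels, n_theta=180, rho_max=200, workers=4):
    thetas = [(i, math.pi * i / n_theta) for i in range(n_theta)]
    chunks = [thetas[i::workers] for i in range(workers)]  # disjoint theta sets
    acc = {}
    with ThreadPoolExecutor(workers) as pool:
        for part in pool.map(lambda c: vote_chunk(edge_pixels, c, rho_max), chunks):
            acc.update(part)  # safe: chunks vote on disjoint theta indices
    return max(acc, key=acc.get)  # strongest (rho, theta-index) cell

# Pixels on the horizontal line y = 10 -> theta index 90 (90 deg), rho = 10.
rho, ti = hough_strongest_line([(x, 10) for x in range(30)])
print(rho, ti)  # 10 90
```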

  9. Sustainable Materials Management (SMM) Web Academy Webinar: Wasted Food to Energy: How 6 Water Resource Recovery Facilities are Boosting Biogas Production & the Bottom Line

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  10. Boost type PWM HVDC transmission system

    SciTech Connect

    Ooi, B.T.; Wang, X. . Dept. of Electrical Engineering)

    1991-10-01

    Conventional HVdc is built around the mercury arc rectifier or the thyristor, which requires line commutation. The advances of fast, high-power GTOs and future devices such as MCTs with turn-off capabilities are bringing PWM techniques within the range of HVdc applications. By applying PWM techniques to the boost type bridge topology, one obtains an alternate system of HVdc transmission. On the ac side, the converter station has active control over the voltage amplitude, the voltage angle, and the frequency. On the dc side, parallel connections facilitate multi-terminal load sharing by simple local controls, so that redundant communication channels are not required. Bidirectional power through each station is accomplished by reversal of the direction of dc current flow. These claims have been substantiated by experimental results from laboratory-size multi-terminal models.
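The "boost" in the topology's name refers to stepping the dc voltage above the ac-side level; the textbook ideal steady-state boost relation, Vout = Vin / (1 − D) for duty cycle D, conveys the idea (a dc-dc illustration with invented values, not the paper's three-phase bridge analysis):

```python
# Ideal continuous-conduction boost relation: output rises above the input
# as the duty cycle D approaches 1. Values are illustrative only.
def boost_vout(vin, duty):
    assert 0.0 <= duty < 1.0, "duty cycle must be in [0, 1)"
    return vin / (1.0 - duty)

print(boost_vout(100.0, 0.5))   # 200.0
print(boost_vout(100.0, 0.75))  # 400.0
```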

  11. Quantitative Profiling of Protein Tyrosine Kinases in Human Cancer Cell Lines by Multiplexed Parallel Reaction Monitoring Assays*

    PubMed Central

    Kim, Hye-Jung; Lin, De; Lee, Hyoung-Joo; Li, Ming; Liebler, Daniel C.

    2016-01-01

    Protein tyrosine kinases (PTKs) play key roles in cellular signal transduction, cell cycle regulation, cell division, and cell differentiation. Dysregulation of PTK-activated pathways, often by receptor overexpression, gene amplification, or genetic mutation, is a causal factor underlying numerous cancers. In this study, we have developed a parallel reaction monitoring-based assay for quantitative profiling of 83 PTKs. The assay detects 308 proteotypic peptides from 54 receptor tyrosine kinases and 29 nonreceptor tyrosine kinases in a single run. Quantitative comparisons were based on the labeled reference peptide method. We implemented the assay in four cell models: 1) a comparison of proliferating versus epidermal growth factor-stimulated A431 cells, 2) a comparison of SW480Null (mutant APC) and SW480APC (APC restored) colon tumor cell lines, 3) a comparison of 10 colorectal cancer cell lines with different genomic abnormalities, and 4) lung cancer cell lines with either susceptibility (11–18) or acquired resistance (11–18R) to the epidermal growth factor receptor tyrosine kinase inhibitor erlotinib. We observed distinct PTK expression changes that were induced by stimuli, genomic features or drug resistance, which were consistent with previous reports. However, most of the measured expression differences were novel observations. For example, acquired resistance to erlotinib in the 11–18 cell model was associated not only with previously reported up-regulation of MET, but also with up-regulation of FLK2 and down-regulation of LYN and PTK7. Immunoblot analyses and shotgun proteomics data were highly consistent with parallel reaction monitoring data. Multiplexed parallel reaction monitoring assays provide a targeted, systems-level profiling approach to evaluate cancer-related proteotypes and adaptations. Data are available through ProteomeXchange accession PXD002706. PMID:26631510

  12. Resolving magnetic field line stochasticity and parallel thermal transport in MHD simulations

    SciTech Connect

    Nishimura, Y.; Callen, J.D.; Hegna, C.C.

    1998-12-31

    Heat transport along braided, or chaotic, magnetic field lines is key to understanding the disruptive phase of tokamak operation, both the major disruption and the internal disruption (sawtooth oscillation). Recent sawtooth experimental results in the Tokamak Fusion Test Reactor (TFTR) have suggested that magnetic field line stochasticity in the vicinity of the q = 1 inversion radius plays an important role in rapid changes in the magnetic field structures and the resultant thermal transport. In this study, the characteristic Lyapunov exponents and spatial correlation of field line behaviors are calculated to extract the characteristic scale length of the microscopic magnetic field structure (which is important for net radial global transport). These statistical values are used to model the effect of finite thermal transport along magnetic field lines in a physically consistent manner.

  13. The proposed planning method as a parallel element to a real service system for dynamic sharing of service lines.

    PubMed

    Klampfer, Saša; Chowdhury, Amor

    2015-07-01

    This paper presents a solution to the bottleneck problem in dynamic sharing or leasing of service capacities. From this perspective the use of the proposed method as a parallel element in service capacity sharing is very important, because it enables minimization of the number of interfaces, and consequently of the number of leased lines, by combining two service systems with time-opposite peak loads. In this paper we present a new approach, methodology, models, and algorithms that solve the problems of dynamic leasing and sharing of service capacities.
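The benefit of pairing two systems with time-opposite peak loads can be sketched with invented hourly profiles: shared capacity is sized for the peak of the *sum*, which is smaller than the sum of the individual peaks:

```python
# Toy model: a daytime-peaking and a nighttime-peaking service share lines.
# Load profiles (lines needed per hour) are invented for illustration.
day = range(24)
service_a = [10 if 8 <= h < 16 else 2 for h in day]          # daytime peak
service_b = [10 if h < 8 or h >= 16 else 2 for h in day]     # nighttime peak

separate = max(service_a) + max(service_b)                   # each sized alone
shared = max(a + b for a, b in zip(service_a, service_b))    # sized jointly
print(separate, shared)  # 20 12: sharing needs 8 fewer leased lines
```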

  14. Wave-particle interaction in parallel transport of long mean-free-path plasmas along open field magnetic field lines

    NASA Astrophysics Data System (ADS)

    Guo, Zehua; Tang, Xianzhu

    2012-03-01

    A tokamak fusion reactor dumps a large amount of heat and particle flux to the divertor through the scrape-off layer (SOL) plasma. Situations exist, either by necessity or through deliberate design, in which the SOL plasma attains a long mean free path along large segments of the open field lines. The rapid parallel streaming of electrons requires a large parallel electric field to maintain ambipolarity. The confining effect of the parallel electric field on electrons leads to a trap/passing boundary in velocity space for electrons. In the normal situation where the upstream electron source populates both the trapped and passing regions, a mechanism must exist to produce a flux across the electron trap/passing boundary. In a short mean-free-path plasma, this is provided by collisions. For long mean-free-path plasmas, wave-particle interaction is the primary candidate for detrapping the electrons. Here we present simulation results and a theoretical analysis using a model distribution function of trapped electrons. The dominant electromagnetic plasma instability, and the associated collisionless scattering that produces both particle and energy fluxes across the electron trap/passing boundary in velocity space, are discussed.
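The trap/passing boundary set up by a confining parallel potential drop can be sketched with the elementary energy criterion: an electron passes (escapes) only if its parallel kinetic energy exceeds e·Δφ (the 100 V drop below is an invented illustrative number, not a value from the study):

```python
import math

# Energy criterion for the electron trap/passing boundary in velocity space.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def is_passing(v_par, delta_phi):
    """True if parallel kinetic energy exceeds the confining potential drop."""
    return 0.5 * M_E * v_par ** 2 > E_CHARGE * delta_phi

def trap_boundary_speed(delta_phi):
    """Parallel speed separating trapped from passing electrons (m/s)."""
    return math.sqrt(2.0 * E_CHARGE * delta_phi / M_E)

v_b = trap_boundary_speed(100.0)  # boundary speed for a 100 V potential drop
print(is_passing(0.5 * v_b, 100.0), is_passing(2.0 * v_b, 100.0))  # False True
```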

  15. Parallel-Plate Transmission Line Type of EMP Simulators: Systematic Review and Recommendations.

    DTIC Science & Technology

    1980-05-01

    This report condenses the available information on two types of pulsers (Van de Graaff and Marx) with the view of providing a working knowledge of these EMP pulse generators. Contents include pulser equivalent circuits, the Marx generator and its equivalent circuit, and conical-plate transmission lines (impedance and fields). The Van de Graaff pulse generator is used, for instance, in the ARES facility, and the Marx pulse generator is employed in the ATLAS I facility.

  16. Kinetic PIC simulations of reconnection signal propagation parallel to magnetic field lines: Implications for substorms

    NASA Astrophysics Data System (ADS)

    Shay, M. A.; Drake, J. F.

    2009-12-01

    In a recent substorm case study using THEMIS data [1], it was inferred that auroral intensification occurred 96 seconds after reconnection onset initiated a substorm in the magnetotail. These conclusions have been the subject of some controversy [2,3]. The time delay between reconnection and auroral intensification requires a propagation speed significantly faster than can be explained by Alfvén waves. Kinetic Alfvén waves, however, can be much faster and could possibly explain the time lag. To test this possibility, we simulate large scale reconnection events with the kinetic PIC code P3D and examine the disturbances on a magnetic field line as it propagates through a reconnection region. In the regions near the separatrices but relatively far from the x-line, the propagation physics is expected to be governed by the physics of kinetic Alfvén waves. Indeed, we find that the propagation speed of the magnetic disturbance roughly scales with kinetic Alfvén speeds. We also examine energization of electrons due to this disturbance. Consequences for our understanding of substorms will be discussed. [1] Angelopoulos, V. et al., Science, 321, 931, 2008. [2] Lui, A. T. Y., Science, 324, 1391-b, 2009. [3] Angelopoulos, V. et al., Science, 324, 1391-c, 2009.

  17. High-voltage isolation transformer for sub-nanosecond rise time pulses constructed with annular parallel-strip transmission lines.

    PubMed

    Homma, Akira

    2011-07-01

    A novel annular parallel-strip transmission line was devised to construct high-voltage, high-speed pulse isolation transformers. The transmission lines can easily realize stable high-voltage operation and good impedance matching between primary and secondary circuits. The time constant for the step response of the transformer was calculated by introducing a simple low-frequency equivalent circuit model. Results show that the relation between the time constant and the low cut-off frequency of the transformer conforms to the theory of a general first-order linear time-invariant system. Results also show that the test transformer composed of the new transmission lines can transmit pulses with rise times of about 600 ps across a dc potential difference of more than 150 kV, with an insertion loss of -2.5 dB. The measured effective time constant of 12 ns agreed exactly with the theoretically predicted value. For practical applications involving the delivery of synchronized trigger signals to a dc high-voltage electron gun station, the transformer described in this paper exhibited advantages over methods using fiber optic cables for the signal transfer system. This transformer has none of the jitter or breakdown problems that invariably occur in active circuit components.
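The quoted agreement between the measured 12 ns time constant and the first-order model can be checked against the textbook first-order LTI relation f_L = 1/(2πτ); only the 12 ns figure comes from the paper:

```python
import math

# First-order LTI relation between a transformer's effective time
# constant and its low cut-off frequency: f_L = 1 / (2*pi*tau).
# tau = 12 ns is the value measured in the paper; the rest is textbook.
def low_cutoff_from_tau(tau_seconds):
    return 1.0 / (2.0 * math.pi * tau_seconds)

f_low = low_cutoff_from_tau(12e-9)
print(f"implied low cut-off frequency: {f_low / 1e6:.1f} MHz")  # ~13.3 MHz
```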

  18. Boosted ellipsoid ARTMAP

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Georgios C.; Georgiopoulos, Michael; Verzi, Steven J.; Heileman, Gregory L.

    2002-03-01

    Ellipsoid ARTMAP (EAM) is an adaptive-resonance-theory neural network architecture that is capable of successfully performing classification tasks using incremental learning. EAM achieves its task by summarizing labeled input data via hyper-ellipsoidal structures (categories). A major property of EAM, when using off-line fast learning, is that it perfectly learns its training set after training has completed. Depending on the classification problems at hand, this fact implies that off-line EAM training may potentially suffer from over-fitting. For such problems we present an enhancement to the basic Ellipsoid ARTMAP architecture, namely Boosted Ellipsoid ARTMAP (bEAM), that is designed to simultaneously improve the generalization properties and reduce the number of created categories for EAM's off-line fast learning. This is accomplished by forcing EAM to be tolerant of occasional misclassification errors during fast learning. An additional advantage provided by bEAM's design is the capability of learning inconsistent cases, that is, learning identical patterns with contradicting class labels. After we present the theory behind bEAM's enhancements, we provide some preliminary experimental results, which compare the new variant to the original EAM network, Probabilistic EAM and three different variants of the Restricted Coulomb Energy neural network on the square-in-a-square classification problem.

  19. Bidirectional buck boost converter

    DOEpatents

    Esser, A.A.M.

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero. 20 figs.

  20. Bidirectional buck boost converter

    DOEpatents

    Esser, Albert Andreas Maria

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero.
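For orientation, the ideal steady-state conversion ratio of a buck-boost stage operating in continuous conduction is the textbook D/(1−D); the sketch below illustrates that relation only and is not the patent's bidirectional soft-switching control scheme:

```python
def buck_boost_gain(duty):
    """Ideal steady-state voltage ratio |Vout/Vin| = D/(1-D) for a
    buck-boost stage in continuous conduction (textbook relation; the
    patent's bidirectional soft-switching topology is more general)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return duty / (1.0 - duty)

# D < 0.5 steps the voltage down, D > 0.5 steps it up
for d in (0.25, 0.5, 0.75):
    print(f"D = {d}: |Vout/Vin| = {buck_boost_gain(d):.2f}")
```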

  1. Parallel extraction columns and parallel analytical columns coupled with liquid chromatography/tandem mass spectrometry for on-line simultaneous quantification of a drug candidate and its six metabolites in dog plasma.

    PubMed

    Xia, Y Q; Hop, C E; Liu, D Q; Vincent, S H; Chiu, S H

    2001-01-01

    A method with parallel extraction columns and parallel analytical columns (PEC-PAC) for on-line high-flow liquid chromatography/tandem mass spectrometry (LC/MS/MS) was developed and validated for simultaneous quantification of a drug candidate and its six metabolites in dog plasma. Two on-line extraction columns were used in parallel for sample extraction and two analytical columns were used in parallel for separation and analysis. The plasma samples, after addition of an internal standard solution, were directly injected onto the PEC-PAC system for purification and analysis. This method allowed the use of one of the extraction columns for analyte purification while the other was being equilibrated. Similarly, one of the analytical columns was employed to separate the analytes while the other was undergoing equilibration. Therefore, the time needed for re-conditioning both extraction and analytical columns was not added to the total analysis time, which resulted in a shorter run time and higher throughput. Moreover, the on-line column extraction LC/MS/MS method made it possible to extract and analyze all seven analytes simultaneously with good precision and accuracy despite their chemical class diversity that included primary, secondary and tertiary amines, an alcohol, an aldehyde and a carboxylic acid. The method was validated with the standard curve ranging from 5.00 to 5000 ng/mL. The intra- and inter-day precision was no more than 8% CV and the assay accuracy was between 95 and 107%.
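The run-time argument above (re-conditioning overlapped with the other column's active time) can be made concrete with a toy cycle-time model; all step durations are invented for illustration:

```python
# Toy cycle-time model of the parallel-column scheme: while one column
# handles a sample, its twin re-equilibrates, so re-conditioning drops
# out of the per-sample cycle.  Step durations are invented for
# illustration; they are not values from the paper.
extract = 1.0        # minutes, sample extraction (assumed)
separate = 2.0       # minutes, separation/analysis (assumed)
reequilibrate = 1.5  # minutes, column re-conditioning (assumed)

serial_cycle = extract + separate + reequilibrate
# With two alternating columns, a column is ready again as long as its
# re-equilibration fits inside the other column's active time.
parallel_cycle = max(extract + separate, reequilibrate)

print(f"serial: {serial_cycle} min/sample, parallel: {parallel_cycle} min/sample")
```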

  2. LDA boost classification: boosting by topics

    NASA Astrophysics Data System (ADS)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, this article proposes a two-stage iterative weighting method, called Cute Integration, which improves accuracy by integrating the weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation, and the voting information and categorization decisions made by the basis classifiers are fully utilized in generating the strong classifier. Experimental results reveal that LDABoost, by categorizing in a low-dimensional space, achieves higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
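LDABoost builds on the classic AdaBoost weight-update loop. The sketch below shows that generic loop with threshold stumps on toy 1-D data; the paper's LDA topic features, Naïve Bayes weak learner and Cute Integration combiner are not reproduced here.

```python
import math

# Generic AdaBoost on toy 1-D data with threshold stumps: the classic
# weight update that LDABoost builds on.
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, -1, -1, 1, 1]

def stump(threshold, sign):
    return lambda x: sign if x < threshold else -sign

candidates = [stump(t, s) for t in (1.0, 2.0, 3.0, 4.0, 5.0) for s in (1, -1)]

w = [1.0 / len(X)] * len(X)
ensemble = []                      # list of (alpha, hypothesis) pairs
for _ in range(5):
    # pick the stump with the lowest weighted error
    errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            for h in candidates]
    err, h = min(zip(errs, candidates), key=lambda p: p[0])
    if err == 0 or err >= 0.5:
        break
    alpha = 0.5 * math.log((1 - err) / err)
    ensemble.append((alpha, h))
    # re-weight: misclassified examples get heavier
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    z = sum(w)
    w = [wi / z for wi in w]

def predict(x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

accuracy = sum(predict(xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"{len(ensemble)} rounds, training accuracy {accuracy:.2f}")
```

The interval-shaped labels are chosen so that no single stump can fit them, which is exactly the situation where the weighted committee vote pays off.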

  3. Boosting Lyα and He II λ1640 Line Fluxes from Population III Galaxies: Stochastic IMF Sampling and Departures from Case-B

    NASA Astrophysics Data System (ADS)

    Mas-Ribas, Lluís; Dijkstra, Mark; Forero-Romero, Jaime E.

    2016-12-01

    We revisit calculations of nebular hydrogen Lyα and He II λ1640 line strengths for Population III (Pop III) galaxies, undergoing continuous, and bursts of, star formation. We focus on initial mass functions (IMFs) motivated by recent theoretical studies, which generally span a lower range of stellar masses than earlier works. We also account for case-B departures and the stochastic sampling of the IMF. In agreement with previous work, we find that departures from case-B can enhance the Lyα flux by a factor of a few, but we argue that this enhancement is driven mainly by collisional excitation and ionization, and not due to photoionization from the n = 2 state of atomic hydrogen. The increased sensitivity of the Lyα flux to the high-energy end of the galaxy spectrum makes it more subject to stochastic sampling of the IMF. The latter introduces a dispersion in the predicted nebular line fluxes around the deterministic value by as much as a factor of ∼4. In contrast, the stochastic sampling of the IMF has less impact on the emerging Lyman-Werner photon flux. When case-B departures and stochasticity effects are combined, nebular line emission from Pop III galaxies can be up to one order of magnitude brighter than predicted by “standard” calculations that do not include these effects. This enhances the prospects for detection with future facilities such as the James Webb Space Telescope and large, ground-based telescopes.
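The dispersion introduced by stochastic IMF sampling can be illustrated with a toy Monte Carlo: draw stars from a power-law IMF until a fixed mass budget is spent and sum a steeply mass-dependent proxy for the ionizing output. The slope, mass range and m³ proxy are illustrative assumptions, not the paper's stellar models:

```python
import random

# Toy Monte Carlo of stochastic IMF sampling: draw stars from a pure
# power-law IMF until a fixed stellar-mass budget is spent, then sum a
# steeply mass-dependent proxy for ionizing output.  The slope, mass
# range and m**3 proxy are illustrative assumptions only.
random.seed(42)

def sample_imf_mass(alpha=2.35, m_lo=1.0, m_hi=100.0):
    # inverse-CDF sampling of dN/dm proportional to m**-alpha
    u = random.random()
    a = 1.0 - alpha
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

def ionizing_proxy(mass_budget):
    total, q = 0.0, 0.0
    while total < mass_budget:
        m = sample_imf_mass()
        total += m
        q += m ** 3      # toy scaling: output dominated by massive stars
    return q

draws = [ionizing_proxy(1e3) for _ in range(200)]
spread = max(draws) / min(draws)
print(f"max/min spread over 200 realizations: {spread:.1f}")
```

Because the proxy is dominated by the rare most-massive stars, small galaxies show a wide spread between realizations, mirroring the factor-of-a-few dispersion the paper reports.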

  4. Resonance line transfer calculations by doubling thin layers. I - Comparison with other techniques. II - The use of the R-parallel redistribution function. [planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.; Wallace, Lloyd

    1989-01-01

    A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.

  5. Development and qualification of the parallel line model for the estimation of human influenza haemagglutinin content using the single radial immunodiffusion assay.

    PubMed

    van Kessel, G; Geels, M J; de Weerd, S; Buijs, L J; de Bruijni, M A M; Glansbeek, H L; van den Bosch, J F; Heldens, J G; van den Heuvel, E R

    2012-01-05

    Infection with human influenza virus leads to serious respiratory disease. Vaccination is the most common and effective prophylactic measure to prevent influenza. Influenza vaccine manufacturing and release are controlled by the correct determination of the potency-defining haemagglutinin (HA) content. This determination is historically done by single radial immunodiffusion (SRID), which utilizes a statistical slope-ratio model to estimate the actual HA content. In this paper we describe the development and qualification of a parallel line model for analysis of HA quantification by SRID in cell culture-derived whole virus final monovalent and trivalent influenza vaccines. We evaluated plate layout, sample randomization, and validity of the data and statistical model. The parallel line model was shown to be robust and reproducible. The precision studies for HA content demonstrated 3.8-5.0% repeatability and 3.8-7.9% intermediate precision. Furthermore, system suitability criteria were developed to guarantee long-term stability of this assay in a regulated production environment. SRID is fraught with methodological and logistical difficulties, and the determination of the HA content ultimately requires the acceptance of new, modern release assays; until then, the described parallel line model represents a significant and robust update for the current global influenza vaccine release assay.
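A parallel line model estimates relative potency from the horizontal shift between two parallel log-dose/response lines fitted with a common slope. A minimal sketch with invented data (a test sample constructed to be exactly twice as potent; these are not SRID measurements):

```python
# Parallel-line relative potency: fit a common slope to the reference
# and test log-dose/response lines; the horizontal shift between the
# parallel lines gives the potency ratio.  Data are invented so that
# the test sample is exactly twice as potent.
def parallel_line_shift(x, y_ref, y_test):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    def slope(y):
        ybar = sum(y) / n
        return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b = 0.5 * (slope(y_ref) + slope(y_test))   # pooled common slope
    a_ref = sum(y_ref) / n - b * xbar
    a_test = sum(y_test) / n - b * xbar
    return (a_test - a_ref) / b                # shift in log2(dose) units

x = [0.0, 1.0, 2.0, 3.0]                       # log2(dose)
y_ref = [10.0, 14.0, 18.0, 22.0]
y_test = [14.0, 18.0, 22.0, 26.0]              # same line, shifted by one doubling
shift = parallel_line_shift(x, y_ref, y_test)
relative_potency = 2.0 ** shift
print(f"estimated relative potency: {relative_potency:.2f}")  # 2.00
```

In a qualified assay the parallelism of the two fitted slopes is itself a validity criterion before the shift is interpreted as potency.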

  6. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

    Oza, Nikunji C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
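The one-pass constraint is handled in online bagging by presenting each arriving example to each base model k ~ Poisson(1) times, which approximates bootstrap resampling on a stream. A sketch of that idea with a deliberately trivial base learner (a running mean; the choice of learner is ours, not the paper's):

```python
import math
import random

# Online bagging sketch: each arriving example is shown to each base
# model k ~ Poisson(1) times, approximating bootstrap resampling in a
# single pass over the stream.  The base "learner" here is just a
# running mean, chosen for brevity.
random.seed(0)

def poisson1():
    # Knuth's method for Poisson(lambda = 1)
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

class RunningMean:
    def __init__(self):
        self.n, self.total = 0, 0.0
    def update(self, x, times=1):
        self.n += times
        self.total += times * x
    def value(self):
        return self.total / self.n if self.n else 0.0

ensemble = [RunningMean() for _ in range(25)]
stream = [random.gauss(5.0, 1.0) for _ in range(500)]
for x in stream:                     # one pass, no stored training set
    for model in ensemble:
        model.update(x, times=poisson1())

bagged = sum(m.value() for m in ensemble) / len(ensemble)
print(f"bagged estimate of the stream mean: {bagged:.2f}")
```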

  7. GPU-based, parallel-line, omni-directional integration of measured acceleration field to obtain the 3D pressure distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Cao; Katz, Joseph

    2016-11-01

    A PIV-based method to reconstruct the volumetric pressure field by direct integration of the 3D material acceleration has been developed. Extending the 2D virtual-boundary omni-directional method (Omni2D, Liu & Katz, 2013), the new 3D parallel-line omni-directional method (Omni3D) integrates the material acceleration along parallel lines aligned in multiple directions, with their angles set by a spherical virtual grid. The integration is parallelized on a Tesla K40c GPU, which reduced the computing time from three hours to one minute for a single realization. To validate its performance, the method is used to calculate the 3D pressure fields in isotropic turbulence and channel flow using the JHU DNS Databases (http://turbulence.pha.jhu.edu). Both integration of the DNS acceleration and of acceleration derived from synthetic 3D particles are tested. Results are compared to other methods, e.g. the solution of the pressure Poisson equation (PPE, Ghaemi et al., 2012) with Bernoulli-based Dirichlet boundary conditions, and the Omni2D method. The error in the Omni3D prediction is uniformly low, and its sensitivity to acceleration errors is local. It agrees with the PPE/Bernoulli prediction away from the Dirichlet boundary. The Omni3D method is also applied to experimental data obtained using tomographic PIV, and the results are correlated with deformation of a compliant wall. ONR.
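The line-integration step can be illustrated in 2-D: integrate a known pressure-gradient field along lines in two directions from the boundary and average the estimates (the real method uses many line directions on a spherical virtual grid). The analytic field and grid below are toy stand-ins for PIV acceleration data:

```python
# 2-D toy of parallel-line omni-directional pressure integration:
# integrate a "measured" pressure gradient along +x lines and +y lines
# from the boundary, then average the two estimates.  The analytic
# field below is an illustrative stand-in for PIV acceleration data.
N, h = 21, 0.05
xs = [i * h for i in range(N)]
p_true = [[x * x + y * y for x in xs] for y in xs]   # p[j][i], row j <-> y
dpdx = [[2.0 * x for x in xs] for _ in xs]
dpdy = [[2.0 * y for _ in xs] for y in xs]

# integrate along +x lines (trapezoid rule), anchored at the left boundary
est_x = [row[:] for row in p_true]
for j in range(N):
    for i in range(1, N):
        est_x[j][i] = est_x[j][i - 1] + 0.5 * h * (dpdx[j][i - 1] + dpdx[j][i])

# integrate along +y lines, anchored at the bottom boundary
est_y = [row[:] for row in p_true]
for i in range(N):
    for j in range(1, N):
        est_y[j][i] = est_y[j - 1][i] + 0.5 * h * (dpdy[j - 1][i] + dpdy[j][i])

# omni-directional step: average the per-direction estimates
p_avg = [[0.5 * (a + b) for a, b in zip(r1, r2)] for r1, r2 in zip(est_x, est_y)]
max_err = max(abs(p_avg[j][i] - p_true[j][i]) for j in range(N) for i in range(N))
print(f"max reconstruction error: {max_err:.2e}")
```

With noisy measured gradients, averaging over many integration directions is what keeps the error local rather than letting it accumulate along a single path.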

  8. StructBoost: Boosting Methods for Predicting Structured Output Variables.

    PubMed

    Chunhua Shen; Guosheng Lin; van den Hengel, Anton

    2014-10-01

    Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision. Inspired by structured support vector machines (SSVM), here we propose a new boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, our StructBoost generalizes standard boosting approaches such as AdaBoost, or LPBoost to structured learning. The resulting optimization problem of StructBoost is more challenging than SSVM in the sense that it may involve exponentially many variables and constraints. In contrast, for SSVM one usually has an exponential number of constraints and a cutting-plane method is used. In order to efficiently solve StructBoost, we formulate an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a range of problems such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking and learning conditional random field parameters for image segmentation.

  9. Exercise boosts immune response.

    PubMed

    Sander, Ruth

    2012-06-29

    Ageing is associated with a decline in normal functioning of the immune system described as 'immunosenescence'. This contributes to poorer vaccine response and increased incidence of infection and malignancy seen in older people. Regular exercise can enhance vaccination response, increase T-cells and boost the function of the natural killer cells in the immune system. Exercise also lowers levels of the inflammatory cytokines that cause the 'inflamm-ageing' that is thought to play a role in conditions including cardiovascular disease; type 2 diabetes; Alzheimer's disease; osteoporosis and some cancers.

  10. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    NASA Astrophysics Data System (ADS)

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter-Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  11. Observation of hole injection boost via two parallel paths in Pentacene thin-film transistors by employing Pentacene: 4, 4″-tris(3-methylphenylphenylamino) triphenylamine: MoO{sub 3} buffer layer

    SciTech Connect

    Yan, Pingrui; Liu, Ziyang; Liu, Dongyang; Wang, Xuehui; Yue, Shouzhen; Zhao, Yi; Zhang, Shiming

    2014-11-01

    Pentacene organic thin-film transistors (OTFTs) were prepared by introducing 4, 4″-tris(3-methylphenylphenylamino) triphenylamine (m-MTDATA): MoO{sub 3}, Pentacene: MoO{sub 3}, and Pentacene: m-MTDATA: MoO{sub 3} as buffer layers. These OTFTs all showed significant performance improvement compared to the reference device. Significantly, we observe that the device employing the Pentacene: m-MTDATA: MoO{sub 3} buffer layer can take advantage both of the charge-transfer complexes formed in the m-MTDATA: MoO{sub 3} device and of the suitable energy-level alignment present in the Pentacene: MoO{sub 3} device. These two parallel paths led to a high mobility, low threshold voltage, and contact resistance of 0.72 cm{sup 2}/V s, −13.4 V, and 0.83 kΩ at V{sub ds} = − 100 V. This work enriches the understanding of MoO{sub 3}-doped organic materials for applications in OTFTs.

  12. Analytic boosted boson discrimination

    DOE PAGES

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  13. Analytic boosted boson discrimination

    SciTech Connect

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  14. Boosted Beta Regression

    PubMed Central

    Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas

    2013-01-01

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706

  15. Collisional Line Mixing in Parallel and Perpendicular Bands of Linear Molecules by a Non-Markovian Approach

    NASA Astrophysics Data System (ADS)

    Buldyreva, Jeanna

    2013-06-01

    Reliable modeling of radiative transfer in planetary atmospheres requires accounting for collisional line mixing effects in regions of closely spaced vibrotational lines as well as in the spectral wings. Because of the high CPU cost of calculations from ab initio potential energy surfaces (when available), the relaxation matrix describing the influence of collisions is usually built by dynamical scaling laws, such as the Energy-Corrected Sudden (ECS) law. Theoretical approaches currently used for calculation of absorption near the band center are based on the impact approximation (Markovian collisions without memory effects), and wings are modeled by introducing empirical parameters [1,2]. Operating with the traditional non-symmetric metric in the Liouville space, these approaches need corrections of the ECS-modeled relaxation matrix elements ("relaxation times" and a "renormalization procedure") in order to ensure the fundamental relations of detailed balance and the sum rules. We present an extension to the infrared absorption case of the non-Markovian ECS-type approach previously developed [3] for rototranslational Raman scattering spectra of linear molecules. Owing to the specific choice of a symmetrized metric in the Liouville space, the relaxation matrix is corrected for initial bath-molecule correlations and satisfies non-Markovian sum rules and detailed balance. A few standard ECS parameters determined by fitting to experimental linewidths of the isotropic Q-branch enable i) retrieval of these isolated-line parameters for other spectroscopies (IR absorption and anisotropic Raman scattering); ii) reproduction of experimental intensities of these spectra. Besides including vibrational angular momenta in the IR bending shapes, Coriolis effects are also accounted for. The efficiency of the method is demonstrated on OCS-He and CO_2-CO_2 spectra up to 300 and 60 atm, respectively. F. Niro, C. Boulet, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 88, 483

  16. Line Mixing in Parallel and Perpendicular Bands of CO2: A Further Test of the Refined Robert-Bonamy Formalism

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Qiancheng; Tipping, R. H.

    2015-01-01

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modeling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modeling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  17. Line mixing in parallel and perpendicular bands of CO2: A further test of the refined Robert-Bonamy formalism.

    PubMed

    Boulet, C; Ma, Q; Tipping, R H

    2015-09-28

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modelling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modelling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  18. Quantification of crotamine, a small basic myotoxin, in South American rattlesnake (Crotalus durissus terrificus) venom by enzyme-linked immunosorbent assay with parallel-lines analysis.

    PubMed

    Oguiur, N; Camargo, M E; da Silva, A R; Horton, D S

    2000-03-01

    Intraspecific variation in Crotalus durissus terrificus venom composition was studied in relation to crotamine activity. Crotamine induces paralysis in extension of the hind legs of mice and myonecrosis in skeletal muscle cells. To determine whether the venom of crotamine-negative rattlesnakes contains a quantity of myotoxin incapable of inducing paralysis, we developed a very sensitive immunological assay, an enzyme-linked immunosorbent assay (ELISA), capable of detecting 0.6 ng of purified crotamine. The parallel-lines analysis of the ELISA data proved useful because it verifies the reliability of the experimental conditions. The amount of myotoxin in crotamine-positive venoms varied, but was never less than 0.1 mg of crotamine per mg of venom. Crotamine could not be detected in crotamine-negative venom, even at high venom concentrations.

  19. An open-source, massively parallel code for non-LTE synthesis and inversion of spectral lines and Zeeman-induced Stokes profiles

    NASA Astrophysics Data System (ADS)

    Socas-Navarro, H.; de la Cruz Rodríguez, J.; Asensio Ramos, A.; Trujillo Bueno, J.; Ruiz Cobo, B.

    2015-05-01

    With the advent of a new generation of solar telescopes and instrumentation, interpreting chromospheric observations (in particular, spectropolarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is written entirely in Fortran 90/2003 and complies strictly with the ANSI standards to ensure maximum compatibility and portability. It is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities. The source code is currently hosted at the following repository: https://github.com/hsocasnavarro/NICOLE

  20. Multifunctionalization of cetuximab with bioorthogonal chemistries and parallel EGFR profiling of cell-lines using imaging, FACS and immunoprecipitation approaches.

    PubMed

    Reschke, Melanie L; Uprety, Rajendra; Bodhinayake, Imithri; Banu, Matei; Boockvar, John A; Sauve, Anthony A

    2014-12-01

    The ability to derivatize antibodies is currently limited by the chemical structure of antibodies as polypeptides. Modern methods of bioorthogonal and biocompatible chemical modifications could make antibody functionalization more predictable and easier, without compromising the functions of the antibody. To explore this concept, we modified the well-known anti-epidermal growth factor receptor (EGFR) drug, cetuximab (Erbitux®), with 5-azido-2-nitro-benzoyl (ANB) modifications by optimization of an acylation protocol. We then show that the resulting ANB-cetuximab can be reliably modified with dyes (TAMRA and carboxyrhodamine) or a novel synthesized cyclooctyne modified biotin. The resulting dye- and biotin-modified cetuximabs were then tested across several assay platforms with several cell lines including U87, LN229, F98EGFR, F98WT and HEK293 cells. The assay platforms included fluorescence microscopy, FACS and biotin-avidin based immunoprecipitation methods. The modified antibody performs consistently in all of these assay platforms, reliably determining relative abundances of EGFR expression on EGFR expressing cells (LN229 and F98EGFR) and failing to cross react with weak to negative EGFR expressing cells (U87, F98WT and HEK293). The ease of achieving diverse and assay relevant functionalizations as well as the consequent rapid construction of highly correlated antigen expression data sets highlights the power of bioorthogonal and biocompatible methods to conjugate macromolecules. These data provide a proof of concept for a multifunctionalization strategy that leverages the biochemical versatility and antigen specificity of antibodies.

  1. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGES

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  2. PORTA: A three-dimensional multilevel radiative transfer code for modeling the intensity and polarization of spectral lines with massively parallel computers

    NASA Astrophysics Data System (ADS)

    Štěpán, Jiří; Trujillo Bueno, Javier

    2013-09-01

    The interpretation of the intensity and polarization of the spectral line radiation produced in the atmosphere of the Sun and of other stars requires solving a radiative transfer problem that can be very complex, especially when the main interest lies in modeling the spectral line polarization produced by scattering processes and the Hanle and Zeeman effects. One of the difficulties is that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of the generation and transfer of polarized radiation in realistic three-dimensional (3D) stellar atmospheric models. Here we present PORTA, an efficient multilevel radiative transfer code we have developed for the simulation of the spectral line polarization caused by scattering processes and the Hanle and Zeeman effects in 3D models of stellar atmospheres. The numerical method of solution is based on the non-linear multigrid iterative method and on a novel short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bézier interpolation. Therefore, with PORTA the computing time needed to obtain at each spatial grid point the self-consistent values of the atomic density matrix (which quantifies the excitation state of the atomic system) scales linearly with the total number of grid points. Another crucial feature of PORTA is its parallelization strategy, which allows us to speed up the numerical solution of complicated 3D problems by several orders of magnitude with respect to sequential radiative transfer approaches, given its excellent linear scaling with the number of available processors. The PORTA code can also be conveniently applied to solve the simpler 3D radiative transfer problem of unpolarized radiation in multilevel systems.

  3. Lines

    ERIC Educational Resources Information Center

    Mires, Peter B.

    2006-01-01

    National Geography Standards for the middle school years generally stress the teaching of latitude and longitude. There are many creative ways to explain the great grid that encircles our planet, but the author has found that students in his college-level geography courses especially enjoy human-interest stories associated with lines of latitude…

  4. Long-term effectiveness of initiating non-nucleoside reverse transcriptase inhibitor- versus ritonavir-boosted protease inhibitor-based antiretroviral therapy: implications for first-line therapy choice in resource-limited settings

    PubMed Central

    Lima, Viviane D; Hull, Mark; McVea, David; Chau, William; Harrigan, P Richard; Montaner, Julio SG

    2016-01-01

    Introduction In many resource-limited settings, combination antiretroviral therapy (cART) failure is diagnosed clinically or immunologically. As such, there is a high likelihood that patients may stay on a virologically failing regimen for a substantial period of time. Here, we compared the long-term impact of initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)- versus boosted protease inhibitor (bPI)-based cART in British Columbia (BC), Canada. Methods We followed prospectively 3925 ART-naïve patients who started NNRTIs (N=1963, 50%) or bPIs (N=1962; 50%) from 1 January 2000 until 30 June 2013 in BC. At six months, we assessed whether patients virologically failed therapy (a plasma viral load (pVL) >50 copies/mL), and we stratified them based on the pVL at the time of failure ≤500 versus >500 copies/mL. We then followed these patients for another six months and calculated their probability of achieving subsequent viral suppression (pVL <50 copies/mL twice consecutively) and of developing drug resistance. These probabilities were adjusted for fixed and time-varying factors, including cART adherence. Results At six months, virologic failure rates were 9.5 and 14.3 cases per 100 person-months for NNRTI and bPI initiators, respectively. NNRTI initiators who failed with a pVL ≤500 copies/mL had a 16% higher probability of achieving subsequent suppression at 12 months than bPI initiators (0.81 (25th–75th percentile 0.75–0.83) vs. 0.72 (0.61–0.75)). However, if failing NNRTI initiators had a pVL >500 copies/mL, they had a 20% lower probability of suppressing at 12 months than pVL-matched bPI initiators (0.37 (0.29–0.45) vs. 0.46 (0.38–0.54)). In terms of evolving HIV drug resistance, those who failed on NNRTI performed worse than bPI in all scenarios, especially if they failed with a viral load >500 copies/mL. Conclusions Our results show that patients who virologically failed at six months on NNRTI and continued on the same regimen had a

  5. At-line nanofractionation with parallel mass spectrometry and bioactivity assessment for the rapid screening of thrombin and factor Xa inhibitors in snake venoms.

    PubMed

    Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen

    2016-02-01

    Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, the bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological parts in this approach are linked using at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze drying of the plates. The nanofractionation and bioassay conditions were optimized for maintaining LC resolution and achieving good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of observed bioactivity to peptide/protein masses. This resulted in identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. These protease fingerprint chromatograms were found to be similar for evolutionary closely related species and as such might serve as generic snake protease

  6. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  7. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data used previously, both as originally supplied and with added label noise: a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
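
    The distribution-construction step described above can be sketched as follows. This is a minimal illustration of AdaBoost-style reweighting plus the averaging idea, not the exact published AveBoost update; the function names and the precise averaging rule are assumptions of this sketch.

    ```python
    import numpy as np

    def adaboost_distribution(mistakes, d):
        """One AdaBoost-style reweighting step: keep the weight of examples
        the previous base model got wrong, shrink the rest, renormalize.
        `mistakes` is a 0/1 vector (1 = misclassified), `d` a distribution."""
        err = float(np.dot(d, mistakes))
        err = min(max(err, 1e-12), 1.0 - 1e-12)        # guard degenerate error
        beta = err / (1.0 - err)                        # AdaBoost's factor (< 1 when err < 0.5)
        d_new = d * np.where(mistakes == 1, 1.0, beta)  # shrink correctly-classified weights
        return d_new / d_new.sum()

    def aveboost_distribution(mistake_history, d0):
        """Sketch of the averaging idea in the abstract: form one AdaBoost-style
        distribution per previous model's mistake vector, then average them."""
        dists = [adaboost_distribution(m, d0) for m in mistake_history]
        avg = np.mean(dists, axis=0)
        return avg / avg.sum()
    ```

    With a uniform starting distribution over four examples and one mistake on the first example, the reweighted distribution concentrates on that example while remaining normalized.
    
    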

  8. Can you boost your metabolism?

    MedlinePlus

    ... can boost your metabolism. Eating foods like green tea, caffeine, or hot chili peppers will not help ... Randell RK, Jeukendrup AE. The effect of green tea extract on fat oxidation at rest and during ...

  9. Improved semi-supervised online boosting for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Yicui; Qi, Lin; Tan, Shukun

    2016-10-01

    The advantage of an online semi-supervised boosting method, which treats object tracking as a classification problem, is that it trains a binary classifier from labeled and unlabeled examples. Appropriate object features are selected based on real-time changes in the object. However, the online semi-supervised boosting method faces one key problem: traditional self-training, which uses the classification results to update the classifier itself, often leads to drifting or tracking failure due to the error accumulated during each update of the tracker. To overcome these disadvantages of semi-supervised online boosting for object tracking, the contribution of this paper is an improved online semi-supervised boosting method in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, the classification is analyzed by the P-N constraints, which verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In the experiments, we demonstrate real-time tracking with our tracker on several challenging test sequences, where it outperforms other related online tracking methods and achieves promising tracking performance.

  10. Early Boost and Slow Consolidation in Motor Skill Learning

    ERIC Educational Resources Information Center

    Hotermans, Christophe; Peigneux, Philippe; de Noordhout, Alain Maertens; Moonen, Gustave; Maquet, Pierre

    2006-01-01

    Motor skill learning is a dynamic process that continues covertly after training has ended and eventually leads to delayed increments in performance. Current theories suggest that this off-line improvement takes time and appears only after several hours. Here we show an early transient and short-lived boost in performance, emerging as early as…

  11. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  12. Resolving boosted jets with XCone

    NASA Astrophysics Data System (ADS)

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm [1] smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies — dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs — that demonstrate the physics applications of XCone over a wide kinematic range.

  13. Representing Arbitrary Boosts for Undergraduates.

    ERIC Educational Resources Information Center

    Frahm, Charles P.

    1979-01-01

    Presented is a derivation for the matrix representation of an arbitrary boost, a Lorentz transformation without rotation, suitable for undergraduate students with modest backgrounds in mathematics and relativity. The derivation uses standard vector and matrix techniques along with the well-known form for a special Lorentz transformation. (BT)
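
    The result of such a derivation, the well-known matrix for a pure boost with arbitrary velocity, can be written down directly. A sketch in NumPy, assuming the (+,-,-,-) metric signature and beta = v/c as a 3-vector:

    ```python
    import numpy as np

    def boost_matrix(beta):
        """4x4 matrix of a pure Lorentz boost (no rotation) with velocity
        beta = v/c (a 3-vector): gamma in the time-time slot, -gamma*beta in
        the time-space slots, and identity plus a rank-1 piece along the
        boost direction in the spatial block."""
        beta = np.asarray(beta, dtype=float)
        b2 = beta @ beta
        if b2 == 0.0:
            return np.eye(4)
        gamma = 1.0 / np.sqrt(1.0 - b2)
        L = np.empty((4, 4))
        L[0, 0] = gamma
        L[0, 1:] = L[1:, 0] = -gamma * beta
        L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(beta, beta) / b2
        return L
    ```

    A quick check that the construction is a Lorentz transformation is to verify that it preserves the Minkowski metric, and that for a boost along x it reduces to the familiar special form.
    
    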

  14. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  15. High Efficient Universal Buck Boost Solar Array Regulator SAR Module

    NASA Astrophysics Data System (ADS)

    Kimmelmann, Stefan; Knorr, Wolfgang

    2014-08-01

    The highly efficient universal Buck Boost Solar Array Regulator (SAR) module concept is applicable for a wide range of input and output voltages. The single-point-failure-tolerant SAR module contains 3 power converters for the transfer of the SAR power to the battery-dominated power bus. The converters operate in parallel in a 2-out-of-3 redundancy and are driven by two different controllers. The output power of one module can be adjusted up to 1 kW depending on the requirements. The maximum power point tracker (MPPT) is placed on a separate small printed circuit board and can be used if no external tracker signal is delivered. Depending on the mode and load conditions, an efficiency of more than 97% is achievable. Stable control performance is achieved by implementing magnetic current sense detection. The sensed power coil current is used in Buck and Boost control mode.

  16. Reweighting with Boosted Decision Trees

    NASA Astrophysics Data System (ADS)

    Rogozhnikov, Alex

    2016-10-01

    Machine learning tools are commonly used in modern high energy physics (HEP) experiments. Different models, such as boosted decision trees (BDT) and artificial neural networks (ANN), are widely used in analyses and even in the software triggers [1]. In most cases, these are classification models used to select the “signal” events from data. Monte Carlo simulated events typically take part in training of these models. While the results of the simulation are expected to be close to real data, in practical cases there is notable disagreement between simulated and observed data. In order to use available simulation in training, corrections must be introduced to generated data. One common approach is reweighting: assigning weights to the simulated events. We present a novel method of event reweighting based on boosted decision trees. The problem of checking the quality of the reweighting step in analyses is also discussed.

  17. Interferometric resolution boosting for spectrographs

    SciTech Connect

    Erskine, D J; Edelstein, J

    2004-05-25

    Externally dispersed interferometry (EDI) is a technique for enhancing the performance of spectrographs for wide-bandwidth high-resolution spectroscopy and Doppler radial velocimetry. By placing a small angle-independent interferometer near the slit of a spectrograph, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducials creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to detectably low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry. Previous demonstrations of ~2.5x resolution boost used an interferometer having a single fixed delay. We report new data indicating ~6x Gaussian resolution boost (140,000 from a spectrograph with 25,000 native resolving power), taken by using multiple exposures at widely different interferometer delays.
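
    The heterodyning effect described above is easy to demonstrate numerically: multiplying a high-frequency "spectral" modulation by a sinusoidal fiducial produces a moiré component at the low difference frequency. A toy sketch, with arbitrary assumed frequencies rather than instrument parameters:

    ```python
    import numpy as np

    n = 4096
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    f_spec, f_fid = 200.0, 190.0             # cycles over the window (assumed values)
    spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * f_spec * x)
    fiducial = 1.0 + np.cos(2 * np.pi * f_fid * x)
    moire = spectrum * fiducial              # product embeds the fiducials on the spectrum

    power = np.abs(np.fft.rfft(moire))
    power[0] = 0.0                           # drop the DC term
    low_band = power[:50]                    # only frequencies a blurred detector keeps
    beat = int(np.argmax(low_band))
    # the strongest low-frequency component sits at |f_spec - f_fid|
    ```

    The product contains terms at the sum and difference frequencies; only the difference term (here 10 cycles) falls in the low band that survives the spectrograph's blurring, which is the heterodyning the abstract describes.
    
    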

  18. Online boosting for vehicle detection.

    PubMed

    Chang, Wen-Chung; Cho, Chih-Wei

    2010-06-01

    This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.

  19. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A.

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. The book presents the notion of a parallel machine language.

  20. Electric rockets get a boost

    SciTech Connect

    Ashley, S.

    1995-12-01

    This article reports that xenon-ion thrusters are expected to replace conventional chemical rockets in many nonlaunch propulsion tasks, such as controlling satellite orbits and sending space probes on long exploratory missions. The space age dawned some four decades ago with the arrival of powerful chemical rockets that could propel vehicles fast enough to escape the grasp of earth's gravity. Today, chemical rocket engines still provide the only means to boost payloads into orbit and beyond. The less glamorous but equally important job of moving vessels around in space, however, may soon be assumed by a fundamentally different rocket engine technology that has long been in development: electric propulsion.

  1. Where boosted significances come from

    NASA Astrophysics Data System (ADS)

    Plehn, Tilman; Schichtel, Peter; Wiegand, Daniel

    2014-03-01

    In an era of increasingly advanced experimental analysis techniques it is crucial to understand which phase space regions contribute to a signal extraction from backgrounds. Based on the Neyman-Pearson lemma we compute the maximum significance for a signal extraction as an integral over phase space regions. We then study to what degree boosted Higgs strategies benefit ZH and tt¯H searches and which transverse momenta of the Higgs are most promising. We find that Higgs and top taggers are the appropriate tools, but would profit from a targeted optimization towards smaller transverse momenta. MadMax is available as an add-on to MadGraph 5.
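
    In the large-background Gaussian limit, the Neyman-Pearson maximum significance reduces to an integral of s²/b over phase space, which in binned form becomes a sum over bins. A toy sketch under that assumption (not the MadMax implementation), contrasted with the naive counting significance obtained by merging all bins:

    ```python
    import numpy as np

    def max_significance(s, b):
        """Binned Gaussian-limit maximum significance:
        sigma_max = sqrt(sum_i s_i^2 / b_i), a proxy for the
        phase-space integral discussed in the abstract."""
        s, b = np.asarray(s, float), np.asarray(b, float)
        return float(np.sqrt(np.sum(s * s / b)))

    def naive_significance(s, b):
        """Counting significance s / sqrt(b) after merging all bins."""
        return float(np.sum(s) / np.sqrt(np.sum(b)))
    ```

    By the Cauchy-Schwarz inequality the binned expression is never smaller than the merged one, which is why keeping phase space regions separate can only help a significance-based analysis.
    
    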

  2. Study on reduction in electric field, charged voltage, ion current and ion density under HVDC transmission lines by parallel shield wires

    SciTech Connect

    Amano, Y.; Sunaga, Y.

    1989-04-01

    An important problem in the design and operation of HVDC transmission lines is to reduce electrical field effects such as ion-flow electrification of objects, electric field, ion current and ion density at ground level in the vicinity of HVDC lines. Several models of shield wire were tested with the Shiobara HVDC test line. The models include typical stranded wires that are generally used to reduce field effects at ground level, neutral conductors placed at lower parts of the DC line, and an "earth corona model" that intentionally cancels positive or negative ions by generating ions of opposite polarity to those flowing into the wire. This report describes the experimental results of the effects of these shield wires and a method to predict shielding effects.

  3. Stochastic approximation boosting for incomplete data problems.

    PubMed

    Sexton, Joseph; Laake, Petter

    2009-12-01

    Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.

  4. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
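
    The recursion behind this interpretation is simple to state: F_m = F_{m-1} + S(y - F_{m-1}), i.e. repeatedly smooth the current residual and add the correction back. A minimal sketch with a ridge-type linear smoother whose eigenvalues lie in [0, 1), so the iteration converges on the training data; the kernel width, penalty, and toy data are assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 60)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)

    # A linear smoother with eigenvalues in [0, 1): the ridge "hat" matrix
    # S = K (K + alpha I)^{-1} built from a Gaussian kernel.
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
    S = K @ np.linalg.inv(K + 5.0 * np.eye(60))

    # L2 Boosting as recursive bias correction: smooth the residual, repeat.
    F = np.zeros_like(y)
    residual_norms = []
    for m in range(20):
        F = F + S @ (y - F)
        residual_norms.append(np.linalg.norm(y - F))
    # the training residual shrinks because the spectrum of I - S
    # lies strictly inside the unit interval for this smoother
    ```

    For smoothers where I - S has spectral radius greater than one, the same iteration diverges, which matches the divergent-sequence examples mentioned in the abstract and motivates the stopping rule.
    
    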

  5. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
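
    The pipe-count estimate at the end of the abstract is directly computable; a small sketch (rounding up to a whole number of pipes is our addition, since the formula itself returns a real value):

    ```python
    import math

    def pipes_needed(R, r, laminar=True):
        """Number of small pipes of radius r needed to match the oil flux of
        one large pipe of radius R: N = (R/r)**alpha, with alpha = 4 for
        laminar lubricating water flow and alpha = 19/7 for turbulent flow."""
        alpha = 4.0 if laminar else 19.0 / 7.0
        return math.ceil((R / r) ** alpha)
    ```

    Halving the pipe radius therefore costs 16 pipes in the laminar case but only 7 in the turbulent case, which illustrates how strongly the flow regime affects the trade-off.
    
    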

  6. Comparing Two Non-parallel Regression Lines with the Parametric Alternative to Analysis of Covariance Using SPSS-X or SAS--the Johnson-Neyman Technique.

    ERIC Educational Resources Information Center

    Karpman, Mitchell

    1986-01-01

    The Johnson-Neyman (J-N) technique is a parametric alternative to analysis of covariance that permits nonparallel regression lines. This article presents computer programs for the J-N technique using the transformational languages of SPSS-X and SAS. The programs are designed for two groups and one covariate. (Author/JAZ)

  7. Series Connected Buck-Boost Regulator

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G. (Inventor)

    2006-01-01

    A Series Connected Buck-Boost Regulator (SCBBR) that switches only a fraction of the input power, resulting in relatively high efficiencies. The SCBBR has multiple operating modes including a buck, a boost, and a current limiting mode, so that an output voltage of the SCBBR ranges from below the source voltage to above the source voltage.
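
    The abstract does not detail the series topology, but the buck and boost modes it names have well-known ideal conversion ratios; as background, a sketch assuming ideal, lossless converters (the function names are ours):

    ```python
    def buck_ratio(duty):
        """Ideal buck stage: V_out / V_in = D, steps the voltage down (0 <= D < 1)."""
        return duty

    def boost_ratio(duty):
        """Ideal boost stage: V_out / V_in = 1 / (1 - D), steps the voltage up."""
        return 1.0 / (1.0 - duty)

    # Spanning outputs from below to above the source voltage, as the SCBBR
    # abstract describes, requires both modes: buck for V_out < V_in and
    # boost for V_out > V_in.
    ```

    A regulator that only switches the series element handling the voltage difference, rather than the full input power, is what lets the SCBBR reach the relatively high efficiencies claimed in the abstract.
    
    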

  8. Boost-phase discrimination research

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.; Feiereisen, William J.

    1993-01-01

The final report describes the combined work of the Computational Chemistry and Aerothermodynamics branches within the Thermosciences Division at NASA Ames Research Center directed at understanding the signatures of shock-heated air. Considerable progress was made in determining accurate transition probabilities for the important band systems of NO that account for much of the emission in the ultraviolet region. Research carried out under this project showed that in order to reproduce the observed radiation from the bow shock region of missiles in their boost phase it is necessary to include the Burnett terms in the constitutive equation, account for the non-Boltzmann energy distribution, correctly model the NO formation and rotational excitation process, and use accurate transition probabilities for the NO band systems. This work resulted in significant improvements in the computer code NEQAIR that models both the radiation and fluid dynamics in the shock region.

  9. GPU-based parallel clustered differential pulse code modulation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Li, Wenze; Kong, Wanqiu

    2015-10-01

Hyperspectral remote sensing technology is widely used in marine remote sensing, geological exploration, and atmospheric and environmental remote sensing. Owing to its rapid development, the resolution of hyperspectral images has increased greatly, and so has their data size. In order to reduce storage and transmission costs, lossless compression of hyperspectral images has become an important research topic. In recent years, many algorithms have been proposed to reduce the redundancy between different spectra. Among them, the most classical and extensible is the Clustered Differential Pulse Code Modulation (C-DPCM) algorithm. The algorithm has three parts: first, cluster all spectral lines and train linear predictors for each band; second, use these predictors to predict pixels and obtain the residual image by subtracting the predicted image from the original image; finally, encode the residual image. The process of calculating the predictors, however, is time-consuming. In order to improve the processing speed, we propose a parallel C-DPCM based on CUDA (Compute Unified Device Architecture) with GPU. In recent years, general-purpose computing on GPUs has been greatly developed, with GPU capacity improving rapidly through increases in the number of processing units and storage control units. CUDA is a parallel computing platform and programming model created by NVIDIA. It gives developers direct access to the virtual instruction set and memory of the parallel computational elements in GPUs. Our core idea is to perform the calculation of the predictors in parallel. By respectively adopting global memory, shared memory and register memory, we finally obtain a decent speedup.
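The costly step the abstract describes, fitting one linear predictor per band, can be sketched as below. This is a hedged illustration in plain NumPy rather than the authors' CUDA kernels; the function names, the two-band predictor order, and the (bands, pixels) data layout are all assumptions. The key point is that each band's least-squares fit is independent of the others, which is exactly what makes the predictor calculation parallelizable:

```python
import numpy as np

def train_predictors(cube, order=2):
    """cube: (bands, pixels) array. For each band b >= order, fit linear
    coefficients predicting band b from the `order` preceding bands.
    Each fit is independent, so all of them can run in parallel."""
    coeffs = {}
    for b in range(order, cube.shape[0]):
        X = cube[b - order:b].T                       # (pixels, order)
        coeffs[b], *_ = np.linalg.lstsq(X, cube[b], rcond=None)
    return coeffs

def residual(cube, coeffs, order=2):
    """Residual image: original minus predicted, for each predicted band."""
    res = cube.astype(float).copy()
    for b, c in coeffs.items():
        res[b] = cube[b] - cube[b - order:b].T @ c
    return res
```

In the C-DPCM scheme the residual image is then entropy-coded; since the predictors capture the inter-band redundancy, the residuals are small and compress well.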

  10. Orthodontics Align Crooked Teeth and Boost Self-Esteem

    MedlinePlus


  11. Riemann curvature of a boosted spacetime geometry

    NASA Astrophysics Data System (ADS)

    Battista, Emmanuele; Esposito, Giampiero; Scudellaro, Paolo; Tramontano, Francesco

    2016-10-01

The ultrarelativistic boosting procedure had been applied in the literature to map the metric of Schwarzschild-de Sitter spacetime into a metric describing de Sitter spacetime plus a shock-wave singularity located on a null hypersurface. This paper evaluates the Riemann curvature tensor of the boosted Schwarzschild-de Sitter metric by means of numerical calculations, which make it possible to reach the ultrarelativistic regime gradually by letting the boost velocity approach the speed of light. Thus, for the first time in the literature, the singular limit of curvature, through Dirac’s δ distribution and its derivatives, is numerically evaluated for this class of spacetimes. Moreover, the analysis of the Kretschmann invariant and the geodesic equation shows that the spacetime possesses a “scalar curvature singularity” within a 3-sphere and it is possible to define what we here call a “boosted horizon”, a sort of elastic wall where all particles are surprisingly pushed away, as numerical analysis demonstrates. This seems to suggest that such “boosted geometries” are ruled by a sort of “antigravity effect”, since all geodesics seem to refuse to enter the “boosted horizon” and are “reflected” by it, even though their initial conditions are aimed at driving the particles toward the “boosted horizon” itself. Eventually, the equivalence with the coordinate shift method is invoked in order to demonstrate that all δ² terms appearing in the Riemann curvature tensor give vanishing contribution in distributional sense.

  12. Rapid bioanalysis of vancomycin in serum and urine by high-performance liquid chromatography tandem mass spectrometry using on-line sample extraction and parallel analytical columns.

    PubMed

    Cass, R T; Villa, J S; Karr, D E; Schmidt, D E

    2001-01-01

A novel high-performance liquid chromatography tandem mass spectrometry (LC/MS/MS) method is described for the determination of vancomycin in serum and urine. After the addition of internal standard (teicoplanin), serum and urine samples were directly injected onto an HPLC system consisting of an extraction column and dual analytical columns. The columns are plumbed through two switching valves. A six-port valve directs extraction column effluent either to waste or to an analytical column. A ten-port valve simultaneously permits equilibration of one analytical column while the other is used for sample analysis. Thus, off-line analytical column equilibration does not require mass spectrometer time, freeing the detector for increased sample throughput. The on-line sample extraction step takes 15 seconds, followed by gradient chromatography taking another 90 seconds. With minimal sample pretreatment, the method is both simple and fast. This system has been used to successfully develop a validated positive-ion electrospray bioanalytical method for the quantitation of vancomycin. Detection of vancomycin was accurate and precise, with a limit of detection of 1 ng/mL in serum and urine. The calibration curves for vancomycin in rat, dog and primate were linear in a concentration range of 0.001-10 microg/mL for serum and urine. This method has been successfully applied to determine the concentration of vancomycin in rat, dog and primate serum and urine samples from pharmacokinetic and urinary excretion studies.

  13. Boosting Wigner's nj-symbols

    NASA Astrophysics Data System (ADS)

    Speziale, Simone

    2017-03-01

We study the SL(2,ℂ) Clebsch-Gordan coefficients appearing in the Lorentzian EPRL spin foam amplitudes for loop quantum gravity. We show how the amplitudes decompose into SU(2) nj-symbols at the vertices and integrals over boosts at the edges. The integrals define edge amplitudes that can be evaluated analytically, using and adapting results in the literature, leading to a pure state sum model formulation. This procedure introduces virtual representations which, in a manner reminiscent of virtual momenta in Feynman amplitudes, are off-shell of the simplicity constraints present in the theory, but with integrands that peak at the on-shell values. We point out some properties of the edge amplitudes which are helpful for numerical and analytical evaluations of spin foam amplitudes, and suggest among other things a simpler model useful for calculations of certain lowest order amplitudes. As an application, we estimate the large spin scaling behaviour of the simpler model, on a closed foam with all 4-valent edges and Euler characteristic χ, to be N^(χ - 5E + V/2). The paper contains a review and an extension of the results on SL(2,ℂ) Clebsch-Gordan coefficients among unitary representations of the principal series that can be useful beyond their application to quantum gravity considered here.

  14. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-12-31

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  15. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-01-01

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  16. Boosting Manufacturing through Modular Chemical Process Intensification

    SciTech Connect

    2016-12-09

    Manufacturing USA's Rapid Advancement in Process Intensification Deployment Institute will focus on developing breakthrough technologies to boost domestic energy productivity and energy efficiency by 20 percent in five years through manufacturing processes.

  17. Boosting Manufacturing through Modular Chemical Process Intensification

    ScienceCinema

    None

    2017-01-06

    Manufacturing USA's Rapid Advancement in Process Intensification Deployment Institute will focus on developing breakthrough technologies to boost domestic energy productivity and energy efficiency by 20 percent in five years through manufacturing processes.

  18. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    SciTech Connect

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  19. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence.

    PubMed

    Vadakkan, Kunjumon I

    2011-01-01

The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between the postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation - namely, the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation, features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of the inter-postsynaptic functional LINKs to meet the expectations of Minsky's K-lines, the basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system.

  20. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence

    PubMed Central

    Vadakkan, Kunjumon I.

    2011-01-01

The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between the postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation – namely, the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation, features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of the inter-postsynaptic functional LINKs to meet the expectations of Minsky’s K-lines, the basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system. PMID:21845180

  1. Centaur liquid oxygen boost pump vibration test

    NASA Technical Reports Server (NTRS)

    Tang, H. M.

    1975-01-01

    The Centaur LOX boost pump was subjected to both the simulated Titan Centaur proof flight and confidence demonstration vibration test levels. For each test level, both sinusoidal and random vibration tests were conducted along each of the three orthogonal axes of the pump and turbine assembly. In addition to these tests, low frequency longitudinal vibration tests for both levels were conducted. All tests were successfully completed without damage to the boost pump.

  2. Boosted Jets at the LHC

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew

    2015-04-01

Jets are collimated streams of high-energy particles ubiquitous at any particle collider experiment and serve as proxies for the production of elementary particles at short distances. As the Large Hadron Collider at CERN continues to extend its reach to ever higher energies and luminosities, an increasingly important aspect of any particle physics analysis is the study and identification of jets, electroweak bosons, and top quarks with large Lorentz boosts. In addition to providing a unique insight into potential new physics at the tera-electron volt energy scale, high energy jets are a sensitive probe of emergent phenomena within the Standard Model of particle physics and can teach us an enormous amount about quantum chromodynamics itself. Jet physics is also invaluable for lower-level experimental issues including triggering and background reduction. It is especially important for the removal of pile-up, which is radiation produced by secondary proton collisions that contaminates every hard proton collision event in the ATLAS and CMS experiments at the Large Hadron Collider. In this talk, I will review the myriad ways that jets and jet physics are being exploited at the Large Hadron Collider. This will include a historical discussion of jet algorithms and the requirements that these algorithms must satisfy to be well-defined theoretical objects. I will review how jets are used in searches for new physics and ways in which the substructure of jets is being utilized for discriminating backgrounds from both Standard Model and potential new physics signals. Finally, I will discuss how jets are broadening our knowledge of quantum chromodynamics and how particular measurements performed on jets manifest the universal dynamics of weakly-coupled conformal field theories.

  3. Aerodynamics of a turbojet-boosted launch vehicle concept

    NASA Technical Reports Server (NTRS)

    Small, W. J.; Riebe, G. D.; Taylor, A. H.

    1980-01-01

    Results from analytical and experimental studies of the aerodynamic characteristics of a turbojet-boosted launch vehicle are presented. The success of this launch vehicle concept depends upon several novel applications of aerodynamic technology, particularly in the area of takeoff lift and minimum transonic drag requirements. The take-off mode stresses leading edge vortex lift generated in parallel by a complex arrangement of low aspect ratio booster and orbiter wings. Wind-tunnel tests on a representative model showed that this low-speed lift is sensitive to geometric arrangements of the booster-orbiter combination and is not predictable by standard analytic techniques. Transonic drag was also experimentally observed to be very sensitive to booster location; however, these drag levels were accurately predicted by standard farfield wave drag theory.

  4. Tracking down hyper-boosted top quarks

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-01

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.
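The abstract's prescription of jet radii that shrink with the transverse boost can be illustrated with a standard kinematic rule of thumb (an assumption of this sketch, not a formula quoted from the paper): the decay products of a heavy particle of mass m at transverse momentum pT are contained within an angular scale of roughly 2m/pT.

```python
M_TOP = 173.0  # GeV, approximate top-quark mass (illustrative value)

def capture_radius(pt_gev, mass_gev=M_TOP):
    """Rule-of-thumb angular size ~ 2*m/pT of a boosted two-body decay;
    a jet radius of this order is needed to contain the decay products."""
    return 2.0 * mass_gev / pt_gev

for pt in (1000.0, 5000.0, 20000.0):   # 1, 5, 20 TeV
    print(f"pT = {pt / 1000:>4.0f} TeV -> R ~ {capture_radius(pt):.3f}")
```

At 20 TeV the estimate gives R of order 0.02, comparable to a single calorimeter cell, which is precisely why the paper supplements calorimetry with charged-track observables.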

  5. Tracking down hyper-boosted top quarks

    SciTech Connect

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  6. Tracking down hyper-boosted top quarks

    DOE PAGES

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  7. Features in Continuous Parallel Coordinates.

    PubMed

    Lehmann, Dirk J; Theisel, Holger

    2011-12-01

    Continuous Parallel Coordinates (CPC) are a contemporary visualization technique in order to combine several scalar fields, given over a common domain. They facilitate a continuous view for parallel coordinates by considering a smooth scalar field instead of a finite number of straight lines. We show that there are feature curves in CPC which appear to be the dominant structures of a CPC. We present methods to extract and classify them and demonstrate their usefulness to enhance the visualization of CPCs. In particular, we show that these feature curves are related to discontinuities in Continuous Scatterplots (CSP). We show this by exploiting a curve-curve duality between parallel and Cartesian coordinates, which is a generalization of the well-known point-line duality. Furthermore, we illustrate the theoretical considerations. Concluding, we discuss relations and aspects of the CPC's/CSP's features concerning the data analysis.
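The point-line duality the abstract generalizes can be demonstrated concretely; the sketch below (an illustration with assumed helper names, using unit axis spacing and m ≠ 1) shows that Cartesian points lying on a common line y = m*x + b map to parallel-coordinate lines that all pass through a single dual point (1/(1-m), b/(1-m)):

```python
def dual_line(u, v):
    """Parallel-coordinates line for point (u, v): value u + t*(v - u)
    between the axes at t = 0 and t = 1. Returns (slope, intercept)."""
    return (v - u, u)

def intersection(l1, l2):
    """Intersection (t, value) of two slope/intercept lines."""
    (s1, i1), (s2, i2) = l1, l2
    t = (i2 - i1) / (s1 - s2)
    return (t, i1 + s1 * t)

m, b = 2.0, 1.0
pts = [(x, m * x + b) for x in (0.0, 1.0, 3.0)]   # collinear Cartesian points
lines = [dual_line(u, v) for (u, v) in pts]
print(intersection(lines[0], lines[1]))   # (-1.0, -1.0) = (1/(1-m), b/(1-m))
print(intersection(lines[1], lines[2]))   # same dual point
```

For m > 1 the dual point falls outside the strip between the axes, which is one reason the full curve-curve duality used in the paper is formulated projectively.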

  8. Boost breaking in the EFT of inflation

    NASA Astrophysics Data System (ADS)

    Delacrétaz, Luca V.; Noumi, Toshifumi; Senatore, Leonardo

    2017-02-01

    If time-translations are spontaneously broken, so are boosts. This symmetry breaking pattern can be non-linearly realized by either just the Goldstone boson of time translations, or by four Goldstone bosons associated with time translations and boosts. In this paper we extend the Effective Field Theory of Multifield Inflation to consider the case in which the additional Goldstone bosons associated with boosts are light and coupled to the Goldstone boson of time translations. The symmetry breaking pattern forces a coupling to curvature so that the mass of the additional Goldstone bosons is predicted to be equal to √2H in the vast majority of the parameter space where they are light. This pattern therefore offers a natural way of generating self-interacting particles with Hubble mass during inflation. After constructing the general effective Lagrangian, we study how these particles mix and interact with the curvature fluctuations, generating potentially detectable non-Gaussian signals.

  9. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
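The Markowitz criterion mentioned in the abstract can be sketched in a few lines (an illustrative dense-storage toy, not the paper's parallel implementation; the function name and tolerance are assumptions): among numerically acceptable entries, prefer the pivot minimizing (r_i - 1)*(c_j - 1), where r_i and c_j are the nonzero counts of its row and column, since that bounds the fill-in a pivot step can create.

```python
def markowitz_pivot(matrix, tol=1e-8):
    """Return (i, j) of an admissible entry minimizing the Markowitz
    count (row_nnz - 1) * (col_nnz - 1); entries below tol are treated
    as zero (and as numerically unacceptable pivots)."""
    n = len(matrix)
    row_nnz = [sum(1 for v in row if abs(v) > tol) for row in matrix]
    col_nnz = [sum(1 for row in matrix if abs(row[j]) > tol) for j in range(n)]
    best, best_cost = None, None
    for i in range(n):
        for j in range(n):
            if abs(matrix[i][j]) > tol:
                cost = (row_nnz[i] - 1) * (col_nnz[j] - 1)
                if best is None or cost < best_cost:
                    best, best_cost = (i, j), cost
    return best

A = [[4.0, 0.0, 1.0],
     [0.0, 2.0, 0.0],
     [1.0, 0.0, 3.0]]
print(markowitz_pivot(A))   # (1, 1): its row and column each hold one nonzero
```

In the paper's scheme, several mutually compatible low-cost pivots are then eliminated simultaneously rather than one at a time.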

  10. Boosting Access to Government Rocket Science

    DTIC Science & Technology

    2014-10-01

Defense AT&L, September–October 2014; author John F. Rice. Describes work with MSFC, through an SAA signed in 2012, using Marshall’s expertise and resources to perform wind tunnel testing on various...

  11. Modeling self-priming circuits for dielectric elastomer generators towards optimum voltage boost

    NASA Astrophysics Data System (ADS)

    Zanini, Plinio; Rossiter, Jonathan; Homer, Martin

    2016-04-01

    One of the main challenges for the practical implementation of dielectric elastomer generators (DEGs) is supplying high voltages. To address this issue, systems using self-priming circuits (SPCs) — which exploit the DEG voltage swing to increase its supplied voltage — have been used with success. A self-priming circuit consists of a charge pump implemented in parallel with the DEG circuit. At each energy harvesting cycle, the DEG receives a low voltage input and, through an almost constant charge cycle, generates a high voltage output. SPCs receive the high voltage output at the end of the energy harvesting cycle and supply it back as input for the following cycle, using the DEG as a voltage multiplier element. Although rules for designing self-priming circuits for dielectric elastomer generators exist, they have been obtained from intuitive observation of simulation results and lack a solid theoretical foundation. Here we report the development of a mathematical model to predict voltage boost using self-priming circuits. The voltage on the DEG attached to the SPC is described as a function of its initial conditions, circuit parameters/layout, and the DEG capacitance. Our mathematical model has been validated on an existing DEG implementation from the literature, and successfully predicts the voltage boost for each cycle. Furthermore, it allows us to understand the conditions for the boost to exist, and obtain the design rules that maximize the voltage boost.
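The cycle-by-cycle growth the abstract describes can be caricatured with a toy recurrence (an illustration, not the paper's model; the gain and loss factors are assumptions): under an ideal constant-charge cycle the DEG multiplies its priming voltage by the capacitance swing C_max/C_min, and an ideal SPC returns that output as the next cycle's input, so the voltage grows geometrically until losses balance the gain.

```python
def voltage_after_cycles(v0, c_max, c_min, cycles, transfer_eff=0.9):
    """Priming voltage after `cycles` harvesting cycles.
    Each cycle multiplies the voltage by the constant-charge gain
    C_max/C_min times a hypothetical lumped SPC loss factor."""
    v = v0
    for _ in range(cycles):
        v *= (c_max / c_min) * transfer_eff
    return v

# 10 V priming source, 2:1 capacitance swing, 10% loss per cycle:
print(voltage_after_cycles(10.0, 2.0, 1.0, 5))   # gain of 1.8 per cycle
```

The paper's model is considerably richer (it tracks the charge-pump diodes and the DEG capacitance explicitly), but the geometric character of the boost is the same.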

  12. The Attentional Boost Effect and Context Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro

    2016-01-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…

  13. Concomitant GRID boost for Gamma Knife radiosurgery

    SciTech Connect

    Ma Lijun; Kwok, Young; Chin, Lawrence S.; Simard, J. Marc; Regine, William F.

    2005-11-15

We developed an integrated GRID boost technique for Gamma Knife radiosurgery. The technique generates an array of high dose spots within the target volume via a grid of 4-mm shots. These high dose areas were placed over a conventional Gamma Knife plan where a peripheral dose covers the full target volume. The beam weights of the 4-mm shots were optimized iteratively to maximize the integral dose inside the target volume. To investigate the target volume coverage and the dose to the adjacent normal brain tissue for the technique, we compared the GRID boosted treatment plans with conventional Gamma Knife treatment plans using physical and biological indices such as dose-volume histogram (DVH), DVH-derived indices, equivalent uniform dose (EUD), tumor control probabilities (TCP), and normal tissue complication probabilities (NTCP). We found significant increases in the target volume indices such as mean dose (5%-34%; average 14%), TCP (4%-45%; average 21%), and EUD (2%-22%; average 11%) for the GRID boost technique. No significant change in the peripheral dose coverage for the target volume was found per RTOG protocol. In addition, the EUD and the NTCP for the normal brain adjacent to the target (i.e., the near region) were decreased for the GRID boost technique. In conclusion, we demonstrated a new technique for Gamma Knife radiosurgery that can escalate the dose to the target while sparing the adjacent normal brain tissue.

  14. Schools Enlisting Defense Industry to Boost STEM

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Defense contractors Northrop Grumman Corp. and Lockheed Martin Corp. are joining forces in an innovative partnership to develop high-tech simulations to boost STEM--or science, technology, engineering, and mathematics--education in the Baltimore County schools. The Baltimore County partnership includes the local operations of two major military…

  15. The Attentional Boost Effect with Verbal Materials

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Spataro, Pietro; Picklesimer, Milton

    2014-01-01

    Study stimuli presented at the same time as unrelated targets in a detection task are better remembered than stimuli presented with distractors. This attentional boost effect (ABE) has been found with pictorial (Swallow & Jiang, 2010) and more recently verbal materials (Spataro, Mulligan, & Rossi-Arnaud, 2013). The present experiments…

  16. Energy Boost. Q & A with Steve Kiesner.

    ERIC Educational Resources Information Center

    Schneider, Jay W.

    2002-01-01

    Presents an interview with the director of national accounts for the Edison Electric Institute in Washington, DC about the association, its booklet on energy conservation within education facilities, and ways in which educational facilities can reduce costs by boosting energy conservation. (EV)

  17. Boost Converters for Gas Electric and Fuel Cell Hybrid Electric Vehicles

    SciTech Connect

    McKeever, JW

    2005-06-16

    Hybrid electric vehicles (HEVs) are driven by at least two prime energy sources, such as an internal combustion engine (ICE) and propulsion battery. For a series HEV configuration, the ICE drives only a generator, which maintains the state-of-charge (SOC) of propulsion and accessory batteries and drives the electric traction motor. For a parallel HEV configuration, the ICE is mechanically connected to directly drive the wheels as well as the generator, which likewise maintains the SOC of propulsion and accessory batteries and drives the electric traction motor. Today the prime energy source is an ICE; tomorrow it will very likely be a fuel cell (FC). Use of the FC eliminates a direct drive capability accentuating the importance of the battery charge and discharge systems. In both systems, the electric traction motor may use the voltage directly from the batteries or from a boost converter that raises the voltage. If low battery voltage is used directly, some special control circuitry, such as dual mode inverter control (DMIC) which adds a small cost, is necessary to drive the electric motor above base speed. If high voltage is chosen for more efficient motor operation or for high speed operation, the propulsion battery voltage must be raised, which would require some type of two-quadrant bidirectional chopper with an additional cost. Two common direct current (dc)-to-dc converters are: (1) the transformer-based boost or buck converter, which inverts a dc voltage, feeds the resulting alternating current (ac) into a transformer to raise or lower the voltage, and rectifies it to complete the conversion; and (2) the inductor-based switch mode boost or buck converter [1]. The switch-mode boost and buck features are discussed in this report as they operate in a bi-directional chopper. A benefit of the transformer-based boost converter is that it isolates the high voltage from the low voltage. Usually the transformer is large, further increasing the cost. 
A useful feature
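The ideal steady-state conversion ratios behind the switch-mode boost and buck stages discussed above can be sketched numerically. These are the standard textbook continuous-conduction-mode relations, not figures from the report, and the 150 V battery value is illustrative only:

```python
def boost_ratio(duty: float) -> float:
    """Ideal boost converter in continuous conduction: Vout/Vin = 1/(1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return 1.0 / (1.0 - duty)

def buck_ratio(duty: float) -> float:
    """Ideal buck converter: Vout/Vin = D."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must be in [0, 1]")
    return duty

# Example: stepping an assumed 150 V propulsion battery up at 50% duty
# doubles the dc-link voltage for high-speed motor operation.
v_bus = 150.0 * boost_ratio(0.5)
```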

  18. Mediterranean Diet Plus Olive Oil a Boost to Heart Health?

    MedlinePlus

    ... gov/news/fullstory_163557.html Mediterranean Diet Plus Olive Oil a Boost to Heart Health? It enhances ... HealthDay News) -- A Mediterranean diet high in virgin olive oil may boost the protective effects of "good" ...

  19. Special parallel processing workshop

    SciTech Connect

    1994-12-01

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  20. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

We present an easy-to-use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License version 3 No. of lines in distributed program, including test data, etc.: 54975 No. of bytes in distributed program, including test data, etc.: 974015 Distribution format: tar.gz Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes RAM: 10 MB-10 GB per process Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4] Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and
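The computation/communication overlap that dccrg automates can be illustrated with a toy, serial Python sketch: update the cells that depend only on local data while remote halo values are notionally "in flight", then finish the boundary cells. This is an invented 1D illustration of the pattern, not the dccrg C++ API:

```python
def relax(grid, halo_left, halo_right):
    """One Jacobi-style averaging sweep over a 1D block of cells."""
    n = len(grid)
    new = list(grid)
    # Inner cells depend only on local data, so they can be updated
    # while halo values are still being received from neighbors.
    for i in range(1, n - 1):
        new[i] = (grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
    # Boundary cells need the (now arrived) remote halo values.
    new[0] = (halo_left + grid[0] + grid[1]) / 3.0
    new[-1] = (grid[-2] + grid[-1] + halo_right) / 3.0
    return new

block = [0.0, 3.0, 6.0, 9.0]
updated = relax(block, halo_left=0.0, halo_right=12.0)
```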

  1. Occurrence of perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) in N.E. Spanish surface waters and their removal in a drinking water treatment plant that combines conventional and advanced treatments in parallel lines.

    PubMed

    Flores, Cintia; Ventura, Francesc; Martin-Alonso, Jordi; Caixach, Josep

    2013-09-01

Perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) are two emerging contaminants that have been detected in all environmental compartments. However, while most of the studies in the literature deal with their presence or removal in wastewater treatment, few of them are devoted to their detection in treated drinking water and fate during drinking water treatment. In this study, analyses of PFOS and PFOA have been carried out in river water samples and in the different stages of a drinking water treatment plant (DWTP) which has recently improved its conventional treatment process by adding ultrafiltration and reverse osmosis in a parallel treatment line. Conventional and advanced treatments have been studied in several pilot plants and in the DWTP, which offers the opportunity to compare both treatments operating simultaneously. From the results obtained, neither preoxidation, sand filtration, nor ozonation removed both perfluorinated compounds. As advanced treatments, reverse osmosis has proved more effective than reverse electrodialysis at removing PFOA and PFOS in the different configurations of pilot plants assayed. Granular activated carbon, with average elimination efficiencies of 64±11% and 45±19% for PFOS and PFOA, respectively, and especially reverse osmosis, which was able to remove ≥99% of both compounds, were the sole effective treatment steps. Trace levels of PFOS (3.0-21 ng/L) and PFOA (<4.2-5.5 ng/L) detected in treated drinking water were significantly lower than those measured in preceding years. These concentrations represent overall removal efficiencies of 89±22% for PFOA and 86±7% for PFOS.

  2. Conformal pure radiation with parallel rays

    NASA Astrophysics Data System (ADS)

    Leistner, Thomas; Nurowski, Paweł

    2012-03-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves.

  3. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  4. Voltage-Boosting Driver For Switching Regulator

    NASA Technical Reports Server (NTRS)

    Trump, Ronald C.

    1990-01-01

    Driver circuit assures availability of 10- to 15-V gate-to-source voltage needed to turn on n-channel metal oxide/semiconductor field-effect transistor (MOSFET) acting as switch in switching voltage regulator. Includes voltage-boosting circuit efficiently providing gate voltage 10 to 15 V above supply voltage. Contains no exotic parts and does not require additional power supply. Consists of NAND gate and dual voltage booster operating in conjunction with pulse-width modulator part of regulator.

  5. Gradient Boosting for Conditional Random Fields

    DTIC Science & Technology

    2014-09-23

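A minimal illustration of the generic gradient-boosting machine (Friedman's formulation) on which methods like this build: fit a weak learner to the negative gradient of the loss, add it with a shrinkage factor, and repeat. This is a plain least-squares sketch with threshold stumps, not the CRF-specific algorithm of the report:

```python
def fit_stump(x, residual):
    """Best single-threshold constant fit minimizing squared error."""
    best = None
    for t in sorted(set(x))[:-1]:        # largest value leaves right side empty
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda v: lm if v <= t else rm

def gradient_boost(x, y, rounds=50, lr=0.3):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        # Negative gradient of squared loss = current residuals.
        residual = [yi - pi for yi, pi in zip(y, pred)]
        h = fit_stump(x, residual)
        stumps.append(h)
        pred = [pi + lr * h(xi) for pi, xi in zip(pred, x)]
    return lambda v: sum(lr * h(v) for h in stumps)

x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 0.0, 1.0, 1.0]
f = gradient_boost(x, y)
```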

  6. Boosted Random Ferns for Object Detection.

    PubMed

    Villamizar, Michael; Andrade-Cetto, Juan; Sanfeliu, Alberto; Moreno-Noguer, Francesc

    2017-03-01

In this paper we introduce the Boosted Random Ferns (BRFs) to rapidly build discriminative classifiers for learning and detecting object categories. At the core of our approach we use standard random ferns, but we introduce four main innovations that let us bring ferns from an instance to a category level, and still retain efficiency. First, we define binary features in the histogram of oriented gradients domain (as opposed to the intensity domain), allowing for a better representation of intra-class variability. Second, both the positions where ferns are evaluated within the sliding window, and the location of the binary features for each fern, are not chosen completely at random; instead we use a boosting strategy to pick the most discriminative combination of them. This is further enhanced by our third contribution, which is to adapt the boosting strategy to enable sharing of binary features among different ferns, yielding high recognition rates at a low computational cost. And finally, we show that training can be performed online, for sequentially arriving images. Overall, the resulting classifier can be very efficiently trained, densely evaluated for all image locations in about 0.1 seconds, and provides detection rates similar to competing approaches that require expensive and significantly slower processing times. We demonstrate the effectiveness of our approach by thorough experimentation on publicly available datasets in which we compare against the state of the art, for tasks of both 2D detection and 3D multi-view estimation.
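A bare-bones fern makes the mechanism above concrete: each fern evaluates a short list of binary tests on a feature vector, packs the outcomes into a leaf index, and votes with the class counts stored at that leaf. Illustrative sketch only; the paper's ferns use HOG-domain tests selected and shared via boosting, whereas the test pairs below are hand-picked, not learned:

```python
class Fern:
    def __init__(self, pairs):
        self.pairs = pairs        # (a, b): binary test is x[a] > x[b]
        self.counts = {}          # leaf index -> {label: count}

    def leaf(self, x):
        idx = 0
        for a, b in self.pairs:   # pack test outcomes into bits
            idx = (idx << 1) | (1 if x[a] > x[b] else 0)
        return idx

    def train(self, xs, ys):
        for x, y in zip(xs, ys):
            votes = self.counts.setdefault(self.leaf(x), {})
            votes[y] = votes.get(y, 0) + 1

    def predict(self, x):
        votes = self.counts.get(self.leaf(x), {})
        return max(votes, key=votes.get) if votes else None

fern = Fern(pairs=[(0, 1), (2, 3), (4, 5), (1, 2)])
fern.train([[5, 0, 5, 0, 5, 0], [0, 5, 0, 5, 0, 5]], ["A", "B"])
```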

  7. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  8. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  9. Boost matrix converters in clean energy systems

    NASA Astrophysics Data System (ADS)

    Karaman, Ekrem

This dissertation describes an investigation of novel power electronic converters, based on the ultra-sparse matrix topology and characterized by the minimum number of semiconductor switches. The Z-source, Quasi Z-source, Series Z-source and Switched-inductor Z-source networks were originally proposed for boosting the output voltage of power electronic inverters. These ideas were extended here to three-phase to three-phase and three-phase to single-phase indirect matrix converters. For the three-phase to three-phase matrix converters, the Z-source networks are placed between the three-switch input rectifier stage and the output six-switch inverter stage. A brief shoot-through state produces the voltage boost. An optimal pulse width modulation technique was developed to achieve high boosting capability and minimum switching losses in the converter. For the three-phase to single-phase matrix converters, those networks are placed similarly. For control purposes, a new modulation technique has been developed. As an example application, the proposed converters constitute a viable alternative to the existing solutions in residential wind-energy systems, where a low-voltage variable-speed generator feeds power to the higher-voltage fixed-frequency grid. Comprehensive analytical derivations and simulations were carried out to investigate the operation of the proposed converters. The performance of the proposed converters was then compared with one another as well as with conventional converters. The operation of the converters was experimentally validated using a laboratory prototype.
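For the basic Z-source network mentioned above, the commonly cited ideal boost factor produced by the shoot-through state is B = 1/(1 - 2*D0) for a shoot-through duty ratio D0 < 0.5. This is the textbook relation for the original Z-source topology, offered as background rather than a result from the dissertation:

```python
def z_source_boost(shoot_through_duty: float) -> float:
    """Ideal Z-source boost factor B = 1 / (1 - 2*D0), valid for D0 < 0.5."""
    if not 0.0 <= shoot_through_duty < 0.5:
        raise ValueError("shoot-through duty ratio must be in [0, 0.5)")
    return 1.0 / (1.0 - 2.0 * shoot_through_duty)

# A shoot-through duty ratio of 0.25 doubles the available dc-link voltage.
b = z_source_boost(0.25)
```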

  10. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
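The master/slave split described above can be sketched in miniature: the master partitions the histories, each worker runs its share with an independent random stream, and the tallies are merged. A thread-based toy (estimating pi by rejection sampling), not the actual ITS message-passing code:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_histories(seed, n):
    """Worker: score n Monte Carlo 'histories' (here, points in a circle)."""
    rng = random.Random(seed)            # independent stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))

def master(total_histories=40_000, workers=4):
    """Master: partition the histories, farm them out, merge the tallies."""
    share = total_histories // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(run_histories, range(workers), [share] * workers))
    return 4.0 * hits / total_histories  # pi estimate from merged counts

pi_estimate = master()
```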

  11. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  12. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  13. Presentation of antigen in immune complexes is boosted by soluble bacterial immunoglobulin binding proteins.

    PubMed

    Léonetti, M; Galon, J; Thai, R; Sautès-Fridman, C; Moine, G; Ménez, A

    1999-04-19

    Using a snake toxin as a proteic antigen (Ag), two murine toxin-specific monoclonal antibodies (mAbs), splenocytes, and two murine Ag-specific T cell hybridomas, we showed that soluble protein A (SpA) from Staphylococcus aureus and protein G from Streptococcus subspecies, two Ig binding proteins (IBPs), not only abolish the capacity of the mAbs to decrease Ag presentation but also increase Ag presentation 20-100-fold. Five lines of evidence suggest that this phenomenon results from binding of an IBP-Ab-Ag complex to B cells possessing IBP receptors. First, we showed that SpA is likely to boost presentation of a free mAb, suggesting that the IBP-boosted presentation of an Ag in an immune complex results from the binding of IBP to the mAb. Second, FACS analyses showed that an Ag-Ab complex is preferentially targeted by SpA to a subpopulation of splenocytes mainly composed of B cells. Third, SpA-dependent boosted presentation of an Ag-Ab complex is further enhanced when splenocytes are enriched in cells containing SpA receptors. Fourth, the boosting effect largely diminishes when splenocytes are depleted of cells containing SpA receptors. Fifth, the boosting effect occurs only when IBP simultaneously contains a Fab and an Fc binding site. Altogether, our data suggest that soluble IBPs can bridge immune complexes to APCs containing IBP receptors, raising the possibility that during an infection process by bacteria secreting these IBPs, Ag-specific T cells may activate IBP receptor-containing B cells by a mechanism of intermolecular help, thus leading to a nonspecific immune response.

  14. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  15. Precision Jet Substructure from Boosted Event Shapes

    NASA Astrophysics Data System (ADS)

    Feige, Ilya; Schwartz, Matthew D.; Stewart, Iain W.; Thaler, Jesse

    2012-08-01

    Jet substructure has emerged as a critical tool for LHC searches, but studies so far have relied heavily on shower Monte Carlo simulations, which formally approximate QCD at the leading-log level. We demonstrate that systematic higher-order QCD computations of jet substructure can be carried out by boosting global event shapes by a large momentum Q and accounting for effects due to finite jet size, initial-state radiation (ISR), and the underlying event (UE) as 1/Q corrections. In particular, we compute the 2-subjettiness substructure distribution for boosted Z→qq¯ events at the LHC at next-to-next-to-next-to-leading-log order. The calculation is greatly simplified by recycling known results for the thrust distribution in e+e- collisions. The 2-subjettiness distribution quickly saturates, becoming Q independent for Q≳400GeV. Crucially, the effects of jet contamination from ISR/UE can be subtracted out analytically at large Q without knowing their detailed form. Amusingly, the Q=∞ and Q=0 distributions are related by a scaling by e up to next-to-leading-log order.

  16. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.

  17. A multiview boosting approach to tissue segmentation

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Xu, Sheng; Pinto, Peter A.; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter L.; Wood, Bradford J.

    2014-04-01

Digitized histopathology images have a great potential for improving or facilitating current assessment tools in cancer pathology. In order to develop accurate and robust automated methods, the precise segmentation of histologic objects such as epithelium, stroma, and nucleus is necessary, in the hope of extracting information not otherwise obvious to the subjective eye. Here, we propose a multiview boosting approach to segment histology objects of prostate tissue. Tissue specimen images are first represented at different scales using a Gaussian kernel and converted into several forms such as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen. The method attempts to integrate the information from multiple scales (or views). 18 prostate tissue specimens from 4 patients were employed to evaluate the new method. The method was trained on 11 tissue specimens including 75,832 epithelial and 103,453 stroma pixels and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens. The technique showed 96.7% accuracy, and as summarized into a receiver operating characteristic (ROC) plot, the area under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984) was achieved.

  18. Centaur boost pump turbine icing investigation

    NASA Technical Reports Server (NTRS)

    Rollbuhler, R. J.

    1976-01-01

    An investigation was conducted to determine if ice formation in the Centaur vehicle liquid oxygen boost pump turbine could prevent rotation of the pump and whether or not this phenomenon could have been the failure mechanism for the Titan/Centaur vehicle TC-1. The investigation consisted of a series of tests done in the LeRC Space Power Chamber Facility to evaluate evaporative cooling behavior patterns in a turbine as a function of the quantity of water trapped in the turbine and as a function of the vehicle ascent pressure profile. It was found that evaporative freezing of water in the turbine housing, due to rapid depressurization within the turbine during vehicle ascent, could result in the formation of ice that would block the turbine and prevent rotation of the boost pump. But for such icing conditions to exist it would be necessary to have significant quantities of water in the turbine and/or its components, and the turbine housing temperature would have to be colder than 40 F at vehicle liftoff.

  19. Non-boost-invariant dissipative hydrodynamics

    NASA Astrophysics Data System (ADS)

    Florkowski, Wojciech; Ryblewski, Radoslaw; Strickland, Michael; Tinti, Leonardo

    2016-12-01

    The one-dimensional non-boost-invariant evolution of the quark-gluon plasma, presumably produced during the early stages of heavy-ion collisions, is analyzed within the frameworks of viscous and anisotropic hydrodynamics. We neglect transverse dynamics and assume homogeneous conditions in the transverse plane but, differently from Bjorken expansion, we relax longitudinal boost invariance in order to study the rapidity dependence of various hydrodynamical observables. We compare the results obtained using several formulations of second-order viscous hydrodynamics with a recent approach to anisotropic hydrodynamics, which treats the large initial pressure anisotropy in a nonperturbative fashion. The results obtained with second-order viscous hydrodynamics depend on the particular choice of the second-order terms included, which suggests that the latter should be included in the most complete way. The results of anisotropic hydrodynamics and viscous hydrodynamics agree for the central hot part of the system, however, they differ at the edges where the approach of anisotropic hydrodynamics helps to control the undesirable growth of viscous corrections observed in standard frameworks.

  20. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
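Of the three molecular dynamics strategies the review names, the spatial decomposition is the easiest to sketch: the simulation box is cut into slabs and each atom is assigned to the process owning the slab containing it. A pure-Python toy of that assignment step, with invented coordinates, not code from the review:

```python
def assign_atoms(positions, box_length, n_domains):
    """Assign each atom index to the slab (domain) owning its x coordinate."""
    width = box_length / n_domains
    domains = [[] for _ in range(n_domains)]
    for i, x in enumerate(positions):
        d = min(int(x // width), n_domains - 1)   # clamp x == box_length
        domains[d].append(i)
    return domains

xs = [0.5, 2.4, 5.1, 9.9, 7.3, 1.1]
owners = assign_atoms(xs, box_length=10.0, n_domains=4)
```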

  1. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA.
Computer: Any PC or

  2. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.

  3. Behavior Analysis in Distance Education by Boosting Algorithms

    ERIC Educational Resources Information Center

    Zang, Wei; Lin, Fuzong

    2006-01-01

    Student behavior analysis is an active research topic in distance education in recent years. In this article, we propose a new method called Boosting to investigate students' behaviors. The Boosting Algorithm can be treated as a data mining method, trying to infer from a large amount of training data the essential factors and their relations that…
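The boosting scheme the article builds on can be made concrete with a compact AdaBoost round loop over threshold stumps: reweight the samples each round so misclassified ones gain weight, then combine the stumps by their weighted votes. Generic textbook AdaBoost on toy data, not the authors' exact variant:

```python
import math

def stump_space(x):
    """All threshold stumps h(v) in {-1, +1} over the training values."""
    for t in sorted(set(x)):
        for sign in (1, -1):
            yield lambda v, t=t, s=sign: s if v <= t else -s

def adaboost(x, y, rounds=10):
    w = [1.0 / len(x)] * len(x)          # uniform sample weights
    model = []                           # (alpha, stump) pairs
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        h, err = min(
            ((h, sum(wi for wi, xi, yi in zip(w, x, y) if h(xi) != yi))
             for h in stump_space(x)),
            key=lambda p: p[1])
        if err == 0:                     # perfect stump: stop early
            model.append((1.0, h))
            break
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, h))
        # Reweight: misclassified samples gain weight.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, x, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda v: 1 if sum(a * h(v) for a, h in model) >= 0 else -1

x = [1.0, 2.0, 3.0, 4.0]
y = [1, 1, -1, -1]
classify = adaboost(x, y)
```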

  4. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  5. Simulation Exploration through Immersive Parallel Planes: Preprint

    SciTech Connect

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
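The classic parallel-coordinates mapping the abstract generalizes can be sketched directly: each observation becomes a polyline whose vertex on axis i is the observation's value in dimension i, normalized to that axis's range. A minimal sketch with unit-spaced axes, purely for illustration:

```python
def polyline(obs, ranges):
    """Vertices (axis index, normalized height) for one observation."""
    return [(i, (v - lo) / (hi - lo))
            for i, (v, (lo, hi)) in enumerate(zip(obs, ranges))]

# Three axes with different value ranges; one observation -> one polyline.
ranges = [(0.0, 10.0), (0.0, 100.0), (-1.0, 1.0)]
pts = polyline([5.0, 25.0, 0.0], ranges)
```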

  6. Boosting jet power in black hole spacetimes.

    PubMed

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W; Liebling, Steven L; Motl, Patrick M; Garrett, Travis

    2011-08-02

    The extraction of rotational energy from a spinning black hole via the Blandford-Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux.

  7. Inflammation boosts bacteriophage transfer between Salmonella spp.

    PubMed

    Diard, Médéric; Bakkeren, Erik; Cornuault, Jeffrey K; Moor, Kathrin; Hausmann, Annika; Sellin, Mikael E; Loverdo, Claude; Aertsen, Abram; Ackermann, Martin; De Paepe, Marianne; Slack, Emma; Hardt, Wolf-Dietrich

    2017-03-17

    Bacteriophage transfer (lysogenic conversion) promotes bacterial virulence evolution. There is limited understanding of the factors that determine lysogenic conversion dynamics within infected hosts. A murine Salmonella Typhimurium (STm) diarrhea model was used to study the transfer of SopEΦ, a prophage from STm SL1344, to STm ATCC14028S. Gut inflammation and enteric disease triggered >55% lysogenic conversion of ATCC14028S within 3 days. Without inflammation, SopEΦ transfer was reduced by up to 10⁵-fold. This was because inflammation (e.g., reactive oxygen species, reactive nitrogen species, hypochlorite) triggers the bacterial SOS response, boosts expression of the phage antirepressor Tum, and thereby promotes free phage production and subsequent transfer. Mucosal vaccination prevented a dense intestinal STm population from inducing inflammation and consequently abolished SopEΦ transfer. Vaccination may be a general strategy for blocking pathogen evolution that requires disease-driven transfer of temperate bacteriophages.

  8. Boosting jet power in black hole spacetimes

    PubMed Central

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W.; Liebling, Steven L.; Motl, Patrick M.; Garrett, Travis

    2011-01-01

    The extraction of rotational energy from a spinning black hole via the Blandford–Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux. PMID:21768341

  9. Giving top quark effective operators a boost

    NASA Astrophysics Data System (ADS)

    Englert, Christoph; Moore, Liam; Nordström, Karl; Russell, Michael

    2016-12-01

    We investigate the prospects to systematically improve generic effective field theory-based searches for new physics in the top sector during LHC run 2 as well as the high-luminosity phase. In particular, we assess the benefits of high momentum transfer final states on the top EFT fit as a function of systematic uncertainties, in comparison with the sensitivity expected from fully-resolved analyses focusing on tt̄ production. We find that constraints are typically driven by fully-resolved selections, while boosted top quarks can serve to break degeneracies in the global fit. This demystifies and clarifies the importance of high momentum transfer final states for global fits to new interactions in the top sector from direct measurements.

  10. Parallel methods for the flight simulation model

    SciTech Connect

    Xiong, Wei Zhong; Swietlik, C.

    1994-06-01

    The Advanced Computer Applications Center (ACAC) has been involved in evaluating advanced parallel architecture computers and the applicability of these machines to computer simulation models. The advanced systems investigated include parallel machines with shared-memory and distributed architectures, consisting of an eight-processor Alliant FX/8, a twenty-four-processor Sequent Symmetry, a Cray XMP, an IBM RISC 6000 model 550, and the Intel Touchstone eight-processor Gamma and 512-processor Delta machines. Since parallelizing a truly efficient application program for a parallel machine is a difficult task, implementation on these machines in a realistic setting has been largely overlooked. The ACAC has developed considerable expertise in optimizing and parallelizing application models on a collection of advanced multiprocessor systems. One such application model is the Flight Simulation Model, which uses a set of differential equations to describe the flight characteristics of a launched missile by means of a trajectory. The Flight Simulation Model was written in the FORTRAN language with approximately 29,000 lines of source code. Depending on the number of trajectories, the computation can require several hours to a full day of CPU time on a DEC/VAX 8650 system. There is an impetus to reduce the execution time and utilize the advanced parallel architecture computing environment available. ACAC researchers developed a parallel method that allows the Flight Simulation Model to run in parallel on a multiprocessor system. For the benchmark data tested, the parallel Flight Simulation Model implemented on the Alliant FX/8 achieved nearly linear speedup. In this paper, we describe a parallel method for the Flight Simulation Model. We believe the method presented in this paper provides a general concept for the design of parallel applications. This concept, in most cases, can be adapted to many other sequential application programs.
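
    Since each trajectory is an independent integration, the model parallelizes naturally over trajectories, which is consistent with the near-linear speedup reported. The following is a hedged sketch of that decomposition, not ACAC's FORTRAN code; the toy drag-free point-mass integrator stands in for the real differential-equation set.

```python
# Hedged sketch (not ACAC's code): independent trajectories make the model
# embarrassingly parallel -- each worker integrates one trajectory.
import math
from multiprocessing import Pool

def integrate_trajectory(launch_angle_deg, v0=300.0, dt=0.01, g=9.81):
    """Toy point-mass trajectory, forward-Euler integration until impact.
    Returns downrange distance; the real model solves a fuller ODE set."""
    vx = v0 * math.cos(math.radians(launch_angle_deg))
    vy = v0 * math.sin(math.radians(launch_angle_deg))
    x = y = 0.0
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
    return x

if __name__ == "__main__":
    angles = [15, 30, 45, 60, 75]
    with Pool() as pool:                       # one trajectory per worker
        ranges = pool.map(integrate_trajectory, angles)
```

    Because the trajectories share no state, the only serial cost is distributing inputs and gathering results, which is why speedup stays close to linear until that overhead dominates.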

  11. Boosting for multi-graph classification.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Cai, Zhihua

    2015-03-01

    In this paper, we formulate a novel graph-based learning problem, multi-graph classification (MGC), which aims to learn a classifier from a set of labeled bags each containing a number of graphs inside the bag. A bag is labeled positive, if at least one graph in the bag is positive, and negative otherwise. Such a multi-graph representation can be used for many real-world applications, such as webpage classification, where a webpage can be regarded as a bag with texts and images inside the webpage being represented as graphs. This problem is a generalization of multi-instance learning (MIL) but with vital differences, mainly because instances in MIL share a common feature space whereas no feature is available to represent graphs in a multi-graph bag. To solve the problem, we propose a boosting based multi-graph classification framework (bMGC). Given a set of labeled multi-graph bags, bMGC employs dynamic weight adjustment at both bag- and graph-levels to select one subgraph in each iteration as a weak classifier. In each iteration, bag and graph weights are adjusted such that an incorrectly classified bag will receive a higher weight because its predicted bag label conflicts with the genuine label, whereas an incorrectly classified graph will receive a lower weight value if the graph is in a positive bag (or a higher weight if the graph is in a negative bag). Accordingly, bMGC is able to differentiate graphs in positive and negative bags to derive effective classifiers to form a boosting model for MGC. Experiments and comparisons on real-world multi-graph learning tasks demonstrate the algorithm's performance.
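
    The two-level reweighting rule described above can be made concrete with an AdaBoost-style exponential update. This is an illustrative sketch of the stated rule only, not the authors' bMGC implementation; the exponential form and the `alpha` parameter are assumptions.

```python
# Illustrative sketch of the bag- and graph-level reweighting rule described
# in the abstract (not the authors' bMGC code; the update form is assumed).
import math

def update_bag_weights(bag_weights, bag_correct, alpha):
    """AdaBoost-style bag update: misclassified bags gain weight."""
    new = [w * math.exp(alpha if not ok else -alpha)
           for w, ok in zip(bag_weights, bag_correct)]
    z = sum(new)
    return [w / z for w in new]                # renormalize to a distribution

def update_graph_weight(w, graph_correct, bag_is_positive, alpha):
    """A misclassified graph is down-weighted in a positive bag (it may
    simply be a negative graph inside that bag) and up-weighted in a
    negative bag (every graph there is genuinely negative)."""
    if graph_correct:
        return w
    return w * math.exp(-alpha) if bag_is_positive else w * math.exp(alpha)
```

    The asymmetry at the graph level is the key point: a "wrong" graph in a positive bag is ambiguous, while a "wrong" graph in a negative bag is a genuine error.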

  12. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

    Left ventricular status, reflected in ejection fraction or end systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage consists of 2 boosted decision trees, trained using features such as gray-level statistics (e.g. median brightness) and image geometry (e.g. coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels with smoothness. ED and ES volumes, and ejection fraction are estimated from border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
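
    The two-stage structure (stage two sees the original features plus stage one's output) is a stacking pattern that can be sketched generically. The code below is a minimal illustration on synthetic stand-in data using scikit-learn's gradient-boosted trees, not the paper's classifier or features.

```python
# Minimal sketch of the two-stage idea (not the paper's classifier): the
# second stage is trained on the original features plus stage-one scores.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                 # stand-ins for pixel features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in "inside ventricle" label

stage1 = GradientBoostingClassifier(random_state=0).fit(X, y)
p1 = stage1.predict_proba(X)[:, [1]]          # stage-one score as a new feature
stage2 = GradientBoostingClassifier(random_state=0).fit(np.hstack([X, p1]), y)

acc = stage2.score(np.hstack([X, p1]), y)     # training accuracy of stage two
```

    Feeding stage-one output to stage two lets the second stage learn spatial/contextual corrections to the first stage's mistakes, which is the motivation the abstract gives for the cascade.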

  13. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    [Table excerpt: languages C, Ada, C++, data-parallel FORTRAN, FORTRAN-90 (late 1992); topology: 2D mesh of node boards, each board with 1 application processor; development tools.] ...As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool. Linda was designed with three important goals in mind: to be portable, efficient, and easy to use

  14. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
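
    To make the algorithm concrete, here is a minimal serial Wolff single-cluster update for the 2D Ising model. This is a textbook sketch for reference, not the paper's parallel implementation; the irregular cluster growth it exhibits is exactly what makes parallelization hard.

```python
# Minimal serial Wolff single-cluster update for the 2D Ising model
# (a sketch of the standard algorithm, not the paper's parallel code).
import math
import random

def wolff_step(spins, L, beta):
    """Grow one cluster from a random seed site and flip it."""
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond-activation probability
    seed = (random.randrange(L), random.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for n in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
            # add an aligned neighbor with probability p_add
            if n not in cluster and spins[n] == s0 and random.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:                          # flip the whole cluster
        spins[site] = -s0
    return len(cluster)

L = 16
spins = {(i, j): random.choice((-1, 1)) for i in range(L) for j in range(L)}
size = wolff_step(spins, L, beta=0.44)            # near the 2D critical coupling
```

    Near criticality the clusters span a wide range of sizes and shapes, so a parallel version must either parallelize the growth of one cluster or restructure the algorithm, as the paper discusses.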

  15. Line Creek improves efficiency

    SciTech Connect

    Harder, P.

    1988-04-01

    Boosting the coal recovery rate by 8% and reducing fuel expenses by $18,000 annually by replacing two tractors are two tangible benefits that Crows Nest Resources of British Columbia has achieved since overseas coal markets weakened in 1985. Though coal production at the 4-million tpy Line Creek open pit mine has been cut 25% from its 1984 level, morale among the pit crew remains high. More efficient pit equipment, innovative use of existing equipment, and encouragement of multiple skill development among workers - so people can be assigned to different jobs in the operation as situations demand - contribute to a successful operation.

  16. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with partitioned knowledge bases indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  17. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  18. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
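
    Of the patterns named above, the prefix scan is the least obvious to parallelize. The sketch below shows the Hillis–Steele inclusive scan written serially, where every position in each round could update concurrently; this is a standard textbook formulation, offered here as an illustration rather than anything from the presentation itself.

```python
# Sketch of the prefix-scan pattern: the Hillis-Steele inclusive scan.
# Written serially; in a parallel implementation each round's updates
# run concurrently, giving O(log n) rounds instead of an O(n) chain.
def inclusive_scan(xs):
    xs = list(xs)
    step = 1
    while step < len(xs):
        # all positions i >= step add the value `step` slots to their left
        xs = [xs[i] + xs[i - step] if i >= step else xs[i]
              for i in range(len(xs))]
        step *= 2
    return xs
```

    For example, `inclusive_scan([1, 2, 3, 4])` yields the running sums `[1, 3, 6, 10]` after two rounds; reductions are the special case where only the final element is kept.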

  19. The reach for charged Higgs bosons with boosted bottom and boosted top jets

    NASA Astrophysics Data System (ADS)

    Sullivan, Zack; Pedersen, Keith

    2017-01-01

    At moderate values of tan(β), a supersymmetric charged Higgs boson H± is expected to be difficult to find due to its small cross section and large backgrounds. Using the new μx boosted bottom jet tag, and measured boosted top tagging rates from the CERN LHC, we examine the reach for TeV-scale charged Higgs bosons at 14 TeV and 100 TeV colliders in top-Higgs associated production, where the charged Higgs decays to a boosted top and bottom quark pair. We conclude that the cross section for charged Higgs bosons is indeed too small to observe at the LHC in the moderate tan(β) ``wedge region,'' but it will be possible to probe charged Higgs bosons at nearly all tan(β) up to 6 TeV at a 100 TeV collider. This work was supported by the U.S. Department of Energy under award No. DE-SC0008347.

  20. Development of cassava periclinal chimera may boost production.

    PubMed

    Bomfim, N; Nassar, N M A

    2014-02-10

    Plant periclinal chimeras are genotypic mosaics arranged concentrically. Trials to produce them to combine different species have been done, but practical results have not been achieved. We report for the second time the development of a very productive interspecific periclinal chimera in cassava. It has very large edible roots, up to 14 kg per plant at one year old, compared to 2-3 kg in common varieties. The epidermal tissue formed was from Manihot esculenta cultivar UnB 032, and the subepidermal and internal tissue from the wild species, Manihot fortalezensis. We determined the origin of tissues by meiotic and mitotic chromosome counts, plant anatomy and morphology. Epidermal features displayed useful traits to deduce tissue origin: cell shape and size, trichome density and stomatal length. Chimera roots had a wholly tuberous and edible constitution with smaller starch granule size and similar distribution compared to cassava. Root size enlargement might have been due to an epigenetic effect. These results suggest a new line of improved crop based on the development of interspecific chimeras composed of different combinations of wild and cultivated species. It promises to boost cassava production through exceptional root enlargement.

  1. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  2. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a...Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986 and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  3. First results of the Los Alamos polyphase boost converter-modulator

    SciTech Connect

    Doss, James D.; Gribble, R. F.; Lynch, M. T.; Rees, D. E.; Tallerico, P. J.; Reass, W. A.

    2001-01-01

    This paper describes the first full-scale electrical test results of the Los Alamos polyphase boost converter-modulator being developed for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. The converter-modulator provides 140 kV, 1.2 ms, 60 Hz pulses to a 5 MW, 805 MHz klystron. The system, which has 1 MW average power, derives its +/- 1250 Volt DC bus link voltages from a standard 3-phase utility 13.8 kV to 2100 volt transformer. An SCR pre-regulator provides a soft-start function in addition to correction of line and load variations, from no-load to full-load. Energy storage is provided by low-inductance self-clearing metallized hazy polypropylene traction capacitors. Each of the 3-phase H-bridge Insulated Gate Bipolar Transistor (IGBT) Pulse-Width Modulation (PWM) drivers is resonated with the amorphous nanocrystalline boost transformer and associated peaking circuits to provide zero-voltage-switching characteristics for the IGBTs. This design feature minimizes IGBT switching losses. By PWM of individual IGBT conduction angles, output pulse regulation with adaptive feedforward and feedback techniques is used to improve the klystron voltage pulse shape. In addition to the first operational results, this paper discusses the relevant design techniques associated with the boost converter-modulator topology.

  4. Series-Connected Buck Boost Regulators

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2005-01-01

    A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltage. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power
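
    The series-connection advantage can be quantified with a small idealized calculation: only the power corresponding to the input/output voltage difference passes through the converter, so converter loss is scaled by that difference ratio. The numbers and the lossless-series-path assumption below are hypothetical illustrations, not figures from the article.

```python
# Illustrative (idealized) arithmetic for the series-connection advantage:
# the converter processes only the input/output difference power, so its
# loss is scaled by the difference ratio. All numbers are hypothetical.
def scbbr_efficiency(v_in, v_out, p_load, converter_eff=0.90):
    """Assumes a lossless series path; only the made-up difference power
    passes through the DC-to-DC converter."""
    p_switched = p_load * abs(v_out - v_in) / v_out   # power the converter handles
    p_loss = p_switched * (1.0 - converter_eff)
    return (p_load - p_loss) / p_load, p_switched

# Boosting 110 V to 120 V at 1 kW: only ~83 W is switched, so overall
# efficiency stays above 99% even with a 90%-efficient converter.
eff, p_sw = scbbr_efficiency(v_in=110.0, v_out=120.0, p_load=1000.0)
```

    This is why the benefit is largest when the input/output voltage ratio is near one, as the abstract notes for solar-array and battery-bus applications.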

  5. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
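
    The decomposition the abstract describes can be illustrated with a block-decomposed (segmented) sieve: once the small base primes up to sqrt(n) are known, each segment can be sieved independently, so segments map naturally onto the processors of an ensemble machine. The sketch below runs the segments serially and is a generic illustration, not the paper's hypercube code.

```python
# Sketch of a block-decomposed sieve (serial here): each segment is
# independent given the shared base primes, so segments could be
# assigned one-per-processor on an ensemble machine.
def base_primes(limit):
    """Ordinary sieve for the small primes up to sqrt(n)."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return [i for i, p in enumerate(is_p) if p]

def sieve_segment(lo, hi, primes):
    """Mark composites in [lo, hi) using the shared base primes."""
    seg = [True] * (hi - lo)
    for p in primes:
        start = max(p * p, (lo + p - 1) // p * p)  # first multiple of p >= lo
        for j in range(start, hi, p):
            seg[j - lo] = False
    return [lo + k for k, alive in enumerate(seg) if alive and lo + k > 1]

n = 100
primes = base_primes(int(n ** 0.5))            # shared preprocessing step
chunks = [(lo, min(lo + 25, n + 1)) for lo in range(2, n + 1, 25)]
found = [q for lo, hi in chunks for q in sieve_segment(lo, hi, primes)]
```

    The scattered decomposition the paper compares against assigns interleaved values rather than contiguous blocks to each processor, trading locality for load balance; either way the per-processor work is independent after the shared base-prime step.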

  6. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Linked-View Parallel Coordinate Plot Renderer

    SciTech Connect

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  9. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for speeding up computer processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action for accelerating any computer program: optimizing the data structures, optimizing the application architecture, and improving the hardware. For the first two, the article examines the data structures usually employed in SAR processing, proposes parallel alternatives, and describes how the algorithms employed in the process are parallelized. The parallel application architecture classifies processes as fine- or coarse-grained; these are assigned to individual processors or divided among several processors, each in its corresponding architecture. For the hardware, the article examines the platforms on which parallel SAR processing is implemented: shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them yields guidelines for obtaining maximum throughput with minimum latency, and maximum effectiveness with minimum cost, all with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development in the coming years for computationally demanding Synthetic Aperture Radar applications.

  10. 39% access time improvement, 11% energy reduction, 32 kbit 1-read/1-write 2-port static random-access memory using two-stage read boost and write-boost after read sensing scheme

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi

    2016-04-01

    We propose novel circuit techniques for 1-clock (1CLK) 1-read/1-write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. A two-stage read boost (TSR-BST) and a write word line boost after read sensing (WWL-BST) scheme are proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed in a 40 nm CMOS process.

  11. Boosting nitrification by membrane-attached biofilm.

    PubMed

    Wu, C Y; Ushiwaka, S; Horii, H; Yamagiwa, K

    2006-01-01

    Nitrification is a key step for reliable biological nitrogen removal. In order to enhance nitrification in the activated sludge (AS) process, membrane-attached biofilm (MAB) was incorporated in a conventional activated sludge tank. Simultaneous organic carbon removal and nitrification of the MAB-incorporated activated sludge (AS + MAB) process was investigated with continuous wastewater treatment. The effluent TOC concentrations of the AS and AS + MAB processes were about 6.3 mg/L and 7.9 mg/L, respectively. The TOC removal efficiency of both AS and AS + MAB was above 95% during the wastewater treatment, indicating excellent organic carbon removal performance in both processes. Little nitrification occurred in the AS process. On the contrary, successful nitrification was obtained with the AS + MAB process, with a nitrification efficiency of about 90%. The volumetric and surface nitrification rates were about 0.14 g/(L·d) and 6.5 g/(m²·d), respectively. The results clearly demonstrated that nitrification in the conventional AS process was boosted by MAB. Furthermore, the microfaunal population in the AS + MAB process was different from that in the AS process. The high concentration of rotifers in the AS + MAB process was expected to decrease the generation of excess sludge in the process.

  12. Acetonitrile boosts conductivity of imidazolium ionic liquids.

    PubMed

    Chaban, Vitaly V; Voroshylova, Iuliia V; Kalugin, Oleg N; Prezhdo, Oleg V

    2012-07-05

    We apply a new methodology in the force field generation (Phys. Chem. Chem. Phys. 2011, 13, 7910) to study binary mixtures of five imidazolium-based room-temperature ionic liquids (RTILs) with acetonitrile (ACN). Each RTIL is composed of tetrafluoroborate (BF4) anion and dialkylimidazolium (MMIM) cations. The first alkyl group of MIM is methyl, and the other group is ethyl (EMIM), butyl (BMIM), hexyl (HMIM), octyl (OMIM), and decyl (DMIM). Upon addition of ACN, the ionic conductivity of RTILs increases by more than 50 times. It significantly exceeds the impact of most known solvents. Unexpectedly, long-tailed imidazolium cations demonstrate the sharpest conductivity boost. This finding motivates us to revisit the application of RTIL/ACN binary systems as advanced electrolyte solutions. The conductivity correlates with the composition of ion aggregates, simplifying its predictability. Addition of ACN exponentially increases diffusion and decreases viscosity of the RTIL/ACN mixtures. Large amounts of ACN stabilize ion pairs, although they ruin greater ion aggregates.

  13. Boosted Regression Tree Models to Explain Watershed ...

    EPA Pesticide Factsheets

    Boosted regression tree (BRT) models were developed to quantify the nonlinear relationships between landscape variables and nutrient concentrations in a mesoscale mixed land cover watershed during base-flow conditions. Factors that affect instream biological components, based on the Index of Biotic Integrity (IBI), were also analyzed. Seasonal BRT models at two spatial scales (watershed and riparian buffered area [RBA]) for nitrite-nitrate (NO2-NO3), total Kjeldahl nitrogen, and total phosphorus (TP) and annual models for the IBI score were developed. Two primary factors — location within the watershed (i.e., geographic position, stream order, and distance to a downstream confluence) and percentage of urban land cover (both scales) — emerged as important predictor variables. Latitude and longitude interacted with other factors to explain the variability in summer NO2-NO3 concentrations and IBI scores. BRT results also suggested that location might be associated with indicators of sources (e.g., land cover), runoff potential (e.g., soil and topographic factors), and processes not easily represented by spatial data indicators. Runoff indicators (e.g., Hydrological Soil Group D and Topographic Wetness Indices) explained a substantial portion of the variability in nutrient concentrations as did point sources for TP in the summer months. The results from our BRT approach can help prioritize areas for nutrient management in mixed-use and heavily impacted watershed
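
    A BRT fit of the kind described above can be sketched generically with scikit-learn's gradient-boosted trees. The data, predictor names, and response below are synthetic stand-ins, not the EPA watershed data or models; only the modeling pattern (nonlinear predictors → nutrient response, with relative-influence scores) is illustrated.

```python
# Generic sketch of a boosted-regression-tree fit on synthetic data
# (the EPA models, variables, and data are not reproduced here).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
urban_pct = rng.uniform(0, 100, 500)           # hypothetical predictor
stream_order = rng.integers(1, 6, 500)         # hypothetical predictor
# hypothetical nonlinear response standing in for a nutrient concentration
conc = 0.02 * urban_pct ** 1.5 / (stream_order + 1) + rng.normal(0, 1, 500)

X = np.column_stack([urban_pct, stream_order])
brt = GradientBoostingRegressor(random_state=0).fit(X, conc)
r2 = brt.score(X, conc)                        # training fit
importances = brt.feature_importances_         # relative influence of predictors
```

    The `feature_importances_` vector plays the role of BRT "relative influence" scores, which is how analyses like the one above rank location and land-cover variables.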

  14. Exploiting tRNAs to Boost Virulence.

    PubMed

    Albers, Suki; Czech, Andreas

    2016-01-19

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage-either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5'- and 3'-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification.

  15. Exploiting tRNAs to Boost Virulence

    PubMed Central

    Albers, Suki; Czech, Andreas

    2016-01-01

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage—either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5′- and 3′-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification. PMID:26797637

  16. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  17. Cutting Salt a Health Boost for Kidney Patients

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_163628.html Cutting Salt a Health Boost for Kidney Patients Blood pressure ... Encouraging people with kidney disease to reduce their salt intake may help improve blood pressure and cut ...

  18. Did El Nino Weather Give Zika a Boost?

    MedlinePlus

    ... fullstory_162611.html Did El Nino Weather Give Zika a Boost? Climate phenomenon could have helped infection- ... might have aided the explosive spread of the Zika virus throughout South America, a new study reports. ...

  19. High-temperature alloys: Single-crystal performance boost

    NASA Astrophysics Data System (ADS)

    Schütze, Michael

    2016-08-01

    Titanium aluminide alloys are lightweight and have attractive properties for high-temperature applications. A new growth method that enables single-crystal production now boosts their mechanical performance.

  20. Xanax, Valium May Boost Pneumonia Risk in Alzheimer's Patients

    MedlinePlus

    ... html Xanax, Valium May Boost Pneumonia Risk in Alzheimer's Patients Researchers suspect people may breathe saliva or ... 10, 2017 MONDAY, April 10, 2017 (HealthDay News) -- Alzheimer's patients given sedatives such as Valium or Xanax ...

  1. Lung-Sparing Surgery May Boost Mesothelioma Survival

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_162720.html Lung-Sparing Surgery May Boost Mesothelioma Survival Treatment nearly ... 23, 2016 (HealthDay News) -- Surgery that preserves the lung, when combined with other therapies, appears to extend ...

  2. Autism Greatly Boosts Kids' Injury Risk, Especially for Drowning

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_164198.html Autism Greatly Boosts Kids' Injury Risk, Especially for Drowning ... TUESDAY, March 21, 2017 (HealthDay News) -- Children with autism are at extremely high risk of drowning compared ...

  3. Trauma as A Teen May Boost Depression Risk Around Menopause

    MedlinePlus

    ... 164355.html Trauma as a Teen May Boost Depression Risk Around Menopause Likelihood was more than twice ... during their teens have a greater risk of depression during the years leading into menopause, a new ...

  4. A Lengthy, Stable Marriage May Boost Stroke Survival

    MedlinePlus

    ... 162542.html A Lengthy, Stable Marriage May Boost Stroke Survival Lifelong singles fared the worst, study finds ... 14, 2016 WEDNESDAY, Dec. 14, 2016 (HealthDay News) -- Stroke patients may have better odds of surviving if ...

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  6. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  7. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclids parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  8. SCALING OF THE ANOMALOUS BOOST IN RELATIVISTIC JET BOUNDARY LAYER

    SciTech Connect

    Zenitani, Seiji; Hesse, Michael; Klimas, Alex

    2010-04-01

    We investigate the one-dimensional interaction of a relativistic jet and an external medium. Relativistic magnetohydrodynamic simulations show an anomalous boost of the jet fluid in the boundary layer, as previously reported. We describe the boost mechanism using an ideal relativistic fluid and magnetohydrodynamic theory. The kinetic model is also examined for further understanding. Simple scaling laws for the maximum Lorentz factor are derived, and verified by the simulations.

  9. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth
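    The space-division multiplexing above splits one application's byte stream across several physical channels and reassembles it in order at the receiver. A toy round-robin striping sketch in Python (illustrative only; the chunk size and function names are assumptions, not the protocol processors or FDDI hardware of the study):

```python
def stripe(data: bytes, n: int, chunk: int = 4):
    """Split a byte stream round-robin into n channel buffers,
    chunk bytes at a time."""
    channels = [bytearray() for _ in range(n)]
    for i in range(0, len(data), chunk):
        channels[(i // chunk) % n] += data[i:i + chunk]
    return channels

def reassemble(channels, chunk: int = 4) -> bytes:
    """Inverse of stripe: pull chunks from the channels in the same
    round-robin order until every buffer is drained."""
    out = bytearray()
    idx = [0] * len(channels)
    k = 0
    while any(idx[j] < len(channels[j]) for j in range(len(channels))):
        j = k % len(channels)
        out += channels[j][idx[j]:idx[j] + chunk]
        idx[j] += chunk
        k += 1
    return bytes(out)
```

    In a real system each channel buffer would be handled by its own protocol processor; losing one channel degrades bandwidth rather than the whole connection, which is the graceful-degradation property noted above.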

  10. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
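    The parallelization hinge described above is the linear solve: conjugate gradient iterations are dominated by matrix-vector products, which distribute naturally across processors, whereas the classic SHAKE iteration is inherently serial. A minimal dense-matrix CG sketch follows (a generic solver for symmetric positive-definite systems, not the paper's constraint-matrix code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A. Each iteration
    costs one matrix-vector product (the step that parallelizes) plus
    a few vector updates and dot products."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    For a sparse constraint matrix, A @ p would be replaced by a sparse (and distributed) matrix-vector product, but the control flow is unchanged.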

  11. Parallel Unsteady Turbopump Flow Simulations for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2000-01-01

    An efficient solution procedure for time-accurate solutions of the incompressible Navier-Stokes equations is obtained. The artificial compressibility method requires a fast convergence scheme. The pressure projection method is efficient when a small time step is required. The number of sub-iterations is reduced significantly when a Poisson solver is employed with the continuity equation. Both computing time and memory usage are reduced (at least 3 times). Other work includes Multi Level Parallelism (MLP) of INS3D, overset connectivity for the validation case, experimental measurements, and a computational model for the boost pump.

  12. Breakdown of Spatial Parallel Coding in Children's Drawing

    ERIC Educational Resources Information Center

    De Bruyn, Bart; Davis, Alyson

    2005-01-01

    When drawing real scenes or copying simple geometric figures young children are highly sensitive to parallel cues and use them effectively. However, this sensitivity can break down in surprisingly simple tasks such as copying a single line where robust directional errors occur despite the presence of parallel cues. Before we can conclude that this…

  13. Our intraoperative boost radiotherapy experience and applications

    PubMed Central

    Günay, Semra; Alan, Ömür; Yalçın, Orhan; Türkmen, Aygen; Dizdar, Nihal

    2016-01-01

    Objective: To present our experience since November 2013, and case selection criteria for intraoperative boost radiotherapy (IObRT) that significantly reduces the local recurrence rate after breast conserving surgery in patients with breast cancer. Material and Methods: Patients who were suitable for IObRT were identified within the group of patients who were selected for breast conserving surgery at our breast council. A MOBETRON (mobile linear accelerator for IObRT) was used for IObRT during surgery. Results: Patients younger than 60 years old with <3 cm invasive ductal cancer in one focus (or two foci within 2 cm), with a histologic grade of 2–3, and a high possibility of local recurrence were admitted for IObRT application. Informed consent was obtained from all participants. Lumpectomy and sentinel lymph node biopsy were performed, and advancement flaps were prepared according to the size and inclination of the conus following evaluation of tumor size and surgical margins by pathology. Distance to the thoracic wall was measured, and a radiation oncologist and radiation physicist calculated the required dose. Anesthesia was regulated with slower ventilation frequency, without causing hypoxia. The skin and incision edges were protected, the field was radiated (with 6 MeV electron beam of 10 Gy) and the incision was closed. In our cases, there were no major postoperative surgical or early radiotherapy related complications. Conclusion: The completion of another stage of local therapy with IObRT during surgery positively affects sequencing of other treatments like chemotherapy, hormonotherapy and radiotherapy, if required. IObRT increases disease free and overall survival, as well as quality of life in breast cancer patients. PMID:26985156

  14. Parallel architectures for vision

    SciTech Connect

    Maresca, M.; Lavin, M.A.; Li, H.

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  15. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  16. Bridging the gap between parallel file systems and local file systems : a case study with PVFS.

    SciTech Connect

    Gu, P.; Wang, J.; Ross, R.; Mathematics and Computer Science; Univ. of Central Florida

    2008-09-01

    Parallel I/O plays an increasingly important role in today's data intensive computing applications. While much attention has been paid to parallel read performance, most of this work has focused on the parallel file system, middleware, or application layers, ignoring the potential for improvement through more effective use of local storage. In this paper, we present the design and implementation of segment-structured on-disk data grouping and prefetching (SOGP), a technique that leverages additional local storage to boost the local data read performance for parallel file systems, especially for those applications with partially overlapped access patterns. Parallel virtual file system (PVFS) is chosen as an example. Our experiments show that an SOGP-enhanced PVFS prototype system can outperform a traditional Linux-Ext3-based PVFS for many applications and benchmarks, in some tests by as much as 230% in terms of I/O bandwidth.

  17. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  18. Boosted Fast Flux Loop Final Report

    SciTech Connect

    Boosted Fast Flux Loop Project Staff

    2009-09-01

    The Boosted Fast Flux Loop (BFFL) project was initiated to determine basic feasibility of designing, constructing, and installing in a host irradiation facility, an experimental vehicle that can replicate with reasonable fidelity the fast-flux test environment needed for fuels and materials irradiation testing for advanced reactor concepts. Originally called the Gas Test Loop (GTL) project, the activity included (1) determination of requirements that must be met for the GTL to be responsive to potential users, (2) a survey of nuclear facilities that may successfully host the GTL, (3) conceptualizing designs for hardware that can support the needed environments for neutron flux intensity and energy spectrum, atmosphere, flow, etc. needed by the experimenters, and (4) examining other aspects of such a system, such as waste generation and disposal, environmental concerns, needs for additional infrastructure, and requirements for interfacing with the host facility. A revised project plan included requesting an interim decision, termed CD-1A, that had objectives of establishing the site for the project at the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL), deferring the CD-1 application, and authorizing a research program that would resolve the most pressing technical questions regarding GTL feasibility, including issues relating to the use of booster fuel in the ATR. Major research tasks were (1) hydraulic testing to establish flow conditions through the booster fuel, (2) mini-plate irradiation tests and post-irradiation examination to alleviate concerns over corrosion at the high heat fluxes planned, (3) development and demonstration of booster fuel fabrication techniques, and (4) a review of the impact of the GTL on the ATR safety basis. A revised cooling concept for the apparatus was conceptualized, which resulted in renaming the project to the BFFL. Before the subsequent CD-1 approval request could be made, a decision was made in April 2006

  19. How Vein Sealing Boosts Fracture Opening

    NASA Astrophysics Data System (ADS)

    Nüchter, Jens-Alexander

    2015-04-01

    an increase in the fracture opening rates. (4) At constant strain rates, the rate of fracture opening increases with increasing strain. These results suggest that vein sealing boosts the rate of fracture opening, and contributes to development of low-aspect ratio veins.

  20. An optimized posterior axillary boost technique in radiation therapy to supraclavicular and axillary lymph nodes: A comparative study

    SciTech Connect

    Hernandez, Victor; Arenas, Meritxell; Müller, Katrin; Gomez, David; Bonet, Marta

    2013-01-01

    To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.

  1. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.
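    The advective and diffusive transport that such codes handle can be illustrated with a one-dimensional explicit finite-difference step, upwind for advection and central for diffusion. This is a generic sketch of the physics, not CRUNCH's actual discretization or solver:

```python
import numpy as np

def advect_diffuse_step(c, v, D, dx, dt):
    """One explicit step of 1D advection-diffusion for concentration c:
    upwind advection (assumes v > 0) plus central-difference diffusion.
    Boundary cells are held fixed. Stable when v*dt/dx + 2*D*dt/dx**2 <= 1."""
    cn = c.copy()
    cn[1:-1] = (c[1:-1]
                - v * dt / dx * (c[1:-1] - c[:-2])              # upwind advection
                + D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2]))  # diffusion
    return cn
```

    In a reactive transport code, a step like this would be operator-split or coupled with the geochemical reaction solve at each cell; in the parallel version the spatial domain is decomposed across processors.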

  2. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  3. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status shows that neither architecture has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  4. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  5. Effects of Nasal Corticosteroids on Boosts of Systemic Allergen-Specific IgE Production Induced by Nasal Allergen Exposure

    PubMed Central

    Egger, Cornelia; Lupinek, Christian; Ristl, Robin; Lemell, Patrick; Horak, Friedrich; Zieglmayer, Petra; Spitzauer, Susanne; Valenta, Rudolf; Niederberger, Verena

    2015-01-01

    Background Allergen exposure via the respiratory tract and in particular via the nasal mucosa boosts systemic allergen-specific IgE production. Intranasal corticosteroids (INCS) represent a first line treatment of allergic rhinitis but their effects on this boost of allergen-specific IgE production are unclear. Aim Here we aimed to determine in a double-blind, placebo-controlled study whether therapeutic doses of an INCS preparation, i.e., nasal fluticasone propionate, have effects on boosts of allergen-specific IgE following nasal allergen exposure. Methods Subjects (n = 48) suffering from grass and birch pollen allergy were treated with daily fluticasone propionate or placebo nasal spray for four weeks. After two weeks of treatment, subjects underwent nasal provocation with either birch pollen allergen Bet v 1 or grass pollen allergen Phl p 5. Bet v 1 and Phl p 5-specific IgE, IgG1–4, IgM and IgA levels were measured in serum samples obtained at the time of provocation and one, two, four, six and eight weeks thereafter. Results Nasal allergen provocation induced a median increase to 141.1% of serum IgE levels to allergens used for provocation but not to control allergens 4 weeks after provocation. There were no significant differences regarding the boosts of allergen-specific IgE between INCS- and placebo-treated subjects. Conclusion In conclusion, the application of fluticasone propionate had no significant effects on the boosts of systemic allergen-specific IgE production following nasal allergen exposure. Trial Registration http://clinicaltrials.gov/ NCT00755066 PMID:25705889

  6. Maximizing boosted top identification by minimizing N-subjettiness

    NASA Astrophysics Data System (ADS)

    Thaler, Jesse; van Tilburg, Ken

    2012-02-01

    N-subjettiness is a jet shape designed to identify boosted hadronic objects such as top quarks. Given N subjet axes within a jet, N-subjettiness sums the angular distances of jet constituents to their nearest subjet axis. Here, we generalize and improve on N-subjettiness by minimizing over all possible subjet directions, using a new variant of the k-means clustering algorithm. On boosted top benchmark samples from the BOOST2010 workshop, we demonstrate that a simple cut on the 3-subjettiness to 2-subjettiness ratio yields 20% (50%) tagging efficiency for a 0.23% (4.1%) fake rate, making N-subjettiness a highly effective boosted top tagger. N-subjettiness can be modified by adjusting an angular weighting exponent, and we find that the jet broadening measure is preferred for boosted top searches. We also explore multivariate techniques, and show that additional improvements are possible using a modified Fisher discriminant. Finally, we briefly mention how our minimization procedure can be extended to the entire event, allowing the event shape N-jettiness to act as a fixed N cone jet algorithm.
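    For fixed subjet axes, N-subjettiness is a pT-weighted sum of each constituent's angular distance to its nearest axis, normalized by the total pT times a radius parameter R0. A small sketch of that evaluation follows; the paper's actual contribution, minimizing this quantity over all possible axis directions, is omitted, and the variable names are illustrative:

```python
import numpy as np

def tau_N(pts, etas, phis, axes, R0=1.0):
    """Evaluate N-subjettiness for given candidate subjet axes.
    pts, etas, phis: arrays over jet constituents; axes: list of
    (eta, phi) tuples, one per subjet. Returns
    sum_k pT_k * min_i dR(k, axis_i) / (sum_k pT_k * R0)."""
    ax_eta = np.array([a[0] for a in axes])
    ax_phi = np.array([a[1] for a in axes])
    deta = etas[:, None] - ax_eta[None, :]
    dphi = phis[:, None] - ax_phi[None, :]
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap phi difference to [-pi, pi]
    dR = np.sqrt(deta**2 + dphi**2)
    return float((pts * dR.min(axis=1)).sum() / (pts.sum() * R0))
```

    A two-prong jet scores tau_2 near zero when the two axes sit on the prongs but a sizeable tau_1, so the ratio tau_3/tau_2 (small for genuine three-prong top decays) discriminates boosted tops from QCD jets.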

  7. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…

  8. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  9. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  10. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  11. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  12. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
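
    For reference, the quantity being accelerated is the discrete Gauss transform. A naive sketch of the direct O(N*M) evaluation in one dimension follows; the points, weights, and bandwidth are hypothetical, and the paper's fast plane-wave/octree machinery is not shown.

```python
import math

def direct_gauss_transform(sources, targets, weights, h):
    # Naive evaluation of G(t_j) = sum_i w_i * exp(-|t_j - s_i|^2 / h^2).
    # This is the O(N^2)-style sum that fast Gauss transforms approximate
    # in near-linear time.
    return [sum(w * math.exp(-((t - s) ** 2) / h ** 2)
                for s, w in zip(sources, weights))
            for t in targets]

# Hypothetical 1-D example: two unit-weight sources evaluated at one target.
out = direct_gauss_transform([0.0, 1.0], [0.0], [1.0, 1.0], h=1.0)
print(out[0])  # exp(0) + exp(-1) ≈ 1.3679
```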

  13. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  14. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  15. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  16. Noise reduction effect and analysis through serial multiple sampling in a CMOS image sensor with floating diffusion boost-driving

    NASA Astrophysics Data System (ADS)

    Wakabayashi, Hayato; Yamaguchi, Keiji; Yamagata, Yuuki

    2017-04-01

    We have developed a 1/2.3-in. 10.3 megapixel back-illuminated CMOS image sensor utilizing serial multiple sampling. This sensor achieves an RMS random noise of 1.3 e⁻ and row temporal noise (RTN) of 0.19 e⁻. Serial multiple sampling is realized with a column inline averaging technique without the need for additional processing circuitry. Pixel readout is accomplished utilizing a 4-shared-pixel floating diffusion (FD) boost-driving architecture. RTN caused by column parallel readout was analyzed considering the transfer function at the system level, and the developed model was verified by measurement data taken at each sampling time. This model demonstrates an RTN improvement of −1.6 dB in a parallel multiple readout architecture.

  17. Boosted Fast Flux Loop Alternative Cooling Assessment

    SciTech Connect

    Glen R. Longhurst; Donna Post Guillen; James R. Parry; Douglas L. Porter; Bruce W. Wallace

    2007-08-01

    The Gas Test Loop (GTL) Project was instituted to develop the means for conducting fast neutron irradiation tests in a domestic radiation facility. It made use of booster fuel to achieve the high neutron flux, a hafnium thermal neutron absorber to attain the high fast-to-thermal flux ratio, a mixed gas temperature control system for maintaining experiment temperatures, and a compressed gas cooling system to remove heat from the experiment capsules and the hafnium thermal neutron absorber. This GTL system was determined to provide a fast (E > 0.1 MeV) flux greater than 1.0E+15 n/cm2-s with a fast-to-thermal flux ratio in the vicinity of 40. However, the estimated system acquisition cost from earlier studies was deemed to be high. That cost was strongly influenced by the compressed gas cooling system for experiment heat removal. Designers were challenged to find a less expensive way to achieve the required cooling. This report documents the results of the investigation leading to an alternatively cooled configuration, referred to now as the Boosted Fast Flux Loop (BFFL). This configuration relies on a composite material comprised of hafnium aluminide (Al3Hf) in an aluminum matrix to transfer heat from the experiment to pressurized water cooling channels while at the same time providing absorption of thermal neutrons. Investigations into the performance that this configuration might achieve showed that it should perform at least as well as its gas-cooled predecessor. Physics calculations indicated that the fast neutron flux averaged over the central 40 cm (16 inches) relative to ATR core mid-plane in irradiation spaces would be about 1.04E+15 n/cm2-s. The fast-to-thermal flux ratio would be in excess of 40. Further, in determining performance, the particular configuration of cooling channels mattered less than the total amount of water in the apparatus. 
Thermal analyses conducted on a candidate configuration showed the design of the water coolant and

  18. Managing first-line failure.

    PubMed

    Cooper, David A

    2014-01-01

    The WHO standard of care for failure of a first regimen, usually two N(t)RTIs and an NNRTI, consists of a ritonavir-boosted protease inhibitor with a change in N(t)RTIs. Until recently, there was no evidence to support these recommendations, which were based on expert opinion. Two large randomized clinical trials, SECOND-LINE and EARNEST, both showed excellent response rates (>80%) for the WHO standard of care and indicated that a novel regimen of a boosted protease inhibitor with an integrase inhibitor had equal efficacy with no difference in toxicity. In EARNEST, a third arm consisting of induction with the combined protease and integrase inhibitor followed by protease inhibitor monotherapy maintenance was inferior and led to substantial (20%) protease inhibitor resistance. These studies confirm the validity of the current WHO recommendations and point to a novel public health approach of using two new drug classes for second-line treatment when standard first-line therapy has failed, which avoids resistance genotyping. Notwithstanding, adherence must be stressed in those failing first-line treatments. Protease inhibitor monotherapy is not suitable for a public health approach in low- and middle-income countries.

  19. Conditional Random Field (CRF)-Boosting: Constructing a Robust Online Hybrid Boosting Multiple Object Tracker Facilitated by CRF Learning

    PubMed Central

    Yang, Ehwa; Gwak, Jeonghwan; Jeon, Moongu

    2017-01-01

    Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue in realizing online MOT is how to associate noisy object detection results on a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, the learned CRF is used to generate reliable low-level tracklets, which are then used as the input of the hybrid boosting. Whereas existing data association methods based on boosting algorithms require training data with ground-truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information owing to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we conclude that the proposed hybrid approach offers a noticeable benefit over other competitive MOT systems. PMID:28304366

  20. Self-boosting vaccines and their implications for herd immunity.

    PubMed

    Arinaminpathy, Nimalan; Lavine, Jennie S; Grenfell, Bryan T

    2012-12-04

    Advances in vaccine technology over the past two centuries have facilitated far-reaching impact in the control of many infections, and today's emerging vaccines could likewise open new opportunities in the control of several diseases. Here we consider the potential, population-level effects of a particular class of emerging vaccines that use specific viral vectors to establish long-term, intermittent antigen presentation within a vaccinated host: in essence, "self-boosting" vaccines. In particular, we use mathematical models to explore the potential role of such vaccines in situations where current immunization raises only relatively short-lived protection. Vaccination programs in such cases are generally limited in their ability to raise lasting herd immunity. Moreover, in certain cases mass vaccination can have the counterproductive effect of allowing an increase in severe disease, through reducing opportunities for immunity to be boosted through natural exposure to infection. Such dynamics have been proposed, for example, in relation to pertussis and varicella-zoster virus. In this context we show how self-boosting vaccines could open qualitatively new opportunities, for example by broadening the effective duration of herd immunity that can be achieved with currently used immunogens. At intermediate rates of self-boosting, these vaccines also alleviate the potential counterproductive effects of mass vaccination, through compensating for losses in natural boosting. Importantly, however, we also show how sufficiently high boosting rates may introduce a new regime of unintended consequences, wherein the unvaccinated bear an increased disease burden. Finally, we discuss important caveats and data needs arising from this work.

  1. Augmenting antitumor T-cell responses to mimotope vaccination by boosting with native tumor antigens.

    PubMed

    Buhrman, Jonathan D; Jordan, Kimberly R; U'ren, Lance; Sprague, Jonathan; Kemmler, Charles B; Slansky, Jill E

    2013-01-01

    Vaccination with antigens expressed by tumors is one strategy for stimulating enhanced T-cell responses against tumors. However, these peptide vaccines rarely result in efficient expansion of tumor-specific T cells or responses that protect against tumor growth. Mimotopes, or peptide mimics of tumor antigens, elicit increased numbers of T cells that crossreact with the native tumor antigen, resulting in potent antitumor responses. Unfortunately, mimotopes may also elicit cells that do not crossreact or have low affinity for tumor antigen. We previously showed that one such mimotope of the dominant MHC class I tumor antigen of a mouse colon carcinoma cell line stimulates a tumor-specific T-cell clone and elicits antigen-specific cells in vivo, yet protects poorly against tumor growth. We hypothesized that boosting the mimotope vaccine with the native tumor antigen would focus the T-cell response elicited by the mimotope toward high affinity, tumor-specific T cells. We show that priming T cells with the mimotope, followed by a native tumor-antigen boost, improves tumor immunity compared with T cells elicited by the same prime with a mimotope boost. Our data suggest that the improved tumor immunity results from the expansion of mimotope-elicited tumor-specific T cells that have increased avidity for the tumor antigen. The enhanced T cells are phenotypically distinct and enriched for T-cell receptors previously correlated with improved antitumor immunity. These results suggest that incorporation of native antigen into clinical mimotope vaccine regimens may improve the efficacy of antitumor T-cell responses.

  2. A methodology for boost-glide transport technology planning

    NASA Technical Reports Server (NTRS)

    Repic, E. M.; Olson, G. A.; Milliken, R. J.

    1974-01-01

    A systematic procedure is presented by which the relative economic value of technology factors affecting design, configuration, and operation of boost-glide transport can be evaluated. Use of the methodology results in identification of first-order economic gains potentially achievable by projected advances in each of the definable, hypersonic technologies. Starting with a baseline vehicle, the formulas, procedures and forms which are integral parts of this methodology are developed. A demonstration of the methodology is presented for one specific boost-glide system.

  3. Boosted Objects: A Probe of Beyond the Standard Model Physics

    SciTech Connect

    Abdesselam, A.; Kuutmann, E.Bergeaas; Bitenc, U.; Brooijmans, G.; Butterworth, J.; Bruckman de Renstrom, P.; Buarque Franzosi, D.; Buckingham, R.; Chapleau, B.; Dasgupta, M.; Davison, A.; Dolen, J.; Ellis, S.; Fassi, F.; Ferrando, J.; Frandsen, M.T.; Frost, J.; Gadfort, T.; Glover, N.; Haas, A.; Halkiadakis, E.; /more authors..

    2012-06-12

    We present the report of the hadronic working group of the BOOST2010 workshop held at the University of Oxford in June 2010. The first part contains a review of the potential of hadronic decays of highly boosted particles as an aid for discovery at the LHC and a discussion of the status of tools developed to meet the challenge of reconstructing and isolating these topologies. In the second part, we present new results comparing the performance of jet grooming techniques and top tagging algorithms on a common set of benchmark channels. We also study the sensitivity of jet substructure observables to the uncertainties in Monte Carlo predictions.

  4. Buck-boost converter feedback controller design via evolutionary search

    NASA Astrophysics Data System (ADS)

    Sundareswaran, K.; Devi, V.; Nadeem, S. K.; Sreedevi, V. T.; Palani, S.

    2010-11-01

    Buck-boost converters are switched power converters. The model of the converter system varies from the ON state to the OFF state and hence traditional methods of controller design based on approximate transfer function models do not yield good dynamic response at different operating points of the converter system. This article attempts to design a feedback controller for a buck-boost type dc-dc converter using a genetic algorithm. The feedback controller design is perceived as an optimisation problem and a robust controller is estimated through an evolutionary search. Extensive simulation and experimental results provided in the article show the effectiveness of the new approach.
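
    The idea of treating feedback-controller design as an optimization problem solved by evolutionary search can be sketched as follows. The plant below is a toy first-order discrete-time model standing in for the converter's small-signal dynamics, and the PI structure, gain ranges, and mutation settings are all assumptions for illustration; the article's converter model and genetic-algorithm details are not reproduced.

```python
import random

def step_response_cost(kp, ki):
    # Toy plant x[k+1] = 0.9*x[k] + 0.1*u[k] (an assumed stand-in, not a real
    # buck-boost model); cost is the summed squared error of a unit step
    # response under a PI law u = kp*e + ki*sum(e).
    x = integ = cost = 0.0
    for _ in range(100):
        err = 1.0 - x
        integ += err
        u = kp * err + ki * integ
        x = 0.9 * x + 0.1 * u
        cost += err * err
    return cost

def evolve(pop_size=20, gens=30, seed=1):
    # Evolutionary search over (kp, ki): keep the better half of the
    # population each generation, refill by Gaussian mutation of survivors.
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: step_response_cost(*g))
        elite = pop[: pop_size // 2]
        pop = elite + [(max(0.0, kp + rng.gauss(0, 0.2)),
                        max(0.0, ki + rng.gauss(0, 0.05)))
                       for kp, ki in elite]
    return min(pop, key=lambda g: step_response_cost(*g))

best = evolve()
print(best, step_response_cost(*best))  # beats the do-nothing controller
```

    The same fitness-driven search applies unchanged if the toy plant is replaced by a switched simulation of the converter's ON and OFF states, which is what makes the approach attractive when no single transfer function describes the system.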

  5. Quantum AdaBoost algorithm via cluster state

    NASA Astrophysics Data System (ADS)

    Li, Yuan

    2017-03-01

    The principle and theory of quantum computation have been investigated by researchers for many years, and further applied to improve the efficiency of classical machine learning algorithms. Based on this physical mechanism, a quantum version of the AdaBoost (Adaptive Boosting) training algorithm is proposed in this paper, whose purpose is to construct a strong classifier. In the proposed scheme, a cluster state in quantum mechanics is used to realize the weak learning algorithm and then update the corresponding weights of examples. As a result, a final classifier can be obtained by efficiently combining weak hypotheses, with measurements of the cluster state used to reweight the distribution of examples.
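
    The classical AdaBoost loop that the quantum scheme builds on can be sketched as follows; the decision stumps and toy data are hypothetical, and nothing quantum is shown, only the reweight-and-combine structure the abstract refers to.

```python
import math

def adaboost(points, labels, stumps, rounds=3):
    # Classical AdaBoost: each round picks the weak hypothesis with the
    # smallest weighted error, then reweights examples so the ones it
    # misclassified count for more in the next round.
    n = len(points)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        h, err = min(((s, sum(wi for wi, x, y in zip(w, points, labels)
                              if s(x) != y)) for s in stumps),
                     key=lambda pair: pair[1])
        if err >= 0.5:
            break  # no weak hypothesis better than chance remains
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, points, labels)]
        total = sum(w)
        w = [wi / total for wi in w]  # renormalize the distribution
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Hypothetical toy data: label +1 exactly on the interval [2, 4).
xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, 1, 1, -1, -1]
stumps = [lambda x, t=t, s=s: s if x < t else -s
          for t in range(6) for s in (1, -1)]
clf = adaboost(xs, ys, stumps, rounds=3)
print([clf(x) for x in xs])  # → [-1, -1, 1, 1, -1, -1]
```

    No single threshold stump separates this data, but the weighted vote of three stumps does, which is the "strong classifier from weak hypotheses" property the paper's quantum variant targets.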

  6. 10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND SOUTH PIER. LOOKING SOUTHEAST. - Route 31 Bridge, New Jersey Route 31, crossing disused main line of Central Railroad of New Jersey (C.R.R.N.J.) (New Jersey Transit's Raritan Valley Line), Hampton, Hunterdon County, NJ

  7. Detection of multiple sinusoids using a parallel ALE

    SciTech Connect

    David, R.A.

    1984-01-01

    This paper introduces an Adaptive Line Enhancer (ALE) whose parallel structure enables the detection and enhancement of multiple sinusoids. A function describing the performance surface is derived for the case where several line signals are buried in white noise. A steepest descent adaptive algorithm is derived, and simulations are used to demonstrate its performance.
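
    A single-channel ALE with a steepest-descent (LMS) update can be sketched as below; this is the basic building block, not the paper's parallel multi-sinusoid structure, and the signal, noise level, delay, and filter settings are illustrative assumptions.

```python
import math
import random

def ale_lms(x, taps=16, delay=4, mu=0.01):
    # Adaptive line enhancer: an LMS filter predicts x[n] from samples delayed
    # by at least `delay`. Only the narrowband (sinusoidal) component remains
    # correlated across the delay, so the filter output enhances the line
    # while the broadband noise is rejected.
    w = [0.0] * taps
    out = []
    for n in range(len(x)):
        u = [x[n - delay - k] if n - delay - k >= 0 else 0.0
             for k in range(taps)]
        y = sum(wk * uk for wk, uk in zip(w, u))
        e = x[n] - y
        w = [wk + 2.0 * mu * e * uk for wk, uk in zip(w, u)]  # LMS update
        out.append(y)
    return out

rng = random.Random(0)
clean = [math.sin(2 * math.pi * 0.05 * n) for n in range(4000)]
noisy = [s + rng.gauss(0.0, 0.5) for s in clean]
enhanced = ale_lms(noisy)

# Compare against the clean sinusoid after the filter has converged.
tail = range(2000, 4000)
mse_in = sum((noisy[n] - clean[n]) ** 2 for n in tail) / 2000.0
mse_out = sum((enhanced[n] - clean[n]) ** 2 for n in tail) / 2000.0
print(mse_in, mse_out)  # the ALE output is much closer to the sinusoid
```

    The paper's parallel structure runs several such enhancers so that multiple sinusoids can be detected and enhanced simultaneously.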

  8. Digital parallel-to-series pulse-train converter

    NASA Technical Reports Server (NTRS)

    Hussey, J.

    1971-01-01

    Circuit converts number represented as two level signal on n-bit lines to series of pulses on one of two lines, depending on sign of number. Converter accepts parallel binary input data and produces number of output pulses equal to number represented by input data.

  9. Early childhood investments substantially boost adult health.

    PubMed

    Campbell, Frances; Conti, Gabriella; Heckman, James J; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-03-28

    High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report on the long-term health effects of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143 millimeters of mercury (mm Hg), whereas it is only 126 mm Hg among the treated. One in four males in the control group is affected by metabolic syndrome, whereas none in the treatment group are affected. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health.
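
    The exact permutation testing mentioned for small samples can be sketched as follows for a difference in means; the data below are hypothetical and unrelated to the ABC study.

```python
from itertools import combinations

def exact_permutation_pvalue(treated, control):
    # One-sided exact permutation test for a difference in means: enumerate
    # every relabeling of the pooled sample into "treated"/"control" and count
    # the fraction whose difference is at least as large as the observed one.
    pooled = treated + control
    n, k = len(pooled), len(treated)
    observed = sum(treated) / k - sum(control) / (n - k)
    hits = total = 0
    for idx in combinations(range(n), k):
        chosen = set(idx)
        t_mean = sum(pooled[i] for i in chosen) / k
        c_mean = sum(pooled[i] for i in range(n) if i not in chosen) / (n - k)
        if t_mean - c_mean >= observed - 1e-12:
            hits += 1
        total += 1
    return hits / total

# Hypothetical small-sample outcome data.
p = exact_permutation_pvalue([4.0, 5.0, 6.0], [1.0, 2.0, 3.0])
print(p)  # → 0.05: 1 of the C(6,3) = 20 relabelings is as extreme
```

    Because every relabeling is enumerated, the p-value is exact regardless of sample size, which is why the approach suits small randomized groups.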

  10. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT

  11. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
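
    The two-phase scheme described above can be sketched in one dimension; the interval objects, grid size, and thread-based workers are all assumptions for illustration, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def populate_grid(objects, n_cells, n_workers):
    # Phase 1: workers map their share of the objects to the grid portions
    # those objects at least partially overlap. Phase 2: each worker then
    # populates only its own portion, so no cell is written by two workers.
    cells_per = -(-n_cells // n_workers)  # ceil: cells per grid portion

    def portions_for(obj):
        lo, hi = obj
        return range(max(0, int(lo)) // cells_per,
                     min(n_cells - 1, int(hi)) // cells_per + 1)

    # Phase 1: bucket each object under every portion that bounds it.
    chunks = [objects[i::n_workers] for i in range(n_workers)]
    buckets = [[] for _ in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        for found in ex.map(
                lambda ch: [(p, o) for o in ch for p in portions_for(o)],
                chunks):
            for p, o in found:  # results consumed serially, so this is safe
                buckets[p].append(o)

    # Phase 2: each worker fills the cells of its own portion only.
    grid = [[] for _ in range(n_cells)]

    def fill(p):
        for lo, hi in buckets[p]:
            for c in range(max(0, int(lo)), min(n_cells, int(hi) + 1)):
                if p * cells_per <= c < (p + 1) * cells_per:
                    grid[c].append((lo, hi))

    with ThreadPoolExecutor(n_workers) as ex:
        list(ex.map(fill, range(n_workers)))
    return grid

# Hypothetical data: three interval objects on an 8-cell grid, 2 workers.
grid = populate_grid([(0.5, 2.5), (3.2, 3.9), (6.1, 7.8)], 8, 2)
print([len(cell) for cell in grid])  # → [1, 1, 1, 1, 0, 0, 1, 1]
```

    Assigning whole grid portions in phase 2 is what removes the need for locks: an object spanning two portions is simply handed to both workers, each of which writes only its own cells.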

  12. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  13. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
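
    The DFT-IDFT overlap-and-save core can be sketched as below; the FFT size, filter, and test signal are illustrative assumptions, and the report's further decomposition into parallel subfilters/subconvolutions is not shown.

```python
import cmath

def fft(a, invert=False):
    # Radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def ifft(a):
    return [v / len(a) for v in fft(a, invert=True)]

def overlap_save(x, h, nfft=16):
    # Overlap-save FIR filtering: filter x in blocks, keeping only the output
    # samples of each circular convolution that equal the linear convolution.
    m = len(h)
    step = nfft - (m - 1)
    H = fft([complex(c) for c in h] + [0j] * (nfft - m))
    y = []
    buf = [0.0] * (m - 1)  # saved tail of the previous block
    for i in range(0, len(x), step):
        block = buf + list(x[i:i + step])
        block += [0.0] * (nfft - len(block))
        Y = ifft([X * Hk for X, Hk in zip(fft([complex(c) for c in block]), H)])
        y.extend(v.real for v in Y[m - 1:m - 1 + min(step, len(x) - i)])
        buf = [float(c) for c in (buf + list(x[i:i + step]))[-(m - 1):]]
    return y

# Hypothetical check against direct time-domain convolution.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 0.0, -1.0, 2.0]
h = [0.5, 0.25, 0.25]
y = overlap_save(x, h)
direct = [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
          for n in range(len(x))]
print([round(a, 6) for a in y])  # matches `direct`
```

    The architectures above then split long filters into subfilters so that each DFT-IDFT pair stays small, with the subconvolution outputs recombined, which is why the FFT size tracks the desired processing-rate reduction rather than the filter order.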

  14. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  15. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  16. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  17. Dielectric Nonlinear Transmission Line (Postprint)

    DTIC Science & Technology

    2011-12-01

    A parallel plate nonlinear transmission line (NLTL) was constructed. Periodic loading of nonlinear dielectric slabs provides the... Authors: David M. French, Brad W. Hoff.

  18. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

    A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  19. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

    Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan. September 2013. Approved for public release... ERDC TR-13-8, Development of Parallel GSSHA, Information Technology Laboratory, US Army Engineer...

  20. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  1. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and overcome the problem of "variants of uncertain significance."

  2. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  3. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  4. Elf Atochem boosts production of CFC substitutes

    SciTech Connect

    Not Available

    1992-05-01

    To carve out a larger share of the market for acceptable chlorofluorocarbon substitutes, Elf Atochem (Paris) is expanding its production of HFC-134a, HCFC-141b and HCFC-142b in the U.S. and in France. This paper reports that the company is putting the finishing touches on a plant at its Pierre-Benite (France) facility, to bring 9,000 m.t./yr (19.8 million lb) of HFC-134a capacity on-line by September. Construction is scheduled to begin next year at the company's Calvert City, Ky., plant, where a 15,000-m.t./yr (33-million-lb) unit for HFC-134a will come onstream by 1995.

  5. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
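The reported figures imply a parallel efficiency of roughly 44% (speedup of 7 on 16 processors). A small, illustrative calculation of that efficiency, together with an Amdahl's-law upper bound; this is an explanatory sketch, not code from the paper:

```python
# Illustrative parallel-performance arithmetic for the reported figures
# (speedup ~7 on 16 processors); not taken from the paper.

def parallel_efficiency(speedup: float, processors: int) -> float:
    """Efficiency = achieved speedup divided by processor count."""
    return speedup / processors

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's law: best possible speedup given a serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

eff = parallel_efficiency(7.0, 16)
print(f"efficiency: {eff:.2%}")  # prints "efficiency: 43.75%"
```

With zero serial fraction, `amdahl_speedup(0.0, 16)` returns the ideal 16x, so the reported 7x suggests a substantial serial or communication component, consistent with the authors' note that state-domain parallelization could improve performance.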

  6. Mimotope vaccine efficacy gets a "boost" from native tumor antigens.

    PubMed

    Buhrman, Jonathan D; Slansky, Jill E

    2013-04-01

    Tumor-associated antigen (TAA)-targeting mimotope peptides exert more prominent immunostimulatory functions than unmodified TAAs, with the caveat that some T-cell clones exhibit a relatively low affinity for TAAs. Combining mimotope-based vaccines with native TAAs in a prime-boost setting significantly improves antitumor immunity.

  7. Repetitive peptide boosting progressively enhances functional memory CTLs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Induction of functional memory CTLs holds promise for fighting critical infectious diseases through vaccination, but so far, no effective regime has been identified. We show here that memory CTLs can be enhanced progressively to high levels by repetitive intravenous boosting with peptide and adjuvan...

  8. Real-World Connections Can Boost Journalism Program.

    ERIC Educational Resources Information Center

    Schrier, Kathy; Bott, Don; McGuire, Tim

    2001-01-01

    Describes various ways scholastic journalism advisers have attempted to make real-world connections to boost their journalism programs: critiques of student publications by invited guest speakers (professional journalists); regional workshops where professionals offer short presentations; local media offering programming or special sections aimed…

  9. Boost compensator for use with internal combustion engine with supercharger

    SciTech Connect

    Asami, T.

    1988-04-12

    A boost compensator for controlling the position of a control rack of a fuel injection pump to supply fuel to an internal combustion engine with a supercharger in response to a boost pressure to be applied to the engine is described. The control rack is movable in a first direction increasing an amount of fuel to be supplied by the fuel injection pump to the engine and in a second direction, opposite to the first direction, decreasing the amount of fuel. The boost compensator comprises: a push rod disposed for forward and rearward movement in response to the boost pressure; a main lever disposed for angular movement about a first pivot; an auxiliary lever disposed for angular movement about a second pivot; return spring means associated with the first portion of the auxiliary lever for resiliently biasing same in one direction about the second pivot; and abutment means mounted on the second portion of the auxiliary lever and engageable with the second portion of the main lever.

  10. Balance-Boosting Footwear Tips for Older People

    MedlinePlus

    Balance in all aspects of life is a good thing... mental equilibrium isn't the only kind of balance that's important in life. Good physical balance can...

  11. Graph ensemble boosting for imbalanced noisy graph stream classification.

    PubMed

    Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi

    2015-05-01

    Many applications involve stream data with structural dependency, graph representations, and continuously increasing volumes. For these applications, it is very common that their class distributions are imbalanced, with minority (or positive) samples being only a small portion of the population, which imposes significant challenges for learning models to accurately identify minority samples. This problem is further complicated by the presence of noise, because noisy samples are similar to minority samples and any treatment for the class imbalance may falsely focus on the noise and result in deterioration of accuracy. In this paper, we propose a classification model to tackle imbalanced graph streams with noise. Our method, graph ensemble boosting, employs an ensemble-based framework to partition the graph stream into chunks, each containing a number of noisy graphs with imbalanced class distributions. For each individual chunk, we propose a boosting algorithm to combine discriminative subgraph pattern selection and model learning as a unified framework for graph classification. To tackle concept drifting in graph streams, an instance-level weighting mechanism is used to dynamically adjust the instance weight, through which the boosting framework can emphasize difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-life imbalanced graph streams demonstrate clear benefits of our boosting design for handling imbalanced noisy graph streams.
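The instance-weighting idea can be illustrated with a generic AdaBoost-style re-weighting step. This is only a sketch of how a boosting framework comes to emphasize difficult samples; it is not the graph-specific algorithm from the paper:

```python
# Generic AdaBoost-style instance re-weighting: misclassified instances gain
# weight, correctly classified ones lose weight, then weights renormalize.
# Shown to illustrate "emphasizing difficult samples"; not the paper's method.
import math

def update_weights(weights, correct, error_rate):
    """Re-weight instances after one weak learner.

    weights    -- current instance weights (sum to 1)
    correct    -- True where the weak learner classified the instance correctly
    error_rate -- weighted error of the weak learner (0 < error_rate < 0.5)
    """
    alpha = 0.5 * math.log((1.0 - error_rate) / error_rate)
    new_w = [w * math.exp(-alpha if ok else alpha)
             for w, ok in zip(weights, correct)]
    total = sum(new_w)
    return [w / total for w in new_w]

# One misclassified instance out of four: its weight rises from 0.25 to 0.5.
w = update_weights([0.25] * 4, [True, True, True, False], error_rate=0.25)
```

After the update, the single misclassified instance carries as much weight as the other three combined, so the next weak learner is pushed to get it right.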

  12. Boost glycemic control in teen diabetics through 'family focused teamwork'.

    PubMed

    2003-09-01

    While family conflict during the teenaged years is typical, it can have long-term health consequences when it involves an adolescent with diabetes. However, researchers at Joslin Diabetes Center in Boston have developed a low-cost intervention that aims to remove conflict from disease management responsibilities--and a new study shows that it can boost glycemic control as well.

  13. Heterologous Prime-Boost Immunisation Regimens Against Infectious Diseases

    DTIC Science & Technology

    2006-08-01

    Heterologous Prime-Boost Immunisation Regimens Against Infectious Diseases Susan Shahin and David Proll Human Protection and... diseases (such as malaria, tuberculosis and HIV) has been hindered by the lack of effective immunisation strategies that induce the cellular arm of...different animal and disease models. Since several intracellular pathogens are considered potential biowarfare threats, the objective of this review

  14. Lock-in-detection-free line-scan stimulated Raman scattering microscopy for near video-rate Raman imaging.

    PubMed

    Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2016-09-01

    We report on the development of a unique lock-in-detection-free line-scan stimulated Raman scattering microscopy technique based on a linear detector with a large full well capacity controlled by a field-programmable gate array (FPGA) for near video-rate Raman imaging. With the use of parallel excitation and detection scheme, the line-scan SRS imaging at 20 frames per second can be acquired with a ∼5-fold lower excitation power density, compared to conventional point-scan SRS imaging. The rapid data communication between the FPGA and the linear detector allows a high line-scanning rate to boost the SRS imaging speed without the need for lock-in detection. We demonstrate this lock-in-detection-free line-scan SRS imaging technique using the 0.5 μm polystyrene and 1.0 μm poly(methyl methacrylate) beads mixed in water, as well as living gastric cancer cells.

  15. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  16. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  17. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

    The PPPL TRANSP code suite has been used successfully over many years to carry out time dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Parallelizing all of TRANSP is not required; some parts will run sequentially while others run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared ``parallel service'' on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  18. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
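Tables like the ones the article describes can be generated mechanically: two resistors R1 and R2 in parallel have total resistance 1/(1/R1 + 1/R2) = R1*R2/(R1+R2), and we keep the pairs whose total is a whole number. A small sketch, with arbitrary example resistor values:

```python
# Enumerate resistor pairs whose parallel combination is a whole number,
# the kind of table the article proposes for classroom use. Resistor values
# here are arbitrary illustrative integers, not values from the article.
from fractions import Fraction

def parallel_resistance(r1, r2):
    """Total resistance of two resistors in parallel: R1*R2 / (R1 + R2)."""
    return Fraction(r1 * r2, r1 + r2)

def whole_number_pairs(max_ohms):
    """All pairs R1 <= R2 up to max_ohms whose parallel total is an integer."""
    return [(r1, r2, int(parallel_resistance(r1, r2)))
            for r1 in range(1, max_ohms + 1)
            for r2 in range(r1, max_ohms + 1)
            if parallel_resistance(r1, r2).denominator == 1]

# e.g. 3 ohms in parallel with 6 ohms gives exactly 2 ohms,
#      4 ohms in parallel with 12 ohms gives exactly 3 ohms.
print(whole_number_pairs(6))
```

Using exact fractions avoids the floating-point round-off that would otherwise make "is it a whole number?" unreliable.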

  19. Benefit of Radiation Boost After Whole-Breast Radiotherapy

    SciTech Connect

    Livi, Lorenzo; Borghesi, Simona; Saieva, Calogero; Fambrini, Massimiliano; Iannalfi, Alberto; Greto, Daniela; Paiar, Fabiola; Scoccianti, Silvia; Simontacchi, Gabriele; Bianchi, Simonetta; Cataliotti, Luigi; Biti, Giampaolo

    2009-11-15

    Purpose: To determine whether a boost to the tumor bed after breast-conserving surgery (BCS) and radiotherapy (RT) to the whole breast affects local control and disease-free survival. Methods and Materials: A total of 1,138 patients with pT1 to pT2 breast cancer underwent adjuvant RT at the University of Florence. We analyzed only patients with a minimum follow-up of 1 year (range, 1-20 years), with negative surgical margins. The median age of the patient population was 52.0 years (±7.9 years). The breast cancer relapse incidence probability was estimated by the Kaplan-Meier method, and differences between patient subgroups were compared by the log-rank test. Cox regression models were used to evaluate the risk of breast cancer relapse. Results: On univariate survival analysis, boost to the tumor bed reduced breast cancer recurrence (p < 0.0001). Age and tamoxifen also significantly reduced breast cancer relapse (p = 0.01 and p = 0.014, respectively). On multivariate analysis, the boost and the medium age (45-60 years) were found to be inversely related to breast cancer relapse (hazard ratio [HR], 0.27; 95% confidence interval [95% CI], 0.14-0.52, and HR, 0.61; 95% CI, 0.37-0.99, respectively). The effect of the boost was more evident in younger patients (HR, 0.15 and 95% CI, 0.03-0.66 for patients <45 years of age; and HR, 0.31 and 95% CI, 0.13-0.71 for patients 45-60 years) on multivariate analyses stratified by age, although it was not a significant predictor in women older than 60 years. Conclusion: Our results suggest that boost to the tumor bed reduces breast cancer relapse and is more effective in younger patients.

  20. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel...programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker...Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs. Voyeur

  1. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  2. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  3. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  4. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated...

  5. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated...

  6. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request is inserted for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code.
It can be applied to any
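The parse-then-fan-out pattern described above can be sketched with a thread pool. The feature-file schema and the `check_out()` body below are placeholder assumptions for illustration, not PEPC's actual implementation:

```python
# Sketch of the parallel-checkout idea: parse a feature's plug-in list, then
# run one checkout per plug-in on a configurable thread pool. The XML schema
# and check_out() body are invented placeholders, not PEPC's real code.
from concurrent.futures import ThreadPoolExecutor
import xml.etree.ElementTree as ET

def plugin_ids(feature_xml: str) -> list[str]:
    """Extract plug-in ids from a feature description (assumed schema)."""
    root = ET.fromstring(feature_xml)
    return [p.get("id") for p in root.iter("plugin")]

def check_out(plugin_id: str) -> str:
    # Placeholder for the real repository checkout call (e.g. SVN/Git).
    return f"checked out {plugin_id}"

def checkout_feature(feature_xml: str, workers: int = 8) -> list[str]:
    # pool.map dispatches checkouts concurrently but preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(check_out, plugin_ids(feature_xml)))

feature = '<feature><plugin id="a"/><plugin id="b"/><plugin id="c"/></feature>'
results = checkout_feature(feature)  # all three checkouts run concurrently
```

Because checkouts are network-bound rather than CPU-bound, a thread pool is enough to saturate the link, which matches the bandwidth-saturation behavior the abstract reports.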

  7. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
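The final compositing step can be illustrated with a toy depth-based merge: each processor's partial image carries a depth and a color per pixel, and the composite keeps, per pixel, the sample nearest the viewer. The buffer layout here is invented for the example and does not reproduce the paper's optimal compositing method:

```python
# Toy illustration of compositing partial images from independent renderers:
# per pixel, keep the sample with the smallest depth (nearest the viewer).
# Buffer layout is invented for this sketch, not the paper's data structure.
INF = float("inf")

def composite(partials):
    """Merge partial images; each is a list of (depth, color) per pixel."""
    width = len(partials[0])
    result = [(INF, None)] * width          # background: infinitely far away
    for image in partials:
        result = [min(a, b, key=lambda s: s[0])   # keep the closer sample
                  for a, b in zip(result, image)]
    return [color for _, color in result]

# Two processors each rendered part of a 3-pixel image.
p0 = [(2.0, "red"), (INF, None), (5.0, "blue")]
p1 = [(3.0, "green"), (1.0, "white"), (4.0, "cyan")]
print(composite([p0, p1]))  # prints "['red', 'white', 'cyan']"
```

Because the per-pixel merge is associative, partial images can be combined pairwise in a tree, which is what makes the compositing stage itself parallelizable.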

  8. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  9. Integration of cell line and process development to overcome the challenge of a difficult to express protein.

    PubMed

    Alves, Christina S; Gilbert, Alan; Dalvi, Swati; St Germain, Bryan; Xie, Wenqi; Estes, Scott; Kshirsagar, Rashmi; Ryll, Thomas

    2015-01-01

    This case study addresses the difficulty in achieving high level expression and production of a small, very positively charged recombinant protein. The novel challenges with this protein include the protein's adherence to the cell surface and its inhibitory effects on Chinese hamster ovary (CHO) cell growth. To overcome these challenges, we utilized a multi-prong approach. We identified dextran sulfate as a way to simultaneously extract the protein from the cell surface and boost cellular productivity. In addition, host cells were adapted to grow in the presence of this protein to improve growth and production characteristics. To achieve an increase in productivity, new cell lines from three different CHO host lines were created and evaluated in parallel with new process development workflows. Instead of a traditional screen of only four to six cell lines in bioreactors, over 130 cell lines were screened by utilization of 15 mL automated bioreactors (AMBR) in an optimal production process specifically developed for this protein. Using the automation, far less manual intervention is required than in traditional bench-top bioreactors, and much more control is achieved than typical plate or shake flask based screens. By utilizing an integrated cell line and process development incorporating medium optimized for this protein, we were able to increase titer more than 10-fold while obtaining desirable product quality. Finally, Monte Carlo simulations were performed to predict the optimal number of cell lines to screen in future cell line development work with the goal of systematically increasing titer through enhanced cell line screening.

  10. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
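For orientation, here is a serial sketch of the conjugate gradient method with point-Jacobi preconditioning (M = diag(A)); the study's algorithm is a parallel line-Jacobi variant, which this sketch does not reproduce:

```python
# Serial preconditioned conjugate gradient with a point-Jacobi (diagonal)
# preconditioner, for a symmetric positive-definite system A x = b.
# Dense lists keep the sketch dependency-free; not the paper's parallel code.

def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A, preconditioned by M = diag(A)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = b[:]                                  # residual b - A x, with x = 0
    z = [r[i] / A[i][i] for i in range(n)]    # apply M^-1 (divide by diagonal)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi_pcg(A, b)  # exact solution is (1/11, 7/11)
```

The kernels here, matrix-vector products, dot products, and vector updates, are exactly the operations that parallelize well across processors, which is why preconditioned CG was a natural candidate for the Cray X-MP4 study.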

  11. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  12. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  13. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  14. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
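
    The attribute-indexed search the tool provides can be sketched with an in-memory stand-in (a hypothetical simplification: the real backend issues such queries against indexed MongoDB collections, and the class, field names, and paths below are invented for illustration):

```python
# In-memory sketch of metadata search: each file record carries GPFS-style
# attributes, and a per-attribute index supports conjunctive queries, the
# kind of lookup the MongoDB-backed tool performs.
from collections import defaultdict

class MetadataIndex:
    def __init__(self):
        self.records = []
        # attribute -> value -> set of record ids
        self.index = defaultdict(lambda: defaultdict(set))

    def add(self, path, **attrs):
        rid = len(self.records)
        self.records.append({"path": path, **attrs})
        for attr, value in attrs.items():
            self.index[attr][value].add(rid)

    def search(self, **attrs):
        """Return paths matching all given attribute=value pairs."""
        ids = None
        for attr, value in attrs.items():
            hits = self.index[attr].get(value, set())
            ids = hits if ids is None else ids & hits
        return sorted(self.records[i]["path"] for i in (ids or set()))

idx = MetadataIndex()
idx.add("/archive/run1/out.h5", owner="alice", project="turquoise", tag="simulation")
idx.add("/archive/run2/out.h5", owner="bob", project="turquoise", tag="simulation")
idx.add("/archive/notes.txt", owner="alice", project="docs", tag="text")

print(idx.search(owner="alice", tag="simulation"))  # ['/archive/run1/out.h5']
```

    In the actual tool, the FUSE layer translates familiar file-system commands into queries of this shape.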

  15. Xyce parallel electronic simulator : reference guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures including single processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  16. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using Singular Value Decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
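
    The SVD calibration step can be sketched with a toy matrix standing in for the stacked ACS calibration equations (an illustration of the null-space idea only, not the PRUNO pipeline; the conjugate-gradient solve for missing samples is omitted):

```python
# Toy sketch of SVD-based null-space calibration: a random rank-deficient
# matrix stands in for the calibration system assembled from ACS data.
import numpy as np

rng = np.random.default_rng(0)

# 40 calibration equations in 8 unknowns, built rank-6 on purpose so that a
# 2-dimensional null space exists.
A = rng.standard_normal((40, 6)) @ rng.standard_normal((6, 8))

# Calibration: right-singular vectors whose singular values are (near) zero
# are the nulling kernels n, satisfying A @ n ≈ 0.
U, s, Vh = np.linalg.svd(A)
null_kernels = Vh[s < 1e-10 * s[0]]

residual = np.linalg.norm(A @ null_kernels.T)
print(null_kernels.shape[0], "nulling kernels, residual", residual)
```

    In the full method these kernels define a linear operator N, and the missing k-space samples are then found by solving N x = 0 with the acquired samples held fixed, e.g. by conjugate gradient.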

  17. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilation. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  18. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  19. Spacecraft boost and abort guidance and control systems requirement study, boost dynamics and control analysis study. Exhibit A: Boost dynamics and control analysis

    NASA Technical Reports Server (NTRS)

    Williams, F. E.; Price, J. B.; Lemon, R. S.

    1972-01-01

    The simulation developments for use in dynamics and control analysis during boost from liftoff to orbit insertion are reported. Also included are wind response studies of the NR-GD 161B/B9T delta wing booster/delta wing orbiter configuration, the MSC 036B/280 inch solid rocket motor configuration, the MSC 040A/LOX-propane liquid injection TVC configuration, the MSC 040C/dual solid rocket motor configuration, and the MSC 049/solid rocket motor configuration. All of the latest math models (rigid and flexible body) developed for the MSC/GD Space Shuttle Functional Simulator are included.

  20. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
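
    The triangle-inequality pruning at the heart of this approach can be sketched as follows (an illustrative nearest-pivot search on 2-D points, not the authors' implementation):

```python
# Nearest-pivot assignment with triangle-inequality pruning: precomputed
# pivot-to-pivot distances give the lower bound
#     d(x, q) >= |d(p, q) - d(x, p)|,
# so many candidate distance calculations can be skipped outright.
import math, random

def dist(a, b):
    return math.dist(a, b)

def nearest_pivot(x, pivots, pivot_dists):
    best, best_d = 0, dist(x, pivots[0])
    computed = 1
    for j in range(1, len(pivots)):
        # Lower bound on d(x, pivots[j]) via the current best pivot:
        if abs(pivot_dists[best][j] - best_d) >= best_d:
            continue  # provably cannot beat the current best; skip the call
        d = dist(x, pivots[j])
        computed += 1
        if d < best_d:
            best, best_d = j, d
    return best, best_d, computed

random.seed(1)
pivots = [(random.random(), random.random()) for _ in range(50)]
pivot_dists = [[dist(p, q) for q in pivots] for p in pivots]
x = (0.2, 0.7)

brute = min(range(len(pivots)), key=lambda j: dist(x, pivots[j]))
best, best_d, computed = nearest_pivot(x, pivots, pivot_dists)
print(best == brute, computed, "of", len(pivots), "distances computed")
```

    The same bound is what lets the Anchors Hierarchy avoid most document-to-document comparisons in the high-dimensional case.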

  1. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  2. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
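
    The sum-of-matrices idea can be illustrated numerically (a toy Jones-calculus sketch with invented numbers, not the authors' setup; physical intensity modulators would further constrain the weights):

```python
# Parallel polarization synthesis as a SUM of components: the output field is
# E_out = sum_i a_i * E_i over spatially separated polarization states, rather
# than a serial product of transformation matrices.
import numpy as np

# Basis Jones vectors: horizontal, vertical, diagonal, right-circular.
basis = np.array([
    [1, 0],
    [0, 1],
    [1 / np.sqrt(2), 1 / np.sqrt(2)],
    [1 / np.sqrt(2), 1j / np.sqrt(2)],
]).T                                    # one column per component, shape (2, 4)

target = np.array([0.6, 0.8j])          # an arbitrary elliptical target state

# Solve the linear system for the beam-combined weights.
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
combined = basis @ weights

print(np.allclose(combined, target))    # True: the sum reaches the target SOP
```

    Because the weights enter as a sum, the speed and stability of the generated state are set entirely by the intensity modulators, as the abstract notes.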

  3. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

    Seevers, B.K.; Quinn, M.J. ); Hatcher, P.J. )

    1992-10-01

    We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data that demonstrates a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.

  4. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  5. High Temperature Boost (HTB) Power Processing Unit (PPU) Formulation Study

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Bradley, Arthur T.; Iannello, Christopher J.; Carr, Gregory A.; Mohammad, Mojarradi M.; Hunter, Don J.; DelCastillo, Linda; Stell, Christopher B.

    2013-01-01

    This technical memorandum is to summarize the Formulation Study conducted during fiscal year 2012 on the High Temperature Boost (HTB) Power Processing Unit (PPU). The effort is authorized and supported by the Game Changing Technology Division, NASA Office of the Chief Technologist. NASA center participation during the formulation includes LaRC, KSC and JPL. The Formulation Study continues into fiscal year 2013. The formulation study has focused on the power processing unit. The team has proposed a modular, power-scalable, and new-technology-enabled High Temperature Boost (HTB) PPU, which offers a 5-10X improvement in PPU specific power/mass and over 30% in-space solar electric system mass savings.

  6. Black brane entropy and hydrodynamics: The boost-invariant case

    SciTech Connect

    Booth, Ivan; Heller, Michal P.; Spalinski, Michal

    2009-12-15

    The framework of slowly evolving horizons is generalized to the case of black branes in asymptotically anti-de Sitter spaces in arbitrary dimensions. The results are used to analyze the behavior of both event and apparent horizons in the gravity dual to boost-invariant flow. These considerations are motivated by the fact that at second order in the gradient expansion the hydrodynamic entropy current in the dual Yang-Mills theory appears to contain an ambiguity. This ambiguity, in the case of boost-invariant flow, is linked with a similar freedom on the gravity side. This leads to a phenomenological definition of the entropy of black branes. Some insights on fluid/gravity duality and the definition of entropy in a time-dependent setting are elucidated.

  7. Boost capacity, slash LWBS rate with POD triage system.

    PubMed

    2011-04-01

    With bottlenecks boosting ED wait times as well as the LWBS rate, Methodist Hospital of Sacramento decided to boost its triage capacity by taking over six beds that were being used for fast-track patients, and by taking advantage of waiting-room space for patients who don't need to be placed in beds. Within a month of implementing the new approach, the LWBS rate dropped to less than 2%, and door-to-doc time was slashed by 20 minutes. Under the POD system, providers have 15 minutes to determine whether patients should be discharged, sent back to the waiting room while tests are conducted, or placed in an ED bed where they can be monitored. To implement the approach, no alterations in physician staffing were needed, but the hospital added a triage nurse and a task nurse to manage patient flow of the triage POD.

  8. IMM tracking of a theater ballistic missile during boost phase

    NASA Astrophysics Data System (ADS)

    Hutchins, Robert G.; San Jose, Anthony

    1998-09-01

    Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interacting multiple model (IMM) tracker. In addition, a variation on the IMM to include prior knowledge of a booster cutoff parameter is examined. Simulated data is used to compare algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show significant advantage in tracking through the model transition represented by booster cutoff.
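
    The first of the three tracker families, the alpha-beta-gamma filter, is simple enough to sketch directly (an illustrative implementation with arbitrary gains, not the paper's configuration or tuning):

```python
# Alpha-beta-gamma tracker: fixed-gain prediction/correction of position,
# velocity, and acceleration from position measurements.
def abg_track(measurements, dt, alpha=0.5, beta=0.4, gamma=0.1):
    x, v, a = measurements[0], 0.0, 0.0
    estimates = []
    for z in measurements:
        # predict one step ahead under constant acceleration
        x_pred = x + v * dt + 0.5 * a * dt * dt
        v_pred = v + a * dt
        # correct each state with a fixed fraction of the residual
        r = z - x_pred
        x = x_pred + alpha * r
        v = v_pred + beta * r / dt
        a = a + gamma * r / (0.5 * dt * dt)
        estimates.append(x)
    return estimates

# Noise-free constant-acceleration "boost" trajectory; the gamma term lets
# the filter estimate the acceleration and track with little lag.
dt = 1.0
truth = [0.5 * 3.0 * (k * dt) ** 2 for k in range(20)]
est = abg_track(truth, dt)
print(abs(est[-1] - truth[-1]) < 2.0)  # converges near the true position
```

    An IMM tracker runs several such models (e.g. boosting and ballistic) in parallel and mixes their estimates by model probability, which is what handles the booster-cutoff transition.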

  9. Externally Dispersed Interferometry for Resolution Boosting and Doppler Velocimetry

    SciTech Connect

    Erskine, D J

    2003-12-01

    Externally dispersed interferometry (EDI) is a rapidly advancing technique for wide-bandwidth spectroscopy and radial velocimetry. By placing a small angle-independent interferometer near the slit of an existing spectrograph system, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducial creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry (under a Doppler shift the entire moiré pattern shifts in phase). A demonstration of ~2x resolution boosting (100,000 from 50,000) on the Lick Obs. echelle spectrograph is shown. Preliminary data indicating an ~8x resolution boost (170,000 from 20,000) using multiple delays has been taken on a linear grating spectrograph.
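
    The heterodyning step can be illustrated numerically (a toy demonstration of ours, not data from the report): multiplying a high-frequency spectral feature by a slightly detuned sinusoidal fiducial produces a moiré beat at the difference frequency.

```python
# EDI heterodyning in one dimension: the product of a 400-cycle feature and a
# 390-cycle fiducial contains a 10-cycle moiré beat, a low spatial frequency
# that survives blurring which would erase the original 400-cycle feature.
import numpy as np

N = 4096
x = np.arange(N) / N
f_feature, f_fiducial = 400.0, 390.0     # cycles across the window
feature = 1.0 + 0.5 * np.cos(2 * np.pi * f_feature * x)
fiducial = 1.0 + np.cos(2 * np.pi * f_fiducial * x)

spectrum = np.abs(np.fft.rfft(feature * fiducial))
beat_bin = int(round(f_feature - f_fiducial))

# The moiré beat at |f1 - f2| dominates all other low-frequency bins.
print(int(np.argmax(spectrum[1:100])) + 1)  # 10
```

    Under a Doppler shift the feature frequency changes slightly, and the beat shifts in phase, which is how EDI recovers radial velocity from low-resolution data.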

  10. LINE-ABOVE-GROUND ATTENUATOR

    DOEpatents

    Wilds, R.B.; Ames, J.R.

    1957-09-24

    The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of the line-above-ground-plane type transmission line, a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line, and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control which heretofore has been difficult to achieve.

  11. Motivating quantum field theory: the boosted particle in a box

    NASA Astrophysics Data System (ADS)

    Vutha, Amar C.

    2013-07-01

    It is a maxim often stated, yet rarely illustrated, that the combination of special relativity and quantum mechanics necessarily leads to quantum field theory. An elementary illustration is provided using the familiar particle in a box, boosted to relativistic speeds. It is shown that quantum fluctuations of momentum lead to energy fluctuations, which are inexplicable without a framework that endows the vacuum with dynamical degrees of freedom and allows particle creation/annihilation.

  12. The Voltage Boost Enabled by Luminescence Extraction in Solar Cells

    SciTech Connect

    Ganapati, Vidya; Steiner, Myles A.; Yablonovitch, Eli

    2016-11-21

    A new physical principle has emerged to produce record voltages and efficiencies in photovoltaic cells, 'luminescence extraction.' This is exemplified by the mantra 'a good solar cell should also be a good LED.' Luminescence extraction is the escape of internal photons out of the front surface of a solar cell. Basic thermodynamics says that the voltage boost should be related to the concentration ratio, C, of a resource by ΔV = (kT/q)ln(C). In light trapping (i.e. when the solar cell is textured and has a perfect back mirror) the concentration ratio of photons is C = 4n², so one would expect a voltage boost of ΔV = (kT/q)ln(4n²) over a solar cell with no texture and zero back reflectivity, where n is the refractive index. Nevertheless, there has been ambiguity over the voltage benefit to be expected from perfect luminescence extraction. Do we gain an open-circuit voltage boost of ΔV = (kT/q)ln(n²), ΔV = (kT/q)ln(2n²), or ΔV = (kT/q)ln(4n²)? What is responsible for this voltage ambiguity of (kT/q)ln(4) = 36 mV? We show that different results come about, depending on whether the photovoltaic cell is optically thin or thick to its internal luminescence. In realistic intermediate cases of optical thickness the voltage boost falls in between: ln(n²) < qΔV/kT < ln(4n²).
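
    The quoted 36 mV ambiguity is simple arithmetic that can be checked directly (the refractive index n = 3.5 below is our assumption for illustration, typical of III-V semiconductors):

```python
# Check of the voltage-boost arithmetic: the ambiguity is (kT/q)ln(4), and
# the three candidate boosts are (kT/q)ln(c * n^2) for c = 1, 2, 4.
import math

kT_over_q = 0.02585                     # volts at T ≈ 300 K
n = 3.5

ambiguity_mV = 1000 * kT_over_q * math.log(4)
boosts = {c: 1000 * kT_over_q * math.log(c * n**2) for c in (1, 2, 4)}

print(round(ambiguity_mV, 1))           # 35.8, matching the quoted ~36 mV
for c in (1, 2, 4):
    print(f"ln({c}n^2): {boosts[c]:.1f} mV")
```
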

  13. Chagas Parasite Detection in Blood Images Using AdaBoost

    PubMed Central

    Uc-Cetina, Víctor; Brito-Loeza, Carlos; Ruiz-Piña, Hugo

    2015-01-01

    Chagas disease is a potentially life-threatening illness caused by the protozoan parasite Trypanosoma cruzi. Visual detection of such parasites through microscopic inspection is a tedious and time-consuming task. In this paper, we provide an AdaBoost learning solution to the task of Chagas parasite detection in blood images. We give details of the algorithm and our experimental setup. With this method, we obtain 100% sensitivity and 93.25% specificity. A ROC comparison with the method most commonly used for the detection of malaria parasites, based on support vector machines (SVM), is also provided. Our experimental work shows mainly two things: (1) Chagas parasites can be detected automatically using machine learning methods with high accuracy and (2) AdaBoost + SVM provides better overall detection performance than AdaBoost or SVMs alone. Such results are the best ones known so far for the problem of automatic detection of Chagas parasites through the use of machine learning, computer vision, and image processing methods. PMID:25861375
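
    The boosting idea itself can be sketched in a few lines (a generic AdaBoost with decision-stump weak learners on 1-D toy data; the paper's detector instead boosts classifiers built on image features):

```python
# Minimal AdaBoost with decision stumps: each round picks the threshold rule
# with the lowest weighted error, then reweights examples so that the points
# the current ensemble misclassifies get more attention next round.
import math

def train_adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                              # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in xs:                           # candidate thresholds
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (pol if xi >= t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # reweight: misclassified points gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * yi * (pol if xi >= t else -pol))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data with positives in the middle: no single stump separates it, but
# three boosted stumps classify it perfectly.
xs = [0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9]
ys = [-1, -1, 1, 1, 1, -1, -1]
model = train_adaboost(xs, ys, rounds=3)
print(all(predict(model, x) == y for x, y in zip(xs, ys)))  # True
```
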

  14. Stereotactic Body Radiation Therapy Boost in Locally Advanced Pancreatic Cancer

    SciTech Connect

    Seo, Young Seok; Kim, Mi-Sook; Yoo, Sung Yul; Cho, Chul Koo; Yang, Kwang Mo; Yoo, Hyung Jun; Choi, Chul Won; Lee, Dong Han; Kim, Jin; Kim, Min Suk; Kang, Hye Jin; Kim, YoungHan

    2009-12-01

    Purpose: To investigate the clinical application of a stereotactic body radiation therapy (SBRT) boost in locally advanced pancreatic cancer patients with a focus on local efficacy and toxicity. Methods and Materials: We retrospectively reviewed 30 patients with locally advanced and nonmetastatic pancreatic cancer who had been treated between 2004 and 2006. Follow-up duration ranged from 4 to 41 months (median, 14.5 months). A total dose of 40 Gy was delivered in 20 fractions using a conventional three-field technique, and then a single fraction of 14, 15, 16, or 17 Gy SBRT was administered as a boost without a break. Twenty-one patients received chemotherapy. Overall and local progression-free survival were calculated and prognostic factors were evaluated. Results: One-year overall survival and local progression-free survival rates were 60.0% and 70.2%, respectively. One patient (3%) developed Grade 4 toxicity. Carbohydrate antigen 19-9 response was found to be an independent prognostic factor for survival. Conclusions: Our findings indicate that a SBRT boost provides a safe means of increasing radiation dose. Based on the results of this study, we recommend that a well controlled Phase II study be conducted on locally advanced pancreatic cancer.

  15. Action Classification by Joint Boosting Using Spatiotemporal and Depth Information

    NASA Astrophysics Data System (ADS)

    Ikemura, Sho; Fujiyoshi, Hironobu

    This paper presents a method for action classification using Joint Boosting with depth information obtained by a TOF camera. Our goal is to classify the action of a customer who takes goods from the upper, middle, or lower shelf in supermarkets and convenience stores. Our method detects the human region using Pixel State Analysis (PSA) on the depth image stream obtained by the TOF camera, and extracts PSA features capturing human motion along with depth features (the peak depth value) capturing human height. We employ Joint Boosting, a multi-class boosting method, to perform the action classification. Since the proposed method employs spatiotemporal and depth features, it can simultaneously detect the action of taking goods and classify the height of the shelf. Experimental results show that our method using the PSA feature and the peak depth value achieved a classification rate of 93.2%, a 3.1% higher performance than the CHLAC feature and 2.8% higher than the ST-patch feature.

  16. Perception of straightness and parallelism with minimal distance information.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2016-07-01

    The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface.

  17. Massively-Parallel Dislocation Dynamics Simulations

    SciTech Connect

    Cai, W; Bulatov, V V; Pierce, T G; Hiratani, M; Rhee, M; Bartelt, M; Tang, M

    2003-06-18

    Prediction of the plastic strength of single crystals based on the collective dynamics of dislocations has been a challenge for computational materials science for a number of years. The difficulty lies in the inability of the existing dislocation dynamics (DD) codes to handle a sufficiently large number of dislocation lines, in order to be statistically representative and to reproduce experimentally observed microstructures. A new massively-parallel DD code is developed that is capable of modeling million-dislocation systems by employing thousands of processors. We discuss the general aspects of this code that make such large scale simulations possible, as well as a few initial simulation results.

  18. A mobile mass spectrometer for comprehensive on-line analysis of trace and bulk components of complex gas mixtures: parallel application of the laser-based ionization methods VUV single-photon ionization, resonant multiphoton ionization, and laser-induced electron impact ionization.

    PubMed

    Mühlberger, F; Zimmermann, R; Kettrup, A

    2001-08-01

    A newly developed compact and mobile time-of-flight mass spectrometer (TOFMS) for on-line analysis and monitoring of complex gas mixtures is presented. The instrument is designed for a (quasi-)simultaneous application of three ionization techniques that exhibit different ionization selectivities. The highly selective resonance-enhanced multiphoton ionization (REMPI) technique, using 266-nm UV laser pulses, is applied for selective and fragmentationless ionization of aromatic compounds at trace levels (parts-per-billion volume range). Mass spectra obtained using this technique show the chemical signature solely of monocyclic (benzene, phenols, etc.) and polycyclic (naphthalene, phenanthrene, indole, etc.) aromatic species. Furthermore, the less selective but still fragmentationless single-photon ionization (SPI) technique with 118-nm VUV laser pulses allows the ionization of compounds with an ionization potential below 10.5 eV. Mass spectra obtained using this technique show the profile of most organic compounds (aliphatic and aromatic species, like nonane, acetaldehyde, or pyrrole) and some inorganic compounds (e.g., ammonia, nitrogen monoxide). Finally, the nonselective ionization technique laser-induced electron-impact ionization (LEI) is applied. However, the sensitivity of the LEI technique is adjusted to be fairly low. Thus, the LEI signal in the mass spectra gives information on the inorganic bulk constituents of the sample (i.e., compounds such as water, oxygen, nitrogen, and carbon dioxide). Because the three ionization methods (REMPI, SPI, LEI) exhibit largely different ionization selectivities, the isolated application of each method alone solely provides specific mass spectrometric information about the sample composition. Special techniques have been developed and applied which allow the quasi-parallel use of all three ionization techniques for on-line monitoring purposes.
Thus, a comprehensive characterization of complex samples is feasible jointly using

  19. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  20. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  1. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

  2. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works in parallel with the original system to achieve such an improvement. In this paper, after briefly introducing the all-optical implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.

  3. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a Fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
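The data-parallel structure of PCA that such cluster implementations exploit can be sketched: each node accumulates partial sums over its block of pixels, and only small band-by-band matrices are communicated. An illustrative NumPy sketch with simulated data, not the authors' Beowulf code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hyperspectral cube flattened to 1000 pixels x 50 spectral bands.
X = rng.standard_normal((1000, 50))

# Each "node" tabulates its pixel block; only count, sum, and the band x band
# cross-product matrix need to be combined (cheap communication).
def partial_sums(block):
    return block.shape[0], block.sum(axis=0), block.T @ block

parts = [partial_sums(blk) for blk in np.array_split(X, 4)]  # 4 simulated nodes
n = sum(p[0] for p in parts)
mean = sum(p[1] for p in parts) / n
cov = sum(p[2] for p in parts) / n - np.outer(mean, mean)

# Eigendecomposition of the small band covariance gives the principal components.
evals, evecs = np.linalg.eigh(cov)
reduced = (X - mean) @ evecs[:, ::-1][:, :10]  # keep the top 10 components

# The distributed accumulation matches a direct single-node computation.
assert np.allclose(cov, np.cov(X.T, bias=True))
```

Because only the 50x50 partial matrices cross the network, the interconnect cost the report measures comes from the reduction step, not from shipping pixels.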

  4. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  5. Gut inflammation can boost horizontal gene transfer between pathogenic and commensal Enterobacteriaceae

    PubMed Central

    Stecher, Bärbel; Denzler, Rémy; Maier, Lisa; Bernet, Florian; Sanders, Mandy J.; Pickard, Derek J.; Barthel, Manja; Westendorf, Astrid M.; Krogfelt, Karen A.; Walker, Alan W.; Ackermann, Martin; Dobrindt, Ulrich; Thomson, Nicholas R.; Hardt, Wolf-Dietrich

    2012-01-01

    The mammalian gut harbors a dense microbial community interacting in multiple ways, including horizontal gene transfer (HGT). Pangenome analyses established particularly high levels of genetic flux between Gram-negative Enterobacteriaceae. However, the mechanisms fostering intraenterobacterial HGT are incompletely understood. Using a mouse colitis model, we found that Salmonella-inflicted enteropathy elicits parallel blooms of the pathogen and of resident commensal Escherichia coli. These blooms boosted conjugative HGT of the colicin-plasmid p2 from Salmonella enterica serovar Typhimurium to E. coli. Transconjugation efficiencies of ∼100% in vivo were attributable to high intrinsic p2-transfer rates. Plasmid-encoded fitness benefits contributed little. Under normal conditions, HGT was blocked by the commensal microbiota inhibiting contact-dependent conjugation between Enterobacteriaceae. Our data show that pathogen-driven inflammatory responses in the gut can generate transient enterobacterial blooms in which conjugative transfer occurs at unprecedented rates. These blooms may favor reassortment of plasmid-encoded genes between pathogens and commensals fostering the spread of fitness-, virulence-, and antibiotic-resistance determinants. PMID:22232693

  6. Pharmacodynamics of long-acting folic acid-receptor targeted ritonavir boosted atazanavir nanoformulations

    PubMed Central

    Puligujja, Pavan; Balkundi, Shantanu; Kendrick, Lindsey; Baldridge, Hannah; Hilaire, James; Bade, Aditya N.; Dash, Prasanta K.; Zhang, Gang; Poluektova, Larisa; Gorantla, Santhi; Liu, Xin-Ming; Ying, Tianlei; Feng, Yang; Wang, Yanping; Dimitrov, Dimiter S.; McMillan, JoEllyn M.; Gendelman, Howard E.

    2014-01-01

    Long-acting nanoformulated antiretroviral therapy (nanoART) that targets monocyte-macrophages could improve the drug's half-life and protein-binding capacities while facilitating cell and tissue depots. To this end, ART nanoparticles that target the folic acid (FA) receptor and permit cell-based drug depots were examined using pharmacokinetic and pharmacodynamic (PD) tests. FA receptor-targeted poloxamer 407 nanocrystals, containing ritonavir-boosted atazanavir (ATV/r), significantly affected several therapeutic factors: drug bioavailability increased as much as 5 times and PD activity improved as much as 100 times. Drug particles administered to human peripheral blood lymphocyte reconstituted NOD.Cg-Prkdc(scid)Il2rg(tm1Wjl)/SzJ mice infected with HIV-1ADA at a tissue culture infective dose (TCID50) of 10(4) infectious viral particles/ml led to ATV/r drug concentrations that paralleled FA receptor beta staining in both the macrophage-rich parafollicular areas of spleen and lymph nodes. Drug levels were higher in these tissues than what could be achieved by either native drug or untargeted nanoART particles. The data also mirrored potent reductions in viral loads, tissue viral RNA and numbers of HIV-1p24+ cells in infected and treated animals. We conclude that FA-P407 coating of ART nanoparticles readily facilitates drug carriage and antiretroviral responses. PMID:25522973

  7. Pharmacodynamics of long-acting folic acid-receptor targeted ritonavir-boosted atazanavir nanoformulations.

    PubMed

    Puligujja, Pavan; Balkundi, Shantanu S; Kendrick, Lindsey M; Baldridge, Hannah M; Hilaire, James R; Bade, Aditya N; Dash, Prasanta K; Zhang, Gang; Poluektova, Larisa Y; Gorantla, Santhi; Liu, Xin-Ming; Ying, Tianlei; Feng, Yang; Wang, Yanping; Dimitrov, Dimiter S; McMillan, JoEllyn M; Gendelman, Howard E

    2015-02-01

    Long-acting nanoformulated antiretroviral therapy (nanoART) that targets monocyte-macrophages could improve the drug's half-life and protein-binding capacities while facilitating cell and tissue depots. To this end, ART nanoparticles that target the folic acid (FA) receptor and permit cell-based drug depots were examined using pharmacokinetic and pharmacodynamic (PD) tests. FA receptor-targeted poloxamer 407 nanocrystals, containing ritonavir-boosted atazanavir (ATV/r), significantly increased drug bioavailability and PD by five and 100 times, respectively. Drug particles administered to human peripheral blood lymphocyte reconstituted NOD.Cg-Prkdc(scid)Il2rg(tm1Wjl)/SzJ mice and infected with HIV-1ADA led to ATV/r drug concentrations that paralleled FA receptor beta staining in both the macrophage-rich parafollicular areas of spleen and lymph nodes. Drug levels were higher in these tissues than what could be achieved by either native drug or untargeted nanoART particles. The data also mirrored potent reductions in viral loads, tissue viral RNA and numbers of HIV-1p24+ cells in infected and treated animals. We conclude that FA-P407 coating of ART nanoparticles readily facilitates drug carriage and antiretroviral responses.

  8. Parallel Computational Protein Design

    PubMed Central

    Zhou, Yichao; Donald, Bruce R.; Zeng, Jianyang

    2016-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that is guaranteed to find the global minimum energy conformation (GMEC) is to combine the dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of the large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab [1] to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE [2] and DEEPer [3] to also consider continuous backbone and side-chain flexibility. PMID:27914056
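The A* component can be sketched generically: in the GMEC setting the nodes would be partial rotamer assignments and the heuristic an admissible lower bound on the best completion energy, which is what preserves optimality. A textbook best-first-search sketch, not gOSPREY code:

```python
import heapq

def a_star(start, neighbors, h, is_goal):
    """Generic A*: expand nodes in order of f = g + h via a heap.
    With an admissible h (never overestimates), the first goal popped
    is guaranteed to be optimal -- the property the GMEC search relies on."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return g, path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

The GPU parallelization in gOSPREY mainly accelerates the heuristic evaluation across many frontier nodes at once; the ordering logic above stays the same.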

  9. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  10. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

    by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as example.

  11. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
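For reference, the serial Thomas algorithm whose forward and backward sweeps the pipelined variants interleave across processors can be sketched as follows (illustrative code, not the article's implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, and c the super-diagonal
    (c[-1] unused). The forward elimination and back substitution below
    are the two steps the pipelined algorithms overlap between lines."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each sweep carries a strict recurrence along the line, a single system offers no parallelism; the pipelined schemes discussed above gain efficiency only by staggering many independent lines across processors.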

  12. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  13. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
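The property such parallel engines rely on, that contingency counts merge by simple summation across processes, can be sketched outside of VTK/Titan (hypothetical data; this is not the Titan API):

```python
from collections import Counter

# Each "process" tabulates category pairs over its own slice of the data;
# the partial tables then merge with a plain additive reduction.
def contingency(pairs):
    return Counter(pairs)

data = [("a", "x"), ("a", "y"), ("b", "x"),
        ("a", "x"), ("b", "y"), ("b", "x")]
chunks = [data[:3], data[3:]]                  # two simulated processes
table = sum((contingency(c) for c in chunks), Counter())

assert table[("a", "x")] == 2
assert table == contingency(data)              # identical to serial tabulation
```

The catch the report discusses is that, unlike the small moment matrices of descriptive statistics, the merged table's size grows with the number of distinct category pairs, which is what limits the achievable speed-up.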

  14. Vertical bloch line memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Stadler, Henry L. (Inventor); Wu, Jiin-chuan (Inventor)

    1995-01-01

    A new read gate design for the vertical Bloch line (VBL) memory is disclosed which offers larger operating margin than the existing read gate designs. In the existing read gate designs, a current is applied to all the stripes. The stripes that contain a VBL pair are chopped, while the stripes that do not contain a VBL pair are not chopped. The information is then detected by inspecting the presence or absence of the bubble. The margin of the chopping current amplitude is very small, and sometimes non-existent. A new method of reading Vertical Bloch Line memory is also disclosed. Instead of using the wall chirality to separate the two binary states, the spatial deflection of the stripe head is used. Also disclosed herein is a compact memory which uses vertical Bloch line (VBL) memory technology for providing data storage. A three-dimensional arrangement in the form of stacks of VBL memory layers is used to achieve high volumetric storage density. High data transfer rate is achieved by operating all the layers in parallel. Using Hall effect sensing, and optical sensing via the Faraday effect to access the data from within the three-dimensional packages, an even higher data transfer rate can be achieved due to parallel operation within each layer.

  15. How Citation Boosts Promote Scientific Paradigm Shifts and Nobel Prizes

    PubMed Central

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-01-01

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the “boosting effect” of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying “boost factor” is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract. PMID:21573229

  16. How citation boosts promote scientific paradigm shifts and nobel prizes.

    PubMed

    Mazloumian, Amin; Eom, Young-Ho; Helbing, Dirk; Lozano, Sergi; Fortunato, Santo

    2011-05-04

    Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the "boosting effect" of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying "boost factor" is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.

  17. "With One Lip, With Two Lips"; Parallelism in Nahuatl.

    ERIC Educational Resources Information Center

    Bright, William

    1990-01-01

    Texts in Classical Nahuatl from 1524, in the genre of formal oratory, reveal extensive use of lines showing parallel morphosyntactic and semantic structure. Analysis and translation of a passage point to the applicability of structural analysis to "expressive" as well as "referential" texts; and the importance of understanding…

  18. Single-Phase Boost Rectifier with Snubber Energy Recovery Feature

    NASA Astrophysics Data System (ADS)

    Neba, Yasuhiko; Ishizaka, Kouichi; Matsumoto, Hirokazu; Itoh, Ryozo

    A single-phase boost rectifier with a snubber energy recovery feature, operating under current-mode control with turn-on at a constant clock time, is studied. In this rectifier, a resonant circuit consisting of a small inductor and capacitor is added to the DC circuit. The snubber energy is transferred to an additional resonant capacitor and can then be transferred to the load circuit when the insulated-gate bipolar transistor serving as the active power device is turned off. An experimental prototype was implemented to investigate the operation, and the experimental results confirm the feasibility of the proposed snubber energy recovery scheme.

  19. Boosting alternating decision trees modeling of disease trait information.

    PubMed

    Liu, Kuang-Yu; Lin, Jennifer; Zhou, Xiaobo; Wong, Stephen T C

    2005-12-30

    We applied the alternating decision trees (ADTrees) method to the last 3 replicates from the Aipotu, Danacca, Karangar, and NYC populations in the Problem 2 simulated Genetic Analysis Workshop dataset. Using information from the 12 binary phenotypes and sex as input and Kofendrerd Personality Disorder disease status as the outcome of ADTrees-based classifiers, we obtained a new quantitative trait based on average prediction scores, which was then used for genome-wide quantitative trait linkage (QTL) analysis. ADTrees are machine learning methods that combine boosting and decision trees algorithms to generate smaller and easier-to-interpret classification rules. In this application, we compared four modeling strategies from the combinations of two boosting iterations (log or exponential loss functions) coupled with two choices of tree generation types (a full alternating decision tree or a classic boosting decision tree). These four different strategies were applied to the founders in each population to construct four classifiers, which were then applied to each study participant. To compute the average prediction score for each subject with a specific trait profile, such a process was repeated with 10 runs of 10-fold cross-validation, and standardized prediction scores obtained from the 10 runs were averaged and used in subsequent expectation-maximization Haseman-Elston QTL analyses (implemented in GENEHUNTER) with the approximately 900 SNPs in Hardy-Weinberg equilibrium provided for each population. Our QTL analyses on the basis of four models (a full alternating decision tree and a classic boosting decision tree paired with either log or exponential loss function) detected evidence for linkage (Z ≥ 1.96, p < 0.01) on chromosomes 1, 3, 5, and 9. Moreover, using average iteration and abundance scores for the 12 phenotypes and sex as their relevancy measurements, we found all relevant phenotypes for all four populations except phenotype b for the Karangar population.
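The boosting half of the ADTrees method can be illustrated with plain AdaBoost over one-feature threshold stumps; the accumulated score plays the role of the prediction score averaged in the study. This is a generic sketch with invented data, not the ADTrees/GENEHUNTER pipeline used by the authors:

```python
import numpy as np

def adaboost(X, y, rounds=10):
    """Boost threshold stumps on labels y in {-1, +1}. Each round picks the
    stump (feature, threshold, sign) with the lowest weighted error, then
    reweights the examples with the exponential loss update."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []                                   # (feature, thresh, sign, alpha)
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(X[:, j] <= t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        err = max(err, 1e-10)                    # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((j, t, s, alpha))
    return model

def predict(model, X):
    # The signed sum is the analogue of the prediction score; sign() classifies.
    score = sum(a * np.where(X[:, j] <= t, s, -s) for j, t, s, a in model)
    return np.sign(score)
```

A real alternating decision tree additionally arranges these weak rules into nested prediction and splitter nodes, which is what makes the final rule set easier to interpret.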

  20. Parallel Impurity Spreading During Massive Gas Injection

    NASA Astrophysics Data System (ADS)

    Izzo, V. A.

    2016-10-01

    Extended-MHD simulations of disruption mitigation in DIII-D demonstrate that both pre-existing islands (locked-modes) and plasma rotation can significantly influence toroidal spreading of impurities following massive gas injection (MGI). Given the importance of successful disruption mitigation in ITER and the large disparity in device parameters, empirical demonstrations of disruption mitigation strategies in present tokamaks are insufficient to inspire unreserved confidence for ITER. Here, MHD simulations elucidate how impurities injected as a localized jet spread toroidally and poloidally. Simulations with large pre-existing islands at the q = 2 surface reveal that the magnetic topology strongly influences the rate of impurity spreading parallel to the field lines. Parallel spreading is largely driven by rapid parallel heat conduction, and is much faster at low order rational surfaces, where a short parallel connection length leads to faster thermal equilibration. Consequently, the presence of large islands, which alter the connection length, can slow impurity transport; but the simulations also show that the appearance of a 4/2 harmonic of the 2/1 mode, which breaks up the large islands, can increase the rate of spreading. This effect is seen both for simulations with spontaneously growing and directly imposed 4/2 modes. Given the prevalence of locked-modes as a cause of disruptions, understanding the effect of large islands is of particular importance. Simulations with and without islands also show that rotation can alter impurity spreading, even reversing the predominant direction of spreading, which is toward the high-field-side in the absence of rotation. Given expected differences in rotation for ITER vs. DIII-D, rotation effects are another important consideration when extrapolating experimental results. Work supported by US DOE under DE-FG02-95ER54309.

  1. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
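
The master/worker arrangement described above can be sketched in miniature. This is a stand-in, not the NPARC code: a thread pool plays the worker processes, and `solve_block` is a hypothetical per-block computation (NPARC itself uses explicit message passing such as PVM across machines).

```python
# Sketch of master/worker block distribution; not the NPARC implementation.
from concurrent.futures import ThreadPoolExecutor

def solve_block(block):
    """Hypothetical per-block solve; here it just sums the block's data."""
    name, data = block
    return name, sum(data)

# Four grid blocks of different sizes, as a master might hold them.
blocks = [(f"block{i}", list(range(i * 10))) for i in range(1, 5)]

with ThreadPoolExecutor(max_workers=2) as pool:    # two "worker processes"
    results = dict(pool.map(solve_block, blocks))  # master assigns and collects

print(results["block2"])  # -> 190
```

The uneven block sizes hint at why reblocking matters for this kind of large-grain parallelism: one oversized block can dominate the runtime no matter how many workers are available.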

  2. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  3. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
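
The per-range bucket decomposition shared by both algorithms can be sketched serially; in barrel-sort each bucket would live on its own processor and the scatter phase would be message passing. This is an illustrative sketch, not the paper's code.

```python
# Serial sketch of a bucket decomposition (the core idea behind barrel-sort).

def bucket_sort(keys, n_buckets):
    lo, hi = min(keys), max(keys)
    width = (hi - lo) // n_buckets + 1
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:                    # "scatter": message passing in barrel-sort
        buckets[(k - lo) // width].append(k)
    out = []
    for b in buckets:                 # local sort, one bucket per processor
        out.extend(sorted(b))
    return out

data = [42, 7, 19, 3, 88, 55, 21, 7]
assert bucket_sort(data, 4) == sorted(data)
```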

  4. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  5. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
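
The rsync-like comparison at the heart of the method can be illustrated with block checksums: only blocks whose checksum differs from the template's need to be transmitted and stored. This is a hedged sketch of the idea, not the patented implementation; the block size and hash choice here are arbitrary.

```python
# Sketch: find which checkpoint blocks differ from a template checkpoint.
import hashlib

BLOCK = 4  # toy block size in bytes; real systems use much larger blocks

def checksums(data):
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_blocks(template, current):
    """Indices of the blocks that actually need to be saved."""
    return [i for i, (a, b) in enumerate(zip(checksums(template),
                                             checksums(current))) if a != b]

template = b"AAAABBBBCCCCDDDD"   # previously produced template checkpoint
current  = b"AAAABXBBCCCCDDDD"   # node state: one block changed
print(delta_blocks(template, current))  # -> [1]
```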

  6. A Compensation-Based Optimization Methodology for Gain-Boosted OPAMP

    DTIC Science & Technology

    2004-05-14

A gain-boosted OPAMP design methodology is presented (Jie Yuan and Nabil …; contract N00014-94-1-0931; distribution unlimited). The methodology provides a systematic way of gain-boosted OPAMP optimization in terms of AC response and settling performance. The evolution of the major poles and zeros of the gain-boosted OPAMP is …

  7. Final Technical Report for the BOOST2013 Workshop. Hosted by the University of Arizona

    SciTech Connect

    Johns, Kenneth

    2015-02-20

    BOOST 2013 was the 5th International Joint Theory/Experiment Workshop on Phenomenology, Reconstruction and Searches for Boosted Objects in High Energy Hadron Collisions. It was locally organized and hosted by the Experimental High Energy Physics Group at the University of Arizona and held at Flagstaff, Arizona on August 12-16, 2013. The workshop provided a forum for theorists and experimentalists to present and discuss the latest findings related to the reconstruction of boosted objects in high energy hadron collisions and their use in searches for new physics. This report gives the outcomes of the BOOST 2013 Workshop.

  8. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  9. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.

  10. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
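
The parallelism-adaption loop can be sketched abstractly: keep adding I/O threads while measured throughput improves appreciably, and stop once the I/O sub-system saturates. This is an illustrative control loop, not the IOPA interface; `throughput_at` is a hypothetical measurement probe.

```python
# Sketch of I/O-aware parallelism control; not the IOPA implementation.

def adapt_threads(throughput_at, max_threads=16):
    """throughput_at(n): hypothetical probe returning MB/s with n I/O threads."""
    n, best = 1, throughput_at(1)
    while n < max_threads:
        trial = throughput_at(n + 1)
        if trial <= best * 1.05:     # <5% gain: the I/O path is saturating
            return n
        n, best = n + 1, trial
    return n

# Toy throughput model: the disk saturates around 4 concurrent streams.
model = {1: 100, 2: 190, 3: 260, 4: 300, 5: 302, 6: 298}
assert adapt_threads(lambda n: model[n]) == 4
```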

  11. Hyperdynamics boost factor achievable with an ideal bias potential

    SciTech Connect

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

    Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.
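
For context, the boost factor reported in hyperdynamics is the trajectory average of exp(ΔV/kT), where ΔV is the bias potential sampled along the biased trajectory. The bias values below are illustrative, not taken from the paper.

```python
# Hyperdynamics boost factor = <exp(DeltaV / kT)> over the biased trajectory.
import math

def boost_factor(bias_samples_eV, kT_eV):
    return sum(math.exp(dv / kT_eV) for dv in bias_samples_eV) / len(bias_samples_eV)

kT = 0.025                               # ~300 K, in eV
samples = [0.0, 0.05, 0.10, 0.10, 0.05]  # illustrative bias samples, eV
print(round(boost_factor(samples, kT), 1))  # ~25x boost
```

Even modest bias values (a few kT) give large boosts, because the average is dominated by the exponential of the largest samples.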

  12. Boosting Antimicrobial Peptides by Hydrophobic Oligopeptide End Tags*

    PubMed Central

    Schmidtchen, Artur; Pasupuleti, Mukesh; Mörgelin, Matthias; Davoudi, Mina; Alenfall, Jan; Chalupka, Anna; Malmsten, Martin

    2009-01-01

    A novel approach for boosting antimicrobial peptides through end tagging with hydrophobic oligopeptide stretches is demonstrated. Focusing on two peptides derived from kininogen, GKHKNKGKKNGKHNGWK (GKH17) and HKHGHGHGKHKNKGKKN (HKH17), tagging resulted in enhanced killing of Gram-positive Staphylococcus aureus, Gram-negative Escherichia coli, and fungal Candida albicans. Microbicidal potency increased with tag length, also in plasma, and was larger for Trp and Phe stretches than for aliphatic ones. The enhanced microbicidal effects correlated to a higher degree of bacterial wall rupture. Analogously, tagging promoted peptide binding to model phospholipid membranes and liposome rupture, particularly for anionic and cholesterol-void membranes. Tagged peptides displayed low toxicity, particularly in the presence of serum, and resisted degradation by human leukocyte elastase and by staphylococcal aureolysin and V8 proteinase. The biological relevance of these findings was demonstrated ex vivo and in vivo in porcine S. aureus skin infection models. The generality of end tagging for facile boosting of antimicrobial peptides without the need for post-synthesis modification was also demonstrated. PMID:19398550

  13. Playing tag with ANN: boosted top identification with pattern recognition

    NASA Astrophysics Data System (ADS)

    Almeida, Leandro G.; Backović, Mihailo; Cliche, Mathieu; Lee, Seung J.; Perelstein, Maxim

    2015-07-01

Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a "digital image" of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with pT in the 1100-1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms.
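
The "digital image" preprocessing the tagger relies on reduces to binning energy deposits into calorimeter cells. A minimal sketch, with a toy grid size and coordinates assumed pre-scaled to [0, 1):

```python
# Sketch: turn (eta, phi, energy) deposits into a pixel grid (a "jet image").

def jet_image(deposits, n_eta=4, n_phi=4):
    img = [[0.0] * n_phi for _ in range(n_eta)]
    for eta, phi, energy in deposits:        # coordinates pre-scaled to [0, 1)
        img[int(eta * n_eta)][int(phi * n_phi)] += energy
    return img

deposits = [(0.10, 0.10, 50.0), (0.12, 0.14, 30.0), (0.80, 0.70, 20.0)]
img = jet_image(deposits)
assert img[0][0] == 80.0   # two nearby deposits merge into one bright pixel
assert img[3][2] == 20.0
```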

  14. Boosting the Light: X-ray Physics in Confinement

    ScienceCinema

Röhlsberger, Ralf [HASYLAB/DESY]

    2016-07-12

Remarkable effects are observed if light is confined to dimensions comparable to the wavelength of the light. The lifetime of atomic resonances excited by the radiation is strongly reduced in photonic traps, such as cavities or waveguides. Moreover, one observes an anomalous boost of the intensity scattered from the resonant atoms. These phenomena result from the strong enhancement of the photonic density of states in such geometries. Many of these effects are currently being explored in the regime of visible light due to their relevance for optical information processing. It is thus appealing to study these phenomena also for much shorter wavelengths. This talk illuminates recent experiments where synchrotron x-rays were trapped in planar waveguides to resonantly excite atoms ([57]Fe nuclei) embedded in them. In fact, one observes that the radiative decay of these excited atoms is strongly accelerated. The temporal acceleration of the decay goes along with a strong boost of the radiation coherently scattered from the confined atoms. This can be exploited to obtain a high signal-to-noise ratio from tiny quantities of material, leading to manifold applications in the investigation of nanostructured materials. One application is the use of ultrathin probe layers to image the internal structure of magnetic layer systems.

  15. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.
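
One round of the AdaBoost-style reweighting that drives the progressive kernel construction can be sketched generically (this is textbook AdaBoost, not the authors' exact scheme):

```python
# One AdaBoost round: upweight misclassified samples, then renormalize.
import math

def adaboost_reweight(weights, correct, error):
    alpha = 0.5 * math.log((1 - error) / error)   # weak learner's vote weight
    new = [w * math.exp(-alpha if ok else alpha)
           for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new], alpha

weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]     # the 4th sample was misclassified
weights, alpha = adaboost_reweight(weights, correct, error=0.25)
assert weights[3] == max(weights)       # the next learner focuses on it
```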

  16. A TEG Efficiency Booster with Buck-Boost Conversion

    NASA Astrophysics Data System (ADS)

    Wu, Hongfei; Sun, Kai; Zhang, Junjun; Xing, Yan

    2013-07-01

    A thermoelectric generator (TEG) efficiency booster with buck-boost conversion and power management is proposed as a TEG battery power conditioner suitable for a wide TEG output voltage range. An inverse-coupled inductor is employed in the buck-boost converter, which is used to achieve smooth current with low ripple on both the TEG and battery sides. Furthermore, benefiting from the magnetic flux counteraction of the two windings on the coupled inductor, the core size and power losses of the filter inductor are reduced, which can achieve both high efficiency and high power density. A power management strategy is proposed for this power conditioning system, which involves maximum power point tracking (MPPT), battery voltage control, and battery current control. A control method is employed to ensure smooth switching among different working modes. A modified MPPT control algorithm with improved dynamic and steady-state characteristics is presented and applied to the TEG battery power conditioning system to maximize energy harvesting. A 500-W prototype has been built, and experimental tests carried out on it. The power efficiency of the prototype at full load is higher than 96%, and peak efficiency of 99% is attained.
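
The abstract does not give the modified MPPT algorithm, but the classic perturb-and-observe scheme that such algorithms refine is easy to sketch: keep stepping the operating point in whichever direction last increased the harvested power.

```python
# Textbook perturb-and-observe MPPT (illustrative; not the paper's algorithm).

def perturb_and_observe(power_at, v0, step=0.1, iters=60):
    v, p = v0, power_at(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = power_at(v_next)
        if p_next < p:
            direction = -direction    # power fell: perturb the other way
        v, p = v_next, p_next
    return v

# Toy TEG power curve peaking at v = 2.0 (matched-load condition).
power = lambda v: -(v - 2.0) ** 2 + 4.0
v_mpp = perturb_and_observe(power, v0=0.5)
assert abs(v_mpp - 2.0) <= 0.15   # settles near the maximum power point
```

The steady-state oscillation around the peak (here ±one step) is exactly what the paper's "improved dynamic and steady-state characteristics" aim to reduce.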

  17. Boosting color feature selection for color face recognition.

    PubMed

    Choi, Jae Young; Ro, Yong Man; Plataniotis, Konstantinos N

    2011-05-01

    This paper introduces the new color face recognition (FR) method that makes effective use of boosting learning as color-component feature selection framework. The proposed boosting color-component feature selection framework is designed for finding the best set of color-component features from various color spaces (or models), aiming to achieve the best FR performance for a given FR task. In addition, to facilitate the complementary effect of the selected color-component features for the purpose of color FR, they are combined using the proposed weighted feature fusion scheme. The effectiveness of our color FR method has been successfully evaluated on the following five public face databases (DBs): CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0. Experimental results show that the results of the proposed method are impressively better than the results of other state-of-the-art color FR methods over different FR challenges including highly uncontrolled illumination, moderate pose variation, and small resolution face images.

  18. Hyperdynamics boost factor achievable with an ideal bias potential

    DOE PAGES

    Huang, Chen; Perez, Danny; Voter, Arthur F.

    2015-08-20

Hyperdynamics is a powerful method to significantly extend the time scales amenable to molecular dynamics simulation of infrequent events. One outstanding challenge, however, is the development of the so-called bias potential required by the method. In this work, we design a bias potential using information about all minimum energy pathways (MEPs) out of the current state. While this approach is not suitable for use in an actual hyperdynamics simulation, because the pathways are generally not known in advance, it allows us to show that it is possible to come very close to the theoretical boost limit of hyperdynamics while maintaining high accuracy. We demonstrate this by applying this MEP-based hyperdynamics (MEP-HD) to metallic surface diffusion systems. In most cases, MEP-HD gives boost factors that are orders of magnitude larger than the best existing bias potential, indicating that further development of hyperdynamics bias potentials could have a significant payoff. Lastly, we discuss potential practical uses of MEP-HD, including the possibility of developing MEP-HD into a true hyperdynamics.

  19. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small sized arrays on the conventional computer before attempting to run on a parallel system.

  20. Adaptive optics parallel near-confocal scanning ophthalmoscopy.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2016-08-15

    We present an adaptive optics parallel near-confocal scanning ophthalmoscope (AOPCSO) using a digital micromirror device (DMD). The imaging light is modulated to be a line of point sources by the DMD, illuminating the retina simultaneously. By using a high-speed line camera to acquire the image and using adaptive optics to compensate the ocular wave aberration, the AOPCSO can image the living human eye with cellular level resolution at the frame rate of 100 Hz. AOPCSO has been demonstrated with improved spatial resolution in imaging of the living human retina compared with adaptive optics line scan ophthalmoscopy.

  1. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

This method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated in the form of a model of a neurolike data-processing scheme [1-5]. The approach has a number of advantages as compared with other methods of forming neurolike media (for example, the known methods of forming artificial neural networks). The main advantage of the approach is the use of the multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, which enables the use of such known natural features of the organization of computations as the topographic nature of mapping, simultaneity (parallelism) of signal operation, the inlaid structure of the cortex, the rough hierarchy of the cortex, and a spatially correlated, time-dependent mechanism of perception and training [5].

  2. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  3. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
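
The combination rules the straws stand in for are worth stating numerically:

```python
# Series and parallel resistance combinations (values in ohms).

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

assert series(100, 200) == 300       # series: resistances add
assert parallel(100, 100) == 50.0    # parallel: two equal resistors halve R
print(round(parallel(100, 200), 1))  # -> 66.7
```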

  4. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
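
The attraction or repulsion shown in the demonstration follows the standard result F/L = mu0·I1·I2 / (2·pi·d); the current and spacing below are illustrative.

```python
# Force per unit length between two long parallel wires.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1_A, i2_A, d_m):
    """Magnitude in N/m; attractive for parallel currents, repulsive otherwise."""
    return MU0 * i1_A * i2_A / (2 * math.pi * d_m)

f = force_per_length(10, 10, 0.01)  # 10 A in each wire, 1 cm apart
print(f)  # ~2e-3 N/m
```

At household-scale currents the force is tiny, which is why classroom versions of this demonstration need a high-current supply.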

  5. Parallel programming of industrial applications

    SciTech Connect

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).

  6. Distinguishing serial and parallel parsing.

    PubMed

    Gibson, E; Pearlmutter, N J

    2000-03-01

This paper discusses ways of determining whether the human parser is serial, maintaining at most one structural interpretation at each parse state, or whether it is parallel, maintaining more than one structural interpretation in at least some circumstances. We make four points. The first two concern counterclaims made by Lewis (2000): (1) that the availability of alternative structures should not vary as a function of the disambiguating material in some ranked parallel models; and (2) that parallel models predict a slow-down during the ambiguous region for more syntactically ambiguous structures. Our other points concern potential methods for seeking experimental evidence relevant to the serial/parallel question. We discuss effects of the plausibility of a secondary structure in the ambiguous region (Pearlmutter & Mendelsohn, 1999) and suggest examining the distribution of reaction times in the disambiguating region.

  7. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

In advanced robot control problems, on-line computation of inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
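
The inverse differential kinematic computation being parallelized can be shown on a smaller example than the 6-DOF PUMA: a planar 2R arm, whose 2x2 Jacobian inverts in closed form. This is a hypothetical illustration of the computation itself, not the paper's systolic algorithm.

```python
# Joint rates from end-effector velocity for a planar 2R arm (illustrative).
import math

def joint_rates(th1, th2, l1, l2, vx, vy):
    """Solve J * [dth1, dth2]^T = [vx, vy]^T by Cramer's rule."""
    s1, c1 = math.sin(th1), math.cos(th1)
    s12, c12 = math.sin(th1 + th2), math.cos(th1 + th2)
    # Jacobian of the forward map (x, y) = (l1*c1 + l2*c12, l1*s1 + l2*s12)
    j11, j12 = -l1 * s1 - l2 * s12, -l2 * s12
    j21, j22 = l1 * c1 + l2 * c12, l2 * c12
    det = j11 * j22 - j12 * j21   # nonzero away from singular poses (th2 != 0, pi)
    dth1 = (j22 * vx - j12 * vy) / det
    dth2 = (-j21 * vx + j11 * vy) / det
    return dth1, dth2

dth1, dth2 = joint_rates(0.3, 0.8, 1.0, 1.0, vx=0.1, vy=0.0)
```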

  8. Three-dimensional parallel vortex rings in Bose-Einstein condensates

    SciTech Connect

    Crasovan, Lucian-Cornel; Perez-Garcia, Victor M.; Danaila, Ionut; Mihalache, Dumitru; Torner, Lluis

    2004-09-01

    We construct three-dimensional structures of topological defects hosted in trapped wave fields, in the form of vortex stars, vortex cages, parallel vortex lines, perpendicular vortex rings, and parallel vortex rings, and we show that the latter exist as robust stationary, collective states of nonrotating Bose-Einstein condensates. We discuss the stability properties of excited states containing several parallel vortex rings hosted by the condensate, including their dynamical and structural stability.

  9. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  10. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

PARALLEL ALGORITHMS FOR IMAGE ANALYSIS. Technical report TR-1180; author: Azriel Rosenfeld; grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  11. Debugging in a parallel environment

    SciTech Connect

    Wasserman, H.J.; Griffin, J.H.

    1985-01-01

    This paper describes the preliminary results of a project investigating approaches to dynamic debugging in parallel processing systems. Debugging programs in a multiprocessing environment is particularly difficult because of potential errors in synchronization of tasks, data dependencies, sharing of data among tasks, and irreproducibility of specific machine instruction sequences from one job to the next. The basic methodology involved in predicate-based debuggers is given as well as other desirable features of dynamic parallel debugging. 13 refs.

  12. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
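
The efficiency figures quoted in scaling studies like this one are the standard metrics S = T1/Tp and E = S/p; the timings below are illustrative, not from the paper.

```python
# Parallel speedup S = T_serial / T_parallel and efficiency E = S / p.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

# Illustrative: 16 slave processors cut a 320 s search to 25 s.
assert speedup(320.0, 25.0) == 12.8
assert efficiency(320.0, 25.0, 16) == 0.8   # 80% parallel efficiency
```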

  13. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future construct to a function call causes that call to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped down version of EMYCIN.
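The Multilisp future construct evaluates an expression in a parallel task while the caller continues; touching the value blocks until it is ready. A loosely analogous sketch using Python's standard concurrent.futures (an analogy only, not Multilisp and not the Backpac/Datapac code):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_rule(n):
    # Stand-in for a rule-evaluation task spawned by the reasoner.
    return n * n

with ThreadPoolExecutor() as pool:
    # submit() plays the role of future: each call becomes a parallel task.
    futures = [pool.submit(evaluate_rule, n) for n in range(4)]
    # result() plays the role of touching the future: block until ready.
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```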

  14. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size.

  15. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework

    PubMed Central

    Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155
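The Phred-scaled, Bayesian-style mapping quality described above can be sketched generically: the posterior probability that the best hit is the true origin is the best hit's likelihood over the sum across all candidate loci, and quality is -10·log10(1 - posterior). This is a textbook sketch with hypothetical likelihood values and a flat prior over loci, not AlignerBoost's actual estimator:

```python
import math

def mapping_quality(log10_likelihoods, cap=60.0):
    """Phred-scaled posterior of the best hit among candidate alignments.

    log10_likelihoods: one log10 alignment likelihood per candidate locus
    (hypothetical values); a flat prior over loci is assumed.
    """
    m = max(log10_likelihoods)
    # Normalize in linear space relative to the best hit for stability;
    # the best hit then has weight exactly 1.0.
    weights = [10 ** (ll - m) for ll in log10_likelihoods]
    posterior = 1.0 / sum(weights)
    if posterior >= 1.0:
        return cap
    return min(cap, -10.0 * math.log10(1.0 - posterior))

# A uniquely mapping read vs. a read torn between two similar repeat copies.
print(mapping_quality([-2.0]))        # single hit -> quality capped at 60
print(mapping_quality([-2.0, -2.3]))  # ambiguous  -> low quality (~4.8)
```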

  16. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Boosting oncolytic adenovirus potency with magnetic nanoparticles and magnetic force.

    PubMed

    Tresilwised, Nittaya; Pithayanukul, Pimolpan; Mykhaylyk, Olga; Holm, Per Sonne; Holzmüller, Regina; Anton, Martina; Thalhammer, Stefan; Adigüzel, Denis; Döblinger, Markus; Plank, Christian

    2010-08-02

    Oncolytic adenoviruses rank among the most promising innovative agents in cancer therapy. We examined the potential of boosting the efficacy of the oncolytic adenovirus dl520 by associating it with magnetic nanoparticles and magnetic-field-guided infection in multidrug-resistant (MDR) cancer cells in vitro and upon intratumoral injection in vivo. The virus was complexed by self-assembly with core-shell nanoparticles having a magnetite core of about 10 nm and stabilized by a shell containing 68 mass % lithium 3-[2-(perfluoroalkyl)ethylthio]propionate and 32 mass % 25 kDa branched polyethylenimine. Optimized virus binding, sufficiently stable in 50% fetal calf serum, was found at nanoparticle-to-virus ratios of 5 fg of Fe per physical virus particle (VP) and above. As estimated from magnetophoretic mobility measurements, 3,600 to 4,500 magnetite nanocrystallites were associated per virus particle. Ultrastructural analysis by electron and atomic force microscopy showed structurally intact viruses surrounded by magnetic particles that occasionally bridged several virus particles. Viral uptake into cells at a given virus dose was enhanced 10-fold compared to nonmagnetic virus when infections were carried out under the influence of a magnetic field. Increased virus internalization resulted in a 10-fold enhancement of the oncolytic potency in terms of the dose required for killing 50% of the target cells (IC(50) value) and an enhancement of 4 orders of magnitude in virus progeny formation at equal input virus doses compared to nonmagnetic viruses. Furthermore, the full oncolytic effect developed within two days postinfection compared with six days in a nonmagnetic virus as a reference. Plotting target cell viability versus internalized virus particles for magnetic and nonmagnetic virus showed that the inherent oncolytic productivity of the virus remained unchanged upon association with magnetic nanoparticles. Hence, we conclude that the mechanism of boosting the

  19. Enhanced algorithm performance for land cover classification from remotely sensed data using bagging and boosting

    USGS Publications Warehouse

    Chan, J.C.-W.; Huang, C.; DeFries, R.

    2001-01-01

    Two ensemble methods, bagging and boosting, were investigated for improving algorithm performance. Our results confirmed the theoretical explanation [1] that bagging improves unstable, but not stable, learning algorithms. While boosting enhanced accuracy of a weak learner, its behavior is subject to the characteristics of each learning algorithm.
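The boosting half of the comparison can be illustrated with a minimal AdaBoost over decision stumps: each round up-weights the examples the previous weak learners got wrong, so the next stump focuses on them. This is a generic sketch of the reweighting idea on toy 1-D data, not the land-cover experiment of the record:

```python
import math

def stump(threshold, sign):
    # 1-D decision stump: predict `sign` when x > threshold, else -sign.
    return lambda x: sign if x > threshold else -sign

def adaboost(data, rounds=5):
    """Minimal AdaBoost: reweight examples so later stumps focus on the
    examples earlier stumps misclassified (toy illustration of boosting)."""
    w = [1.0 / len(data)] * len(data)
    ensemble = []
    for _ in range(rounds):
        # Greedily pick the stump with the lowest weighted error.
        best, best_err = None, float("inf")
        for t in sorted({x for x, _ in data}):
            for s in (1, -1):
                h = stump(t, s)
                err = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
                if err < best_err:
                    best, best_err = h, err
        alpha = 0.5 * math.log((1.0 - best_err) / max(best_err, 1e-10))
        ensemble.append((alpha, best))
        # Up-weight misclassified examples, down-weight correct ones.
        w = [wi * math.exp(-alpha * y * best(x)) for wi, (x, y) in zip(w, data)]
        total = sum(w)
        w = [wi / total for wi in w]
    # Final classifier: weighted vote of the weak learners.
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

# Toy labeled points that no single stump can separate.
data = [(0, -1), (1, -1), (2, 1), (3, 1), (4, -1), (5, 1)]
clf = adaboost(data)
print(sum(clf(x) == y for x, y in data), "of", len(data), "correct")
```

Bagging, by contrast, trains each weak learner on an independent bootstrap resample with uniform weights, which is why it stabilizes high-variance learners but cannot sharpen a stable one.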

  20. The economics of parallel trade.

    PubMed

    Danzon, P M

    1998-03-01

    The potential for parallel trade in the European Union (EU) has grown with the accession of low price countries and the harmonisation of registration requirements. Parallel trade implies a conflict between the principle of autonomy of member states to set their own pharmaceutical prices, the principle of free trade and the industrial policy goal of promoting innovative research and development (R&D). Parallel trade in pharmaceuticals does not yield the normal efficiency gains from trade because countries achieve low pharmaceutical prices by aggressive regulation, not through superior efficiency. In fact, parallel trade reduces economic welfare by undermining price differentials between markets. Pharmaceutical R&D is a global joint cost of serving all consumers worldwide; it accounts for roughly 30% of total costs. Optimal (welfare maximising) pricing to cover joint costs (Ramsey pricing) requires setting different prices in different markets, based on inverse demand elasticities. By contrast, parallel trade and regulation based on international price comparisons tend to force price convergence across markets. In response, manufacturers attempt to set a uniform 'euro' price. The primary losers from 'euro' pricing will be consumers in low income countries who will face higher prices or loss of access to new drugs. In the long run, even higher income countries are likely to be worse off with uniform prices, because fewer drugs will be developed. One policy option to preserve price differentials is to exempt on-patent products from parallel trade. An alternative is confidential contracting between individual manufacturers and governments to provide country-specific ex post discounts from the single 'euro' wholesale price, similar to rebates used by managed care in the US. This would preserve differentials in transactions prices even if parallel trade forces convergence of wholesale prices.
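The Ramsey (inverse-elasticity) rule referred to above sets the markup over marginal cost in each market as (p - c)/p = k/ε, so less elastic markets bear more of the joint R&D cost. A small illustration with hypothetical numbers (not estimates from the article):

```python
def ramsey_price(marginal_cost, elasticity, k):
    """Inverse-elasticity (Ramsey) markup: (p - c)/p = k / elasticity.

    k scales the markups so total markup revenue covers the joint (R&D)
    cost; elasticity is the absolute value of demand elasticity.
    Requires k < elasticity. All numbers here are hypothetical.
    """
    return marginal_cost / (1.0 - k / elasticity)

c, k = 10.0, 0.5
for market, eps in [("low-elasticity (affluent) market", 1.25),
                    ("high-elasticity (price-sensitive) market", 5.0)]:
    print(f"{market}: price = {ramsey_price(c, eps, k):.2f}")
```

The low-elasticity market ends up at 16.67 versus 11.11 in the price-sensitive one; forcing both to a single "euro" price erases exactly this differential, which is the welfare loss the abstract describes.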

  1. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
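The "Krylov" ingredient above hinges on never forming the Jacobian: its action on a vector is approximated by a finite difference of the nonlinear residual, J(u)·v ≈ (F(u + εv) - F(u))/ε. A minimal sketch on a toy two-unknown residual (illustrative problem and ε choice, not the project's flow codes):

```python
def residual(u):
    # Toy nonlinear residual F(u) = 0 in two unknowns.
    x, y = u
    return [x * x + y - 3.0, x + y * y - 5.0]

def jacobian_vector_product(F, u, v, eps=1e-7):
    """Matrix-free J(u)·v ≈ (F(u + eps*v) - F(u)) / eps."""
    fu = F(u)
    fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(fp, fu)]

u = [1.0, 2.0]
v = [1.0, 0.0]
# Analytic Jacobian at u is [[2x, 1], [1, 2y]] = [[2, 1], [1, 4]],
# so J·v is its first column, [2, 1].
print(jacobian_vector_product(residual, u, v))
```

An inner Krylov method (e.g. GMRES) only ever calls such a product, which is why the scheme parallelizes with neighbor-only data exchange.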

  2. Vaccination of Mice Using the West Nile Virus E-Protein in a DNA Prime-Protein Boost Strategy Stimulates Cell-Mediated Immunity and Protects Mice against a Lethal Challenge

    PubMed Central

    De Filette, Marina; Soehle, Silke; Ulbert, Sebastian; Richner, Justin; Diamond, Michael S.; Sinigaglia, Alessandro; Barzon, Luisa; Roels, Stefan; Lisziewicz, Julianna; Lorincz, Orsolya; Sanders, Niek N.

    2014-01-01

    West Nile virus (WNV) is a mosquito-borne flavivirus that is endemic in Africa, the Middle East, Europe and the United States. There is currently no antiviral treatment or human vaccine available to treat or prevent WNV infection. DNA plasmid-based vaccines represent a new approach for controlling infectious diseases. In rodents, DNA vaccines have been shown to induce B cell and cytotoxic T cell responses and protect against a wide range of infections. In this study, we formulated a plasmid DNA vector expressing the ectodomain of the E-protein of WNV into nanoparticles by using linear polyethyleneimine (lPEI) covalently bound to mannose and examined the potential of this vaccine to protect against lethal WNV infection in mice. Mice were immunized twice (prime-boost regime) with the WNV DNA vaccine formulated with lPEI-mannose using different administration routes (intramuscular, intradermal and topical). In parallel, a heterologous boost with purified recombinant WNV envelope (E) protein was evaluated. While no significant E-protein specific humoral response was generated after DNA immunization, protein boosting of DNA-primed mice resulted in a marked increase in total neutralizing antibody titer. In addition, E-specific IL-4 T-cell immune responses were detected by ELISPOT after protein boost and CD8+ specific IFN-γ expression was observed by flow cytometry. Challenge experiments using the heterologous immunization regime revealed protective immunity to homologous and virulent WNV infection. PMID:24503579

  3. Investigation of the Centaur boost pump overspeed condition at main engine shutdown on the Titan Centaur TC-2 flight

    NASA Technical Reports Server (NTRS)

    Baud, K. W.

    1975-01-01

    An investigation was conducted to evaluate a potential boost pump overspeed condition which could exist on the Titan/Centaur launch vehicle after main engine shut-off. Preliminary analyses indicated that the acceleration imparted to the unloaded boost pump-turbine assembly, caused by purging residual hydrogen peroxide from the turbine supply lines, could result in a pump-turbine overspeed. Previous test experience indicated that turbine damage occurs at speeds in excess of 75,000 rpm. Detailed theoretical analyses, in conjunction with pump tests, were conducted to establish the maximum pump-turbine speed at main engine shut-off. The analyses predicted a maximum speed of 68,000 rpm. Testing showed the pump-turbine speed to be 66,700 rpm in the overspeed condition. Inasmuch as both the analysis and tests showed the overspeed to be sufficiently less than the speed at which damage could occur, it was concluded that no corrective action would be required for the launch vehicle.

  4. Prediction and control of limit cycling motions in boosting rockets

    NASA Astrophysics Data System (ADS)

    Newman, Brett

    An investigation concerning the prediction and control of observed limit cycling behavior in a boosting rocket is considered. The suspected source of the nonlinear behavior is the presence of Coulomb friction in the nozzle pivot mechanism. A classical sinusoidal describing function analysis is used to accurately recreate and predict the observed oscillatory characteristic. In so doing, insight is offered into the limit cycling mechanism and confidence is gained in the closed-loop system design. Nonlinear simulation results are further used to support and verify the results obtained from describing function theory. Insight into the limit cycling behavior is, in turn, used to adjust control system parameters in order to passively control the oscillatory tendencies. Tradeoffs with the guidance and control system stability/performance are also noted. Finally, active control of the limit cycling behavior, using a novel feedback algorithm to adjust the inherent nozzle sticking-unsticking characteristics, is considered.
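The describing-function prediction sketched above can be illustrated for a relay-type nonlinearity, the standard sinusoidal-input model for Coulomb friction of level M, whose describing function is N(A) = 4M/(πA). A limit cycle is predicted at the frequency where the linear loop phase crosses -180°, with amplitude A solving N(A)·|G(jω)| = 1. The plant below is a made-up third-order example, not the rocket model of the study:

```python
import math

def relay_describing_function(M, A):
    """Sinusoidal describing function of a relay/Coulomb nonlinearity."""
    return 4.0 * M / (math.pi * A)

def G(w):
    """|G(jw)| and phase (rad) for G(s) = K / (s (s + 1)(s + 2)), K = 6."""
    K = 6.0
    mag = K / (w * math.hypot(w, 1.0) * math.hypot(w, 2.0))
    phase = -(math.pi / 2 + math.atan(w) + math.atan(w / 2.0))
    return mag, phase

# For this plant the phase crosses -180 degrees at w = sqrt(2) rad/s.
w180 = math.sqrt(2.0)
mag, phase = G(w180)
M = 0.05                       # hypothetical friction (relay) level
A = 4.0 * M * mag / math.pi    # amplitude where N(A) * |G(jw180)| = 1
print(f"predicted limit cycle: w = {w180:.3f} rad/s, amplitude = {A:.4f}")
```

Passive control of the oscillation then amounts to reshaping G so that 4M|G(jω180)|/π, and hence the predicted amplitude, shrinks.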

  5. Usefulness of effective field theory for boosted Higgs production

    SciTech Connect

    Dawson, S.; Lewis, I. M.; Zeng, Mao

    2015-04-07

    The Higgs + jet channel at the LHC is sensitive to the effects of new physics both in the total rate and in the transverse momentum distribution at high pT. We examine the production process using an effective field theory (EFT) language and discuss the possibility of determining the nature of the underlying high-scale physics from boosted Higgs production. The effects of heavy color triplet scalars and top partner fermions with TeV scale masses are considered as examples and Higgs-gluon couplings of dimension-5 and dimension-7 are included in the EFT. As a byproduct of our study, we examine the region of validity of the EFT. Dimension-7 contributions in realistic new physics models give effects in the high pT tail of the Higgs signal which are so tiny that they are likely to be unobservable.

  6. Syntactic priming during sentence comprehension: evidence for the lexical boost.

    PubMed

    Traxler, Matthew J; Tooley, Kristen M; Pickering, Martin J

    2014-07-01

    Syntactic priming occurs when structural information from one sentence influences processing of a subsequently encountered sentence (Bock, 1986; Ledoux et al., 2007). This article reports 2 eye-tracking experiments investigating the effects of a prime sentence on the processing of a target sentence that shared aspects of syntactic form. The experiments were designed to determine the degree to which lexical overlap between prime and target sentences produced larger effects, comparable to the widely observed "lexical boost" in production experiments (Pickering & Branigan, 1998; Pickering & Ferreira, 2008). The current experiments showed that priming effects during online comprehension were in fact larger when a verb was repeated across the prime and target sentences (see also Tooley et al., 2009). The finding of larger priming effects with lexical repetition supports accounts under which syntactic form representations are connected to individual lexical items (e.g., Tomasello, 2003; Vosse & Kempen, 2000, 2009).

  7. Adaptive guidance for an aero-assisted boost vehicle

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Taylor, Lawrence W., Jr.; Price, Douglas B.

    1988-01-01

    An adaptive guidance system incorporating dynamic pressure constraint is studied for a single stage to low earth orbit (LEO) aero-assist booster with thrust gimbal angle as the control variable. To derive an adaptive guidance law, cubic spline functions are used to represent the ascent profile. The booster flight to LEO is divided into initial and terminal phases. In the initial phase, the ascent profile is continuously updated to maximize the performance of the boost vehicle enroute. A linear feedback control is used in the terminal phase to guide the aero-assisted booster onto the desired LEO. The computer simulation of the vehicle dynamics considers a rotating spherical earth, inverse square (Newtonian) gravity field and an exponential model for the earth's atmospheric density. This adaptive guidance algorithm is capable of handling large deviations in both atmospheric conditions and modeling uncertainties, while ensuring maximum booster performance.

  8. Link prediction boosted psychiatry disorder classification for functional connectivity network

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    A functional connectivity network (FCN) is an effective tool for classifying psychiatric disorders, and represents the cross-correlation of regional blood oxygenation level dependent signals. However, an FCN is often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls from incomplete FCNs, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm that combines these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's disease, schizophrenia and attention deficit hyperactivity disorder. The experimental results show our method not only significantly improves the classification accuracy, but also efficiently reconstructs the incomplete FCN.
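The feature-extraction step named above, local clustering coefficients of the (repaired) network, can be sketched as follows; the graph and adjacency-matrix format are illustrative, not the authors' pipeline:

```python
def clustering_coefficient(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected
    (undirected graph given as a 0/1 adjacency matrix)."""
    nbrs = [j for j, linked in enumerate(adj[node]) if linked and j != node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if adj[nbrs[i]][nbrs[j]])
    return 2.0 * links / (k * (k - 1))

# Tiny example: a triangle (0, 1, 2) plus a pendant node 3 attached to 0.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
features = [clustering_coefficient(adj, n) for n in range(len(adj))]
print(features)  # [0.333..., 1.0, 1.0, 0.0]
```

One such feature vector per subject network would then feed the weak classifiers that boosting combines.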

  9. Boosting thermoelectric efficiency using time-dependent control

    PubMed Central

    Zhou, Hangbo; Thingna, Juzar; Hänggi, Peter; Wang, Jian-Sheng; Li, Baowen

    2015-01-01

    Thermoelectric efficiency is defined as the ratio of power delivered to the load of a device to the rate of heat flow from the source. To date, it has been studied in the presence of thermodynamic constraints set by the Onsager reciprocal relation and the second law of thermodynamics, which severely bottleneck the thermoelectric efficiency. In this study, we propose a pathway to bypass these constraints using time-dependent control and present a theoretical framework to study dynamic thermoelectric transport in the far-from-equilibrium regime. The presence of a control yields the sought-after substantial efficiency enhancement and, importantly, a significant amount of the power supplied by the control is utilised to convert the wasted heat energy into useful electric energy. Our findings are robust against nonlinear interactions and suggest that external time-dependent forcing, which can be incorporated with existing devices, provides a beneficial scheme to boost thermoelectric efficiency. PMID:26464021

  10. A mechatronic power boosting design for piezoelectric generators

    SciTech Connect

    Liu, Haili; Liang, Junrui; Ge, Cong

    2015-10-05

    It was shown that the piezoelectric power generation can be boosted by using the synchronized switch power conditioning circuits. This letter reports a self-powered and self-sensing mechatronic design in substitute of the auxiliary electronics towards a compact and universal synchronized switch solution. The design criteria are derived based on the conceptual waveforms and a two-degree-of-freedom analytical model. Experimental result shows that, compared to the standard bridge rectifier interface, the mechatronic design leads to an extra 111% increase of generated power from the prototyped piezoelectric generator under the same deflection magnitude excitation. The proposed design has introduced a valuable physical insight of electromechanical synergy towards the improvement of piezoelectric power generation.

  11. Writing about testing worries boosts exam performance in the classroom.

    PubMed

    Ramirez, Gerardo; Beilock, Sian L

    2011-01-14

    Two laboratory and two randomized field experiments tested a psychological intervention designed to improve students' scores on high-stakes exams and to increase our understanding of why pressure-filled exam situations undermine some students' performance. We expected that sitting for an important exam leads to worries about the situation and its consequences that undermine test performance. We tested whether having students write down their thoughts about an upcoming test could improve test performance. The intervention, a brief expressive writing assignment that occurred immediately before taking an important test, significantly improved students' exam scores, especially for students habitually anxious about test taking. Simply writing about one's worries before a high-stakes exam can boost test scores.

  12. Defined three-dimensional microenvironments boost induction of pluripotency

    NASA Astrophysics Data System (ADS)

    Caiazzo, Massimiliano; Okawa, Yuya; Ranga, Adrian; Piersigilli, Alessandra; Tabata, Yoji; Lutolf, Matthias P.

    2016-03-01

    Since the discovery of induced pluripotent stem cells (iPSCs), numerous approaches have been explored to improve the original protocol, which is based on a two-dimensional (2D) cell-culture system. Surprisingly, nothing is known about the effect of a more biologically faithful 3D environment on somatic-cell reprogramming. Here, we report a systematic analysis of how reprogramming of somatic cells occurs within engineered 3D extracellular matrices. By modulating microenvironmental stiffness, degradability and biochemical composition, we have identified a previously unknown role for biophysical effectors in the promotion of iPSC generation. We find that the physical cell confinement imposed by the 3D microenvironment boosts reprogramming through an accelerated mesenchymal-to-epithelial transition and increased epigenetic remodelling. We conclude that 3D microenvironmental signals act synergistically with reprogramming transcription factors to increase somatic plasticity.

  13. Metabolic engineering of resveratrol and other longevity boosting compounds.

    SciTech Connect

    Wang, Y; Chen, H; Yu, O

    2010-09-16

    Resveratrol, a compound commonly found in red wine, has attracted much attention recently. It is a diphenolic natural product accumulated in grapes and a few other species under stress conditions. It possesses a special ability to increase the life span of eukaryotic organisms, ranging from yeast, to fruit fly, to obese mouse. The demand for resveratrol as a food and nutrition supplement has increased significantly in recent years. Extensive work has been carried out to increase the production of resveratrol in plants and microbes. In this review, we will discuss the biosynthetic pathway of resveratrol and engineering methods to heterologously express the pathway in various organisms. We will outline the shortcomings and limitations of common engineering efforts. We will also discuss briefly the features and engineering challenges of other longevity boosting compounds.

  14. An update on Shankhpushpi, a cognition-boosting Ayurvedic medicine.

    PubMed

    Sethiya, Neeraj Kumar; Nahata, Alok; Mishra, Sri Hari; Dixit, Vinod Kumar

    2009-11-01

    Shankhpushpi is an Ayurvedic drug used for its action on the central nervous system, especially for boosting memory and improving intellect. The quantum of information gained from Ayurvedic and other Sanskrit literature revealed the existence of four different plant species under the name of Shankhpushpi, which is used in various Ayurvedic prescriptions described in ancient texts, singly or in combination with other herbs. The sources comprise entire herbs of the following botanicals, viz., Convulvulus pluricaulis Choisy. (Convulvulaceae), Evolvulus alsinoides Linn. (Convulvulaceae), Clitoria ternatea Linn. (Papilionaceae) and Canscora decussata Schult. (Gentianaceae). A review of the available scientific information in terms of pharmacognostical characteristics, chemical constituents, pharmacological activities, and preclinical and clinical applications of the controversial sources of Shankhpushpi is prepared with a view to reviewing the scientific work undertaken on Shankhpushpi. It may provide parameters of differentiation and permit appreciation of the variability of drug action arising from the use of different botanical sources.

  15. Buck-Buck-Boost Regulator (B3R)

    NASA Astrophysics Data System (ADS)

    Mourra, Olivier; Fernandez, Arturo; Landstroem, Sven; Tonicello, Ferdinando

    2011-10-01

    In a satellite, the main function of a Power Conditioning Unit (PCU) is to manage the energy coming from several power sources (usually solar arrays and battery) and to deliver it continuously to the users in an appropriate form during the overall mission. The objective of this paper is to present an electronic switching DC-DC converter called the Buck-Buck-Boost Regulator (B3R) that could be used as a modular and recurrent solution in a PCU for regulated or unregulated 28 V satellite power bus classes. The power conversion stages of the B3R topology are first described. Then theoretical equations and practical tests illustrate how the converter operates in terms of power conversion, control loop performance and efficiency. The paper finally provides some examples of single point failure tolerant implementation using the B3R.
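The buck and boost stages such a converter combines have well-known ideal steady-state conversion ratios in continuous conduction mode: Vout = D·Vin for the buck and Vout = Vin/(1 - D) for the boost, with D the switch duty cycle. These are textbook lossless-converter formulas, not the B3R's specific design equations:

```python
def buck(v_in, d):
    """Ideal buck stage: Vout = D * Vin, steps the input voltage down."""
    return v_in * d

def boost(v_in, d):
    """Ideal boost stage: Vout = Vin / (1 - D), steps the input voltage up."""
    return v_in / (1.0 - d)

# Regulating a 28 V bus from a varying solar-array/battery voltage
# (hypothetical input voltages):
print(buck(34.0, 28.0 / 34.0))          # input above 28 V -> buck down
print(boost(22.0, 1.0 - 22.0 / 28.0))   # input below 28 V -> boost up
```

Covering both cases with one topology is what lets a single modular stage serve regulated and unregulated 28 V bus classes.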

  16. Traction drive for cryogenic boost pump. [hydrogen oxygen rocket engines

    NASA Technical Reports Server (NTRS)

    Meyer, S.; Connelly, R. E.

    1981-01-01

    Two versions of a Nasvytis multiroller traction drive were tested in liquid oxygen for possible application as cryogenic boost pump speed reduction drives for advanced hydrogen-oxygen rocket engines. The roller drive, with a 10.8:1 reduction ratio, was successfully run at up to 70,000 rpm input speed and up to 14.9 kW (20 hp) input power level. Three drive assemblies were tested for a total of about three hours of which approximately one hour was at nominal full speed and full power conditions. Peak efficiency of 60 percent was determined. There was no evidence of slippage between rollers for any of the conditions tested. The ball drive, a version using balls instead of one row of rollers, and having a 3.25:1 reduction ratio, failed to perform satisfactorily.

  17. Measuring Intuition: Nonconscious Emotional Information Boosts Decision Accuracy and Confidence.

    PubMed

    Lufityanto, Galang; Donkin, Chris; Pearson, Joel

    2016-05-01

The notion of intuition has long garnered attention both academically and in popular culture. Although most people agree that there is such a phenomenon as intuition, involving emotionally charged, rapid, unconscious processes, little compelling evidence supports this notion. Here, we introduce a technique in which subliminal emotional information is presented to subjects while they make fully conscious sensory decisions. Our behavioral and physiological data, along with evidence-accumulator models, show that nonconscious emotional information can boost accuracy and confidence in a concurrent emotion-free decision task, while also speeding up response times. Moreover, these effects were contingent on the specific predictive arrangement of the nonconscious emotional valence and motion direction in the decisional stimulus. A model that simultaneously accumulates evidence from both physiological skin conductance and conscious decisional information provides an accurate description of the data. These findings support the notion that nonconscious emotions can bias concurrent nonemotional behavior, a process of intuition.

  18. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
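The abstract stays at a high level, but the general pattern it relies on, evaluating an objective at many trial points simultaneously on parallel hardware, can be sketched roughly as below. This is an illustrative stand-in, not the Jacobson-Oksman algorithm; the objective `f` and the step grid are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: shows the generic pattern of evaluating many
# candidate points of an objective concurrently and keeping the best one.

def f(x):
    # Simple convex test function with its minimum at x = 3.
    return (x - 3.0) ** 2 + 1.0

def parallel_best_step(x0, direction, steps, workers=4):
    """Evaluate f at several trial step lengths concurrently, keep the best."""
    candidates = [x0 + s * direction for s in steps]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(f, candidates))
    best = min(range(len(candidates)), key=values.__getitem__)
    return candidates[best], values[best]

x, fx = parallel_best_step(0.0, 1.0, [0.5, 1.0, 2.0, 3.0, 4.0])
print(x, fx)  # 3.0 1.0
```

On a serial machine the five evaluations would run one after another; on vector-streaming or multi-core hardware they proceed in parallel, which is the source of the speedup the abstract reports.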

  19. Esophageal Cancer Dose Escalation Using a Simultaneous Integrated Boost Technique

    SciTech Connect

    Welsh, James; Palmer, Matthew B.; Ajani, Jaffer A.; Liao Zhongxing; Swisher, Steven G.; Hofstetter, Wayne L.; Allen, Pamela K.; Settle, Steven H.; Gomez, Daniel; Likhacheva, Anna; Cox, James D.; Komaki, Ritsuko

    2012-01-01

Purpose: We previously showed that 75% of radiation therapy (RT) failures in patients with unresectable esophageal cancer are in the gross tumor volume (GTV). We performed a planning study to evaluate whether a simultaneous integrated boost (SIB) technique could selectively deliver a boost dose of radiation to the GTV in patients with esophageal cancer. Methods and Materials: Treatment plans were generated using four different approaches (two-dimensional conformal radiotherapy [2D-CRT] to 50.4 Gy, 2D-CRT to 64.8 Gy, intensity-modulated RT [IMRT] to 50.4 Gy, and SIB-IMRT to 64.8 Gy) and optimized for 10 patients with distal esophageal cancer. All plans were constructed to deliver the target dose in 28 fractions using heterogeneity corrections. Isodose distributions were evaluated for target coverage and normal tissue exposure. Results: The 50.4 Gy IMRT plan was associated with significant reductions in mean cardiac, pulmonary, and hepatic doses relative to the 50.4 Gy 2D-CRT plan. The 64.8 Gy SIB-IMRT plan produced a 28% increase in GTV dose and normal tissue doses comparable to those of the 50.4 Gy IMRT plan; compared with the 50.4 Gy 2D-CRT plan, the 64.8 Gy SIB-IMRT produced significant dose reductions to all critical structures (heart, lung, liver, and spinal cord). Conclusions: The use of SIB-IMRT allowed us to selectively increase the dose to the GTV, the area at highest risk of failure, while simultaneously reducing the dose to the normal heart, lung, and liver. Clinical implications warrant systematic evaluation.

  20. Boosted di-boson from a mixed heavy stop

    SciTech Connect

    Ghosh, Diptimoy

    2013-12-01

The lighter mass eigenstate ($\\widetilde{t}_1$) of the two top squarks, the scalar superpartners of the top quark, is extremely difficult to discover if it is almost degenerate with the lightest neutralino ($\\widetilde{\\chi}_1^0$), the lightest and stable supersymmetric particle in R-parity conserving supersymmetry. The current experimental bound on the $\\widetilde{t}_1$ mass in this scenario stands only around 200 GeV. For such a light $\\widetilde{t}_1$, the heavier top squark ($\\widetilde{t}_2$) can also be around the TeV scale. Moreover, the high value of the Higgs ($h$) mass prefers the left- and right-handed top squarks to be highly mixed, allowing the possibility of a considerable branching ratio for $\\widetilde{t}_2 \\to \\widetilde{t}_1 h$ and $\\widetilde{t}_2 \\to \\widetilde{t}_1 Z$. In this paper, we explore the above possibility together with the pair production of $\\widetilde{t}_2$ $\\widetilde{t}_2^*$, giving rise to the spectacular di-boson + missing transverse energy final state. For an approximately 1 TeV $\\widetilde{t}_2$ and a few hundred GeV $\\widetilde{t}_1$, the final-state particles can be moderately boosted, which encourages us to propose a novel search strategy employing the jet substructure technique to tag the boosted $h$ and $Z$. The reconstruction of the $h$ and $Z$ momenta also allows us to construct the stransverse mass $M_{T2}$, providing an additional efficient handle to fight the backgrounds. We show that a 4--5$\\sigma$ signal can be observed at the 14 TeV LHC for $\\sim$ 1 TeV $\\widetilde{t}_2$ with 100 fb$^{-1}$ integrated luminosity.

  1. Radiosurgical boost for primary high-grade gliomas.

    PubMed

    Prisco, Flavio E; Weltman, Eduardo; de Hanriot, Rodrigo M; Brandt, Reynaldo A

    2002-04-01

The purpose of this study was to retrospectively evaluate the survival of patients with high-grade gliomas treated with external beam radiotherapy with or without radiosurgical boost. From July 1993 to April 1998, 32 patients were selected, 15 of whom received radiosurgery. Inclusion criteria were age > 18 years, histological confirmation of high-grade glioma, primary tumor treatment with curative intent, unifocal tumor and supratentorial location. All patients were found to be in classes III-VI, according to the recursive partitioning analysis proposed by the Radiation Therapy Oncology Group. The median interval between radiotherapy and radiosurgery was 5 weeks (range 1-13). Treatment volumes ranged from 2.9 to 70.3 cc (median 15.0 cc). Prescribed radiosurgery doses varied from 8.0 to 12.5 Gy (median 10.0 Gy). Radiosurgery and control groups were well balanced with respect to prognostic factor distributions. Median actuarial survival time in the radiosurgery and control groups was 21.4 months and 11.6 months, respectively (p = 0.0254). Among patients with KPS > 80, median survival time was 11.0 months and 53.9 months in the control and radiosurgery groups, respectively (p = 0.0103). Radiosurgery was the single factor correlated with survival on Cox model analysis (p = 0.0362) and was associated with a 2.76-fold relative reduction in the risk of cancer death (95% confidence interval (CI) 1.07-7.13). Our results suggest that radiosurgery may confer a survival advantage for patients in RPA classes III-VI, especially for those with Karnofsky performance status > 80. The definitive role of radiosurgical boost for patients with high-grade gliomas awaits the results of randomized trials.

  2. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  3. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems in computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, providing the necessary prerequisites for multiprocess execution. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we demonstrated highly efficient per-processor speedups in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
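The quoted quad-core numbers are self-consistent: a speed increase of nearly 380% means a speedup of about 3.8x, which on four processors corresponds to about 95% efficiency. A minimal sketch of that bookkeeping, with hypothetical timings chosen to match the abstract's figures:

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_proc):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup(t_serial, t_parallel) / n_proc

# Hypothetical timings matching the abstract's quad-core figures:
t_serial, t_parallel = 38.0, 10.0           # a 3.8x speedup
print(speedup(t_serial, t_parallel))        # 3.8
print(efficiency(t_serial, t_parallel, 4))  # 0.95
```

The same arithmetic underlies the week-to-minutes extrapolation: near-linear scaling on enough nodes divides a fixed serial runtime by the processor count, less the parallel overhead.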

  4. Comparison of composite prostate radiotherapy plan doses with dependent and independent boost phases.

    PubMed

    Narayanasamy, Ganesh; Avila, Gabrielle; Mavroidis, Panayiotis; Papanikolaou, Niko; Gutierrez, Alonso; Baacke, Diana; Shi, Zheng; Stathakis, Sotirios

    2016-09-01

Prostate cases commonly consist of dual-phase planning with a primary plan followed by a boost. Traditionally, the boost phase is planned independently from the primary plan, with the risk of generating hot or cold spots in the composite plan. Alternatively, the boost phase can be planned taking into account the primary dose. The aim of this study was to compare the composite plans from independently and dependently planned boosts using dosimetric and radiobiological metrics. Ten consecutive prostate patients previously treated at our institution were used to conduct this study on the Raystation™ 4.0 treatment planning system. For each patient, two composite plans were developed: a primary plan with an independently planned boost and a primary plan with a dependently planned boost phase. The primary plan was prescribed to 54 Gy in 30 fractions to the primary planning target volume (PTV1), which includes the prostate and seminal vesicles, while the boost phases were prescribed to 24 Gy in 12 fractions to the boost planning target volume (PTV2), which targets only the prostate. PTV coverage, maximum dose, median dose, target conformity, dose homogeneity, dose to OARs, and probabilities of benefit, injury, and complication-free tumor control (P+) were compared. Statistical significance was tested using either a 2-tailed Student's t-test or the Wilcoxon signed-rank test. Dosimetrically, the composite plan with the dependent boost phase exhibited smaller hotspots and a lower maximum dose to the target, without any significant change to normal tissue dose. Radiobiologically, for all but one patient, the percent difference in the P+ values between the two methods was not significant. A large percent difference in P+ value could be attributed to an inferior primary plan. The benefit of considering the primary-plan dose while planning the boost is not significant unless a poor primary plan was achieved.

  5. Ductal Carcinoma in Situ-The Influence of the Radiotherapy Boost on Local Control

    SciTech Connect

    Wong, Philip; Lambert, Christine; Agnihotram, Ramanakumar V.; David, Marc; Duclos, Marie; Freeman, Carolyn R.

    2012-02-01

    Purpose: Local recurrence (LR) of ductal carcinoma in situ (DCIS) is reduced by whole-breast irradiation after breast-conserving surgery (BCS). However, the benefit of adding a radiotherapy boost to the surgical cavity for DCIS is unclear. We sought to determine the impact of the boost on LR in patients with DCIS treated at the McGill University Health Centre. Methods and Materials: A total of 220 consecutive cases of DCIS treated with BCS and radiotherapy between January 2000 and December 2006 were reviewed. Of the patients, 36% received a radiotherapy boost to the surgical cavity. Median follow-up was 46 months for the boost and no-boost groups. Kaplan-Meier survival analyses and Cox regression analyses were performed. Results: Compared with the no-boost group, patients in the boost group more frequently had positive and <0.1-cm margins (48% vs. 8%) (p < 0.0001) and more frequently were in higher-risk categories as defined by the Van Nuys Prognostic (VNP) index (p = 0.006). Despite being at higher risk for LR, none (0/79) of the patients who received a boost experienced LR, whereas 8 of 141 patients who did not receive a boost experienced an in-breast LR (log-rank p = 0.03). Univariate analysis of prognostic factors (age, tumor size, margin status, histological grade, necrosis, and VNP risk category) revealed only the presence of necrosis to significantly correlate with LR (log-rank p = 0.003). The whole-breast irradiation dose and fractionation schedule did not affect LR rate. Conclusions: Our results suggest that the use of a radiotherapy boost improves local control in DCIS and may outweigh the poor prognostic effect of necrosis.

  6. World lines.

    PubMed

    Waser, Jürgen; Fuchs, Raphael; Ribicić, Hrvoje; Schindler, Benjamin; Blöschl, Günther; Gröller, Eduard

    2010-01-01

    In this paper we present World Lines as a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision making process. In this setting, the data domain is extended to a set of alternative worlds where only one outcome will actually happen. World Lines integrate simulation, visualization and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to interfere and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach to support informed decision making.

  7. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, providing an IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with the experimental evaluation and case studies of its applications. The evaluation result demonstrates both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in applications of the telecommunication industry.
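As a hedged illustration of the programming model involved (not BC-PDM's actual API, which the abstract does not describe), a MapReduce job boils down to a map phase that emits key-value pairs and a reduce phase that aggregates values by key. The record names below are invented for the example.

```python
from collections import defaultdict
from itertools import chain

# Toy, single-process MapReduce sketch; a real platform distributes the
# two phases across a cluster, but the dataflow is the same.

def map_phase(records, mapper):
    """Apply the mapper to every record, collecting all emitted pairs."""
    return list(chain.from_iterable(mapper(r) for r in records))

def reduce_phase(pairs, reducer):
    """Group emitted values by key, then reduce each group."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

# Example: total call seconds per subscriber, a typical telecom BI job.
calls = [("alice", 120), ("bob", 45), ("alice", 300)]
pairs = map_phase(calls, lambda rec: [(rec[0], rec[1])])
totals = reduce_phase(pairs, sum)
print(totals)  # {'alice': 420, 'bob': 45}
```

Because each mapper call and each per-key reduction is independent, both phases parallelize naturally, which is what makes the model a fit for cluster-scale business-intelligence workloads.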

  8. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

This paper addresses the problem of the estimation of the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single- and two-sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates, and the estimator-calculated covariances are consistent with the errors.
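The core of a Levenberg-Marquardt estimator is a damped Gauss-Newton iteration: solve for a step using the Jacobian and residuals, accept it only if the squared error drops, and adjust the damping accordingly. The toy below fits a one-parameter exponential to noise-free data; it is a minimal sketch of the algorithm's control flow, not the paper's multi-sensor LOS estimator, and the model and data are invented for illustration.

```python
import math

# Minimal scalar Levenberg-Marquardt loop: damped Gauss-Newton steps
# with an accept/reject rule that adapts the damping factor.

def levenberg_marquardt(ts, ys, theta0, iters=50, lam=1e-2):
    theta = theta0
    for _ in range(iters):
        residuals = [y - math.exp(theta * t) for t, y in zip(ts, ys)]
        jac = [t * math.exp(theta * t) for t in ts]   # d(model)/d(theta)
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, residuals))
        step = jtr / (jtj + lam)                      # damped GN step
        new_theta = theta + step
        new_res = [y - math.exp(new_theta * t) for t, y in zip(ts, ys)]
        if sum(r * r for r in new_res) < sum(r * r for r in residuals):
            theta, lam = new_theta, lam * 0.5         # accept: trust model more
        else:
            lam *= 10.0                               # reject: damp harder
    return theta

ts = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * t) for t in ts]   # noise-free data, true theta = 0.5
theta = levenberg_marquardt(ts, ys, theta0=0.0)
print(round(theta, 6))  # 0.5
```

Large damping makes the step behave like cautious gradient descent, small damping like fast Gauss-Newton; this adaptivity is what makes the method robust on the ill-conditioned problems the abstract describes.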

  9. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  10. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. Several serial Voronoi tessellation codes exist; however, no open source, parallel implementation was available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and a user guide are publicly available at https://github.com/regonzar/paravt.
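To make the quantity PARAVT computes concrete: a particle's Voronoi density is the inverse of the volume of space closer to it than to any other particle. The brute-force 2D sketch below estimates those cell areas on a grid; it is illustrative only (PARAVT uses Qhull under MPI, not grid counting), and the seed positions are invented for the example.

```python
# Brute-force Voronoi cell-area estimate on a grid: assign each grid
# element to its nearest seed and accumulate area. Illustrative only;
# real codes build the tessellation geometrically (e.g. with Qhull).

def voronoi_volumes(seeds, grid_n=100, box=1.0):
    """Estimate each seed's Voronoi cell area inside a 2D box."""
    cell = (box / grid_n) ** 2              # area of one grid element
    volumes = [0.0] * len(seeds)
    for i in range(grid_n):
        for j in range(grid_n):
            x = (i + 0.5) * box / grid_n    # grid-element center
            y = (j + 0.5) * box / grid_n
            nearest = min(range(len(seeds)),
                          key=lambda k: (x - seeds[k][0]) ** 2
                                        + (y - seeds[k][1]) ** 2)
            volumes[nearest] += cell
    return volumes

seeds = [(0.25, 0.5), (0.75, 0.5)]          # symmetric pair: equal cells
vols = voronoi_volumes(seeds)
print(vols)  # each ≈ 0.5
```

The Voronoi density of each seed is then simply `1 / volume`; parallelizing this means partitioning the domain across MPI tasks, which is where the boundary-consistency issues mentioned in the abstract arise.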

  11. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  12. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted at applications which require very fast polygon rendering for extremely large sets of polygons, such as are found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  13. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  14. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
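The block/tree structure the abstract describes can be sketched in a few lines: blocks are tree nodes that split into four (2D) or eight (3D) children wherever an application-supplied criterion demands more resolution. This is a minimal illustrative quad-tree, not PARAMESH's actual Fortran 90 data structures; the refinement criterion below is invented for the example.

```python
# Minimal quad-tree refinement sketch: blocks refine recursively where
# a caller-supplied criterion says the solution needs more resolution.

class Block:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner + edge
        self.children = []

    def refine(self, needs_refinement, min_size):
        if self.size > min_size and needs_refinement(self):
            half = self.size / 2
            self.children = [Block(self.x + dx, self.y + dy, half)
                             for dx in (0, half) for dy in (0, half)]
            for child in self.children:
                child.refine(needs_refinement, min_size)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine only blocks containing the point (0.1, 0.1), down to size 0.25:
root = Block(0.0, 0.0, 1.0)
root.refine(lambda b: (b.x <= 0.1 <= b.x + b.size
                       and b.y <= 0.1 <= b.y + b.size),
            min_size=0.25)
print(len(root.leaves()))  # 7: resolution concentrates near the point
```

In an AMR library the leaf blocks (each carrying its own logically Cartesian mesh) are what get distributed across processors, which is why the tree doubles as a domain-decomposition structure.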

  15. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.
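The operator-splitting idea named in the abstract, advancing diffusion and reactions in separate sub-steps of each time step, can be shown with a deterministic toy. This sketch is illustrative only: STEPS runs stochastic (SSA) dynamics on tetrahedral meshes, whereas the numbers and grid below are an invented finite-difference analogue that shares only the splitting structure.

```python
# Deterministic 1D reaction-diffusion with operator splitting: within
# each time step, first a diffusion sub-step, then a reaction sub-step.

def step(u, dt, dx, diff, decay):
    n = len(u)
    # Operator 1: diffusion (explicit finite differences, periodic box).
    lap = [(u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2
           for i in range(n)]
    u = [ui + dt * diff * li for ui, li in zip(u, lap)]
    # Operator 2: first-order decay reaction (implicit, always stable).
    return [ui / (1.0 + dt * decay) for ui in u]

u = [0.0] * 10
u[5] = 1.0                                  # point release of molecules
for _ in range(100):
    u = step(u, dt=0.01, dx=0.1, diff=0.05, decay=0.1)
print(round(sum(u), 3))  # 0.905: mass is lost only to the decay reaction
```

Splitting is what makes parallelization tractable: the diffusion operator only couples neighboring mesh elements (so it partitions across MPI ranks with halo exchange), while the reaction operator is purely local to each element.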

  16. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346

  17. Line-on-Line Coincidence: A New Type of Epitaxy Found in Organic-Organic Heterolayers

    NASA Astrophysics Data System (ADS)

    Mannsfeld, Stefan C.; Leo, Karl; Fritz, Torsten

    2005-02-01

We propose a new type of epitaxy, line-on-line coincidence (LOL), which explains the ordering in the organic-organic heterolayer system PTCDA on HBC on graphite. LOL epitaxy is similar to point-on-line coincidence (POL) in the sense that all overlayer molecules lie on parallel, equally spaced lines. The key difference from POL is that these lines are not restricted to primitive lattice lines of the substrate lattice. Potential energy calculations demonstrate that this new type of epitaxy is indeed characterized by a minimum in the overlayer-substrate interaction potential.

  18. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  19. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  20. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  1. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-12-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

  2. The Complexity of Parallel Algorithms,

    DTIC Science & Technology

    1985-11-01

    Much of this work was done in collaboration with my advisor, Ernst Mayr. He was also supported in part by ONR contract N00014-85-C-0731. ... Helmbold and Mayr in their algorithm to compute an optimal two-processor schedule [HM2]. One of the promising developments in parallel algorithms is that ... can be solved by a fast parallel algorithm if the numbers are small. Helmbold and Mayr [HM1] have shown that if the job times are

  3. Monochromatic neutrino lines from sneutrino dark matter

    NASA Astrophysics Data System (ADS)

    Arina, Chiara; Kulkarni, Suchita; Silk, Joseph

    2015-10-01

    We investigate the possibility of observing monochromatic neutrino lines originating from annihilation of dark matter. We analyze several astrophysical sources with overdensities of dark matter that can amplify the signal. As a case study, we consider mixed left- and right-handed sneutrino dark matter. We demonstrate that in the physically viable region of the model, one can obtain a prominent monochromatic neutrino line. We propose a search strategy to observe these neutrino lines in future generations of neutrino telescopes that is especially sensitive to dwarf spheroidal galaxies. We demonstrate that the presence of massive black holes in the cores of dwarfs as well as of more massive galaxies substantially boosts any putative signal. In particular, dark matter in dwarf galaxies spiked by an intermediate massive black hole provides a powerful means of probing low-annihilation cross sections well below 10^-26 cm^3 s^-1 that are otherwise inaccessible by any future direct detection or collider experiment.

  4. CyberKnife Boost for Patients with Cervical Cancer Unable to Undergo Brachytherapy.

    PubMed

    Haas, Jonathan Andrew; Witten, Matthew R; Clancey, Owen; Episcopia, Karen; Accordino, Diane; Chalas, Eva

    2012-01-01

    Standard radiation therapy for patients undergoing primary chemosensitized radiation for carcinomas of the cervix usually consists of external beam radiation followed by an intracavitary brachytherapy boost. On occasion, the brachytherapy boost cannot be performed due to unfavorable anatomy or because of coexisting medical conditions. We examined the safety and efficacy of using CyberKnife stereotactic body radiotherapy (SBRT) as a boost to the cervix after external beam radiation in those patients unable to have brachytherapy to give a more effective dose to the cervix than with conventional external beam radiation alone. Six consecutive patients with anatomic or medical conditions precluding a tandem and ovoid boost were treated with combined external beam radiation and CyberKnife boost to the cervix. Five patients received 45 Gy to the pelvis with serial intensity-modulated radiation therapy boost to the uterus and cervix to a dose of 61.2 Gy. These five patients received an SBRT boost to the cervix to a dose of 20 Gy in five fractions of 4 Gy each. One patient was treated to the pelvis to a dose of 45 Gy with an external beam boost to the uterus and cervix to a dose of 50.4 Gy. This patient received an SBRT boost to the cervix to a dose of 19.5 Gy in three fractions of 6.5 Gy. Five percent volumes of the bladder and rectum were kept to ≤75 Gy in all patients (i.e., V75 Gy ≤ 5%). All of the patients remain locally controlled with no evidence of disease following treatment. Grade 1 diarrhea occurred in 4/6 patients during the conventional external beam radiation. There has been no grade 3 or 4 rectal or bladder toxicity. There were no toxicities observed following SBRT boost. At a median follow-up of 14 months, CyberKnife radiosurgical boost is well tolerated and efficacious in providing a boost to patients with cervix cancer who are unable to undergo brachytherapy boost. Further follow-up is required to see if these results remain

  5. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementation strategies using multiple storage devices are suggested. Problem areas are also identified and discussed.
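
    One natural parallel file organization stripes a file's blocks across several storage devices so that independent devices can be driven concurrently. A minimal round-robin striping sketch (devices simulated here by in-memory buffers; the function names are illustrative, not from the paper):

```python
import io

BLOCK = 4  # bytes per stripe block (toy size)

def stripe_write(data: bytes, devices: list) -> None:
    """Write consecutive blocks of `data` round-robin across the devices."""
    for i in range(0, len(data), BLOCK):
        devices[(i // BLOCK) % len(devices)].write(data[i:i + BLOCK])

def stripe_read(devices: list, length: int) -> bytes:
    """Reassemble the original byte stream from the stripes."""
    for d in devices:
        d.seek(0)
    out = bytearray()
    i = 0
    while len(out) < length:
        out += devices[i % len(devices)].read(BLOCK)
        i += 1
    return bytes(out[:length])

devs = [io.BytesIO() for _ in range(3)]  # three simulated storage devices
payload = b"parallel files allow concurrent access"
stripe_write(payload, devs)
restored = stripe_read(devs, len(payload))
```

    In a real system each buffer would be a file on a distinct device, and the per-device reads and writes could proceed in parallel.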

  6. Modeling a multilevel boost converter using SiC components for PV application

    NASA Astrophysics Data System (ADS)

    Alateeq, Ayoob S.; Almalaq, Yasser A.; Matin, Mohammad A.

    2016-09-01

    This paper discusses a DC-DC multilevel boost converter with wide-bandgap components for PV applications. In the PV system, the multilevel boost converter is preferable to the conventional boost converter because of its high conversion ratio. The multilevel boost converter is designed with one inductor, 2N-1 silicon carbide (SiC) Schottky diodes, 2N-1 capacitors, and one SiC MOSFET, where N is the number of levels. Using SiC components in the design helps to mitigate the temperature effects that would otherwise cause high power loss. The main purpose of a multilevel boost converter is to produce a high output voltage without using either a power transformer or a coupled inductor. The achievable gain depends on the number of levels and the switching duty cycle. The demonstrated design is a multilevel boost converter supplied from 220 V and rated at 2 kW. The switching frequency is 100 kHz, and the four-level output voltage is 3.5 kV. Operation at several temperatures is simulated, and the effect of temperature on efficiency is studied. The design is simulated in LTspice, and the results are discussed.
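
    The ideal steady-state gain of an N-level boost converter is commonly written Vout = N·Vin/(1 − D), with D the switching duty cycle. This textbook relation is assumed here rather than quoted from the paper, but the stated operating point (220 V in, 3.5 kV out at four levels) is consistent with it:

```python
def multilevel_boost_vout(vin: float, n_levels: int, duty: float) -> float:
    """Ideal (lossless) output voltage of an N-level boost converter."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return n_levels * vin / (1.0 - duty)

# Duty cycle needed to reach the stated 3.5 kV from 220 V with 4 levels:
duty = 1.0 - 4 * 220.0 / 3500.0   # about 0.75
vout = multilevel_boost_vout(220.0, 4, duty)
```
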

  7. Boost Your High: Cigarette Smoking to Enhance Alcohol and Drug Effects among Southeast Asian American Youth

    PubMed Central

    Lipperman-Kreda, Sharon; Lee, Juliet P.

    2011-01-01

    The current study examined: 1) whether using cigarettes to enhance the effects of other drugs (here referred to as “boosting”) is a unique practice related to blunts (i.e., small cheap cigars hollowed out and filled with cannabis) or marijuana use only; 2) the prevalence of boosting among drug-using young people; and 3) the relationship between boosting and other drug-related risk behaviors. We present data collected from 89 Southeast Asian American youth and young adults in Northern California (35 females). 72% of respondents reported any lifetime boosting. Controlling for gender, results of linear regression analyses show a significant positive relationship between frequency of boosting to enhance alcohol high and number of drinks per occasion. Boosting was also found to be associated with use of blunts but not other forms of marijuana and with the number of blunts on a typical day. The findings indicate that boosting may be common among drug-using Southeast Asian youths. These findings also indicate a need for further research on boosting as an aspect of cigarette uptake and maintenance among drug- and alcohol-involved youths. PMID:22522322

  8. A novel sparse boosting method for crater detection in the high resolution planetary image

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Yang, Gang; Guo, Lei

    2015-09-01

    Impact craters distributed on planetary surfaces are one of the main hazards during the soft landing of planetary probes. To accelerate crater detection, in this paper we present a new sparse boosting (SparseBoost) method for automatic detection of sub-kilometer craters. The SparseBoost method integrates an improved sparse kernel density estimator (RSDE-WL1) into the Boost algorithm; the RSDE-WL1 estimator is obtained by introducing a weighted l1 penalty term into the reduced set density estimator. An iterative algorithm is proposed to implement the RSDE-WL1. The SparseBoost algorithm has the advantage of fewer selected features and simpler representation of the weak classifiers compared with the Boost algorithm. Our SparseBoost-based crater detection method is evaluated on a large, high resolution image of the Martian surface. Experimental results demonstrate that the proposed method achieves lower computational complexity than other crater detection methods in terms of selected features.

  9. Evaluation of stereotactic body radiotherapy (SBRT) boost in the management of endometrial cancer.

    PubMed

    Demiral, S; Beyzadeoglu, M; Uysal, B; Oysul, K; Kahya, Y Elcim; Sager, O; Dincoglan, F; Gamsiz, H; Dirican, B; Surenkok, S

    2013-01-01

    The purpose of this study is to evaluate the use of linear accelerator (LINAC)-based stereotactic body radiotherapy (SBRT) boost with multileaf collimator technique after pelvic radiotherapy (RT) in patients with endometrial cancer. Consecutive patients with endometrial cancer treated using LINAC-based SBRT boost after pelvic RT were enrolled in the study. All patients had undergone surgery including total abdominal hysterectomy and bilateral salpingo-oophorectomy ± pelvic/paraortic lymphadenectomy before RT. Prescribed external pelvic RT dose was 45 Gray (Gy) in 1.8 Gy daily fractions. All patients were treated with SBRT boost after pelvic RT. The prescribed SBRT boost dose to the upper two thirds of the vagina including the vaginal vault was 18 Gy delivered in 3 fractions with 1-week intervals. Gastrointestinal and genitourinary toxicity was assessed using the Common Terminology Criteria for Adverse Events version 3 (CTCAE v3). Between April 2010 and May 2011, 18 patients with stage I-III endometrial cancer were treated with LINAC-based SBRT boost after pelvic RT. At a median follow-up of 24 (8-26) months with magnetic resonance imaging (MRI) and gynecological examination, local control rate of the study group was 100 % with negligible acute and late toxicity. LINAC-based SBRT boost to the vaginal cuff is a feasible gynecological cancer treatment modality with excellent local control and minimal toxicity that may replace traditional brachytherapy boost in the management of endometrial cancer.

  10. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  11. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc ... These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
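
    The password-checker example parallelizes by splitting the dictionary into chunks and testing each chunk concurrently. A sketch using only the standard library, with SHA-256 as a stand-in for the text's unspecified encryption and illustrative names throughout (threads are used here for portability; a process pool would give true CPU parallelism on the cluster setting the paper targets):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(words, target_digest):
    """Return the word in this chunk whose hash matches, or None."""
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == target_digest:
            return w
    return None

def crack(dictionary, target_digest, workers=4):
    """Distribute the dictionary across workers and collect the match."""
    size = max(1, len(dictionary) // workers)
    chunks = [dictionary[i:i + size] for i in range(0, len(dictionary), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(check_chunk, chunks, [target_digest] * len(chunks)):
            if hit is not None:
                return hit
    return None

words = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot"]
target = hashlib.sha256(b"delta").hexdigest()
found = crack(words, target)
```
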

  12. Fast, Massively Parallel Data Processors

    NASA Technical Reports Server (NTRS)

    Heaton, Robert A.; Blevins, Donald W.; Davis, ED

    1994-01-01

    Proposed fast, massively parallel data processor contains 8x16 array of processing elements with efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on "X" interconnection grid with external memory via high-capacity input/output bus. This approach to conditional operation nearly doubles speed of various arithmetic operations.

  13. Optical Interferometric Parallel Data Processor

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.

    1987-01-01

    Image data processed faster than in present electronic systems. Optical parallel-processing system effectively calculates two-dimensional Fourier transforms in time required by light to travel from plane 1 to plane 8. Coherence interferometer at plane 4 splits light into parts that form double image at plane 6 if projection screen placed there.

  14. Tutorial: Parallel Simulation on Supercomputers

    SciTech Connect

    Perumalla, Kalyan S

    2012-01-01

    This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

  15. The physics of parallel machines

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    The idea is considered that architectures for massively parallel computers must be designed to go beyond supporting a particular class of algorithms to supporting the underlying physical processes being modelled. Physical processes modelled by partial differential equations (PDEs) are discussed. Also discussed is the idea that an efficient architecture must go beyond nearest neighbor mesh interconnections and support global and hierarchical communications.

  16. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI- and PETSc-based parallel, multiphysics, finite elements code developed at the CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.

  17. Impact of the Radiation Boost on Outcomes After Breast-Conserving Surgery and Radiation

    SciTech Connect

    Murphy, Colin; Anderson, Penny R.; Li Tianyu; Bleicher, Richard J.; Sigurdson, Elin R.; Goldstein, Lori J.; Swaby, Ramona; Denlinger, Crystal; Dushkin, Holly; Nicolaou, Nicos; Freedman, Gary M.

    2011-09-01

    Purpose: We examined the impact of radiation tumor bed boost parameters in early-stage breast cancer on local control and cosmetic outcomes. Methods and Materials: A total of 3,186 women underwent postlumpectomy whole-breast radiation with a tumor bed boost for Tis to T2 breast cancer from 1970 to 2008. Boost parameters analyzed included size, energy, dose, and technique. Endpoints were local control, cosmesis, and fibrosis. The Kaplan-Meier method was used to estimate actuarial incidence, and a Cox proportional hazard model was used to determine independent predictors of outcomes on multivariate analysis (MVA). The median follow-up was 78 months (range, 1-305 months). Results: The crude cosmetic results were excellent in 54%, good in 41%, and fair/poor in 5% of patients. The 10-year estimate of an excellent cosmesis was 66%. On MVA, independent predictors for excellent cosmesis were use of electron boost, lower electron energy, adjuvant systemic therapy, and whole-breast IMRT. Fibrosis was reported in 8.4% of patients. The actuarial incidence of fibrosis was 11% at 5 years and 17% at 10 years. On MVA, independent predictors of fibrosis were larger cup size and higher boost energy. The 10-year actuarial local failure was 6.3%. There was no significant difference in local control by boost method, cut-out size, dose, or energy. Conclusions: The likelihood of excellent cosmesis or fibrosis is associated with boost technique, electron energy, and cup size. However, because of high local control and rare incidence of fair/poor cosmesis with a boost, the anatomy of the patient and tumor cavity should ultimately determine the necessary boost parameters.
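
    The Kaplan-Meier estimator used above builds the actuarial curve as a product over distinct event times, S(t) = Π(1 − d_i/n_i), where d_i patients fail out of n_i still at risk. A minimal pure-Python version for illustration (the study would have used standard statistical software, not this sketch):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  -- follow-up time for each patient
    events -- 1 if the event (e.g. local failure) occurred, 0 if censored
    Returns a list of (time, survival_probability) step points.
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # drop events and censorings
    return curve

# Toy data: failures at t=1 and t=2, one patient censored at t=3.
curve = kaplan_meier([1, 2, 3], [1, 1, 0])
```
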

  18. PALM: a Parallel Dynamic Coupler

    NASA Astrophysics Data System (ADS)

    Thevenin, A.; Morel, T.

    2008-12-01

    In order to efficiently represent complex systems, numerical modeling has to rely on many physical models at a time: an ocean model coupled with an atmospheric model is at the basis of climate modeling. The continuity of the solution is guaranteed only if these models can constantly exchange information. PALM is a coupler allowing the concurrent execution and intercommunication of programs that were not especially designed for coupling. With PALM, a dynamic coupling approach is introduced: a coupled component can be launched, and can release computer resources upon termination, at any moment during the simulation. To exploit the machine's capabilities as fully as possible, the PALM coupler handles two levels of parallelism. The first level concerns the components themselves. While managing the resources, PALM allocates the number of processes necessary for each coupled component. These models can be parallel programs based on domain decomposition with MPI or applications multithreaded with OpenMP. The second level of parallelism is task parallelism: one can define a coupling algorithm allowing two or more programs to be executed in parallel. PALM applications are implemented via a Graphical User Interface called PrePALM. In this GUI, the programmer initially defines the coupling algorithm and then describes the actual communications between the models. PALM offers very high flexibility for testing different coupling techniques and for reaching the best load balance on a high-performance computer. The transformation of computationally independent code is almost straightforward. The other qualities of PALM are its easy set-up, its flexibility, its performance, the simple updates and evolutions of the coupled application, and the many side services and functions that it offers.

  19. Parallel heat transport in integrable and chaotic magnetic fields

    SciTech Connect

    Del-Castillo-Negrete, Diego B; Chacon, Luis

    2012-01-01

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity, χ∥, and the perpendicular conductivity, χ⊥ (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island chain), weakly chaotic (devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local closures, is non-diffusive, casting doubt on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.
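
    Along a single field line the local parallel transport equation reduces to 1D heat diffusion, whose Green's function is a Gaussian kernel; convolving the initial temperature with it advances the solution with no grid-based differencing at all. A schematic 1D sketch of that idea (a discrete Riemann-sum convolution, not the authors' Lagrangian field-line implementation):

```python
import math

def heat_green_step(temps, ds, chi, t):
    """Advance T(s) by time t with the 1D heat kernel (Green's function):
    T(s, t) = sum_j G(s - s_j, t) * T0(s_j) * ds,
    G(x, t) = exp(-x^2 / (4*chi*t)) / sqrt(4*pi*chi*t).
    """
    n = len(temps)
    norm = math.sqrt(4.0 * math.pi * chi * t)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            x = (i - j) * ds
            acc += math.exp(-x * x / (4.0 * chi * t)) / norm * temps[j] * ds
        out.append(acc)
    return out

# Hot spot in the middle of a field line; diffusion spreads it out.
T0 = [0.0] * 101
T0[50] = 1.0
ds = 0.1
T1 = heat_green_step(T0, ds, chi=1.0, t=0.05)
```

    The kernel-based update inherits the smoothness of the exact solution, which is the sense in which such methods avoid the numerical pollution of grid-based differencing.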

  20. Effects of parallel electron dynamics on plasma blob transport

    SciTech Connect

    Angus, Justin R.; Krasheninnikov, Sergei I.; Umansky, Maxim V.

    2012-08-15

    The 3D effects on sheath connected plasma blobs that result from parallel electron dynamics are studied by allowing for the variation of blob density and potential along the magnetic field line and using collisional Ohm's law to model the parallel current density. The parallel current density from linear sheath theory, typically used in the 2D model, is implemented as parallel boundary conditions. This model includes electrostatic 3D effects, such as resistive drift waves and blob spinning, while retaining all of the fundamental 2D physics of sheath connected plasma blobs. If the growth time of unstable drift waves is comparable to the 2D advection time scale of the blob, then the blob's density gradient will be depleted, resulting in a much more diffusive blob with little radial motion. Furthermore, blob profiles that are initially varying along the field line drive the potential to a Boltzmann relation that spins the blob and thereby acts as an additional sink of the 2D potential. Basic dimensionless parameters are presented to estimate the relative importance of these two 3D effects. The deviation of blob dynamics from that predicted by 2D theory in the appropriate limits of these parameters is demonstrated by a direct comparison of 2D and 3D seeded blob simulations.

  1. Semi-coarsening multigrid methods for parallel computing

    SciTech Connect

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
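
    Semi-coarsening keeps full resolution in the strongly coupled direction and coarsens only the other coordinate. A minimal restriction operator illustrating the idea (full weighting in x only; a pure-Python sketch, not code from the paper):

```python
def semicoarsen_x(grid):
    """Restrict a 2D grid in the x direction only (full weighting),
    leaving the y direction at full resolution."""
    nx, ny = len(grid), len(grid[0])
    coarse = []
    for i in range(1, nx - 1, 2):  # keep every other interior x-line
        row = [0.25 * grid[i - 1][j] + 0.5 * grid[i][j] + 0.25 * grid[i + 1][j]
               for j in range(ny)]
        coarse.append(row)
    return coarse

fine = [[1.0] * 4 for _ in range(9)]  # 9 x-lines, 4 y-points each
coarse = semicoarsen_x(fine)          # x is halved, y is untouched
```

    A standard full-coarsening restriction would halve both directions; here the y extent is preserved, which is what makes the cycle effective when the anisotropy aligns with y.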

  2. Retroperitoneal Sarcoma (RPS) High Risk Gross Tumor Volume Boost (HR GTV Boost) Contour Delineation Agreement Among NRG Sarcoma Radiation and Surgical Oncologists

    PubMed Central

    Baldini, Elizabeth H.; Bosch, Walter; Kane, John M.; Abrams, Ross A.; Salerno, Kilian E.; Deville, Curtiland; Raut, Chandrajit P.; Petersen, Ivy A.; Chen, Yen-Lin; Mullen, John T.; Millikan, Keith W.; Karakousis, Giorgos; Kendrick, Michael L.; DeLaney, Thomas F.; Wang, Dian

    2015-01-01

    Purpose Curative intent management of retroperitoneal sarcoma (RPS) requires gross total resection. Preoperative radiotherapy (RT) often is used as an adjuvant to surgery, but recurrence rates remain high. To enhance RT efficacy with acceptable tolerance, there is interest in delivering “boost doses” of RT to high-risk areas of gross tumor volume (HR GTV) judged to be at risk for positive resection margins. We sought to evaluate variability in HR GTV boost target volume delineation among collaborating sarcoma radiation and surgical oncologist teams. Methods Radiation planning CT scans for three cases of RPS were distributed to seven paired radiation and surgical oncologist teams at six institutions. Teams contoured HR GTV boost volumes for each case. Analysis of contour agreement was performed using the simultaneous truth and performance level estimation (STAPLE) algorithm and kappa statistics. Results HR GTV boost volume contour agreement between the seven teams was “substantial” or “moderate” for all cases. Agreement was best on the torso wall posteriorly (abutting posterior chest abdominal wall) and medially (abutting ipsilateral para-vertebral space and great vessels). Contours varied more significantly abutting visceral organs due to differing surgical opinions regarding planned partial organ resection. Conclusions Agreement of RPS HR GTV boost volumes between sarcoma radiation and surgical oncologist teams was substantial to moderate. Differences were most striking in regions abutting visceral organs, highlighting the importance of collaboration between the radiation and surgical oncologist for “individualized” target delineation on the basis of areas deemed at risk and planned resection. PMID:26018727

  3. RS-34 Phoenix (Peacekeeper Post Boost Propulsion System) Utilization Study

    NASA Technical Reports Server (NTRS)

    Esther, Elizabeth A.; Kos, Larry; Burnside, Christopher G.; Bruno, Cy

    2013-01-01

    The Advanced Concepts Office (ACO) at the NASA Marshall Space Flight Center (MSFC) in conjunction with Pratt & Whitney Rocketdyne conducted a study to evaluate potential in-space applications for the Rocketdyne produced RS-34 propulsion system. The existing RS-34 propulsion system is a remaining asset from the de-commissioned United States Air Force Peacekeeper ICBM program, specifically the pressure-fed storable bipropellant Stage IV Post Boost Propulsion System, renamed Phoenix. MSFC gained experience with the RS-34 propulsion system on the successful Ares I-X flight test program flown in October 2009. RS-34 propulsion system components were harvested from stages supplied by the USAF and used on the Ares I-X Roll control system (RoCS). The heritage hardware proved extremely robust and reliable and sparked interest for further utilization on other potential in-space applications. MSFC is working closely with the USAF to obtain RS-34 stages for re-use opportunities. Prior to pursuit of securing the hardware, MSFC commissioned the Advanced Concepts Office to understand the capability and potential applications for the RS-34 Phoenix stage as it benefits NASA, DoD, and commercial industry. As originally designed, the RS-34 Phoenix provided in-space six-degrees-of-freedom operational maneuvering to deploy multiple payloads at various orbital locations. The RS-34 Phoenix Utilization Study sought to understand the unique capabilities of the RS-34 Phoenix and its application to six candidate missions: 1) small satellite delivery (SSD), 2) orbital debris removal (ODR), 3) ISS re-supply, 4) SLS kick stage, 5) a manned GEO servicing precursor mission, and 6) an Earth-Moon L-2 Waypoint mission. The small satellite delivery and orbital debris removal missions were found to closely mimic the heritage RS-34 mission. It is believed that this technology will enable a small, low-cost multiple satellite delivery to multiple orbital locations with a single boost. For both the small

  4. RS-34 Phoenix (Peacekeeper Post Boost Propulsion System) Utilization Study

    NASA Technical Reports Server (NTRS)

    Esther, Elizabeth A.; Kos, Larry; Bruno, Cy

    2012-01-01

    The Advanced Concepts Office (ACO) at the NASA Marshall Space Flight Center (MSFC) in conjunction with Pratt & Whitney Rocketdyne conducted a study to evaluate potential in-space applications for the Rocketdyne produced RS-34 propulsion system. The existing RS-34 propulsion system is a remaining asset from the decommissioned United States Air Force Peacekeeper ICBM program; specifically the pressure-fed storable bipropellant Stage IV Post Boost Propulsion System, renamed Phoenix. MSFC gained experience with the RS-34 propulsion system on the successful Ares I-X flight test program flown in October 2009. RS-34 propulsion system components were harvested from stages supplied by the USAF and used on the Ares I-X Roll control system (RoCS). The heritage hardware proved extremely robust and reliable and sparked interest for further utilization on other potential in-space applications. Subsequently, MSFC is working closely with the USAF to obtain all the remaining RS-34 stages for re-use opportunities. Prior to pursuit of securing the hardware, MSFC commissioned the Advanced Concepts Office to understand the capability and potential applications for the RS-34 Phoenix stage as it benefits NASA, DoD, and commercial industry. As originally designed, the RS-34 Phoenix provided in-space six-degrees-of-freedom operational maneuvering to deploy multiple payloads at various orbital locations. The RS-34 Phoenix Utilization Study sought to understand the unique capabilities of the RS-34 Phoenix and its application to six candidate missions: 1) small satellite delivery (SSD), 2) orbital debris removal (ODR), 3) ISS re-supply, 4) SLS kick stage, 5) a manned GEO servicing precursor mission, and 6) an Earth-Moon L-2 Waypoint mission. The small satellite delivery and orbital debris removal missions were found to closely mimic the heritage RS-34 mission. It is believed that this technology will enable a small, low-cost multiple satellite delivery to multiple orbital locations with a single

  5. Parallel Quantum Circuit in a Tunnel Junction

    PubMed Central

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-01-01

Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes, one of them linear in N, are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because of the normalisation of the electronic transparency to unity and the low-pass filter character of this transduction, large Ωab(N) cannot be captured by the tunnel junction. The broadly used concept of electrical contact between a metallic nanopad and a molecular device is better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N^2 power law is preserved for Ωab(N) and for Vab(N). PMID:27453262
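The Heisenberg-Rabi capture of Ωab described above can be illustrated with a minimal two-level toy model (our own sketch, not the paper's N-line bus): for a resonant pair of states coupled by Vab, the population transfer P_b(t) = sin²(Vab·t) oscillates at Ωab = 2·Vab (with ħ = 1), so the frequency read off the time-dependent exchange directly measures the coupling.

```python
import numpy as np

def secular_frequency(V, t_max=50.0, n=4096):
    """Evolve a resonant two-level system with coupling V (hbar = 1),
    starting in state |a>, and read the Heisenberg-Rabi oscillation
    frequency Omega_ab off the population transfer P_b(t) = sin^2(V t),
    whose angular frequency is exactly 2 V."""
    H = np.array([[0.0, V], [V, 0.0]])
    w, U = np.linalg.eigh(H)                 # exact propagator via eigenbasis
    t = np.linspace(0.0, t_max, n, endpoint=False)
    c = U.conj().T @ np.array([1.0, 0.0])    # initial amplitudes in eigenbasis
    amp_b = (U[1, :] * c) @ np.exp(-1j * np.outer(w, t))
    p_b = np.abs(amp_b) ** 2
    # dominant nonzero Fourier component of P_b(t), in angular frequency
    spec = np.abs(np.fft.rfft(p_b - p_b.mean()))
    omega = 2 * np.pi * np.fft.rfftfreq(n, d=t[1] - t[0])
    return omega[np.argmax(spec)]
```

If the effective coupling grows as Vab(N) ∝ N² in the small-coupling regime, the extracted Ωab(N) inherits the same power law.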

  6. Parallel Quantum Circuit in a Tunnel Junction

    NASA Astrophysics Data System (ADS)

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-07-01

Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes, one of them linear in N, are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because of the normalisation of the electronic transparency to unity and the low-pass filter character of this transduction, large Ωab(N) cannot be captured by the tunnel junction. The broadly used concept of electrical contact between a metallic nanopad and a molecular device is better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N^2 power law is preserved for Ωab(N) and for Vab(N).

  7. Parallel Quantum Circuit in a Tunnel Junction.

    PubMed

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian

    2016-07-25

Spectral analysis of 1- and 2-state-per-line quantum buses is normally sufficient to determine the effective Vab(N) electronic coupling between the emitter and receiver states through the bus as a function of the number N of parallel lines. When Vab(N) is difficult to determine, a Heisenberg-Rabi time-dependent quantum exchange process must be triggered through the bus to capture the secular oscillation frequency Ωab(N) between those states. Two different regimes, one of them linear in N, are demonstrated for Ωab(N) as a function of N. When the initial preparation is replaced by coupling of the quantum bus to semi-infinite electrodes, the resulting quantum transduction process does not faithfully follow the Ωab(N) variations. Because of the normalisation of the electronic transparency to unity and the low-pass filter character of this transduction, large Ωab(N) cannot be captured by the tunnel junction. The broadly used concept of electrical contact between a metallic nanopad and a molecular device is better described as a quantum transduction process. At small coupling, and when N is small enough not to compensate for this small coupling, an N^2 power law is preserved for Ωab(N) and for Vab(N).

  8. Parallel phase-shifting digital holography using spectral estimation technique.

    PubMed

    Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu

    2014-09-20

We propose parallel phase-shifting digital holography using a spectral estimation technique, which enables the instantaneous acquisition of both spectral information and three-dimensional (3D) information of a moving object. In this technique, an interference fringe image that contains six holograms, with two phase shifts for each of three laser lines (red, green, and blue), is recorded by a space-division multiplexing method with a single-shot exposure. The 3D monochrome images of these three laser lines are numerically reconstructed by a computer and used to estimate the spectral reflectance distribution of the object using a spectral estimation technique. Preliminary experiments demonstrate the validity of the proposed technique.

  9. Electrochemical, H2O2-Boosted Catalytic Oxidation System

    NASA Technical Reports Server (NTRS)

    Akse, James R.; Thompson, John O.; Schussel, Leonard J.

    2004-01-01

    An improved water-sterilizing aqueous-phase catalytic oxidation system (APCOS) is based partly on the electrochemical generation of hydrogen peroxide (H2O2). This H2O2-boosted system offers significant improvements over prior dissolved-oxygen water-sterilizing systems in the way in which it increases oxidation capabilities, supplies H2O2 when needed, reduces the total organic carbon (TOC) content of treated water to a low level, consumes less energy than prior systems do, reduces the risk of contamination, and costs less to operate. This system was developed as a variant of part of an improved waste-management subsystem of the life-support system of a spacecraft. Going beyond its original intended purpose, it offers the advantage of being able to produce H2O2 on demand for surface sterilization and/or decontamination: this is a major advantage inasmuch as the benign byproducts of this H2O2 system, unlike those of systems that utilize other chemical sterilants, place no additional burden of containment control on other spacecraft air- or water-reclamation systems.

  10. Boosting productivity: a framework for professional/amateur collaborative teamwork

    NASA Astrophysics Data System (ADS)

    Al-Shedhani, Saleh S.

    2002-11-01

    As technology advances, remote operation of telescopes has paved the way for joint observational projects between Astronomy clubs. Equipped with a small telescope, a standard CCD, and a networked computer, the observatory can be set up to carry out several photometric studies. However, most club members lack the basic training and background required for such tasks. A collaborative network between professionals and amateurs is proposed to utilize professional know-how and amateurs' readiness for continuous observations. Working as a team, various long-term observational projects can be carried out using small telescopes. Professionals can play an important role in raising the standards of astronomy clubs via specialized training programs for members on how to use the available technology to search/observe certain events (e.g. supernovae, comets, etc.). Professionals in return can accumulate a research-relevant database and can set up an early notification scheme based on comparative analyses of the recently-added images in an online archive. Here we present a framework for the above collaborative teamwork that uses web-based communication tools to establish remote/robotic operation of the telescope, and an online archive and discussion forum, to maximize the interactions between professionals and amateurs and to boost the productivity of small telescope observatories.

  11. The dark matter annihilation boost from low-temperature reheating

    NASA Astrophysics Data System (ADS)

    Erickcek, Adrienne L.

    2015-11-01

    The evolution of the Universe between inflation and the onset of big bang nucleosynthesis is difficult to probe and largely unconstrained. This ignorance profoundly limits our understanding of dark matter: we cannot calculate its thermal relic abundance without knowing when the Universe became radiation dominated. Fortunately, small-scale density perturbations provide a probe of the early Universe that could break this degeneracy. If dark matter is a thermal relic, density perturbations that enter the horizon during an early matter-dominated era grow linearly with the scale factor prior to reheating. The resulting abundance of substructure boosts the annihilation rate by several orders of magnitude, which can compensate for the smaller annihilation cross sections that are required to generate the observed dark matter density in these scenarios. In particular, thermal relics with masses less than a TeV that thermally and kinetically decouple prior to reheating may already be ruled out by Fermi-LAT observations of dwarf spheroidal galaxies. Although these constraints are subject to uncertainties regarding the internal structure of the microhalos that form from the enhanced perturbations, they open up the possibility of using gamma-ray observations to learn about the reheating of the Universe.

  12. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
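The core of the approach above, decision stumps as weak classifiers combined by AdaBoost with adaptable sample weights, can be sketched as follows. This is a generic AdaBoost over threshold stumps for continuous features only; the paper's categorical decision rules and overfitting-avoidance strategy are not reproduced, and all names are our own.

```python
import numpy as np

def fit_adaboost(X, y, rounds=10):
    """Minimal AdaBoost over axis-aligned decision stumps for continuous
    features (labels y in {-1, +1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # adaptable sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):                  # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-classifier weight
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    # strong classifier: sign of the alpha-weighted stump votes
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

Handling categorical features amounts to swapping the threshold test for a subset-membership test inside the same loop, which is what lets the strong classifier mix both feature types without forced conversions.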

  13. Fault diagnosis algorithm based on switching function for boost converters

    NASA Astrophysics Data System (ADS)

    Cho, H.-K.; Kwak, S.-S.; Lee, S.-H.

    2015-07-01

A fault diagnosis algorithm, which is necessary for constructing a reliable power conversion system, should detect fault occurrences as soon as possible to protect the entire system from the fatal damage that results from system malfunction. In this paper, a fault diagnosis algorithm is proposed to detect open- and short-circuit faults that occur in a boost converter switch. The inductor voltage is abnormally kept at a positive DC value during a short-circuit fault in the switch, or at a negative DC value during an open-circuit fault until the inductor current becomes zero. By exploiting these abnormal properties, the inductor voltage is compared with the switching function to detect each fault type, generating a fault alarm when a fault occurs. From the fault alarm, a decision on the fault occurrence and the fault type is made in less than two switching periods using the proposed algorithm, which is constructed in analogue circuits. In addition, the proposed algorithm is robust to discontinuous current-mode operation. As a result, this algorithm features the advantages of low cost and simplicity because of its simple analogue circuit configuration.
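The comparison logic can be mimicked on sampled data in a few lines; the paper implements the equivalent in analogue circuitry, and the function, thresholds, and fault labels below are our own illustrative assumptions.

```python
def diagnose(v_L, switch_on, v_thresh=0.0):
    """Classify a boost-converter switch fault from sampled inductor
    voltage v_L paired with the switching function switch_on (True while
    the switch is commanded on).  Healthy operation: v_L > 0 while the
    switch conducts, v_L < 0 while it blocks.  A short keeps v_L positive
    regardless of the command; an open keeps it negative until the
    inductor current reaches zero (not modeled here)."""
    n_on = sum(1 for s in switch_on if s)
    n_off = len(switch_on) - n_on
    mismatch_on = sum(1 for v, s in zip(v_L, switch_on) if s and v <= v_thresh)
    mismatch_off = sum(1 for v, s in zip(v_L, switch_on) if not s and v >= v_thresh)
    if mismatch_on == 0 and mismatch_off == n_off:
        return "short-circuit"     # positive even when commanded off
    if mismatch_on == n_on and mismatch_off == 0:
        return "open-circuit"      # negative even when commanded on
    if mismatch_on == 0 and mismatch_off == 0:
        return "healthy"
    return "indeterminate"
```

The analogue version reaches the same decision within two switching periods because a single period already exposes both command states.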

  14. Heterodyning Time Resolution Boosting for Velocimetry and Reflectivity Measurements

    SciTech Connect

    Erskine, D J

    2004-08-02

A theoretical technique is described for boosting, by several times, the temporal resolving power of detectors such as streak cameras in experiments that measure light reflected from or transmitted through a target, including velocity interferometer (VISAR) measurements. This is a means of effectively increasing the number of resolvable time bins in a streak camera record past the limit imposed by the input slit width and blur on the output phosphor screen. The illumination intensity is modulated sinusoidally at a frequency similar to the limiting time response of the detector. A heterodyning effect beats the high-frequency science signal down to a lower-frequency beat signal, which is recorded together with the conventional science signal. Using three separate illuminating channels having different phases, the beat term is separated algebraically from the conventional signal. By numerically reversing the heterodyning and combining with the ordinary signal, the science signal can be reconstructed to better effective time resolution than the detector provides alone. The effective time resolution can be approximately halved for a single modulation frequency, and decreased further in inverse proportion to the number of independent modulation frequencies employed.
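The heterodyning step can be caricatured numerically (every frequency and the detector model below are invented for illustration): a science signal oscillating faster than the detector's cutoff is multiplied, at the light source, by a sinusoidal modulation, and the slow detector still records the low-frequency beat.

```python
import numpy as np

# invented numbers: a "fast" science signal, a slow detector, and an
# illumination modulated near the detector's limiting response
fs_detect = 1000.0                       # sampling rate of the record
t = np.arange(0.0, 2.0, 1.0 / fs_detect)
f_sig, f_mod = 180.0, 150.0              # science and modulation frequencies

science = 1.0 + 0.5 * np.cos(2 * np.pi * f_sig * t)   # fast reflectivity change
illum = 1.0 + np.cos(2 * np.pi * f_mod * t)           # modulated illumination
recorded = science * illum                            # what reaches the detector

# crude slow-detector model: moving-average low-pass, cutoff ~ 50 "Hz"
win = int(fs_detect / 50.0)
slow = np.convolve(recorded, np.ones(win) / win, mode="same")

# the 180 "Hz" line itself is filtered away, but the 30 "Hz" beat survives
spec = np.abs(np.fft.rfft(slow - slow.mean()))
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs_detect)
beat = freqs[np.argmax(spec)]            # lands at f_sig - f_mod
```

Knowing f_mod, the beat can then be shifted back up numerically, which is the reversal step the abstract describes; the three-phase channels are what make that inversion algebraically unambiguous.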

  15. Boosting the performance of red PHOLEDs by exciton harvesting

    NASA Astrophysics Data System (ADS)

    Chang, Y.-L.; Wang, Z. B.; Helander, M. G.; Qiu, J.; Lu, Z. H.

    2012-09-01

Significant development has been made on phosphorescent organic light emitting diodes (PHOLEDs) over the past decade, which eventually resulted in the commercialization of widely distributed active-matrix organic light emitting diode displays for mobile phones. However, higher-efficiency PHOLEDs are still needed to further reduce cost and lower power consumption for general lighting and LED backlight applications. In particular, red PHOLEDs currently have, in general, the lowest efficiencies among the three primary colors, most likely due to the energy-gap law. Therefore, a number of groups have made use of various device configurations, including insertion of a carrier-blocking or exciton-confining layer, doping of the transport layers, and multiple-emissive-zone structures, to improve device efficiency. However, these approaches are rather inconvenient for commercial applications. In this work, we have developed a simpler way to boost the performance of red PHOLEDs by incorporating an exciton-harvesting green emitter, which transfers a large portion of the energy to the co-deposited red emitter. A high external quantum efficiency (EQE) of 20.6% was achieved, which is among the best performances for red PHOLEDs.

  16. Memory boosting effect of Citrus limon, Pomegranate and their combinations.

    PubMed

    Riaz, Azra; Khan, Rafeeq Alam; Algahtani, Hussein A

    2014-11-01

Memory is greatly influenced by factors like food, stress and quality of sleep; hence the present study was designed to evaluate the effect of Citrus limon and Pomegranate juices on the memory of mice using a Harvard Panlab Passive Avoidance response apparatus controlled through an LE2708 Programmer. Passive avoidance is a fear-motivated test used to assess the short- or long-term memory of small animals, which measures the latency to enter the black compartment. Animals at MCLD showed highly significant and significant increases in latency to enter the black compartment after 3 and 24 hours respectively compared with control; animals at HCLD showed a significant increase in latency only after 3 hours. Animals at both low and moderate doses of pomegranate showed a significant increase in test latency after 3 hours, while animals at the high dose showed highly significant and significant increases in latency after 3 and 24 hours respectively. There was a highly significant and significant increase in latency in animals given the CPJ-1 combination after 3 and 24 hours respectively; however, animals that received the CPJ-2 combination showed a significant increase in latency only after 3 hours as compared to control. These results suggest that Citrus limon and Pomegranate contain phytochemicals and essential nutrients which boost memory, particularly short-term memory. Hence it may be concluded that the flavonoids in these juices may be responsible for the memory-enhancing effects, and a synergistic effect is observed with the CPJ-1 and CPJ-2 combinations.

  17. Massage-like stroking boosts the immune system in mice

    PubMed Central

    Major, Benjamin; Rattazzi, Lorenza; Brod, Samuel; Pilipović, Ivan; Leposavić, Gordana; D’Acquisto, Fulvio

    2015-01-01

    Recent clinical evidence suggests that the therapeutic effect of massage involves the immune system and that this can be exploited as an adjunct therapy together with standard drug-based approaches. In this study, we investigated the mechanisms behind these effects exploring the immunomodulatory function of stroking as a surrogate of massage-like therapy in mice. C57/BL6 mice were stroked daily for 8 days either with a soft brush or directly with a gloved hand and then analysed for differences in their immune repertoire compared to control non-stroked mice. Our results show that hand- but not brush-stroked mice demonstrated a significant increase in thymic and splenic T cell number (p < 0.05; p < 0.01). These effects were not associated with significant changes in CD4/CD8 lineage commitment or activation profile. The boosting effects on T cell repertoire of massage-like therapy were associated with a decreased noradrenergic innervation of lymphoid organs and counteracted the immunosuppressive effect of hydrocortisone in vivo. Together our results in mice support the hypothesis that massage-like therapies might be of therapeutic value in the treatment of immunodeficiencies and related disorders and suggest a reduction of the inhibitory noradrenergic tone in lymphoid organs as one of the possible explanations for their immunomodulatory function. PMID:26046935

  18. Phasic boosting of auditory perception by visual emotion.

    PubMed

    Selinger, Lenka; Domínguez-Borràs, Judith; Escera, Carles

    2013-12-01

Emotionally negative stimuli boost perceptual processes. Little is known, however, about the timing of this modulation. The present study aims at elucidating the phasic effects of emotional processing on auditory processing within subsequent time-windows of visual emotional processing in humans. We recorded the electroencephalogram (EEG) while participants responded to a discrimination task of faces with neutral or fearful expressions. A brief complex tone, which subjects were instructed to ignore, was presented concomitantly, but with different asynchronies relative to the image onset. Analyses of the auditory N1 event-related potential (ERP) revealed enhanced brain responses in the presence of fearful faces. Importantly, this effect occurred at picture-tone asynchronies of 100 and 150 ms, but not when these were displayed simultaneously, or at 50 ms or 200 ms asynchrony. These results confirm the existence of a fast-operating crossmodal effect of visual emotion on auditory processing, suggesting a phasic variation according to the time-course of emotional processing.

  19. OBSERVATIONS OF DOPPLER BOOSTING IN KEPLER LIGHT CURVES

    SciTech Connect

    Van Kerkwijk, Marten H.; Breton, Rene P.; Justham, Stephen; Rappaport, Saul A.; Podsiadlowski, Philipp; Han, Zhanwen

    2010-05-20

Among the initial results from Kepler were two striking light curves, for KOI 74 and KOI 81, in which the relative depths of the primary and secondary eclipses showed that the more compact, less luminous object was hotter than its stellar host. That result became particularly intriguing because a substellar mass had been derived for the secondary in KOI 74, which would make the high temperature challenging to explain; in KOI 81, the mass range for the companion was also reported to be consistent with a substellar object. We re-analyze the Kepler data and demonstrate that both companions are likely to be white dwarfs. We also find that the photometric data for KOI 74 show a modulation in brightness as the more luminous star orbits, due to Doppler boosting. The magnitude of the effect is sufficiently large that we can use it to infer a radial velocity amplitude accurate to 1 km s^-1. As far as we are aware, this is the first time a radial-velocity curve has been measured photometrically. Combining our velocity amplitude with the inclination and primary mass derived from the eclipses and primary spectral type, we infer a secondary mass of 0.22 ± 0.03 M_sun. We use our estimates to consider the likely evolutionary paths and mass-transfer episodes of these binary systems.
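In the small-velocity limit, Doppler boosting modulates the observed flux by roughly ΔF/F ≈ (3 − α)·v_r/c, where α is the source's spectral index in the observed band. The paper fits the full light curve rather than this first-order textbook relation; the function below is only our own order-of-magnitude illustration of why a photometric amplitude translates into a radial-velocity amplitude.

```python
C_KM_S = 299_792.458  # speed of light, km/s

def beaming_velocity(flux_amplitude, alpha=0.0):
    """Radial-velocity amplitude implied by a fractional Doppler-boosting
    amplitude dF/F ~ (3 - alpha) * v_r / c.  alpha = 0 assumes a flat
    spectrum; a hot star observed in the Kepler band would have a
    different (assumed) value, shifting the inferred velocity."""
    return flux_amplitude * C_KM_S / (3.0 - alpha)
```

A fractional photometric amplitude of order 1.5e-4 then corresponds to an orbit of order 15 km/s, which shows why Kepler-quality photometry can resolve velocities to about 1 km/s.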

  20. Sparse approximation through boosting for learning large scale kernel machines.

    PubMed

    Sun, Ping; Yao, Xin

    2010-06-01

Recently, sparse approximation has become a preferred method for learning large scale kernel machines. This technique attempts to represent the solution with only a subset of original data points, also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criteria. The computational complexity of several resultant algorithms scales as O(NM^2) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors as well as the number of steps of forward selection. For some large scale data sets, to obtain a better solution, we are sometimes required to include more basis vectors, which means that M is not trivial in this situation. However, limited computational resources (e.g., memory) prevent us from including too many vectors. To handle this dilemma, we propose to add an ensemble of basis vectors instead of only one at each forward step. The proposed method, closely related to gradient boosting, can decrease the required number M of forward steps significantly, and thus a large fraction of the computational cost is saved. Numerical experiments on three large scale regression tasks and a classification problem demonstrate the effectiveness of the proposed approach.
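The proposed change, adding an ensemble of basis vectors per forward step rather than one, can be sketched with a correlation-based selection rule. Correlation with the residual here is only a stand-in for the paper's boosting-derived criterion; the function name, batch logic, and ridge term are our own.

```python
import numpy as np

def forward_basis_selection(K, y, steps=3, batch=4, ridge=1e-6):
    """Forward selection that adds `batch` basis vectors (columns of K)
    per step instead of a single one, then refits the coefficients by
    regularized least squares on the selected set."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(steps):
        scores = np.abs(K.T @ residual)          # affinity with the residual
        scores[selected] = -np.inf               # never re-pick a vector
        selected += list(np.argsort(scores)[-batch:])
        Ks = K[:, selected]
        gram = Ks.T @ Ks + ridge * np.eye(len(selected))
        coef = np.linalg.solve(gram, Ks.T @ y)
        residual = y - Ks @ coef
    return selected, coef
```

With `batch` vectors per step, reaching M basis vectors takes M/batch refits instead of M, which is where the claimed saving in forward steps comes from.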

  1. Redundant Interdependencies Boost the Robustness of Multiplex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Bianconi, Ginestra

    2017-01-01

In the analysis of the robustness of multiplex networks, it is commonly assumed that a node is functioning only if its interdependent nodes are simultaneously functioning. According to this model, a multiplex network becomes more and more fragile as the number of layers increases. In this respect, the addition of a new layer of interdependent nodes to a preexisting multiplex network will never improve its robustness. Whereas such a model seems appropriate for understanding the effect of interdependencies in the simplest scenario of a network composed of only two layers, it may be unsuitable for characterizing the robustness of real systems formed by multiple network layers. In fact, it seems unrealistic that a real system evolved, through the development of multiple layers of interactions, towards a fragile structure. In this paper, we introduce a model of percolation where the condition that makes a node functional is that the node is functioning in at least two of the layers of the network. The model reduces to the commonly adopted percolation model for multiplex networks when the number of layers equals two. For a larger number of layers, however, the model describes a scenario where the addition of new layers boosts the robustness of the system by creating redundant interdependencies among layers. We prove this fact through the development of a message-passing theory that is able to characterize the model in both synthetic and real-world multiplex graphs.
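Ignoring network structure altogether, the benefit of redundant interdependencies can already be seen in a mean-field caricature (our own simplification, not the paper's message-passing theory): if a node survives independently in each layer with probability p and only needs to be active in at least two layers, adding layers can only help.

```python
from math import comb

def p_functional(p, layers):
    """Probability that a node is functional when it must be active in at
    least two of `layers` independent layers, each of which keeps it
    alive with probability p (a structureless caricature of the
    redundant-interdependency percolation condition)."""
    return sum(comb(layers, k) * p ** k * (1 - p) ** (layers - k)
               for k in range(2, layers + 1))
```

At p = 0.5 the survival probability climbs from 0.25 with two layers to 0.5 with three: under the at-least-two rule, extra layers add redundancy instead of fragility.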

  2. Negative emotion boosts quality of visual working memory representation.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2016-08-01

Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulates WM capacity (quantity) or resolution (quality), two independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking the best-matching color on a continuous color wheel. On each trial, before the visual WM task, one of three emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the three emotion conditions. These effects generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first two experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of the quantitative and qualitative aspects of visual WM representation.

  3. Max-confidence boosting with uncertainty for visual tracking.

    PubMed

    Guo, Wen; Cao, Liangliang; Han, Tony X; Yan, Shuicheng; Xu, Changsheng

    2015-05-01

The challenges in visual tracking call for a method that can reliably recognize the subject of interest in an environment where the appearance of both the background and the foreground changes with time. Many existing studies model this problem as tracking by classification with online updating of the classification models; however, most of them overlook the ambiguity in visual modeling and do not consider prior information in the tracking process. In this paper, we present a novel visual tracking method called max-confidence boosting (MCB), which explores a new way of performing online updates under ambiguous visual conditions. The MCB framework models uncertainty in prior knowledge using indeterministic labels, which are used in updating models from previous frames and the new frame. Our proposed MCB tracker allows ambiguity in the tracking process and can effectively alleviate the drift problem. Experimental results on challenging video sequences verify the success of our method, and our MCB tracker outperforms a number of state-of-the-art tracking-by-classification methods.

  4. Controlled Vocabularies Boost International Participation and Normalization of Searches

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    2006-01-01

The Global Change Master Directory's (GCMD) science staff set out to document Earth science data and provide a mechanism for its discovery, in fulfillment of a commitment to NASA's Earth Science program and to the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). At the time, the question of whether to offer a controlled-vocabulary search or a free-text search was resolved with a decision to support both. Feedback from the user community indicated that being asked to independently determine the appropriate 'English' words through a free-text search would be very difficult. The preference was to be 'prompted' for relevant keywords through a hierarchy of well-designed science keywords. The controlled keywords serve to 'normalize' the search through knowledgeable input by metadata providers. Earth science keyword taxonomies were developed, and rules for additions, deletions, and modifications were created. Secondary sets of controlled vocabularies for related descriptors such as projects, data centers, instruments, platforms, related data set link types, and locations, along with free-text searches, assist users in further refining their search results. Through this robust 'search and refine' capability in the GCMD, users are directed to the data and services they seek. The next step in guiding users more directly to the resources they desire is to build a 'reasoning' capability for search through the use of ontologies. Incorporating twelve sets of Earth science keyword taxonomies has boosted the GCMD's ability to help users define and more directly retrieve data of choice.

  5. Clinton budget squeezes EPA, boosts federal R D

    SciTech Connect

    Begley, R.

    1993-04-21

Although Environmental Protection Agency chief Carol Browner tried to portray the numbers in a positive light, a budget cut is a budget cut, and that is what she was handed by her new boss. Despite Clinton Administration rhetoric on the environment, the $6.4-billion EPA budget for fiscal 1994 is down almost 8% from 1993. The superfund program is hit hardest, down 6%, to $1.5 billion. Browner counts funds from the President's 1993 stimulus bill--currently in limbo in Congress--in her 1994 budget to arrive at an increase. She says 1994 will bring greater emphasis on pollution prevention, collaborative programs with industry on toxic releases, and improvement in EPA's science and research activities. EPA's air and pesticides programs will get more money, as will hazardous waste, where EPA says changes will "eliminate unnecessary and burdensome requirements" on industry and speed up corrective action. Water quality programs will be cut, as will the toxic substances program, although the Toxic Release Inventory will get a boost.

  6. Boosting forward-time population genetic simulators through genotype compression

    PubMed Central

    2013-01-01

    Background Forward-time population genetic simulations play a central role in deriving and testing evolutionary hypotheses. Such simulations may be data-intensive, depending on the settings to the various parameters controlling them. In particular, for certain settings, the data footprint may quickly exceed the memory of a single compute node. Results We develop a novel and general method for addressing the memory issue inherent in forward-time simulations by compressing and decompressing, in real-time, active and ancestral genotypes, while carefully accounting for the time overhead. We propose a general graph data structure for compressing the genotype space explored during a simulation run, along with efficient algorithms for constructing and updating compressed genotypes which support both mutation and recombination. We tested the performance of our method in very large-scale simulations. Results show that our method not only scales well, but that it also overcomes memory issues that would cripple existing tools. Conclusions As evolutionary analyses are being increasingly performed on genomes, pathways, and networks, particularly in the era of systems biology, scaling population genetic simulators to handle large-scale simulations is crucial. We believe our method offers a significant step in that direction. Further, the techniques we provide are generic and can be integrated with existing population genetic simulators to boost their performance in terms of memory usage. PMID:23763838
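The parent-plus-diff idea behind such genotype compression can be sketched in a few lines. This is a toy rendering, much simpler than the paper's graph structure (which also supports recombination), and the class and method names are invented.

```python
class GenotypeStore:
    """Diff-based genotype compression for a forward-time simulator:
    each genotype is stored as a reference to its parent plus the
    positions that mutated, and is decompressed on demand by replaying
    diffs from the fully stored root."""

    def __init__(self, root):
        # the root genotype is stored in full, as position -> allele
        self.nodes = [(None, dict(enumerate(root)))]

    def add_mutant(self, parent_id, diffs):
        """Register a child genotype as a parent reference + mutations
        (diffs maps position -> new allele); returns the child's id."""
        self.nodes.append((parent_id, dict(diffs)))
        return len(self.nodes) - 1

    def decompress(self, node_id):
        """Rebuild a full genotype by replaying diffs root-first."""
        chain = []
        while node_id is not None:
            parent_id, diffs = self.nodes[node_id]
            chain.append(diffs)
            node_id = parent_id
        genotype = {}
        for diffs in reversed(chain):
            genotype.update(diffs)
        return [genotype[i] for i in range(len(genotype))]
```

Memory then grows with the number of mutations rather than with genome length times population size, at the cost of replay time at decompression, the time/space trade-off the paper accounts for.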

  7. Task parallelism and high-performance languages

    SciTech Connect

    Foster, I.

    1996-03-01

The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is how to incorporate support for task parallelism. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with, for example, a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.
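The mixed model argued for above, a task-parallel coordination layer over data-parallel kernels, can be caricatured in a few lines. Python stands in for Fortran M here; the structure, not the language, is the point, and all names are our own.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum_of_squares(chunk):
    # the "data-parallel" kernel: one operation applied across a partition
    return sum(x * x for x in chunk)

def task_parallel_driver(data, n_tasks=4):
    """Task-parallel coordination layer: spawn independent workers, each
    running the data-parallel kernel on its own slice of the data, then
    combine the partial results."""
    chunks = [data[i::n_tasks] for i in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        partials = list(pool.map(data_parallel_sum_of_squares, chunks))
    return sum(partials)
```

In Fortran M the coordination layer would be expressed with explicit tasks and channels rather than a thread pool, but the division of labor between the layers is the same.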

  8. Sassena — X-ray and neutron scattering calculated from molecular dynamics trajectories using massively parallel computers

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin; Smith, Jeremy C.

    2012-07-01

Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has now become a high performance computing problem in itself. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. The strong scaling tested on the Jaguar Petaflop Cray XT5 at Oak Ridge National Laboratory exhibits virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism are supported, the software is flexible enough to cover scaling demands for different types of scattering calculations. The result is a high performance tool capable of unifying large-scale supercomputing and a wide variety of neutron/synchrotron technology. Catalogue identifier: AELW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 003 742 No. of bytes in distributed program, including test data, etc.: 798 Distribution format: tar.gz Programming language: C++, OpenMPI Computer: Distributed Memory, Cluster of Computers with high performance network, Supercomputer Operating system: UNIX, LINUX, OSX Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. Tested with up to 7000 processors RAM: Up to 1 Gbytes/core Classification: 6.5, 8 External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to

  9. A generalized parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming-Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.

  10. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  11. Parallel supercomputing with commodity components

    SciTech Connect

    Warren, M.S.; Goda, M.P.; Becker, D.J.

    1997-09-01

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  12. ASP: a parallel computing technology

    NASA Astrophysics Data System (ADS)

    Lea, R. M.

    1990-09-01

ASP modules constitute the basis of a parallel computing technology platform for the rapid development of a broad range of numeric and symbolic information processing systems. Based on off-the-shelf general-purpose hardware and software modules, ASP technology is intended to increase productivity in the development (and competitiveness in the marketing) of cost-effective low-MIMD/high-SIMD Massively Parallel Processors (MPPs). The paper discusses ASP module philosophy and demonstrates how ASP modules can satisfy the market, algorithmic, architectural, and engineering requirements of such MPPs. In particular, two specific ASP modules, based on VLSI and WSI technologies, are studied as case examples of ASP technology, the latter reporting 1 TOPS/ft³, 1 GOPS/W, and 1 MOPS/$ as ball-park figures-of-merit of cost-effectiveness.

  13. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID, and each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only; that data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
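The overrun policy described, losing only the oldest data, is the behavior of a fixed-capacity ring buffer; in Python, collections.deque with maxlen models it directly (the capacity here is arbitrary):

```python
# "Lose only the oldest data on overrun" as a fixed-capacity ring buffer:
# appends past capacity silently evict the oldest entry.
from collections import deque

buffer = deque(maxlen=3)        # capacity chosen for illustration only
for frame in ["f1", "f2", "f3", "f4", "f5"]:
    buffer.append(frame)        # f4 evicts f1, f5 evicts f2

print(list(buffer))             # -> ['f3', 'f4', 'f5']
```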

  14. A generalized parallel replica dynamics

    SciTech Connect

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming–Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.

  15. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  16. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

We present a parallel multiplex laser feedback interferometer based on spatial multiplexing, which avoids the signal crosstalk of previous feedback interferometers. The interferometer outputs two closely parallel laser beams, whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and an accuracy of 7.8 nm within a range of 100 μm.

  17. Parallelism in Manipulator Dynamics. Revision.

    DTIC Science & Technology

    1983-12-01

The Inverse Dynamics problem consists (loosely) of computing the motor torques necessary to produce a desired manipulator motion. This report investigates the high degree of parallelism inherent in the computations and presents two "mathematically exact" formulations. A VLSI implementation architecture is suggested, and possible applications to incorporating dynamical considerations into trajectory planning are indicated.

  18. Parallel Symmetric Eigenvalue Problem Solvers

    DTIC Science & Technology

    2015-05-01

This thesis presents parallel solvers for the symmetric eigenvalue problem, including Fortran-based implementations derived from Sandia's publicly available TraceMin code, and describes the parallel kernels required by the code. Each of the methods has its own unique advantages and disadvantages, summarized in table 3.1.

  19. Parallel Algorithms for Computer Vision.

    DTIC Science & Technology

    1987-01-01

ETL-0456 (Massachusetts Institute of Technology, Cambridge). This report presents parallel algorithms for computer vision tasks based on regularization principles, such as edge detection, stereo, motion, surface interpolation, and shape from shading. The basic members of class I are convolutions; algorithms implemented (in collaboration with Thinking Machines Corporation) include parallel convolution, zero-crossing detection, stereo matching, and surface reconstruction.

  20. Lightweight Specifications for Parallel Correctness

    DTIC Science & Technology

    2012-12-05

"Lightweight Specifications for Parallel Correctness," doctoral dissertation by Jacob Samuels Burnim (Doctor of Philosophy), Fall 2012; committee members included George Necula and David Wessel.

  1. National Combustion Code: Parallel Performance

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2001-01-01

    This report discusses the National Combustion Code (NCC). The NCC is an integrated system of codes for the design and analysis of combustion systems. The advanced features of the NCC meet designers' requirements for model accuracy and turn-around time. The fundamental features at the inception of the NCC were parallel processing and unstructured mesh. The design and performance of the NCC are discussed.

  2. Parallel Algorithms for Computer Vision.

    DTIC Science & Technology

    1989-01-01

The project demonstrated the Vision Machine system processing images and recognizing objects through the integration of several visual cues. The overall organization of the Vision Machine system is based on parallel processing of the images by independent processes, whose outputs are smoothed and made dense by exploiting known constraints within each process (for example, that disparity is smooth); this is the stage of approximation.

  3. Industrial Assessment Center Helps Boost Efficiency for Small and Medium Manufacturers

    SciTech Connect

    Johnson, Mark; Friedman, David

    2016-12-15

    The Industrial Assessment Center program helps small and medium manufacturers boost efficiency and save energy. It pairs companies with universities as students perform energy assessments and provide recommendations to improve their facilities.

  4. Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition

    EPA Science Inventory

    Boosted regression tree (BRT) models were developed to quantify the nonlinear relationships between landscape variables and nutrient concentrations in a mesoscale mixed land cover watershed during base-flow conditions. Factors that affect instream biological components, based on ...
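As a rough illustration of what a boosted regression tree does, here is a toy gradient-boosting loop over depth-1 stumps fit to squared-error residuals; it is a generic sketch of the BRT technique, not the EPA's model, software, or data:

```python
# Minimal gradient boosting with regression stumps: each round fits a
# one-split "tree" to the current residuals and adds it with shrinkage.
def fit_stump(x, r):
    """Best single-split stump minimizing squared error on residuals r."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for k in range(1, len(x)):
        thr = (x[order[k - 1]] + x[order[k]]) / 2.0
        left = [r[i] for i in range(len(x)) if x[i] <= thr]
        right = [r[i] for i in range(len(x)) if x[i] > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda v: ml if v <= thr else mr

def boost(x, y, rounds=50, lr=0.1):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # pseudo-residuals
        s = fit_stump(x, resid)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda v: sum(lr * s(v) for s in stumps)

x = [i / 10.0 for i in range(40)]
y = [xi ** 2 for xi in x]             # simple nonlinear target
model = boost(x, y)
mse = sum((model(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(x)
```

The ensemble of many weak, nonlinear splits is what lets BRT capture the nonlinear landscape-nutrient relationships the abstract refers to.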

  5. Hour-Long Nap May Boost Brain Function in Older Adults

    MedlinePlus

Hour-Long Nap May Boost Brain Function in Older Adults: napping was linked to improved memory. The study examined whether napping during the day had any effects on brain function; nearly 60 percent of the people studied regularly napped.

  6. Stable detection of expanded target by the use of boosting random ferns

    NASA Astrophysics Data System (ADS)

    Deng, Li; Wang, Chunhong; Rao, Changhui

    2012-10-01

This paper studies the problem of keypoint recognition for extended targets that lack texture information and introduces an approach for stable detection of such targets called boosting random ferns (BRF). Because common descriptors do not work as well in this circumstance as in typical cases, keypoint matching is instead cast as a classification task, so as to exploit the trainable nature of classifiers. The kernel of BRF consists of random ferns as the classifier and AdaBoost (Adaptive Boosting) as the framework, so that the accuracy of the random ferns classifier can be boosted to a relatively high level. Experiments compare BRF with the widely used SURF descriptor and a single random ferns classifier. The results show that BRF obtains a higher recognition rate of keypoints. Moreover, for image sequences, BRF provides stronger stability than SURF in target detection, which demonstrates the efficiency of BRF for extended targets lacking texture information.
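The BRF recipe, AdaBoost wrapped around weak classifiers, can be sketched with single-feature threshold stumps standing in for random ferns; the dataset and learner below are illustrative only, not the paper's:

```python
import math

# Schematic AdaBoost loop (the "frame" in BRF), with threshold stumps as
# stand-in weak learners instead of the paper's random ferns. Labels: +1/-1.
def best_stump(X, y, w):
    """Weighted-error-minimizing stump: (feature, threshold, polarity)."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted(set(row[f] for row in X)):
            for pol in (1, -1):
                pred = [pol if row[f] <= thr else -pol for row in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, pol)
    err, f, thr, pol = best
    return err, (lambda row: pol if row[f] <= thr else -pol)

def adaboost(X, y, rounds=10):
    w = [1.0 / len(X)] * len(X)          # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        err, h = best_stump(X, y, w)
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, h))
        # up-weight misclassified samples, down-weight correct ones
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda row: 1 if sum(a * h(row) for a, h in ensemble) >= 0 else -1

X = [[0, 0], [1, 0], [0, 1], [2, 2], [3, 1], [1, 3]]
y = [-1, -1, -1, 1, 1, 1]
clf = adaboost(X, y)
```

No single stump separates this data, but the weighted vote of a few rounds does, which is the sense in which boosting lifts a weak classifier to a "relatively high level" of accuracy.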

  7. Industrial Assessment Center Helps Boost Efficiency for Small and Medium Manufacturers

    ScienceCinema

    Johnson, Mark; Friedman, David

    2017-01-06

    The Industrial Assessment Center program helps small and medium manufacturers boost efficiency and save energy. It pairs companies with universities as students perform energy assessments and provide recommendations to improve their facilities.

  8. Boosted objects and jet substructure at the LHC: Report of BOOST2012, held at IFIC Valencia, 23rd-27th of July 2012

    SciTech Connect

    Altheimer, A.

    2014-03-21

This report of the BOOST2012 workshop presents the results of four working groups that studied key aspects of jet substructure. We discuss the potential of first-principle QCD calculations to yield a precise description of the substructure of jets and study the accuracy of state-of-the-art Monte Carlo tools. Limitations of the experiments' ability to resolve substructure are evaluated, with a focus on the impact of additional (pile-up) proton-proton collisions on jet substructure performance in future LHC operating scenarios. The final section summarizes the lessons learnt from jet substructure analyses in searches for new physics in the production of boosted top quarks.

  9. Parallel processing of genomics data

    NASA Astrophysics Data System (ADS)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per single experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
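The chunk-and-merge pattern behind such parallel preprocessing can be sketched with Python's standard library; the data layout and the statistic computed (per-variant allele frequency) are illustrative assumptions, not the authors' pipeline:

```python
# Schematic parallel preprocessing: split a genotype table into chunks,
# compute a per-variant statistic on each chunk concurrently, merge results.
from concurrent.futures import ThreadPoolExecutor

def allele_frequencies(chunk):
    # chunk: list of (variant_id, genotypes), genotypes coded 0/1/2 per sample
    return {vid: sum(g) / (2.0 * len(g)) for vid, g in chunk}

def parallel_preprocess(variants, n_workers=4):
    size = (len(variants) + n_workers - 1) // n_workers
    chunks = [variants[i:i + size] for i in range(0, len(variants), size)]
    freqs = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(allele_frequencies, chunks):
            freqs.update(partial)     # merge per-chunk results
    return freqs

# Synthetic table: 100 variants, 10 samples each (IDs are made up)
variants = [("rs%d" % i, [i % 3] * 10) for i in range(100)]
freqs = parallel_preprocess(variants)
```

Because variants are independent, the work partitions cleanly and the merge is a simple dictionary union, which is why this class of analysis scales well.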

  10. Parallelism in integrated fluidic circuits

    NASA Astrophysics Data System (ADS)

    Bousse, Luc J.; Kopf-Sill, Anne R.; Parce, J. W.

    1998-04-01

Many research groups around the world are working on integrated microfluidics. The goal of these projects is to automate and integrate the handling of liquid samples and reagents for measurement and assay procedures in chemistry and biology. Ultimately, it is hoped that this will lead to a revolution in chemical and biological procedures similar to that caused in electronics by the invention of the integrated circuit. The optimal size scale of channels for liquid flow is determined by basic constraints to be somewhere between 10 and 100 micrometers. In larger channels, mixing by diffusion takes too long; in smaller channels, the number of molecules present is so low it makes detection difficult. At Caliper, we are making fluidic systems in glass chips with channels in this size range, based on electroosmotic flow, and fluorescence detection. One application of this technology is rapid assays for drug screening, such as enzyme assays and binding assays. A further challenge in this area is to perform multiple functions on a chip in parallel, without a large increase in the number of inputs and outputs. A first step in this direction is a fluidic serial-to-parallel converter. Fluidic circuits will be shown with the ability to distribute an incoming serial sample stream to multiple parallel channels.

  11. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
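The dense kernel at the heart of the algorithm is standard Cholesky factorization (A = L·Lᵀ, L lower triangular); a plain serial Python version of that kernel is shown for reference, with the paper's contribution being to run many such factorizations concurrently on a 2-D processor grid:

```python
import math

# Serial dense Cholesky factorization: A = L * L^T with L lower triangular.
# This is the per-block kernel; the paper parallelizes many of these at once.
def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)          # requires A symmetric positive definite
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
L = cholesky(A)   # -> [[2,0,0],[1,2,0],[1,1,2]]
```

In the sparse setting, only the nonzero pattern of A is stored and the column dependencies form an elimination tree, which is what exposes the independent dense subproblems the abstract mentions.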

  12. Parallel Environment for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Diaz, Bruno Julia

    2009-03-01

To facilitate numerical study of noise and decoherence in QC algorithms, and of the efficacy of error correction schemes, we have developed a Fortran 90 quantum computer simulator with parallel processing capabilities. It permits rapid evaluation of quantum algorithms for a large number of qubits and for various ``noise'' scenarios. State vectors are distributed over many processors, to employ a large number of qubits. Parallel processing is implemented by the Message-Passing Interface protocol. A description of how to spread the wave function components over many processors, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors, will be delineated. Grover's search and Shor's factoring algorithms with noise will be discussed as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to diverse noise effects, corresponding to solving a stochastic Schrodinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its associated entropy. Applications of this powerful tool are made to delineate the stability and correction of QC processes using Hamiltonian based dynamics.
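The core state-vector operation that such a simulator distributes can be sketched serially: a one-qubit gate mixes pairs of amplitudes whose indices differ only in bit k. A pure-Python illustration of the indexing (not the authors' Fortran 90/MPI code):

```python
import math

# A one-qubit gate on an n-qubit state vector mixes amplitude pairs whose
# indices differ only in bit k; this index structure is also what determines
# which amplitudes must be exchanged when the vector is split across ranks.
def apply_single_qubit_gate(state, gate, k, n_qubits):
    """state: list of 2**n_qubits amplitudes; gate: 2x2 matrix."""
    new = list(state)
    for i in range(len(state)):
        if (i >> k) & 1 == 0:       # i has bit k clear; j is its partner
            j = i | (1 << k)
            a0, a1 = state[i], state[j]
            new[i] = gate[0][0] * a0 + gate[0][1] * a1
            new[j] = gate[1][0] * a0 + gate[1][1] * a1
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
state = [1.0, 0.0, 0.0, 0.0]        # |00> for 2 qubits
state = apply_single_qubit_gate(state, H, 0, 2)
# amplitudes are now 1/sqrt(2) at indices 0 and 1: (|00> + |01>)/sqrt(2)
```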

  13. Hospital takes customer service to new level, sees positive effect on bottom line.

    PubMed

    1997-06-01

    Service with a smile boosts bottom line. Treating co-workers and patients like guests is the linchpin of a whole new philosophy at Bradley Memorial Hospital in Cleveland, TN. Administrators there insist a host of new customer service programs, from cap and gown graduation ceremonies to bunny bucks, has resulted in dramatic financial improvements and more satisfied patients and staff.

  14. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
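The sequential-updating idea can be sketched in a single process: within one Monte Carlo sweep, the domains are visited one after another rather than simultaneously, so no two sites are modified at the same time. The toy below uses a 1D lattice gas with site-local energies, so the decomposition only illustrates the update schedule, not the authors' interacting 2D model:

```python
import math
import random

# Toy 1D lattice gas with particle-hole Metropolis flips. The lattice is
# split into domains updated in sequence within each sweep (the paper's
# "sequential updating"); energies here are site-local for brevity.
random.seed(42)

def sweep(lattice, mu=1.0, beta=1.0, n_domains=2):
    size = len(lattice) // n_domains
    for d in range(n_domains):                 # domains taken in sequence
        for site in range(d * size, (d + 1) * size):
            trial = 1 - lattice[site]          # propose particle <-> hole
            d_energy = -mu * (trial - lattice[site])
            if d_energy <= 0 or random.random() < math.exp(-beta * d_energy):
                lattice[site] = trial          # Metropolis acceptance
    return lattice

lattice = [0] * 100
for _ in range(10):
    sweep(lattice)
# density relaxes toward e^(beta*mu) / (1 + e^(beta*mu)) ~ 0.73
```

With interactions, only the boundary sites of each domain couple across domains, which is why sequential sub-sweeps avoid the conflicting simultaneous updates that break detailed balance.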

  15. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  16. Adaptive Replanning to Account for Lumpectomy Cavity Change in Sequential Boost After Whole-Breast Irradiation

    SciTech Connect

    Chen, Xiaojian; Qiao, Qiao; DeVries, Anthony; Li, Wenhui; Currey, Adam; Kelly, Tracy; Bergom, Carmen; Wilson, J. Frank; Li, X. Allen

    2014-12-01

Purpose: To evaluate the efficiency of standard image-guided radiation therapy (IGRT) to account for lumpectomy cavity (LC) variation during whole-breast irradiation (WBI) and propose an adaptive strategy to improve dosimetry if IGRT fails to address the interfraction LC variations. Methods and Materials: Daily diagnostic-quality CT data acquired during IGRT in the boost stage using an in-room CT for 19 breast cancer patients treated with sequential boost after WBI in the prone position were retrospectively analyzed. Contours of the LC, treated breast, ipsilateral lung, and heart were generated by populating contours from planning CTs to boost fraction CTs using an auto-segmentation tool with manual editing. Three plans were generated on each fraction CT: (1) a repositioning plan by applying the original boost plan with the shift determined by IGRT; (2) an adaptive plan by modifying the original plan according to a fraction CT; and (3) a reoptimization plan by a full-scale optimization. Results: Significant variations were observed in LC. The change in LC volume at the first boost fraction ranged from a 70% decrease to a 50% increase of that on the planning CT. The adaptive and reoptimization plans were comparable. Compared with the repositioning plans, the adaptive plans led to an improvement in target coverage for an increased LC case (1 of 19, 7.5% increase in planning target volume evaluation volume V95%), and breast tissue sparing for an LC decrease larger than 35% (3 of 19, 7.5% decrease in breast evaluation volume V50%; P=.008). Conclusion: Significant changes in LC shape and volume at the time of boost that deviate from the original plan for WBI with sequential boost can be addressed by adaptive replanning at the first boost fraction.

  17. 2. LOOKING DOWN THE LINED POWER CANAL AS IT WINDS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. LOOKING DOWN THE LINED POWER CANAL AS IT WINDS ITS WAY TOWARD THE CEMENT MILL Photographer: Walter J. Lubken, November 19, 1907 - Roosevelt Power Canal & Diversion Dam, Parallels Salt River, Roosevelt, Gila County, AZ

  18. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4866 No. of bytes in distributed program, including test data, etc.: 42 114 Distribution format: tar.gz Programming language: Fortran 90 and MPI Computer: Any system that supports Fortran 90 and MPI Operating system: developed and tested at the Pittsburgh Supercomputer Center, at the Barcelona Supercomputer (BSC/CNS) and on multi-processor Macs and PCs. For cases where distributed density matrix evaluation is invoked, the BLACS and SCALAPACK packages are needed. Has the code been vectorized or parallelized?: Yes Classification: 4.15 External routines: LAPACK, SCALAPACK, BLACS Nature of problem: Analysis of quantum computation algorithms and the effects of noise. Solution method: A Fortran 90/MPI package is provided that contains modular commands to create and analyze quantum circuits. Shor's factorization and Grover's search algorithms are explained in detail. Procedures for distributing state vector amplitudes over processors and for solving concurrent (multiverse) cases with noise effects are implemented. Density matrix and entropy evaluations are provided in both single and parallel versions. Running time: Test run takes less than 1 minute using 2 processors.

  19. Smooth-Transition Simple Digital PWM Modulator for Four-Switch Buck-Boost Converters

    NASA Astrophysics Data System (ADS)

    Rodriguez, Alberto; Rodriguez, Miguel; Vazquez, Aitor; Maija, Pablo F.; Sebastian, Javier

    2014-08-01

Four Switch non-inverting Buck-Boost (4SBB) converters are extensively used in non-isolated applications where voltage step-up and step-down are required. In order to achieve high efficiency operation it is preferred to control the 4SBB as a Buck or Boost converter, depending on the input/output voltage ratio. However, when input and output voltages are close, this approach requires near-unity conversion ratios, which are difficult to achieve in practice. Several alternative operating modes have been proposed in the literature to overcome this issue. In particular, operating the 4SBB as a Buck and Boost at the same time (Buck+Boost mode) has proven to be adequate to achieve near-unity conversion ratios. This paper proposes a simple, hardware-efficient digital pulse width modulator for a 4SBB that enables operation in Buck, Boost and Buck+Boost modes, thus allowing near-unity conversion ratios, while achieving smooth transitions between the different modes. The proposed modulator is simulated with Simulink and experimentally demonstrated using a 500W 4SBB converter with 24V input voltage and 12V-36V output voltage range.
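The mode choice follows from the ideal steady-state conversion ratios: Buck gives Vout = D·Vin and Boost gives Vout = Vin/(1−D). A toy selector illustrating the duty-cycle arithmetic, with an invented near-unity band rather than the paper's actual thresholds:

```python
# Ideal duty-cycle selection for a 4SBB: Buck when stepping down, Boost when
# stepping up, combined Buck+Boost near unity (band width is illustrative).
def select_mode(vin, vout, band=0.1):
    ratio = vout / vin
    if ratio < 1.0 - band:
        return "buck", ratio                 # Vout = D*Vin  ->  D = Vout/Vin
    if ratio > 1.0 + band:
        return "boost", 1.0 - 1.0 / ratio    # Vout = Vin/(1-D) -> D = 1-Vin/Vout
    return "buck+boost", None                # near-unity: combined mode

print(select_mode(24.0, 12.0))   # -> ('buck', 0.5)
print(select_mode(24.0, 36.0))   # boost with D = 1/3
print(select_mode(24.0, 25.0))   # buck+boost (near-unity ratio)
```

The 24 V input with 12 V-36 V output range quoted in the abstract spans all three regions, which is why smooth mode transitions matter for this converter.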

  20. Boosting infrared energy transfer in 3D nanoporous gold antennas.

    PubMed

    Garoli, D; Calandrini, E; Bozzola, A; Ortolani, M; Cattarin, S; Barison, S; Toma, A; De Angelis, F

    2017-01-05

The applications of plasmonics to energy transfer from free-space radiation to molecules are currently limited to the visible region of the electromagnetic spectrum due to the intrinsic optical properties of bulk noble metals that support strong electromagnetic field confinement only close to their plasma frequency in the visible/ultraviolet range. In this work, we show that nanoporous gold can be exploited as a plasmonic material for the mid-infrared region to obtain strong electromagnetic field confinement, co-localized with target molecules into the nanopores and resonant with their vibrational frequency. The effective optical response of the nanoporous metal enables the penetration of optical fields deep into the nanopores, where molecules can be loaded thus achieving a more efficient light-matter coupling if compared to bulk gold. In order to realize plasmonic resonators made of nanoporous gold, we develop a nanofabrication method based on polymeric templates for metal deposition and we obtain antenna arrays resonating at mid-infrared wavelengths selected by design. We then coat the antennas with a thin (3 nm) silica layer acting as the target dielectric layer for optical energy transfer. We study the strength of the light-matter coupling at the vibrational absorption frequency of silica at 1240 cm⁻¹ through the analysis of the experimental Fano lineshape that is benchmarked against identical structures made of bulk gold. The boost in the optical energy transfer from free-space mid-infrared radiation to molecular vibrations in nanoporous 3D nanoantenna arrays can open new application routes for plasmon-enhanced physical-chemical reactions.