Science.gov

Sample records for lines boost parallel

  1. Learning and Parallelization Boost Constraint Search

    ERIC Educational Resources Information Center

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  3. Development of a high speed parallel hybrid boost bearing

    NASA Technical Reports Server (NTRS)

    Winn, L. W.; Eusepi, M. W.

    1973-01-01

    The analysis, design, and testing of the hybrid boost bearing are discussed. The hybrid boost bearing consists of a fluid film bearing coupled in parallel with a rolling element bearing. This coupling arrangement makes use of the inherent advantages of both the fluid film and rolling element bearing and at the same time minimizes their disadvantages and limitations. The analytical optimization studies that lead to the final fluid film bearing design are reported. The bearing consisted of a centrifugally-pressurized planar fluid film thrust bearing with oil feed through the shaft center. An analysis of the test ball bearing is also presented. The experimental determination of the hybrid bearing characteristics obtained on the basis of individual bearing component tests and a combined hybrid bearing assembly is discussed and compared to the analytically determined performance characteristics.

  4. Distributed control system for parallel-connected DC boost converters

    DOEpatents

    Goldsmith, Steven

    2017-08-15

    The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying maximum supply capacities. Each source is connected to the bus via a boost converter; the converters may differ in dynamic characteristics and power-transfer capacities, but all are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contributions while maintaining the DC bus voltage within specification. A central digital controller solves the steady-state system for the optimal duty-cycle settings that achieve a desired power-apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system; it uses communications among controllers to compute a shared estimate of the unknown time-varying load from shared bus current and bus voltage measurements.
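    The steady-state relationship a controller like this must solve is the ideal boost-converter equation. The sketch below is a minimal illustration, assuming lossless converters in continuous conduction and a made-up bus voltage, source voltages, and share scheme; none of these values come from the patent.

```python
# Illustrative steady-state duty-cycle and apportionment computation for
# parallel boost converters feeding a common DC bus.  All numbers are
# hypothetical; an ideal, lossless CCM boost is assumed.

def boost_duty_cycle(v_source: float, v_bus: float) -> float:
    """Ideal CCM boost: V_bus = V_source / (1 - D)  =>  D = 1 - V_source / V_bus."""
    if not 0 < v_source < v_bus:
        raise ValueError("boost requires 0 < V_source < V_bus")
    return 1.0 - v_source / v_bus

def apportion_currents(p_load: float, shares: list[float], v_bus: float) -> list[float]:
    """Split a known DC load among sources by a chosen power-share scheme."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return [s * p_load / v_bus for s in shares]

# Three disparate sources on a hypothetical 380 V bus, serving a 5 kW load:
duties = [boost_duty_cycle(v, 380.0) for v in (48.0, 120.0, 200.0)]
currents = apportion_currents(5000.0, [0.5, 0.3, 0.2], 380.0)
```

    The apportionment step is the known-load case the abstract describes; the distributed variant would replace `p_load` with the shared load estimate computed from bus measurements.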

  5. Parallel line scanning ophthalmoscope for retinal imaging.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2015-11-15

    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of parallelism in illuminating the retina compared to traditional scanning laser ophthalmoscope systems utilizing scanning mirrors. The system operated at the shot-noise limit with a signal-to-noise ratio of 28 for an optical power measured at the cornea of 100 μW. To demonstrate the imaging capabilities of the system, the macula and the optic nerve head of a healthy volunteer were imaged. Confocal images show good contrast and lateral resolution with a 10°×10° field of view.

  6. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Computer vision is now widely used in daily life, and obtaining reliable information from images requires camera calibration. Traditional calibration methods are often impractical because accurate coordinates of reference control points are unavailable. This article presents a camera calibration algorithm that determines the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are common in ordinary photographs, so both parameter sets can be recovered from everyday images. In more detail, two pairs of parallel lines are used to compute vanishing points; when the two pairs are mutually perpendicular, the corresponding vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters then follow from a Cholesky factorization of the IAC matrix. Since the line joining a vanishing point to the camera's optical center is parallel to the original lines in the scene plane, the extrinsic parameters R and T can also be recovered. Both the simulation and the experimental results meet expectations.
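    The vanishing-point step described above has a compact homogeneous-coordinate form: the image line through two points is their cross product, and two lines meet at their cross product. A minimal sketch, with hypothetical pixel coordinates standing in for detected line endpoints:

```python
import numpy as np

# Vanishing point of two images of scene-parallel lines, via homogeneous
# cross products.  The endpoint coordinates are illustrative only.

def line_through(p, q):
    """Homogeneous image line through two pixel points."""
    return np.cross([*p, 1.0], [*q, 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, back in pixel coordinates."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Two converging images of lines that are parallel in the scene:
l1 = line_through((100, 400), (300, 380))
l2 = line_through((100, 300), (300, 310))
vp = intersect(l1, l2)   # their common vanishing point
```

    With five or more views, points like `vp` constrain the IAC, whose Cholesky factor yields the intrinsic matrix; that fitting step is omitted here.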

  7. On-line inverse multiple instance boosting for classifier grids

    PubMed Central

    Sternig, Sabine; Roth, Peter M.; Bischof, Horst

    2012-01-01

    Classifier grids have been shown to be a good choice for object detection from static cameras. By applying a single classifier per image location, the classifier's complexity can be reduced and more specific, and thus more accurate, classifiers can be estimated. In addition, by using an on-line learner, a highly adaptive but stable detection system can be obtained. Even though long-term stability has been demonstrated, such systems still suffer from short-term drifting if an object does not move for a long period of time. The goal of this work is to overcome this problem and thus to increase the recall while preserving the accuracy. In particular, we adapt ideas from multiple instance learning (MIL) for on-line boosting. In contrast to standard MIL approaches, which assume an ambiguity on the positive samples, we apply this concept to the negative samples: inverse multiple instance learning. By introducing temporal bags consisting of background images operating on different time scales, we can ensure that each bag contains at least one sample having a negative label, satisfying the theoretical requirements. The experimental results demonstrate superior classification results in the presence of non-moving objects. PMID:22556453

  8. Parallel acoustic delay lines for photoacoustic tomography

    PubMed Central

    Yapici, Murat Kaya; Kim, Chulhong; Chang, Cheng-Chung; Jeon, Mansik; Guo, Zijian; Cai, Xin

    2012-01-01

    Achieving real-time photoacoustic (PA) tomography typically requires multi-element ultrasound transducer arrays and their associated multiple data acquisition (DAQ) electronics to receive PA waves simultaneously. We report the first demonstration of a photoacoustic tomography (PAT) system using optical fiber-based parallel acoustic delay lines (PADLs). By employing PADLs to introduce specific time delays, the PA signals (on the order of a few microseconds) can be forced to arrive at the ultrasonic transducers at different times. As a result, time-delayed PA signals in multiple channels can ultimately be received and processed in a serial manner with a single-element transducer, followed by single-channel DAQ electronics. Our results show that an optically absorbing target in an optically scattering medium can be photoacoustically imaged using the newly developed PADL-based PAT system. Potentially, this approach could be adopted to significantly reduce the complexity and cost of ultrasonic array receiver systems. PMID:23139043
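    The serialization idea above is easy to see numerically: per-channel delays spread simultaneous pulses in time so one receiver records them one after another, and known delays let each channel be recovered by time-gating. A toy sketch with made-up sample rate, delays, and pulse amplitudes:

```python
import numpy as np

# Toy model of parallel acoustic delay lines: three simultaneous unit
# pulses, each delayed by a different amount, land at distinct times in
# a single serialized record.  All numbers are illustrative.

fs = 10_000_000                      # 10 MHz sampling (hypothetical)
delays_us = [0.0, 5.0, 10.0]         # one delay per acoustic channel
n = 200                              # samples in the serialized record

record = np.zeros(n)
for ch, d in enumerate(delays_us):
    k = int(d * 1e-6 * fs)           # delay converted to samples
    record[k] += ch + 1              # unit pulse; amplitude tags the channel

# Demultiplex by time-gating at each known delay:
recovered = [record[int(d * 1e-6 * fs)] for d in delays_us]
```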

  9. Experimental verification of internal parameter in magnetically coupled boost used as PV optimizer in parallel association

    NASA Astrophysics Data System (ADS)

    Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel

    2017-02-01

    This paper presents experimental results that verify a formula for computing the duty cycle of the pulse-width-modulation control of a DC-DC converter designed and built in the laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module so that it can directly supply grid inverters. The duty-cycle formula is checked first by identifying an internal parameter, the autotransformer ratio, and then by verifying the stability of the operating point on the photovoltaic-module side. Consideration of the nature of the source and the load connected to the converter suggests additional experiments to decide whether the autotransformer ratio can be used with a fixed value or must instead be adapted. The effects of load variations on converter behavior and the impact of possible shading of the photovoltaic module are also discussed, with the aim of designing robust control laws that, in a parallel association, compensate for unwanted effects of output-voltage coupling.

  10. Boosting a Drug's Market Share Can Cross a Dangerous Line.

    PubMed

    Reinke, Thomas

    2016-07-01

    Hub programs have emerged as a profitable new line of business in the sales and distribution side of the pharmaceutical industry that has got more than its fair share of wheeling and dealing. But they spell trouble if they spark collusion, threaten patients, or waste federal dollars.

  11. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing pixel-value calculations, balancing the load across processors, and applying the results to the Z buffer efficiently in parallel require special virtual routing (sort computation) techniques developed by the author specifically for single-instruction multiple-data (SIMD) architectures.
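    The core of a group-wise Z-buffer update is a data-parallel "nearest depth wins" reduction. As a rough modern analogue of applying fragments in large groups (not the MPP's actual sort-based routing), a vectorized sketch:

```python
import numpy as np

# Data-parallel Z-buffer update: a batch of pixel fragments is applied
# in one vectorized operation, keeping the nearest depth per pixel.
# np.minimum.at handles repeated pixel indices correctly, standing in
# for the sort-computation routing used on the SIMD machine.

zbuf = np.full(8, np.inf)              # 1-D buffer, for brevity
pix  = np.array([2, 5, 2, 7])          # target pixel of each fragment
z    = np.array([0.9, 0.4, 0.3, 0.8])  # fragment depths

np.minimum.at(zbuf, pix, z)            # parallel "nearest wins" update
```

    Note that a plain `zbuf[pix] = np.minimum(zbuf[pix], z)` would silently drop one of the two fragments aimed at pixel 2; the unbuffered `ufunc.at` form is what makes the grouped update correct.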

  12. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  13. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  14. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  15. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  16. 14 CFR 23.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... supporting hinge brackets must be designed to withstand inertial loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertial loads may be assumed to be equal to KW, where—...

  17. VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF PARALLEL LINE OF LARGE BORE HOLES IN NORTHERN QUARRY AREA, FACING NORTHEAST - Granite Hill Plantation, Quarry No. 2, South side of State Route 16, 1.3 miles northeast of Sparta, Sparta, Hancock County, GA

  18. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...
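    The KW rule quoted in these regulatory excerpts is simple arithmetic: the assumed inertia load parallel to the hinge line is K times the surface weight W. A minimal illustration, using only the K = 24 vertical-surface case stated above and a made-up weight:

```python
# Worked illustration of the 14 CFR 25.393 KW rule quoted above.
# K = 24 for vertical surfaces is taken from the excerpt; the
# control-surface weight W is a hypothetical example value.

K_VERTICAL = 24
w_surface_lb = 150.0                           # hypothetical weight, lb

inertia_load_lb = K_VERTICAL * w_surface_lb    # assumed hinge-line load
```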

  19. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...

  20. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...

  1. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...

  2. 14 CFR 25.393 - Loads parallel to hinge line.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... designed for inertia loads acting parallel to the hinge line. (b) In the absence of more rational data, the inertia loads may be assumed to be equal to KW, where— (1) K=24 for vertical surfaces; (2) K=12 for...

  3. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    Versatile, standardized, pulse-modulation, nondissipatively regulated control-signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series-switching buck regulator, (2) the pulse-modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and in control-circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  4. Three cases of idiopathic "multiple-parallel-line" endotheliitis.

    PubMed

    Hori, Yuichi; Maeda, Naoyuki; Kosaki, Ryo; Inoue, Tomoyuki; Tano, Yasuo

    2008-01-01

    To report 3 cases of idiopathic endotheliitis that appeared clinically as multiple parallel lines of keratic precipitates on the corneal endothelium. Interventional case reports. Slit-lamp examinations of the 3 patients showed several parallel dotted lines of keratic precipitates on the corneal endothelium. Although all cases were managed successfully with topical steroids within 1 week, corneal endothelial cell damage occurred. No virus (herpes simplex virus 1, varicella zoster virus, or cytomegalovirus) could be detected by reverse transcriptase polymerase chain reaction in the aqueous humor of 1 of the 3 patients. This series presented a unique type of idiopathic endotheliitis with the clinical appearance of multiple parallel lines of keratic precipitates. The expression patterns of the keratic precipitates were distinct from those of previously reported linear endotheliitis.

  5. Self-adaptive asymmetric on-line boosting for detecting anatomical structures

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Tajbakhsh, Nima; Xue, Wenzhe; Liang, Jianming

    2012-03-01

    In this paper, we propose a self-adaptive, asymmetric on-line boosting (SAAOB) method for detecting anatomical structures in CT pulmonary angiography (CTPA). SAAOB is novel in that it exploits a new asymmetric loss criterion that adapts itself to the ratio of exposed positive and negative samples, and in that it has an advanced rule for updating a sample's importance weight that takes into account both the classification result and the sample's label. The presented method is evaluated by detecting three distinct thoracic structures, the carina, the pulmonary trunk and the aortic arch, in both balanced and imbalanced conditions.

  6. Parallel line analysis: multifunctional software for the biomedical sciences

    NASA Technical Reports Server (NTRS)

    Swank, P. R.; Lewis, M. L.; Damron, K. L.; Morrison, D. R.

    1990-01-01

    An easy to use, interactive FORTRAN program for analyzing the results of parallel line assays is described. The program is menu driven and consists of five major components: data entry, data editing, manual analysis, manual plotting, and automatic analysis and plotting. Data can be entered from the terminal or from previously created data files. The data editing portion of the program is used to inspect and modify data and to statistically identify outliers. The manual analysis component is used to test the assumptions necessary for parallel line assays using analysis of covariance techniques and to determine potency ratios with confidence limits. The manual plotting component provides a graphic display of the data on the terminal screen or on a standard line printer. The automatic portion runs through multiple analyses without operator input. Data may be saved in a special file to expedite input at a future time.
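    The potency estimate behind a parallel line assay reduces to fitting a common slope with separate intercepts for the standard and test preparations; the log-potency ratio is the horizontal shift between the two fitted lines. A minimal sketch with synthetic data (the original software's ANCOVA validity tests and confidence limits are omitted):

```python
import numpy as np

# Parallel-line potency estimate: fit y = a_i + b * log10(dose) with a
# common slope b, then the log potency ratio is (a_test - a_std) / b.
# The dose-response data below are synthetic.

logdose = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
is_test = np.array([0, 0, 0, 1, 1, 1])          # 0 = standard, 1 = test
y       = np.array([1.0, 3.0, 5.0, 2.0, 4.0, 6.0])

# Design matrix columns: common intercept, test-preparation offset, slope.
X = np.column_stack([np.ones(6), is_test, logdose])
(a_std, offset, b), *_ = np.linalg.lstsq(X, y, rcond=None)

log_ratio = offset / b              # shift along the log-dose axis
potency_ratio = 10 ** log_ratio
```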

  8. Sequential organogenesis sets two parallel sensory lines in medaka

    PubMed Central

    Seleit, Ali; Krämer, Isabel; Ambrosio, Elizabeth; Dross, Nicolas; Engel, Ulrike

    2017-01-01

    Animal organs are typically formed during embryogenesis by following one specific developmental programme. Here, we report that neuromast organs are generated by two distinct and sequential programmes that result in parallel sensory lines in medaka embryos. A ventral posterior lateral line (pLL) is composed of neuromasts deposited by collectively migrating cells whereas a midline pLL is formed by individually migrating cells. Despite the variable number of neuromasts among embryos, the sequential programmes that we describe here fix an invariable ratio between ventral and midline neuromasts. Mechanistically, we show that the formation of both types of neuromasts depends on the chemokine receptor genes cxcr4b and cxcr7b, illustrating how common molecules can mediate different morphogenetic processes. Altogether, we reveal a self-organising feature of the lateral line system that ensures a proper distribution of sensory organs along the body axis. PMID:28087632

  9. Sequential organogenesis sets two parallel sensory lines in medaka.

    PubMed

    Seleit, Ali; Krämer, Isabel; Ambrosio, Elizabeth; Dross, Nicolas; Engel, Ulrike; Centanin, Lázaro

    2017-02-15

    Animal organs are typically formed during embryogenesis by following one specific developmental programme. Here, we report that neuromast organs are generated by two distinct and sequential programmes that result in parallel sensory lines in medaka embryos. A ventral posterior lateral line (pLL) is composed of neuromasts deposited by collectively migrating cells whereas a midline pLL is formed by individually migrating cells. Despite the variable number of neuromasts among embryos, the sequential programmes that we describe here fix an invariable ratio between ventral and midline neuromasts. Mechanistically, we show that the formation of both types of neuromasts depends on the chemokine receptor genes cxcr4b and cxcr7b, illustrating how common molecules can mediate different morphogenetic processes. Altogether, we reveal a self-organising feature of the lateral line system that ensures a proper distribution of sensory organs along the body axis.

  10. Harmonic resonance on parallel high voltage transmission lines

    SciTech Connect

    Harries, J.R.; Randall, J.L.

    1997-01-01

    The Bonneville Power Administration (BPA) has received complaints of telephone interference over a wide area of northwestern Washington State for several years. However, until 1995 investigations had proved inconclusive as either the source of the harmonics or the operating conditions changed whenever investigators arrived. The 2,100 Hz interference had been noticed at several optically isolated telephone exchanges. The area of complaint corresponded to electric service areas near the transmission line corridors of the BPA Custer-Monroe 500-kV lines. High 2,100 Hz field strength was measured near the 500-kV lines and also under lower voltage lines served from stations along the transmission line corridor. Tests and studies made with the Alternative Transients Program version of the Electromagnetic Transients Program (EMTP) were able to define the phenomena and isolate the source. Harmonic resonance has been observed, measured and modeled on parallel 500-kV lines that are about one wavelength at 2,100 Hz, the 35th harmonic. A seemingly small harmonic injection at one location on the system causes significant problems some distance away such as telephone interference.
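    The "about one wavelength at 2,100 Hz" observation above can be checked with back-of-envelope arithmetic: wave propagation on an overhead line is close to the speed of light, so the 35th harmonic of 60 Hz has a wavelength of roughly 143 km, a plausible length for a 500-kV transmission corridor.

```python
# Sanity check of the resonance length quoted above.  Free-space light
# speed is used as an upper bound on the line's propagation velocity.

c = 299_792_458.0                 # m/s
f = 35 * 60.0                     # 35th harmonic of 60 Hz = 2,100 Hz
wavelength_km = c / f / 1000.0    # roughly 143 km
```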

  11. Parallel field line and stream line tracing algorithms for space physics applications

    NASA Astrophysics Data System (ADS)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

    Field line and stream line tracing is required in various space physics applications, such as the coupling of global magnetosphere and inner magnetosphere models, the coupling of solar energetic particle and heliosphere models, or the modeling of comets, where multispecies chemical equations are solved along stream lines of a steady-state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize, especially when the data corresponding to the vector field are distributed over a large number of processors. We designed algorithms for the various applications that scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally, all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, its position and other information are stored in a buffer. Periodically the processors exchange the buffers and continue integration of the field lines until they reach a boundary, at which point the results are sent back to the originating processor. Efficiency is achieved by careful phasing of computation and communication. In the third algorithm the results of a steady-state simulation are stored on a hard drive, with the vector field contained in blocks. All processors read in all the grid and vector field data, and the stream lines are integrated in parallel. If a stream line enters a block that has already been integrated, the results can be interpolated. By a clever ordering of the blocks, the execution speed can be significantly improved.
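    The hand-off step of the second algorithm can be sketched in a few lines: integrate a field line until it leaves the local subdomain, then park its state in a buffer as the real code would before exchanging buffers between processors. This single-process sketch uses a simple rotational test field and made-up subdomain bounds, with fixed-step unit-speed Euler integration rather than the production integrator.

```python
import numpy as np

# One processor's view of the buffer-exchange tracing algorithm:
# integrate inside the local subdomain, enqueue the state on exit.

def field(p):
    """A simple rotational test field (illustrative only)."""
    x, y = p
    return np.array([-y, x])

def trace(p, local_lo, local_hi, h=0.01, max_steps=1000):
    buffer = []                          # states to hand to another processor
    for _ in range(max_steps):
        v = field(p)
        p = p + h * v / np.linalg.norm(v)   # unit-speed Euler step
        if not np.all((local_lo <= p) & (p <= local_hi)):
            buffer.append(p)             # left the subdomain: enqueue hand-off
            break
    return p, buffer

p_end, buf = trace(np.array([0.5, 0.0]),
                   local_lo=np.array([0.0, -1.0]),
                   local_hi=np.array([1.0, 0.3]))
```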

  12. On-line near infrared monitoring of glycerol-boosted anaerobic digestion processes: evaluation of process analytical technologies.

    PubMed

    Holm-Nielsen, Jens Bo; Lomborg, Carina Juel; Oleskowicz-Popiel, Piotr; Esbensen, Kim H

    2008-02-01

    A study of NIR as a tool for process monitoring of thermophilic anaerobic digestion boosted by glycerol has been carried out, aiming at developing simple and robust Process Analytical Technology modalities for on-line surveillance in full-scale biogas plants. Three 5 L laboratory fermenters equipped with an on-line NIR sensor and special sampling stations were used as the basis for chemometric multivariate calibration. NIR characterisation using Transflexive Embedded Near Infra-Red Sensor (TENIRS) equipment integrated into an external recurrent loop on the fermentation reactors allows for representative sampling of the highly heterogeneous fermentation bioslurries. Glycerol is an important by-product of the increasing European bio-diesel production. Glycerol addition can boost biogas yields if it does not exceed a limiting 5-7 g L(-1) concentration inside the fermenter; further increase can cause strong imbalance in the anaerobic digestion process. A secondary objective was to evaluate the effect of glycerol addition in a spiking experiment that introduced increasing organic overloading, as monitored by volatile fatty acid (VFA) levels. High correlation between on-line NIR determinations of glycerol and VFA contents has been documented. Chemometric regression models (PLS) between glycerol and NIR spectra needed no outlier removal, and only one PLS component was required. Test set validation resulted in excellent measures of prediction performance: precision r(2) = 0.96 and accuracy 1.04 (slope of predicted versus reference values). Similar prediction statistics for acetic acid, iso-butanoic acid and total VFA prove that process NIR spectroscopy is able to quantify all pertinent levels of both volatile fatty acids and glycerol.
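    Since the abstract notes that a single PLS component sufficed, the calibration step can be illustrated with a minimal one-component PLS1 regression. The spectra and reference values below are synthetic stand-ins (a rank-one latent structure) for NIR absorbances and glycerol concentrations; this is a sketch of the method, not the study's model.

```python
import numpy as np

# Minimal one-component PLS1 regression on synthetic "spectra".

def pls1_one_component(X, y):
    Xc, yc = X - X.mean(0), y - y.mean()   # center predictors and response
    w = Xc.T @ yc
    w /= np.linalg.norm(w)                 # weight vector
    t_scores = Xc @ w                      # latent scores
    q = (t_scores @ yc) / (t_scores @ t_scores)   # y-loading
    b = w * q                              # regression coefficients
    return b, X.mean(0), y.mean()

rng = np.random.default_rng(0)
latent = rng.normal(size=30)               # one underlying latent variable
p_vec = rng.normal(size=50)                # its "spectral" signature
X = np.outer(latent, p_vec)                # synthetic spectra, rank one
y = 3.0 * latent + 1.0                     # synthetic concentrations

b, xm, ym = pls1_one_component(X, y)
pred = (X - xm) @ b + ym                   # exact on rank-one data
```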

  13. Ten added pump packages will boost Colombian line capacity. [Retrofitting a Colombia petroleum pipeline

    SciTech Connect

    Not Available

    1994-06-01

    Ten additional pump packages are expected to more than double crude throughput on Ecopetrol's Central Llanos Pipe Line in Colombia when the project is completed in September. British Petroleum Exploration (BPX) is managing upgrade of the line which is owned and operated by Colombian state oil company, Ecopetrol. BPX anticipates volumes on the 174-mi line to increase from 100,000 bpd to 210,000 bpd. BPX is considering installing two additional pump packages that would raise volume to 300,000 bpd. This paper describes the design of the new pump packages, how they were installed and transported to the sites, and various central systems used with these pumps.

  14. Parallel line raster eliminates ambiguities in reading timing of pulses less than 500 microseconds apart

    NASA Technical Reports Server (NTRS)

    Horne, A. P.

    1966-01-01

    Parallel horizontal line raster is used for precision timing of events occurring less than 500 microseconds apart for observation of hypervelocity phenomena. The raster uses a staircase vertical deflection and eliminates ambiguities in reading timing of pulses close to the end of each line.

  15. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids locally optimal solutions. This article considers these two decision problems jointly for mass production lines with serial-parallel workplaces. The study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  16. Study of electric fields parallel to the magnetic lines of force using artificially injected energetic electrons

    NASA Technical Reports Server (NTRS)

    Wilhelm, K.; Bernstein, W.; Whalen, B. A.

    1980-01-01

    Electron beam experiments using rocket-borne instrumentation will be discussed. The observations indicate that reflections of energetic electrons may occur at possible electric field configurations parallel to the direction of the magnetic lines of force in an altitude range of several thousand kilometers above the ionosphere.

  17. Designing linings of mutually influencing parallel shallow circular tunnels under seismic effects of earthquake

    NASA Astrophysics Data System (ADS)

    Sammal, A. S.; Antsiferov, S. V.; Deev, P. V.

    2016-09-01

    The paper deals with the seismic design of parallel shallow tunnel linings, which is based on identifying the most unfavorable lining stress states under the effects of long longitudinal and shear seismic waves propagating through the cross section of the tunnel in different directions and combinations. For this purpose, the sums and differences of the normal tangential stresses on the lining's internal outline caused by waves of different types are examined for extrema with respect to the angle of incidence. The method allows analytic plotting of a curve illustrating the structure's stresses. The paper gives an example of a design calculation.

  18. Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch.

    PubMed

    Hoffmann, Thomas J

    2011-03-01

    It is often useful to rerun a command line R script with some slight change in the parameters used to run it - a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to pass in multiple command line options, including vectors of values in the usual R format, easily into R. The same script can be setup to run things in parallel via different command line arguments. The R package batch also provides a means to simplify this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally it provides a means to aggregate the results together of multiple processes run on a cluster.
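    The R package batch is R-specific, but the pattern it automates is language-neutral: build one command line per parameter combination and dispatch them across cores or a cluster. A sketch of that pattern in Python; `sim.py` and its flags are hypothetical, and the fragment only assembles the commands rather than launching them.

```python
import itertools
import sys

# Build one command line per parameter combination, the core of the
# batching pattern described above.  'sim.py', '--n', and '--seed' are
# made-up names for illustration.

params = {"n": [100, 1000], "seed": [1, 2, 3]}
combos = [dict(zip(params, vals))
          for vals in itertools.product(*params.values())]

commands = [[sys.executable, "sim.py"] +
            [f"--{k}={v}" for k, v in c.items()]
            for c in combos]
# Each entry in `commands` could then be launched with subprocess.Popen,
# a multiprocessing pool, or a cluster submission wrapper (one job each).
```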

  19. Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch

    PubMed Central

    Hoffmann, Thomas J.

    2014-01-01

    It is often useful to rerun a command line R script with some slight change in the parameters used to run it - a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to pass multiple command line options, including vectors of values in the usual R format, into R easily. The same script can be set up to run things in parallel via different command line arguments. The R package batch also simplifies this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally, it provides a means to aggregate the results of multiple processes run on a cluster. PMID:25431538

  20. Emission Line Galaxies in the STIS Parallel Survey. 1; Observations and Data Analysis

    NASA Technical Reports Server (NTRS)

    Teplitz, Harry I.; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Heap, Sara R.; Lindler, Don J.; Rhodes, Jason; Woodgate, Bruce E.

    2002-01-01

    In the first three years of operation, STIS obtained slitless spectra of approximately 2500 fields in parallel to prime HST observations as part of the STIS Parallel Survey (SPS). The archive contains approximately 300 fields at high galactic latitude (|b| greater than 30) with spectroscopic exposure times greater than 3000 seconds. This sample contains 220 fields (excluding special regions and requiring a consistent grating angle) observed between 6 June 1997 and 21 September 2000, with a total survey area of approximately 160 square arcminutes. At this depth, the SPS detects an average of one emission line galaxy per three fields. We present the analysis of these data, and the identification of 131 low to intermediate redshift galaxies detected by optical emission lines. The sample contains 78 objects with emission lines that we infer to be redshifted [OII]3727 emission at 0.43 < z < 1.7. The comoving number density of these objects is comparable to that of Halpha-emitting galaxies in the NICMOS parallel observations. One quasar and three probable Seyfert galaxies are detected. Many of the emission-line objects show morphologies suggestive of mergers or interactions. The reduced data are available upon request from the authors.

  1. Thermal profiling for parallel on-line monitoring of biomass growth in miniature stirred bioreactors.

    PubMed

    Gill, N K; Appleton, M; Lye, G J

    2008-09-01

    Recently we have described the design and operation of a miniature bioreactor system in which 4-16 fermentations can be performed (Gill et al., Biochem Eng J 39:164-176, 2008). Here we report on the use of thermal profiling techniques for parallel on-line monitoring of cell growth in these bioreactors based on the natural heat generated by microbial culture. Results show that the integrated heat profile during E. coli TOP10 pQR239 fermentations followed the same pattern as off-line optical density (OD) measurements. The maximum specific growth rates calculated from off-line OD and on-line thermal profiling data were in good agreement, at 0.66+/-0.04 and 0.69+/-0.05 h^-1, respectively. The combination of a parallel miniature bioreactor system with a non-invasive on-line technique for estimation of culture kinetic parameters provides a valuable approach for the rapid optimisation of microbial fermentation processes.
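
    The maximum specific growth rate quoted above is the slope of ln(OD) versus time during exponential growth. A minimal sketch of that calculation, using synthetic data rather than the paper's measurements:

```python
import numpy as np

# Synthetic exponential-phase data: OD = OD0 * exp(mu * t), with mu = 0.66 1/h
t = np.linspace(0.0, 4.0, 9)      # sampling times, h
od = 0.05 * np.exp(0.66 * t)      # simulated optical density readings

# The specific growth rate is the slope of ln(OD) against time.
mu, ln_od0 = np.polyfit(t, np.log(od), 1)
print(round(mu, 2))  # 0.66
```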

  2. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
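
    The voting scheme that the paper parallelizes over angles can be sketched serially as follows - a minimal NumPy illustration of the classic rho = x*cos(theta) + y*sin(theta) accumulator, not the authors' FPGA design:

```python
import numpy as np

def hough_peak(points, n_theta=180, diag=100):
    """Vote in a (rho, theta) accumulator and return the strongest line."""
    thetas = np.deg2rad(np.arange(n_theta) * 180.0 / n_theta)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in points:
        # Each edge pixel casts one vote per discretized angle.
        rho = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return int(r) - diag, t * (180.0 / n_theta)

# Points on the line y = x correspond to rho = 0, theta = 135 degrees.
rho, theta = hough_peak([(i, i) for i in range(60)])
print(rho, theta)  # 0 135.0
```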

  3. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  4. ACS Grism Parallel Survey of Emission-Line Galaxies at Redshift z ≲ 7

    NASA Astrophysics Data System (ADS)

    Yan, Lin

    2002-07-01

    We propose an ACS grism parallel survey to search for emission-line galaxies toward 50 random lines of sight over the redshift interval 0 < z ≲ 7. We request ACS parallel observations of duration more than one orbit at high galactic latitude to identify 300 Hα emission-line galaxies at 0.2 ≲ z ≲ 0.5, 720 [O II] λ3727 emission-line galaxies at 0.3 ≲ z ≲ 1.68, and ≳ 1000 Lyα emission-line galaxies at 3 ≲ z ≲ 7 with total emission line flux f ≳ 2×10^-17 erg s^-1 cm^-2 over 578 arcmin^2. We will obtain direct images with the F814W and F606W filters and dispersed images with the WFC/G800L grism at each position. The direct images will serve to provide a zeroth order model both for wavelength calibration of the extracted 1D spectra and for determining extraction apertures of the corresponding dispersed images. The primary scientific objectives are as follows: {1} We will establish a uniform sample of Hα and [O II] emission-line galaxies at z < 1.7 in order to obtain accurate measurements of co-moving star formation rate density versus redshift over this redshift range. {2} We will study the spatial and statistical distribution of star formation rate intensity in individual galaxies using the spatially resolved emission-line morphology in the grism images. And {3} we will study the high-redshift universe using Lyα-emitting galaxies identified at z ≲ 7 in the survey. The data will be available to the community immediately as they are obtained.

  5. High-speed, digitally refocused retinal imaging with line-field parallel swept source OCT

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-03-01

    MHz OCT allows mitigating the undesired influence of motion artifacts during retinal assessment, but in state-of-the-art point scanning OCT this comes at the price of increased system complexity. By changing the paradigm from scanning to parallel OCT for in vivo retinal imaging, the three-dimensional (3D) acquisition time is reduced without a trade-off between speed, sensitivity and technological requirements. Furthermore, the intrinsic phase stability allows for applying digital refocusing methods, increasing the in-focus imaging depth range. Line field parallel interferometric imaging (LPSI) utilizes a commercially available swept source, a single-axis galvo-scanner and a line scan camera for recording 3D data with up to 1 MHz A-scan rate. Besides line-focus illumination and parallel detection, we mitigate the necessity for high-speed sensor and laser technology by holographic full-range imaging, which allows for increasing the imaging speed by low sampling of the optical spectrum. High B-scan rates of up to 1 kHz further allow for the implementation of label-free optical angiography in 3D by calculating the inter-B-scan speckle variance. We achieve a detection sensitivity of 93.5 (96.5) dB at an equivalent A-scan rate of 1 (0.6) MHz and present 3D in vivo retinal structural and functional imaging utilizing digital refocusing. Our results demonstrate for the first time competitive imaging sensitivity, resolution and speed with a parallel OCT modality. LPSI is in fact currently the fastest OCT device applied to retinal imaging operating at a central wavelength window around 800 nm, with a detection sensitivity higher than 93.5 dB.

  6. A robust real-time laser measurement method based on noncoding parallel multi-line

    NASA Astrophysics Data System (ADS)

    Zhang, Chenbo; Cui, Haihua; Yin, Wei; Yang, Liu

    2016-11-01

    Single line scanning is the main method in traditional 3D hand-held laser scanning; however, its reconstruction speed is very slow and its cumulative error is large. Therefore, we propose a method to reconstruct the 3D profile by parallel multi-line 3D hand-held laser scanning. Firstly, we process the two images containing multi-line laser stripes captured by the binocular cameras, so that the laser stripe centers can be extracted accurately. Then we use stereo vision principles, the epipolar constraint and the laser plane constraint to match the laser stripes of the left and right images correctly and reconstruct them quickly. Our experimental results prove the feasibility of this method, which improves the scanning speed and greatly increases the scanning area.
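
    Once stripe centers are matched between the two views, depth follows from standard stereo triangulation; for a rectified camera pair the relation reduces to Z = f*B/d. A textbook sketch with made-up numbers, not the authors' calibration:

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched stripe point for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    # Z = f * B / d: depth is inversely proportional to disparity.
    return focal_px * baseline_m / disparity_px

# A stripe center seen at x_left = 407 px and x_right = 400 px,
# with a 700 px focal length and a 0.10 m baseline, lies at:
print(triangulate_depth(700.0, 0.10, 407 - 400))  # 10.0 (meters)
```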

  7. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamics (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  8. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
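
    The three phases described in the claim can be simulated on a small 3-D grid to check that every node is reached - an illustrative sketch of the line-then-plane-then-volume idea, not the patented implementation:

```python
def line_plane_broadcast(dims, root=(0, 0, 0)):
    """Return the nodes reached after the line, plane, and volume phases."""
    nx, ny, nz = dims
    # Phase 1: the root sends along its x-axis line.
    line = {(x, root[1], root[2]) for x in range(nx)}
    # Phase 2: each node on that line sends along its y-axis line.
    plane = {(x, y, root[2]) for (x, _, _) in line for y in range(ny)}
    # Phase 3: each node in that plane sends along its z-axis line.
    return {(x, y, z) for (x, y, _) in plane for z in range(nz)}

reached = line_plane_broadcast((4, 3, 2))
print(len(reached))  # 24, i.e. every node of the 4 x 3 x 2 grid
```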

  9. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  10. A computer program based on parallel line assay for analysis of skin tests.

    PubMed

    Martín, S; Cuesta, P; Rico, P; Cortés, C

    1997-01-01

    A computer program for the analysis of differences or changes in skin sensitivity has been developed. It is based on parallel line assay, and its main features are its ability to conduct a validation process which ensures that the data from skin tests conform to the conditions imposed by the analysis which is carried out (regression, parallelism, etc.), the estimation of the difference or change in skin sensitivity, and the determination of the 95% and 99% confidence intervals of this estimation. This program is capable of managing data from independent groups, as well as paired data, and it may be applied to the comparison of allergen extracts, with the aim of determining their biologic activity, as well as to the analysis of changes in skin sensitivity appearing as a consequence of treatment such as immunotherapy.
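
    The statistical core of a parallel line assay is a fit of two dose-response lines constrained to a common slope, with the log relative potency given by the difference of intercepts divided by that slope. A minimal sketch on synthetic data (not the program described in the record):

```python
import numpy as np

# Synthetic responses y = a + b*log10(dose); the test extract is 4x as potent.
log_dose = np.log10(np.array([1.0, 2.0, 4.0, 8.0]))
y_std = 1.0 + 2.0 * log_dose                     # standard preparation
y_test = 1.0 + 2.0 * (log_dose + np.log10(4.0))  # test preparation

# Design matrix: separate intercepts, one shared (parallel) slope.
n = log_dose.size
X = np.column_stack([
    np.r_[np.ones(n), np.zeros(n)],   # standard intercept
    np.r_[np.zeros(n), np.ones(n)],   # test intercept
    np.r_[log_dose, log_dose],        # common slope
])
a_std, a_test, b = np.linalg.lstsq(X, np.r_[y_std, y_test], rcond=None)[0]

# log10(relative potency) = (a_test - a_std) / b
relative_potency = 10 ** ((a_test - a_std) / b)
print(round(relative_potency, 3))  # 4.0
```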

  11. Acceleration on stretched meshes with line-implicit LU-SGS in parallel implementation

    NASA Astrophysics Data System (ADS)

    Otero, Evelyn; Eliasson, Peter

    2015-02-01

    The implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver is combined with the line-implicit technique to improve convergence on the very anisotropic grids necessary for resolving the boundary layers. The computational fluid dynamics code used is Edge, a Navier-Stokes flow solver for unstructured grids based on a dual grid and edge-based formulation. Multigrid acceleration is applied with the intention of accelerating the convergence to steady state. LU-SGS works in parallel and gives better linear scaling with the number of processors than the explicit scheme. The ordering techniques investigated have shown that node numbering does influence the convergence and that the orderings from Delaunay and advancing front generation were among the best tested. 2D Reynolds-averaged Navier-Stokes computations have clearly shown the strong efficiency of our novel line-implicit LU-SGS approach, which is four times faster than implicit LU-SGS and line-implicit Runge-Kutta. Implicit LU-SGS for Euler and line-implicit LU-SGS for Reynolds-averaged Navier-Stokes are at least twice as fast as explicit and line-implicit Runge-Kutta, respectively, for 2D and 3D cases. For 3D Reynolds-averaged Navier-Stokes, multigrid did not accelerate the convergence and therefore may not be needed.
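
    The symmetric Gauss-Seidel idea underlying LU-SGS - a forward sweep followed by a backward sweep each iteration - can be sketched for a generic diagonally dominant linear system (a toy illustration, not the Edge solver):

```python
import numpy as np

def sgs_solve(A, b, iters=50):
    """Symmetric Gauss-Seidel: forward then backward sweep per iteration."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):            # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# A small diagonally dominant test system converges to machine precision.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
print(np.allclose(A @ sgs_solve(A, b), b))  # True
```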

  12. Line-field parallel swept source MHz OCT for structural and functional retinal imaging

    PubMed Central

    Fechtig, Daniel J.; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-01-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation. PMID:25798298

  13. Line-field parallel swept source MHz OCT for structural and functional retinal imaging.

    PubMed

    Fechtig, Daniel J; Grajciar, Branislav; Schmoll, Tilman; Blatter, Cedric; Werkmeister, Rene M; Drexler, Wolfgang; Leitgeb, Rainer A

    2015-03-01

    We demonstrate three-dimensional structural and functional retinal imaging with line-field parallel swept source imaging (LPSI) at acquisition speeds of up to 1 MHz equivalent A-scan rate with sensitivity better than 93.5 dB at a central wavelength of 840 nm. The results demonstrate competitive sensitivity, speed, image contrast and penetration depth when compared to conventional point scanning OCT. LPSI allows high-speed retinal imaging of function and morphology with commercially available components. We further demonstrate a method that mitigates the effect of the lateral Gaussian intensity distribution across the line focus and demonstrate and discuss the feasibility of high-speed optical angiography for visualization of the retinal microcirculation.

  14. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an
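
    Generalization (1) above - intersection probability roughly proportional to the target's largest dimension over the line spacing - is easy to check for a line-segment target with random orientation, where the classic Buffon result gives P = 2l/(pi*s) for l < s. A Monte Carlo sketch (illustrative; not from the paper):

```python
import numpy as np

def crossing_probability(length, spacing, n=200_000, seed=0):
    """Monte Carlo estimate that a random segment crosses a parallel line."""
    rng = np.random.default_rng(seed)
    # Distance from segment midpoint to the nearest line, and orientation.
    d = rng.uniform(0.0, spacing / 2.0, n)
    theta = rng.uniform(0.0, np.pi, n)
    # The segment crosses a line when its half-projection spans that distance.
    return np.mean(d <= (length / 2.0) * np.sin(theta))

est = crossing_probability(1.0, 2.0)  # analytic value: 2l/(pi*s) ~ 0.318
```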

  15. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Ginner, Laurin; Kumar, Abhishek; Pircher, Michael; Schmoll, Tilman; Wurster, Lara M.; Drexler, Wolfgang; Leitgeb, Rainer A.

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution, and 3D acquisition rates to resolve individual photoreceptor cells. The phase stability of the system is manifested by the high phase correlation across the lateral FOV on the level of individual photoreceptors. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line focus is formed via a cylindrical lens and the back-propagated light from the sample and reference arms is detected by a 2D array after passing a diffraction grating. The spot size of the line focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results compared to respective point scanning modalities and facilitates software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  16. Retinal photoreceptor imaging with high-speed line-field parallel spectral domain OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ginner, Laurin; Fechtig, Daniel J.; Schmoll, Tilman; Wurster, Lara M.; Pircher, Michael; Leitgeb, Rainer A.; Drexler, Wolfgang

    2016-03-01

    We present retinal photoreceptor imaging with a line-field parallel spectral domain OCT modality, utilizing a commercially available 2D CMOS detector array operating at an imaging speed of 500 B-scans/s. Our results demonstrate for the first time in vivo structural and functional retinal assessment with a line-field OCT setup providing sufficient sensitivity, lateral and axial resolution, and 3D acquisition rates to resolve individual photoreceptor cells. The setup comprises a Michelson interferometer illuminated by a broadband light source, where a line focus is formed via a cylindrical lens and the back-propagated light from the sample and reference arms is detected by a 2D array after passing a diffraction grating. The spot size of the line focus on the retina is 5 μm, which corresponds to a PSF of 50 μm and an oversampling factor of 3.6 at the detector plane, respectively. A full 3D stack was recorded in only 0.8 s. We show representative en face images, tomograms and phase-difference maps of cone photoreceptors with a lateral FOV close to 2°. The high-speed capability and the phase stability due to parallel illumination and detection may potentially lead to novel structural and functional diagnostic tools on a cellular and microvascular imaging level. Furthermore, the presented system enables competitive imaging results compared to respective point scanning modalities and facilitates software-based digital aberration correction algorithms for achieving 3D isotropic resolution across the full FOV.

  17. Visual globes, celestial spheres, and the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Rogers, Cassandra

    2009-01-01

    Helmholtz's famous distorted chessboard pattern has been used to make the point that perception of the straightness of peripherally viewed lines is not always veridical. Helmholtz showed that the curved lines of his chessboard pattern appear to be straight when viewed from a critical distance and he argued that, at this distance, the contours stimulated particular 'direction circles' in the field of fixation. We measured the magnitude of the distortion of peripherally viewed contours, and found that the straightness of elongated contours is indeed misperceived in the direction reported by Helmholtz, but that the magnitude of the effect varies with viewing conditions. On the basis of theoretical considerations, we conclude that there cannot, in principle, be particular retinal loci ('loci' is used here in the sense of an arc or an extended set of points that provide a basis for judging collinearity) to underpin our judgments of the straightness and parallelity of peripheral contours, because such judgments also require information about the 3-D surface upon which the contours are located. Moreover, we show experimentally that the contours in the real world that are judged to be straight and parallel can stimulate quite different retinal loci, depending on the shape of the 3-D surface upon which they are drawn.

  18. An on-line learning tracking of non-rigid target combining multiple-instance boosting and level set

    NASA Astrophysics Data System (ADS)

    Chen, Mingming; Cai, Jingju

    2013-10-01

    Visual tracking algorithms based on online boosting generally use a rectangular bounding box to represent the position of the target, while the actual shape of the target is always irregular. This causes the classifier to learn features of the non-target parts within the rectangular region, thereby reducing the performance of the classifier, and drift can occur. To avoid the limitations of the bounding box, we propose a novel tracking-by-detection algorithm involving level set segmentation, which ensures the classifier only learns features of the real target area within the tracking box. Because the shape of the target changes only a little between two adjacent frames and the current level set algorithm can avoid re-initialization of the signed distance function, it takes only a few iterations to converge to the position of the target contour in the next frame. We also make some improvements to the level set energy function so that the zero level set is less likely to converge to a false contour. In addition, we use gradient boosting to improve the original multiple-instance learning (MIL) algorithm, as in the WMILtracker, which greatly speeds up the tracker. Our algorithm outperforms the original MILtracker in both speed and precision. Compared with the WMILtracker, our algorithm runs at almost the same speed, but we avoid the drift caused by background learning, so the precision is better.

  19. Oxygen boost pump study

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An oxygen boost pump is described which can be used to charge the high pressure oxygen tank in the extravehicular activity equipment from the spacecraft supply. The only interface with the spacecraft is the 6.205×10^6 Pa supply line. The breadboard study results and oxygen tank survey are summarized, and the results of the flight-type prototype design and analysis are presented.

  20. Transcriptomic profiling of a chicken lung epithelial cell line (CLEC213) reveals a mitochondrial respiratory chain activity boost during influenza virus infection.

    PubMed

    Meyer, Léa; Leymarie, Olivier; Chevalier, Christophe; Esnault, Evelyne; Moroldo, Marco; Da Costa, Bruno; Georgeault, Sonia; Roingeard, Philippe; Delmas, Bernard; Quéré, Pascale; Le Goffic, Ronan

    2017-01-01

    Avian Influenza virus (AIV) is a major concern for the global poultry industry. Since 2012, several countries have reported AIV outbreaks among domestic poultry. These outbreaks had a tremendous impact on poultry production and socio-economic repercussions for farmers. In addition, the constant emergence of highly pathogenic AIV also poses a significant risk to human health. In this study, we used a chicken lung epithelial cell line (CLEC213) to gain a better understanding of the molecular consequences of low pathogenic AIV infection in their natural host. Using a transcriptome profiling approach based on microarrays, we identified a cluster of mitochondrial genes highly induced during the infection. Interestingly, most of the regulated genes are encoded by the mitochondrial genome and are involved in the oxidative phosphorylation metabolic pathway. The biological consequences of this transcriptomic induction result in a 2.5- to 4-fold increase of the ATP concentration within the infected cells. PB1-F2, a viral protein that targets the mitochondria, was not found to be associated with the boost of activity of the respiratory chain. We next explored the possibility that ATP may act as a host-derived danger signal (through production of extracellular ATP) or as a boost to increase AIV replication. We observed that, despite the activation of the P2X7 purinergic receptor pathway, a 1 mM ATP addition to the cell culture medium had no effect on virus replication in our epithelial cell model. Finally, we found that oligomycin, a drug that inhibits the oxidative phosphorylation process, drastically reduced AIV replication in CLEC213 cells, without apparent cellular toxicity. Collectively, our results suggest that AIV is able to boost the metabolic capacities of its avian host in order to provide the important energy needs required to produce progeny virus.

  1. Transcriptomic profiling of a chicken lung epithelial cell line (CLEC213) reveals a mitochondrial respiratory chain activity boost during influenza virus infection

    PubMed Central

    Meyer, Léa; Leymarie, Olivier; Chevalier, Christophe; Esnault, Evelyne; Moroldo, Marco; Da Costa, Bruno; Georgeault, Sonia; Roingeard, Philippe; Delmas, Bernard; Quéré, Pascale

    2017-01-01

    Avian Influenza virus (AIV) is a major concern for the global poultry industry. Since 2012, several countries have reported AIV outbreaks among domestic poultry. These outbreaks had a tremendous impact on poultry production and socio-economic repercussions for farmers. In addition, the constant emergence of highly pathogenic AIV also poses a significant risk to human health. In this study, we used a chicken lung epithelial cell line (CLEC213) to gain a better understanding of the molecular consequences of low pathogenic AIV infection in their natural host. Using a transcriptome profiling approach based on microarrays, we identified a cluster of mitochondrial genes highly induced during the infection. Interestingly, most of the regulated genes are encoded by the mitochondrial genome and are involved in the oxidative phosphorylation metabolic pathway. The biological consequences of this transcriptomic induction result in a 2.5- to 4-fold increase of the ATP concentration within the infected cells. PB1-F2, a viral protein that targets the mitochondria, was not found to be associated with the boost of activity of the respiratory chain. We next explored the possibility that ATP may act as a host-derived danger signal (through production of extracellular ATP) or as a boost to increase AIV replication. We observed that, despite the activation of the P2X7 purinergic receptor pathway, a 1 mM ATP addition to the cell culture medium had no effect on virus replication in our epithelial cell model. Finally, we found that oligomycin, a drug that inhibits the oxidative phosphorylation process, drastically reduced AIV replication in CLEC213 cells, without apparent cellular toxicity. Collectively, our results suggest that AIV is able to boost the metabolic capacities of its avian host in order to provide the important energy needs required to produce progeny virus. PMID:28441462

  2. Motion correction in periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and turboprop MRI.

    PubMed

    Tamhane, Ashish A; Arfanakis, Konstantinos

    2009-07-01

    Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.

  3. In-line print defect inspection system based on parallelized algorithms

    NASA Astrophysics Data System (ADS)

    Lv, Chao; Zhou, Hongjun

    2015-03-01

    The core algorithm of an in-line print-defect detection system is template matching. In this paper, we introduce edge-based template matching that uses Canny's edge detection method to extract edge information for the matching step. The most difficult problem for any detection algorithm is execution time; to reduce it and improve efficiency, we introduce and compare four approaches: a pyramidal (coarse-to-fine) algorithm, multicore multi-threading based on OpenMP, a parallel algorithm based on the Intel AVX instruction set, and GPU computing based on the OpenCL model. The results reveal the different characteristics of each approach, from which the best fit for a given system can be chosen.
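The pyramidal (coarse-to-fine) speedup mentioned in the abstract can be sketched as follows; this is a generic NumPy illustration using sum-of-absolute-differences matching, not the paper's implementation, and all function names and sizes are hypothetical:

```python
import numpy as np

def match_score(image, tmpl, y, x):
    """Sum of absolute differences between the template and an image patch."""
    patch = image[y:y + tmpl.shape[0], x:x + tmpl.shape[1]]
    return np.abs(patch - tmpl).sum()

def best_match(image, tmpl):
    """Exhaustive search: return the (y, x) offset minimizing the SAD score."""
    H = image.shape[0] - tmpl.shape[0] + 1
    W = image.shape[1] - tmpl.shape[1] + 1
    scores = np.array([[match_score(image, tmpl, y, x) for x in range(W)]
                       for y in range(H)])
    return np.unravel_index(scores.argmin(), scores.shape)

def downsample(a):
    """2x2 block averaging: one pyramid level."""
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    return a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_match(image, tmpl, levels=2, radius=2):
    """Coarse-to-fine matching: search exhaustively at the coarsest level,
    then refine within a small window at each finer level."""
    if levels == 0 or min(tmpl.shape) < 8:
        return best_match(image, tmpl)
    cy, cx = pyramid_match(downsample(image), downsample(tmpl),
                           levels - 1, radius)
    cy, cx = 2 * cy, 2 * cx  # map coarse coordinates up one level
    best, best_pos = np.inf, (cy, cx)
    for y in range(max(0, cy - radius),
                   min(image.shape[0] - tmpl.shape[0], cy + radius) + 1):
        for x in range(max(0, cx - radius),
                       min(image.shape[1] - tmpl.shape[1], cx + radius) + 1):
            s = match_score(image, tmpl, y, x)
            if s < best:
                best, best_pos = s, (y, x)
    return best_pos
```

Because the refinement window is constant per level, the total work grows roughly with the coarsest-level search instead of the full-resolution search, which is the source of the pyramidal speedup.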

  4. Parametric analysis of hollow conductor parallel and coaxial transmission lines for high frequency space power distribution

    NASA Technical Reports Server (NTRS)

    Jeffries, K. S.; Renz, D. D.

    1984-01-01

    A parametric analysis was performed on transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high-power (100 kW or more) space missions. Large-diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating-current frequencies. For example, a 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
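The standard per-length formulas for a coaxial line's inductance and capacitance, of the kind swept in such a parametric analysis, can be sketched as follows. This is a minimal illustration with hypothetical dimensions, not the paper's actual programs:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
EPS0 = 8.854e-12           # vacuum permittivity, F/m

def coax_inductance(a, b, length):
    """External inductance of a coaxial line, L = (mu0 / 2*pi) * ln(b/a) per meter,
    where a and b are the inner and outer conductor radii (m)."""
    return MU0 / (2 * math.pi) * math.log(b / a) * length

def coax_capacitance(a, b, length, eps_r=1.0):
    """Capacitance of a coaxial line, C = 2*pi*eps0*eps_r / ln(b/a) per meter."""
    return 2 * math.pi * EPS0 * eps_r / math.log(b / a) * length

# Hypothetical geometry: 2.5 mm inner radius, closely spaced outer conductor, 50 m run.
L = coax_inductance(a=2.5e-3, b=3.0e-3, length=50.0)
C = coax_capacitance(a=2.5e-3, b=3.0e-3, length=50.0)
print(f"L = {L * 1e6:.2f} uH, C = {C * 1e9:.2f} nF")
```

Note how closely spaced conductors (b/a near 1) drive the inductance down and the capacitance up, which is the trade-off such a parametric sweep maps out.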

  5. A new cascaded control strategy for paralleled line-interactive UPS with LCL filter

    NASA Astrophysics Data System (ADS)

    Zhang, X. Y.; Zhang, X. H.; Li, L.; Luo, F.; Zhang, Y. S.

    2016-08-01

    A traditional uninterruptible power supply (UPS) has difficulty meeting output-voltage-quality and grid-side power-quality requirements at the same time, and usually suffers from disadvantages such as multi-stage conversion, a complex structure, or harmonic current pollution of the utility grid. A three-phase, three-level paralleled line-interactive UPS with an LCL filter is presented in this paper. It can control output voltage quality and grid-side power quality simultaneously with only a single power-conversion stage, but designing the multi-objective control strategy is difficult. Based on a detailed analysis of the circuit structure and operating mechanism, a new cascaded control strategy for power, voltage, and current is proposed. An outer current control loop based on resonant control theory ensures grid-side power quality. An inner voltage control loop based on capacitor-voltage and capacitor-current feedback ensures output voltage quality and avoids the resonance peak of the LCL filter. An improved repetitive controller is added to reduce distortion of the output voltage. The setting of the controller parameters is discussed in detail. A 100 kVA UPS prototype was built, and experiments under unbalanced resistive and nonlinear loads were carried out. Theoretical analysis and experimental results show the effectiveness of the control strategy. The paralleled line-interactive UPS not only maintains a constant, three-phase balanced output voltage but also provides comprehensive power-quality management: three-phase balanced grid active-power input, low THD of both output voltage and grid current, and reactive power compensation. The UPS is thus a grid-friendly load.

  6. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.

  7. Emission Line Galaxies in the STIS Parallel Survey II: Star Formation Density

    NASA Technical Reports Server (NTRS)

    Teplitz, Harry I.; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Rhodes, Jason

    2002-01-01

    We present the luminosity function of [OII]-emitting galaxies at a median redshift of z = 0.9, as measured in the deep spectroscopic data of the STIS Parallel Survey (SPS). The luminosity function shows strong evolution from the local value, as expected. By using random lines of sight, the SPS measurement complements previous deep single-field studies. We calculate the density of inferred star formation at this redshift by converting from [OII] to Hα line flux as a function of absolute magnitude and find ρ = 0.052 ± 0.017 M⊙ yr⁻¹ Mpc⁻³ at a median redshift z ≈ 0.9 within the range 0.46 < z < 1.415 (H₀ = 50 km s⁻¹ Mpc⁻¹, Ω_M = 1.0, Ω_Λ = 0.0). This density is consistent with a (1 + z)⁴ evolution in global star formation since z ≈ 1. Reconciling this density with similar measurements made by surveys targeting Hα may require substantial extinction correction.

  8. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

    To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach based on acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals, so that the signals reach a single-element transducer at different times and can be properly received by single-channel DAQ electronics. However, because of their small diameter and fragility, optical fibers pose a number of challenges in the design, construction, and packaging of PADLs, limiting their performance and use in real imaging applications. In this paper, we report the development of new silicon PADLs, made directly from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs was assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced to a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches the imaging target well. The silicon PADL array can therefore provide a 16× reduction in ultrasound DAQ channels for real-time PAT.

  9. Parallel secretion of pancreastatin and somatostatin from human pancreastatin producing cell line (QGP-1N).

    PubMed

    Funakoshi, A; Tateishi, K; Kitayama, N; Jimi, A; Matsuoka, Y; Kono, A

    1993-05-01

    In this investigation we studied pancreastatin (PST) secretion from a human PST-producing cell line (QGP-1N) in response to various secretagogues. An immunocytochemical study revealed immunoreactivity for PST and somatostatin (SMT) in the same cells of a monolayer culture. A Ki-ras point mutation on codon 12 was found. Carbachol stimulated secretion of PST and SMT and intracellular Ca²⁺ mobilization in the range of 10⁻⁶ to 10⁻⁴ M. The secretion and Ca²⁺ mobilization were inhibited by atropine, a muscarinic receptor antagonist. Phorbol ester and the calcium ionophore A23187 stimulated secretion of PST and SMT. Removal of extracellular calcium suppressed both secretions throughout stimulation with 10⁻⁵ M carbachol. Fluoride, a well-known activator of guanine nucleotide binding (G) proteins, stimulated intracellular Ca²⁺ mobilization and secretion of PST and SMT in a dose-dependent manner in the range of 5 to 40 mM. Also, 10⁻⁵ M carbachol and 20 mM fluoride stimulated inositol 1,4,5-trisphosphate production. However, cholecystokinin and gastrin-releasing peptide did not stimulate Ca²⁺ mobilization or secretion of the two peptides. These results suggest that secretion of PST and SMT from QGP-1N cells is regulated mainly by acetylcholine in a parallel fashion through muscarinic receptors coupled to the activation of polyphosphoinositide breakdown by a G protein, and that increases in intracellular Ca²⁺ and protein kinase C play an important role in stimulus-secretion coupling.

  10. Non-parallel stability analysis of three-dimensional boundary layers along an infinite attachment line

    NASA Astrophysics Data System (ADS)

    Itoh, Nobutake

    2000-09-01

    Instability of a non-parallel similar-boundary-layer flow to small and wavy disturbances is governed by partial differential equations with respect to the non-dimensional vertical coordinate ζ and the local Reynolds number R1 based on the chordwise velocity of the external stream and a boundary-layer thickness. In the particular case of swept Hiemenz flow, the equations admit a series solution expanded in inverse powers of R1², and are then decomposed into an infinite sequence of ordinary differential systems, with the leading one posing an eigenvalue problem that determines the first approximation to the complex dispersion relation. Numerical estimation of the series solution indicates a much lower critical Reynolds number for the so-called oblique-wave instability than the classical value Rc = 583 of the spanwise-traveling Tollmien-Schlichting instability. Extension of the formulation to general Falkner-Skan-Cooke boundary layers is proposed in the form of a double power series with respect to 1/R1² and a small parameter ɛ denoting the difference of the Falkner-Skan parameter m from the attachment-line value m = 1.

  11. Extraction of loess shoulder-line based on the parallel GVF snake model in the loess hilly area of China

    NASA Astrophysics Data System (ADS)

    Song, Xiaodong; Tang, Guoan; Li, Fayuan; Jiang, Ling; Zhou, Yi; Qian, Kejian

    2013-03-01

    Loess shoulder-lines are the most critical terrain feature for representing and modeling the landforms of the Loess Plateau of China. Existing algorithms usually fail to obtain a continuous shoulder-line because of surface complexity, DEM quality, and algorithm limitations. This paper proposes a new method in which a gradient vector flow (GVF) snake model is employed to generate an integrated contour that connects the discontinuous fragments of the shoulder-line. Moreover, a new criterion for selecting the snake model's initial seeds is introduced, based on median smoothing over local neighborhood regions. In this way, the adjacent boundary between loess positive and negative terrains can be extracted from the shoulder-line zones, which provides a basis for locating the real shoulder-lines by gradient vector flow. However, the computational burden of this method remains heavy for large DEM datasets. In this study, a parallel computing scheme on a cluster for automatic shoulder-line extraction is therefore proposed and implemented with a parallel GVF snake model. After analyzing the principle of the method, the paper develops an effective parallel algorithm integrating both single-program multiple-data (SPMD) and master/slave (M/S) programming modes. Based on domain decomposition of the DEM data, each partition is decomposed regularly and calculated simultaneously. Experimental results on different DEM datasets indicate that parallel programming distinctly reduces execution time without losing accuracy compared with the sequential model. The hybrid algorithm achieves a mean shoulder-line offset of 15.8 m, a satisfactory result in both accuracy and efficiency compared with published extraction methods.
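The domain-decomposition idea can be sketched as follows, assuming row-strip partitions with one-row halos and using 3×3 median smoothing (the abstract's seed-selection criterion) as a stand-in for the per-partition work; the names and partitioning details are hypothetical, not taken from the paper:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def median_smooth(tile):
    """3x3 median filter; border rows/cols are left unchanged."""
    out = tile.copy()
    for i in range(1, tile.shape[0] - 1):
        for j in range(1, tile.shape[1] - 1):
            out[i, j] = np.median(tile[i - 1:i + 2, j - 1:j + 2])
    return out

def parallel_smooth(dem, n_parts=4):
    """SPMD-style domain decomposition: split the DEM into row strips with a
    one-row halo, process the strips concurrently, then stitch the results."""
    bounds = np.linspace(0, dem.shape[0], n_parts + 1, dtype=int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        lo_h, hi_h = max(lo - 1, 0), min(hi + 1, dem.shape[0])  # halo rows
        jobs.append((lo, hi, lo_h, dem[lo_h:hi_h]))
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        smoothed = list(pool.map(lambda j: median_smooth(j[3]), jobs))
    out = np.empty_like(dem)
    for (lo, hi, lo_h, _), tile in zip(jobs, smoothed):
        out[lo:hi] = tile[lo - lo_h:lo - lo_h + (hi - lo)]
    return out
```

The one-row halo is what makes the decomposition exact: every interior pixel sees the same 3×3 neighborhood it would in a sequential pass, so the parallel result matches the sequential one bit for bit.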

  12. Impending U.S. lighting standards will boost market for halogen-infrared lamps: New product line expanding

    SciTech Connect

    Sardinsky, R.; Shepard, M.

    1993-12-31

    Many of the incandescent floodlights and spotlights manufactured today will not meet lighting efficiency standards taking effect in the US in 1995. As these models cease production, demand will grow for higher-efficiency units to fill this huge market, which now totals about 100 million lamps per year. One prime contender is a new class of halogen lamps that use a spectrally selective coating to reflect heat back onto the filament, reducing the amount of electricity needed to generate light. GE Lighting's Halogen-IR line is the only series of such lamps currently available to replace the conventional floodlights and spotlights that will be banned by the new standards. Other manufacturers may adopt the technology, however, and the Japanese producer Ushio already sells in the US a line of smaller halogen lamps with a similar heat-reflective coating. In terms of efficacy and lifetime, Halogen-IR lamps outperform standard incandescents and standard halogens, but fall far short of fluorescent, metal halide, and high-pressure sodium sources. These other lighting systems are more appropriate and cost-effective than incandescents for many ambient lighting applications. For accent lighting and other tasks best suited to incandescent lighting, however, the Halogen-IR lamp is often a superior choice.

  13. The new moon illusion and the role of perspective in the perception of straight and parallel lines.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2015-01-01

    In the new moon illusion, the sun does not appear to be in a direction perpendicular to the boundary between the lit and dark sides of the moon, and aircraft jet trails appear to follow curved paths across the sky. In both cases, lines that are physically straight and parallel to the horizon appear to be curved. These observations prompted us to investigate the neglected question of how we are able to judge the straightness and parallelism of extended lines. To do this, we asked observers to judge the 2-D alignment of three artificial "stars" projected onto the dome of the Saint Petersburg Planetarium that varied in both their elevation and their separation in horizontal azimuth. The results showed that observers make substantial, systematic errors, biasing their judgments away from the veridical great-circle locations and toward equal-elevation settings. These findings further demonstrate that whenever information about the distance of extended lines or isolated points is insufficient, observers tend to assume equidistance, and as a consequence, their straightness judgments are biased toward the angular separation of straight and parallel lines.

  14. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures

    PubMed Central

    Guan, Jungang; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Mattausch, Hans Jürgen

    2017-01-01

    The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT in actual applications are its computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with a parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), which computes the strength of each line by a voting mechanism, is mapped onto a 1-dimensional array with regular increments of θ. This Hough space is then divided into a number of parallel parts, so that the computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are executable in parallel. In addition, a synchronized initialization of the Hough space further increases the speed of straight-line detection, making XGA video processing possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In a road-lane detection application, the average processing speed of this HT implementation is 5.4 ms per XGA frame at a 200 MHz working frequency. PMID:28146101
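The parallelized voting idea can be sketched in software by splitting the θ axis into independent partitions, each voting into its own slice of the accumulator; this only mirrors the concept behind the FPGA design, and all names, sizes, and the toy edge image are hypothetical:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def hough_votes(edge_points, thetas, rho_max, rho_step=1.0):
    """Vote into a (rho, theta) accumulator for one partition of theta values."""
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = np.zeros((n_rho, len(thetas)), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                     # one rho per theta
        idx = np.round((rho + rho_max) / rho_step).astype(int)
        acc[idx, np.arange(len(thetas))] += 1
    return acc

def parallel_hough(edge_points, n_theta=180, n_parts=4, rho_max=200.0):
    """Split the theta axis into independent partitions and vote concurrently;
    partitions never write to the same accumulator cell, so no locking is needed."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    parts = np.array_split(thetas, n_parts)
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        accs = list(pool.map(lambda t: hough_votes(edge_points, t, rho_max), parts))
    return np.hstack(accs), thetas

# Hypothetical edge image: 50 points on the line y = x + 10.
pts = [(x, x + 10) for x in range(0, 100, 2)]
acc, thetas = parallel_hough(pts)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
```

Because each partition owns a disjoint column range of the accumulator, the votes can be merged by simple concatenation, which is the same property the hardware design exploits.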

  15. Sustainable Materials Management (SMM) Web Academy Webinar: Wasted Food to Energy: How 6 Water Resource Recovery Facilities are Boosting Biogas Production & the Bottom Line

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  16. Development of parallel line analysis criteria for recombinant adenovirus potency assay and definition of a unit of potency.

    PubMed

    Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno

    2007-01-01

    Parameter settings for a parallel line analysis procedure were defined by applying statistical analysis to absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with commercially available software, PLA 1.2. The software performs Dixon's outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of standard and sample. The width of the fiducial limit, expressed as a percentage of the measured potency, was developed as a criterion for rejecting assay data, significantly improving the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and the best statistical outcome, combined with a fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of an Ad5FGF-4 standard containing 6 × 10⁶ adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the fiducial limit width criterion, the assay results were not linear over this range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined, using the parallel line analysis procedure, as the amount of Ad5FGF-4 that produces an absorbance value 121% of the average absorbance of wells containing cells not infected with the adenovirus.
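The core of a parallel line analysis can be sketched as a common-slope least-squares fit over the linear regions of the standard and sample dose-response curves, with relative potency read off the horizontal offset between the two parallel lines. This is an illustrative sketch of the general method, not the PLA 1.2 algorithm, and the data are synthetic:

```python
import numpy as np

def parallel_line_potency(log_dose, resp_std, resp_smp):
    """Fit standard and sample dose-response lines with a common slope b and
    separate intercepts; relative potency = 10**((a_smp - a_std) / b)."""
    n = len(log_dose)
    # Design matrix columns: [standard intercept, sample intercept, common slope]
    X = np.zeros((2 * n, 3))
    X[:n, 0] = 1.0
    X[n:, 1] = 1.0
    X[:, 2] = np.concatenate([log_dose, log_dose])
    y = np.concatenate([resp_std, resp_smp])
    (a_std, a_smp, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return 10 ** ((a_smp - a_std) / b)

# Synthetic linear-region data: the sample is half as potent as the standard.
log_dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
resp_std = 0.2 + 0.6 * log_dose
resp_smp = 0.2 + 0.6 * (log_dose + np.log10(0.5))
print(parallel_line_potency(log_dose, resp_std, resp_smp))
```

A full implementation would precede this fit with the outlier test, the linear range-finding step, and a statistical test that the two slopes really are parallel, as the abstract describes.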

  17. Boost type PWM HVDC transmission system

    SciTech Connect

    Ooi, B.T.; Wang, X. . Dept. of Electrical Engineering)

    1991-10-01

    This paper reports that conventional HVdc is built around the mercury-arc rectifier or the thyristor, which requires line commutation. Advances in fast, high-power GTOs, and future devices such as MCTs with turn-off capability, are bringing PWM techniques within the range of HVdc applications. By applying PWM techniques to the boost-type bridge topology, one obtains an alternative system of HVdc transmission. On the ac side, the converter station has active control over the voltage amplitude, the voltage angle, and the frequency. On the dc side, parallel connections facilitate multi-terminal load sharing through simple local controls, so that redundant communication channels are not required. Bidirectional power flow through each station is accomplished by reversing the direction of the dc current. These claims are substantiated by experimental results from laboratory-scale multi-terminal models.

  18. Carrier Phase Error Detection Method and Synchronization Control of Parallel-Connected PWM Inverters without Signal Line

    NASA Astrophysics Data System (ADS)

    Kohara, Tatsuya; Noguchi, Toshihiko; Kondo, Seiji

    In recent years, parallel operation of inverters has been employed to increase reliability and capacity in uninterruptible power supply (UPS) systems. A phase error between the PWM carrier signals of the inverters causes a high-frequency loop current to flow between them, so the PWM carrier signal of each inverter must be adjusted in phase. This paper proposes a method of detecting the phase error in the PWM carrier signal and its application to synchronization control for parallel-connected inverters. A simple definite-integral circuit detects the carrier phase error from the high-frequency loop current, using no signal line between inverters. The detected carrier phase error is then applied through a PI compensator to synchronize the PWM carrier signals, suppressing the high-frequency loop current. Several experimental results show the validity of the proposed detection method and synchronization control.
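The synchronization loop can be sketched in discrete time. Here the definite-integral measurement is simply assumed to return a value proportional to the carrier phase error, and the gains and plant model are illustrative choices, not values from the paper:

```python
def synchronize(phase_err0, kp=0.4, ki=0.1, steps=60):
    """Drive the carrier phase error toward zero with a PI compensator.
    Each step: measure the error, update the integral term, and trim the
    local carrier phase by the PI correction."""
    err = phase_err0
    integral = 0.0
    history = []
    for _ in range(steps):
        measured = err                 # stand-in for the integrated loop current
        integral += measured
        correction = kp * measured + ki * integral
        err -= correction              # local carrier phase advanced/retarded
        history.append(err)
    return history

trace = synchronize(phase_err0=0.5)    # initial carrier phase error: 0.5 rad
```

With these gains the closed-loop poles lie inside the unit circle, so the error decays geometrically; the integral term is what removes any steady-state offset, which a pure proportional trim would leave behind.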

  19. Resolving magnetic field line stochasticity and parallel thermal transport in MHD simulations

    SciTech Connect

    Nishimura, Y.; Callen, J.D.; Hegna, C.C.

    1998-12-31

    Heat transport along braided, or chaotic, magnetic field lines is key to understanding the disruptive phase of tokamak operation, both the major disruption and the internal disruption (sawtooth oscillation). Recent sawtooth experiments in the Tokamak Fusion Test Reactor (TFTR) have suggested that magnetic field line stochasticity in the vicinity of the q = 1 inversion radius plays an important role in rapid changes in the magnetic field structure and the resultant thermal transport. In this study, the characteristic Lyapunov exponents and spatial correlations of field-line behavior are calculated to extract the characteristic scale length of the microscopic magnetic field structure (which is important for net radial global transport). These statistical values are used to model the effect of finite thermal transport along magnetic field lines in a physically consistent manner.
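Field-line stochasticity is commonly modeled with area-preserving maps. As a stand-in for the paper's field-line calculations (whose map is not specified here), the largest Lyapunov exponent of the Chirikov standard map can be estimated by tracking a nearby trajectory and renormalizing its separation each iteration:

```python
import math

def standard_map(theta, p, K):
    """Chirikov standard map, a common model for field-line stochasticity."""
    p_new = (p + K * math.sin(theta)) % (2 * math.pi)
    theta_new = (theta + p_new) % (2 * math.pi)
    return theta_new, p_new

def lyapunov_exponent(theta0, p0, K, n=20000, d0=1e-8):
    """Largest Lyapunov exponent: iterate two nearby orbits, accumulate the
    log of the separation growth, and renormalize the separation to d0."""
    t1, p1 = theta0, p0
    t2, p2 = theta0 + d0, p0
    total = 0.0
    for _ in range(n):
        t1, p1 = standard_map(t1, p1, K)
        t2, p2 = standard_map(t2, p2, K)
        dt = (t2 - t1 + math.pi) % (2 * math.pi) - math.pi  # wrapped difference
        dp = (p2 - p1 + math.pi) % (2 * math.pi) - math.pi
        d = math.hypot(dt, dp)
        total += math.log(d / d0)
        # renormalize the separation back to d0 along the current direction
        t2 = t1 + dt * d0 / d
        p2 = p1 + dp * d0 / d
    return total / n
```

For strongly chaotic parameters (e.g. K = 7) the estimate approaches ln(K/2) ≈ 1.25, while near-integrable parameters give an exponent near zero; the inverse of such an exponent sets the kind of characteristic field-line decorrelation length the abstract refers to.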

  20. Quantitative Profiling of Protein Tyrosine Kinases in Human Cancer Cell Lines by Multiplexed Parallel Reaction Monitoring Assays*

    PubMed Central

    Kim, Hye-Jung; Lin, De; Lee, Hyoung-Joo; Li, Ming; Liebler, Daniel C.

    2016-01-01

    Protein tyrosine kinases (PTKs) play key roles in cellular signal transduction, cell cycle regulation, cell division, and cell differentiation. Dysregulation of PTK-activated pathways, often by receptor overexpression, gene amplification, or genetic mutation, is a causal factor underlying numerous cancers. In this study, we have developed a parallel reaction monitoring-based assay for quantitative profiling of 83 PTKs. The assay detects 308 proteotypic peptides from 54 receptor tyrosine kinases and 29 nonreceptor tyrosine kinases in a single run. Quantitative comparisons were based on the labeled reference peptide method. We implemented the assay in four cell models: 1) a comparison of proliferating versus epidermal growth factor-stimulated A431 cells, 2) a comparison of SW480Null (mutant APC) and SW480APC (APC restored) colon tumor cell lines, 3) a comparison of 10 colorectal cancer cell lines with different genomic abnormalities, and 4) lung cancer cell lines with either susceptibility (11–18) or acquired resistance (11–18R) to the epidermal growth factor receptor tyrosine kinase inhibitor erlotinib. We observed distinct PTK expression changes that were induced by stimuli, genomic features, or drug resistance, which were consistent with previous reports. However, most of the measured expression differences were novel observations. For example, acquired resistance to erlotinib in the 11–18 cell model was associated not only with previously reported up-regulation of MET, but also with up-regulation of FLK2 and down-regulation of LYN and PTK7. Immunoblot analyses and shotgun proteomics data were highly consistent with parallel reaction monitoring data. Multiplexed parallel reaction monitoring assays provide a targeted, systems-level profiling approach to evaluate cancer-related proteotypes and adaptations. Data are available through Proteome eXchange Accession PXD002706. PMID:26631510

  1. The proposed planning method as a parallel element to a real service system for dynamic sharing of service lines.

    PubMed

    Klampfer, Saša; Chowdhury, Amor

    2015-07-01

    This paper presents a solution to the bottleneck problem in dynamic sharing or leasing of service capacities. From this perspective, using the proposed method as a parallel element in service-capacity sharing is very important, because it minimizes the number of interfaces, and consequently the number of leased lines, by combining two service systems with time-opposite peak loads. We present a new approach, methodology, models, and algorithms that solve the problems of dynamic leasing and sharing of service capacities.

  2. Wave-particle interaction in parallel transport of long mean-free-path plasmas along open field magnetic field lines

    NASA Astrophysics Data System (ADS)

    Guo, Zehua; Tang, Xianzhu

    2012-03-01

    A tokamak fusion reactor dumps a large amount of heat and particle flux onto the divertor through the scrape-off-layer (SOL) plasma. Either by necessity or through deliberate design, the SOL plasma can attain a long mean free path along large segments of the open field lines. The rapid parallel streaming of electrons requires a large parallel electric field to maintain ambipolarity. The confining effect of the parallel electric field on electrons leads to a trapped/passing boundary in velocity space for electrons. In the normal situation where the upstream electron source populates both the trapped and passing regions, a mechanism must exist to produce a flux across the electron trapped/passing boundary. In a short mean-free-path plasma, this is provided by collisions. For long mean-free-path plasmas, wave-particle interaction is the primary candidate for detrapping the electrons. Here we present simulation results and a theoretical analysis using a model distribution function of trapped electrons. The dominant electromagnetic plasma instability, and the associated collisionless scattering that produces both particle and energy fluxes across the electron trapped/passing boundary in velocity space, are discussed.

  3. Line-Focused Optical Excitation of Parallel Acoustic Focused Sample Streams for High Volumetric and Analytical Rate Flow Cytometry.

    PubMed

    Kalb, Daniel M; Fencl, Frank A; Woods, Travis A; Swanson, August; Maestas, Gian C; Juárez, Jaime J; Edwards, Bruce S; Shreve, Andrew P; Graves, Steven W

    2017-09-19

    Flow cytometry provides highly sensitive multiparameter analysis of cells and particles but has been largely limited to the use of a single focused sample stream. This limits the analytical rate to ∼50K particles/s and the volumetric rate to ∼250 μL/min. Despite the analytical prowess of flow cytometry, there are applications where these rates are insufficient, such as rare cell analysis in high cellular backgrounds (e.g., circulating tumor cells and fetal cells in maternal blood), detection of cells/particles in large dilute samples (e.g., water quality, urine analysis), or high-throughput screening applications. Here we report a highly parallel acoustic flow cytometer that uses an acoustic standing wave to focus particles into 16 parallel analysis points across a 2.3 mm wide optical flow cell. A line-focused laser and wide-field collection optics are used to excite and collect the fluorescence emission of these parallel streams onto a high-speed camera for analysis. With this instrument format and fluorescent microsphere standards, we obtain analysis rates of 100K/s and flow rates of 10 mL/min, while maintaining optical performance comparable to that of a commercial flow cytometer. The results with our initial prototype instrument demonstrate that the integration of key parallelizable components, including the line-focused laser, particle focusing using multinode acoustic standing waves, and a spatially arrayed detector, can increase analytical and volumetric throughputs by orders of magnitude in a compact, simple, and cost-effective platform. Such instruments will be of great value to applications in need of high-throughput yet sensitive flow cytometry analysis.

  4. Parallel-Plate Transmission Line Type of EMP Simulators: Systematic Review and Recommendations.

    DTIC Science & Technology

    1980-05-01

    This report condenses the available information on two types of pulsers (Van de Graaff and Marx) with the view of providing a working knowledge of these EMP sources. Contents include pulser equivalent circuits, the Marx generator and its equivalent circuit, and conical-plate transmission lines (impedance and fields). The Van de Graaff pulse generator is used, for instance, in the ARES facility, while the Marx pulse generator is employed in the ATLAS I facility.

  5. Application of the parallel line assay to assessment of biosimilar products based on binary endpoints.

    PubMed

    Lin, Jr-Rung; Chow, Shein-Chung; Chang, Chih-Hsi; Lin, Ya-Ching; Liu, Jen-pei

    2013-02-10

    Biological drug products are therapeutic moieties manufactured by a living system or organism. These are important life-saving drug products for patients with unmet medical needs. Because of their high cost, only a few patients have access to life-saving biological products. Most of the early biological products will lose their patents in the next few years. This provides the opportunity for generic versions of the biological products, referred to as biosimilar drug products. The US Biologics Price Competition and Innovation Act passed in 2009 and the draft guidance issued in 2012 provide an approval pathway for biological products shown to be biosimilar to, or interchangeable with, a Food and Drug Administration-licensed reference biological product. Hence, cost reduction and affordability of biosimilar products for average patients may become possible. However, the complexity and heterogeneity of the molecular structures, complicated manufacturing processes, different analytical methods, and the possibility of severe immunogenicity reactions make evaluation of equivalence between biosimilar products and their corresponding reference product a great challenge for statisticians and regulatory agencies. To accommodate the stepwise approach and totality of evidence, we propose to apply a parallel line assay to extrapolate similarity in product characteristics, such as doses or pharmacokinetic responses, to similarity in binary efficacy endpoints. We also report the results of simulation studies evaluating the performance, in terms of size and power, of the proposed methods, and we present numerical examples to illustrate the suggested procedures. Copyright © 2012 John Wiley & Sons, Ltd.
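
    The size/power simulation the abstract describes can be sketched for a single binary endpoint as a Monte Carlo study of a two one-sided tests (TOST) equivalence comparison of two proportions under a normal approximation. The margin, sample size, and response rates below are illustrative assumptions, not values from the paper.

```python
import math
import random

def tost_equiv(x1, n1, x2, n2, margin=0.15, z=1.6448536269514722):
    """TOST for equivalence of two proportions (normal approximation):
    declare equivalence if the 90% CI for p1 - p2 lies inside (-margin, margin)."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    if se == 0.0:
        return abs(d) < margin
    lo, hi = d - z * se, d + z * se
    return -margin < lo and hi < margin

def rejection_rate(p1, p2, n, sims=2000, seed=7):
    """Monte Carlo estimate of the probability of declaring equivalence."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        x1 = sum(rng.random() < p1 for _ in range(n))
        x2 = sum(rng.random() < p2 for _ in range(n))
        hits += tost_equiv(x1, n, x2, n)
    return hits / sims

power = rejection_rate(0.70, 0.70, 300)   # truly equivalent: power should be high
size  = rejection_rate(0.70, 0.55, 300)   # true difference at the margin: size ~ alpha
```

    With these illustrative settings the equivalent-products scenario is accepted in the vast majority of simulations, while the at-the-margin scenario is accepted at roughly the nominal 5% level.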

  6. Kinetic PIC simulations of reconnection signal propagation parallel to magnetic field lines: Implications for substorms

    NASA Astrophysics Data System (ADS)

    Shay, M. A.; Drake, J. F.

    2009-12-01

    In a recent substorm case study using THEMIS data [1], it was inferred that auroral intensification occurred 96 seconds after reconnection onset initiated a substorm in the magnetotail. These conclusions have been the subject of some controversy [2,3]. The time delay between reconnection and auroral intensification requires a propagation speed significantly faster than can be explained by Alfvén waves. Kinetic Alfvén waves, however, can be much faster and could possibly explain the time lag. To test this possibility, we simulate large scale reconnection events with the kinetic PIC code P3D and examine the disturbances on a magnetic field line as it propagates through a reconnection region. In the regions near the separatrices but relatively far from the x-line, the propagation physics is expected to be governed by the physics of kinetic Alfvén waves. Indeed, we find that the propagation speed of the magnetic disturbance roughly scales with kinetic Alfvén speeds. We also examine energization of electrons due to this disturbance. Consequences for our understanding of substorms will be discussed. [1] Angelopoulos, V. et al., Science, 321, 931, 2008. [2] Lui, A. T. Y., Science, 324, 1391-b, 2009. [3] Angelopoulos, V. et al., Science, 324, 1391-c, 2009.

  7. High-voltage isolation transformer for sub-nanosecond rise time pulses constructed with annular parallel-strip transmission lines.

    PubMed

    Homma, Akira

    2011-07-01

    A novel annular parallel-strip transmission line was devised to construct high-voltage high-speed pulse isolation transformers. The transmission lines can easily realize stable high-voltage operation and good impedance matching between primary and secondary circuits. The time constant for the step response of the transformer was calculated by introducing a simple low-frequency equivalent circuit model. Results show that the relation between the time constant and low-cut-off frequency of the transformer conforms to the theory of the general first-order linear time-invariant system. Results also show that the test transformer composed of the new transmission lines can transmit about 600 ps rise time pulses across the dc potential difference of more than 150 kV with insertion loss of -2.5 dB. The measured effective time constant of 12 ns agreed exactly with the theoretically predicted value. For practical applications involving the delivery of synchronized trigger signals to a dc high-voltage electron gun station, the transformer described in this paper exhibited advantages over methods using fiber optic cables for the signal transfer system. This transformer has no jitter or breakdown problems that invariably occur in active circuit components.
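
    The abstract relates the transformer's effective time constant to its low cut-off frequency through the theory of first-order linear time-invariant systems; a minimal numeric check, assuming the standard first-order relation f_c = 1/(2πτ):

```python
import math

def low_cutoff_from_tau(tau_seconds: float) -> float:
    """Low cut-off frequency of a first-order high-pass response: f_c = 1/(2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau_seconds)

f_c = low_cutoff_from_tau(12e-9)   # 12 ns effective time constant reported above
print(f"{f_c / 1e6:.1f} MHz")      # prints "13.3 MHz"
```

    A 12 ns time constant thus corresponds to a low cut-off near 13 MHz, i.e. the transformer passes the fast edges of sub-nanosecond pulses while blocking the dc potential difference.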

  8. Parallel Configuration For Fast Superconducting Strip Line Detectors With Very Large Area In Time Of Flight Mass Spectrometry

    SciTech Connect

    Casaburi, A.; Zen, N.; Suzuki, K.; Ohkubo, M.; Ejrnaes, M.; Cristiano, R.; Pagano, S.

    2009-12-16

    We realized a very fast and large Superconducting Strip Line Detector based on a parallel configuration of nanowires. The detector, with size 200 × 200 μm², recorded a sub-nanosecond pulse width of 700 ps FWHM (400 ps rise time and 530 ps relaxation time) for lysozyme monomer/multimer molecules accelerated at 175 keV in a Time of Flight Mass Spectrometer. This record is the best in the class of superconducting detectors and comparable with the fastest NbN superconducting single photon detector of 10 × 10 μm². We succeeded in acquiring mass spectra as the first step toward a scale-up to ~mm pixel size for high-throughput MS analysis, while keeping a fast response.

  9. Extending statistical boosting. An overview of recent methodological developments.

    PubMed

    Mayr, A; Binder, H; Gefeller, O; Schmid, M

    2014-01-01

    Boosting algorithms to simultaneously estimate and select predictor effects in statistical models have gained substantial interest during the last decade. This review highlights recent methodological developments regarding boosting algorithms for statistical modelling, especially focusing on topics relevant for biomedical research. We suggest a unified framework for gradient boosting and likelihood-based boosting (statistical boosting), which have been addressed separately in the literature up to now. The methodological developments on statistical boosting during the last ten years can be grouped into three different lines of research: i) efforts to ensure variable selection leading to sparser models, ii) developments regarding different types of predictor effects and how to choose them, iii) approaches to extend the statistical boosting framework to new regression settings. Statistical boosting algorithms have been adapted to carry out unbiased variable selection and automated model choice during the fitting process and can nowadays be applied in almost any regression setting in combination with a large number of different types of predictor effects.
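
    The gradient boosting idea the review unifies — repeatedly fitting a simple base learner to the current residuals and adding it with a small step length — can be illustrated with a minimal sketch (1-D regression stumps under squared-error loss; a toy, not any specific package's implementation):

```python
import statistics

def fit_stump(xs, ys):
    """Best single-threshold stump minimizing squared error on 1-D data."""
    best = None
    for s in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= s]
        right = [y for x, y in zip(xs, ys) if x > s]
        if not left or not right:
            continue
        lm, rm = statistics.mean(left), statistics.mean(right)
        err = sum((y - (lm if x <= s else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """L2 gradient boosting: each round fits a stump to the current residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        h = fit_stump(xs, resid)
        stumps.append(h)
        pred = [p + lr * h(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * h(x) for h in stumps)

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
f = gradient_boost(xs, ys)
```

    The small step length (lr) is what gives boosting its slow, regularized fitting behaviour; stopping the rounds early is the usual route to the sparser models discussed in the review.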

  10. Characterization of a microwave-excited atmospheric-pressure argon plasma jet using two-parallel-wires transmission line resonator

    NASA Astrophysics Data System (ADS)

    Choi, J.; Eom, I. S.; Kim, S. J.; Kwon, Y. W.; Joh, H. M.; Jeong, B. S.; Chung, T. H.

    2017-09-01

    This paper presents a method to produce a microwave-excited atmospheric-pressure plasma jet (ME-APPJ) with argon. The plasma was generated by a microwave-driven micro-plasma source that uses a two-parallel-wire transmission line resonator (TPWR) operating at around 900 MHz. The TPWR has a simple structure and is easier to fabricate than coaxial transmission line resonator (CTLR) devices. In particular, the TPWR can sustain a more stable ME-APPJ than the CTLR can because the gap between the electrodes is narrower. In experiments performed with an Ar flow rate from 0.5 to 8.0 L·min⁻¹ and an input power from 1 to 6 W, the rotational temperature was determined by comparing the measured and simulated spectra of rotational lines of the OH band, and the electron excitation temperature was determined by the Boltzmann plot method. The rotational temperature obtained from OH(A-X) spectra was 700 K to 800 K, whereas the apparent gas temperature of the plasma jet remains lower than ~325 K, which is compatible with biomedical applications. The electron number density was determined using the method based on the Stark broadening of the hydrogen Hβ line, and the measured electron density ranged from 6.5 × 10¹⁴ to 7.6 × 10¹⁴ cm⁻³. The TPWR ME-APPJ can be operated at low flows of the working gas and at low power, and is very stable and effective for interactions of the plasma with cells.

  11. Boosted ellipsoid ARTMAP

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Georgios C.; Georgiopoulos, Michael; Verzi, Steven J.; Heileman, Gregory L.

    2002-03-01

    Ellipsoid ARTMAP (EAM) is an adaptive-resonance-theory neural network architecture that is capable of successfully performing classification tasks using incremental learning. EAM achieves its task by summarizing labeled input data via hyper-ellipsoidal structures (categories). A major property of EAM, when using off-line fast learning, is that it perfectly learns its training set after training has completed. Depending on the classification problems at hand, this fact implies that off-line EAM training may potentially suffer from over-fitting. For such problems we present an enhancement to the basic Ellipsoid ARTMAP architecture, namely Boosted Ellipsoid ARTMAP (bEAM), that is designed to simultaneously improve the generalization properties and reduce the number of created categories for EAM's off-line fast learning. This is accomplished by forcing EAM to be tolerant of occasional misclassification errors during fast learning. An additional advantage provided by bEAM's design is the capability of learning inconsistent cases, that is, learning identical patterns with contradicting class labels. After presenting the theory behind bEAM's enhancements, we provide some preliminary experimental results, which compare the new variant to the original EAM network, Probabilistic EAM and three different variants of the Restricted Coulomb Energy neural network on the square-in-a-square classification problem.

  12. Bidirectional buck boost converter

    DOEpatents

    Esser, Albert Andreas Maria

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero.
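
    The patent text describes the circuit topology and gating rather than governing equations. For orientation, the textbook ideal steady-state transfer ratio of a buck-boost stage in continuous conduction, |V_out| = V_in · D/(1 − D), shows how the duty cycle D selects step-down (D < 0.5) or step-up (D > 0.5) operation; this is the generic relation, not the patent's specific control law, and the numbers are illustrative.

```python
def buck_boost_vout(v_in: float, duty: float) -> float:
    """Ideal continuous-conduction buck-boost output magnitude:
    V_out = V_in * D / (1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return v_in * duty / (1.0 - duty)

# D < 0.5 bucks (steps down), D > 0.5 boosts (steps up)
print(buck_boost_vout(48.0, 0.25))  # prints 16.0
print(buck_boost_vout(48.0, 0.75))  # prints 144.0
```

    Because the ratio sweeps continuously through unity, a single stage can transfer power between two sources regardless of which one happens to be at the higher voltage, which is the regulation property the patent claims.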

  13. Bidirectional buck boost converter

    DOEpatents

    Esser, A.A.M.

    1998-03-31

    A bidirectional buck boost converter and method of operating the same allows regulation of power flow between first and second voltage sources in which the voltage level at each source is subject to change and power flow is independent of relative voltage levels. In one embodiment, the converter is designed for hard switching while another embodiment implements soft switching of the switching devices. In both embodiments, first and second switching devices are serially coupled between a relatively positive terminal and a relatively negative terminal of a first voltage source with third and fourth switching devices serially coupled between a relatively positive terminal and a relatively negative terminal of a second voltage source. A free-wheeling diode is coupled, respectively, in parallel opposition with respective ones of the switching devices. An inductor is coupled between a junction of the first and second switching devices and a junction of the third and fourth switching devices. Gating pulses supplied by a gating circuit selectively enable operation of the switching devices for transferring power between the voltage sources. In the second embodiment, each switching device is shunted by a capacitor and the switching devices are operated when voltage across the device is substantially zero. 20 figs.

  14. Parallel extraction columns and parallel analytical columns coupled with liquid chromatography/tandem mass spectrometry for on-line simultaneous quantification of a drug candidate and its six metabolites in dog plasma.

    PubMed

    Xia, Y Q; Hop, C E; Liu, D Q; Vincent, S H; Chiu, S H

    2001-01-01

    A method with parallel extraction columns and parallel analytical columns (PEC-PAC) for on-line high-flow liquid chromatography/tandem mass spectrometry (LC/MS/MS) was developed and validated for simultaneous quantification of a drug candidate and its six metabolites in dog plasma. Two on-line extraction columns were used in parallel for sample extraction and two analytical columns were used in parallel for separation and analysis. The plasma samples, after addition of an internal standard solution, were directly injected onto the PEC-PAC system for purification and analysis. This method allowed the use of one of the extraction columns for analyte purification while the other was being equilibrated. Similarly, one of the analytical columns was employed to separate the analytes while the other was undergoing equilibration. Therefore, the time needed for re-conditioning both extraction and analytical columns was not added to the total analysis time, which resulted in a shorter run time and higher throughput. Moreover, the on-line column extraction LC/MS/MS method made it possible to extract and analyze all seven analytes simultaneously with good precision and accuracy despite their chemical class diversity that included primary, secondary and tertiary amines, an alcohol, an aldehyde and a carboxylic acid. The method was validated with the standard curve ranging from 5.00 to 5000 ng/mL. The intra- and inter-day precision was no more than 8% CV and the assay accuracy was between 95 and 107%.

  15. LDA boost classification: boosting by topics

    NASA Astrophysics Data System (ADS)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) classifier is designed as the weak learner, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighting method, called Cute Integration in this article, is proposed to improve accuracy by integrating weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost, which performs categorization in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
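
    The committee-and-voting machinery that LDABoost builds on can be sketched with plain discrete AdaBoost on 1-D threshold stumps; the LDA topic-modelling stage and the improved Naïve Bayes weak learner of the paper are omitted here, so this is only the generic reweight-and-vote loop.

```python
import math

def adaboost(xs, ys, rounds=20):
    """Discrete AdaBoost with 1-D threshold stumps; labels ys in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                       # example weights
    ensemble = []                           # (alpha, threshold, orientation)
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):           # exhaustive stump search
            for s in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (s if x > t else -s) != y)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # committee vote weight
        ensemble.append((alpha, t, s))
        # up-weight the examples this stump got wrong, then renormalize
        w = [wi * math.exp(-alpha * y * (s if x > t else -s))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
        return 1 if score >= 0 else -1
    return predict

xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
f = adaboost(xs, ys)
```

    In LDABoost the 1-D feature would be replaced by an LDA topic vector and the stump by the improved NB weak learner, but the weighted committee vote is the same.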

  16. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  17. A geometrically adjustable 16-channel transmit/receive transmission line array for improved RF efficiency and parallel imaging performance at 7 Tesla.

    PubMed

    Adriany, Gregor; Van de Moortele, Pierre-Francois; Ritter, Johannes; Moeller, Steen; Auerbach, Edward J; Akgün, Can; Snyder, Carl J; Vaughan, Thomas; Uğurbil, Kâmil

    2008-03-01

    A novel geometrically adjustable transceiver array system is presented. A key feature of the geometrically adjustable array was the introduction of decoupling capacitors that allow for automatic change in capacitance dependent on neighboring resonant element distance. The 16-element head array version of such an adjustable coil based on transmission line technology was compared to fixed geometry transmission line arrays (TLAs) of various sizes at 7T. The focus of this comparison was on parallel imaging performance, RF transmit efficiency, and signal-to-noise ratio (SNR). Significant gains in parallel imaging performance and SNR were observed for the new coil and attributed to its adjustability and to the design of the individual elements with a three-sided ground plane. (c) 2008 Wiley-Liss, Inc.

  18. Resonance line transfer calculations by doubling thin layers. I - Comparison with other techniques. II - The use of the R-parallel redistribution function. [planetary atmospheres]

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.; Wallace, Lloyd

    1989-01-01

    A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.
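
    The doubling idea — combining two identical layers by summing the geometric series of inter-layer reflections — can be shown in its simplest scalar, monochromatic form. This toy omits the frequency dependence, redistribution, and polarization that are the paper's actual subject; it only illustrates how a thick slab is built from a thin one in a handful of doublings.

```python
def double_layer(R, T):
    """Combine two identical non-absorbing layers with reflectance R and
    transmittance T; the 1/(1 - R*R) factor sums the geometric series of
    multiple reflections trapped between the two layers."""
    denom = 1.0 - R * R
    return R + T * R * T / denom, T * T / denom

# Start from an optically thin, conservatively scattering layer (R + T = 1)
# and double it ten times, i.e. build a slab 1024 times thicker.
R, T = 0.01, 0.99
for _ in range(10):
    R, T = double_layer(R, T)
print(round(R, 3), round(T, 3))  # prints "0.912 0.088"
```

    Energy is conserved at every doubling (R + T stays 1 for a non-absorbing layer), which is a convenient correctness check; the full method applies the same composition to frequency-dependent reflection and transmission operators.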

  19. Development and qualification of the parallel line model for the estimation of human influenza haemagglutinin content using the single radial immunodiffusion assay.

    PubMed

    van Kessel, G; Geels, M J; de Weerd, S; Buijs, L J; de Bruijni, M A M; Glansbeek, H L; van den Bosch, J F; Heldens, J G; van den Heuvel, E R

    2012-01-05

    Infection with human influenza virus leads to serious respiratory disease. Vaccination is the most common and effective prophylactic measure to prevent influenza. Influenza vaccine manufacturing and release are controlled by the correct determination of the potency-defining haemagglutinin (HA) content. This determination is historically done by single radial immunodiffusion (SRID), which utilizes a statistical slope-ratio model to estimate the actual HA content. In this paper we describe the development and qualification of a parallel line model for analysis of HA quantification by SRID in cell culture-derived whole virus final monovalent and trivalent influenza vaccines. We evaluated plate layout, sample randomization, and validity of the data and statistical model. The parallel line model was shown to be robust and reproducible. The precision studies for HA content demonstrated 3.8-5.0% repeatability and 3.8-7.9% intermediate precision. Furthermore, system suitability criteria were developed to guarantee long-term stability of this assay in a regulated production environment. SRID is fraught with methodological and logistical difficulties, and determination of the HA content will eventually require the acceptance of new and modern release assays; until then, the parallel line model described here represents a significant and robust update for the current global influenza vaccine release assay.
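
    The core of a parallel line model — a common slope fitted across the reference and test dilution series, with relative potency read off the horizontal shift between the two lines — can be sketched as follows. The data are synthetic, and the qualified assay additionally includes the validity tests, plate-layout and randomization effects described in the paper.

```python
import math

def parallel_line_potency(x_ref, y_ref, x_test, y_test):
    """Fit a common-slope (parallel-line) model to log-dose/response data
    and return the estimated relative potency of test vs. reference."""
    def stats(xs, ys):
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return mx, my, sxx, sxy

    mxr, myr, sxxr, sxyr = stats(x_ref, y_ref)
    mxt, myt, sxxt, sxyt = stats(x_test, y_test)
    b = (sxyr + sxyt) / (sxxr + sxxt)      # pooled common slope
    a_ref = myr - b * mxr                  # intercepts at log-dose 0
    a_test = myt - b * mxt
    return math.exp((a_test - a_ref) / b)  # horizontal shift -> potency ratio

# Synthetic example: the test sample behaves like the reference at twice
# the dose, i.e. true relative potency 2.
x = [math.log(d) for d in (1, 2, 4, 8)]
y_ref = [10 + 3 * xi for xi in x]
y_test = [10 + 3 * (xi + math.log(2)) for xi in x]
rp = parallel_line_potency(x, y_ref, x, y_test)
print(round(rp, 3))  # prints 2.0
```

    Parallelism of the two fitted lines is what licenses reading the potency as a pure horizontal shift; the assay's validity criteria exist precisely to test that assumption before the estimate is reported.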

  20. Boosting Lyα and He II λ1640 Line Fluxes from Population III Galaxies: Stochastic IMF Sampling and Departures from Case-B

    NASA Astrophysics Data System (ADS)

    Mas-Ribas, Lluís; Dijkstra, Mark; Forero-Romero, Jaime E.

    2016-12-01

    We revisit calculations of nebular hydrogen Lyα and He ii λ1640 line strengths for Population III (Pop III) galaxies, undergoing continuous, and bursts of, star formation. We focus on initial mass functions (IMFs) motivated by recent theoretical studies, which generally span a lower range of stellar masses than earlier works. We also account for case-B departures and the stochastic sampling of the IMF. In agreement with previous work, we find that departures from case-B can enhance the Lyα flux by a factor of a few, but we argue that this enhancement is driven mainly by collisional excitation and ionization, and not due to photoionization from the n = 2 state of atomic hydrogen. The increased sensitivity of the Lyα flux to the high-energy end of the galaxy spectrum makes it more subject to stochastic sampling of the IMF. The latter introduces a dispersion in the predicted nebular line fluxes around the deterministic value by as much as a factor of ∼4. In contrast, the stochastic sampling of the IMF has less impact on the emerging Lyman–Werner photon flux. When case-B departures and stochasticity effects are combined, nebular line emission from Pop III galaxies can be up to one order of magnitude brighter than predicted by “standard” calculations that do not include these effects. This enhances the prospects for detection with future facilities such as the James Webb Space Telescope and large, ground-based telescopes.
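
    Stochastic sampling of the IMF can be illustrated with a toy Monte Carlo: draw stellar masses from an assumed power-law IMF until a fixed mass budget is spent, and track the realization-to-realization spread of a crude high-mass-weighted luminosity proxy. The exponent, mass limits, and L ∝ M^3.5 proxy below are illustrative assumptions, not the paper's stellar models.

```python
import random

random.seed(0)

def sample_cluster(total_mass, alpha=2.35, m_lo=1.0, m_hi=100.0):
    """Inverse-CDF draws from a power-law IMF dN/dM ~ M**-alpha,
    accumulating stars until a fixed total mass budget is spent."""
    g = 1.0 - alpha
    masses, m = [], 0.0
    while m < total_mass:
        u = random.random()
        star = (m_lo**g + u * (m_hi**g - m_lo**g)) ** (1.0 / g)
        masses.append(star)
        m += star
    return masses

def lum_proxy(masses):
    """Crude proxy: output dominated by the most massive stars (L ~ M**3.5)."""
    return sum(m ** 3.5 for m in masses)

runs = [lum_proxy(sample_cluster(1000.0)) for _ in range(200)]
spread = max(runs) / min(runs)   # realization-to-realization dispersion
```

    Even with an identical mass budget, whether a given realization happens to contain one of the rare most-massive stars changes the proxy by a large factor — the same mechanism that drives the flux dispersion quantified in the abstract.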

  1. Non-cytotoxic copper overload boosts mitochondrial energy metabolism to modulate cell proliferation and differentiation in the human erythroleukemic cell line K562.

    PubMed

    Ruiz, Lina M; Jensen, Erik L; Rossel, Yancing; Puas, German I; Gonzalez-Ibanez, Alvaro M; Bustos, Rodrigo I; Ferrick, David A; Elorza, Alvaro A

    2016-07-01

    Copper is integral to the mitochondrial respiratory complex IV and contributes to proliferation and differentiation, metabolic reprogramming and mitochondrial function. The K562 cell line was exposed to a non-cytotoxic copper overload to evaluate mitochondrial dynamics, function and cell fate. This induced higher rates of mitochondrial turnover given by an increase in mitochondrial fusion and fission events and in the autophagic flux. The appearance of smaller and condensed mitochondria was also observed. Bioenergetics activity included more respiratory complexes, higher oxygen consumption rate, superoxide production and ATP synthesis, with no decrease in membrane potential. Increased cell proliferation and inhibited differentiation also occurred. Non-cytotoxic copper levels can modify mitochondrial metabolism and cell fate, which could be used in cancer biology and regenerative medicine. Copyright © 2016 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  2. GPU-based, parallel-line, omni-directional integration of measured acceleration field to obtain the 3D pressure distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Cao; Katz, Joseph

    2016-11-01

    A PIV-based method to reconstruct the volumetric pressure field by direct integration of the 3D material acceleration has been developed. Extending the 2D virtual-boundary omni-directional method (Omni2D; Liu & Katz, 2013), the new 3D parallel-line omni-directional method (Omni3D) integrates the material acceleration along parallel lines aligned in multiple directions, with their angles set by a spherical virtual grid. The integration is parallelized on a Tesla K40c GPU, which reduced the computing time from three hours to one minute for a single realization. To validate its performance, this method is used to calculate the 3D pressure fields in isotropic turbulence and channel flow using the JHU DNS Databases (http://turbulence.pha.jhu.edu). Both integration of the DNS acceleration and of acceleration from synthetic 3D particles are tested. Results are compared to other methods, e.g. the solution of the pressure Poisson equation (PPE; Ghaemi et al., 2012) with Bernoulli-based Dirichlet boundary conditions, and the Omni2D method. The error in the Omni3D prediction is uniformly low, and its sensitivity to acceleration errors is local. It agrees with the PPE/Bernoulli prediction away from the Dirichlet boundary. The Omni3D method is also applied to experimental data obtained using tomographic PIV, and results are correlated with the deformation of a compliant wall. Supported by ONR.
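
    The line-integration idea — marching the measured pressure gradient (material acceleration) along straight lines anchored at known boundary values, then averaging the estimates from different line directions — can be sketched in 2-D with just two directions and an analytic gradient. The actual Omni3D method uses many directions set by a spherical virtual grid and GPU parallelization; this toy only shows the integrate-and-average step.

```python
import math

# Synthetic pressure field p = sin(x)cos(y) and its analytic gradient
# on a small uniform grid, standing in for the measured acceleration field.
N, h = 21, 0.05
p_true = [[math.sin(i * h) * math.cos(j * h) for j in range(N)] for i in range(N)]
dpdx = [[math.cos(i * h) * math.cos(j * h) for j in range(N)] for i in range(N)]
dpdy = [[-math.sin(i * h) * math.sin(j * h) for j in range(N)] for i in range(N)]

def integrate_x():
    """Trapezoidal march of dp/dx along +x lines, anchored at the x=0 boundary."""
    p = [[0.0] * N for _ in range(N)]
    for j in range(N):
        p[0][j] = p_true[0][j]
        for i in range(1, N):
            p[i][j] = p[i - 1][j] + 0.5 * h * (dpdx[i - 1][j] + dpdx[i][j])
    return p

def integrate_y():
    """Trapezoidal march of dp/dy along +y lines, anchored at the y=0 boundary."""
    p = [[0.0] * N for _ in range(N)]
    for i in range(N):
        p[i][0] = p_true[i][0]
        for j in range(1, N):
            p[i][j] = p[i][j - 1] + 0.5 * h * (dpdy[i][j - 1] + dpdy[i][j])
    return p

px, py = integrate_x(), integrate_y()
p_avg = [[0.5 * (px[i][j] + py[i][j]) for j in range(N)] for i in range(N)]
err = max(abs(p_avg[i][j] - p_true[i][j]) for i in range(N) for j in range(N))
```

    With noisy experimental gradients, averaging over many line directions is what keeps the sensitivity to local acceleration errors local instead of letting them propagate across the whole domain.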

  3. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
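
    The standard online bagging construction replaces bootstrap resampling with presenting each new example to each base model k times, where k ~ Poisson(1), so only one pass over the stream is needed. A minimal sketch follows; the nearest-class-mean base learner and the Gaussian data are illustrative stand-ins, not the paper's experiments.

```python
import math
import random

random.seed(1)

def poisson1():
    """Draw k ~ Poisson(1) via Knuth's multiplication method."""
    L = math.exp(-1.0)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

class NearestMean:
    """Toy online base learner: per-class running mean of a 1-D feature."""
    def __init__(self):
        self.total = {0: 0.0, 1: 0.0}
        self.count = {0: 0, 1: 0}
    def update(self, x, y):
        self.total[y] += x
        self.count[y] += 1
    def predict(self, x):
        if self.count[0] == 0:
            return 1
        if self.count[1] == 0:
            return 0
        m0 = self.total[0] / self.count[0]
        m1 = self.total[1] / self.count[1]
        return 0 if abs(x - m0) <= abs(x - m1) else 1

class OnlineBagging:
    """Each base model sees each incoming example k ~ Poisson(1) times,
    approximating the multiplicities of a bootstrap sample in one pass."""
    def __init__(self, n_models=15):
        self.models = [NearestMean() for _ in range(n_models)]
    def update(self, x, y):
        for m in self.models:
            for _ in range(poisson1()):
                m.update(x, y)
    def predict(self, x):
        votes = sum(m.predict(x) for m in self.models)
        return 1 if 2 * votes >= len(self.models) else 0

# Stream of two well-separated 1-D Gaussian classes, presented once each.
stream = [(random.gauss(0.0, 0.5), 0) for _ in range(200)] + \
         [(random.gauss(3.0, 0.5), 1) for _ in range(200)]
random.shuffle(stream)
bag = OnlineBagging()
for x, y in stream:
    bag.update(x, y)
```

    The Poisson(1) multiplicities mimic how often an example would appear in a bootstrap replicate as the sample size grows, which is the link back to batch bagging's guarantees.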

  4. High-sensitivity supercontinuum-based parallel line-field optical coherence tomography with 1 million A-lines/s (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Barrick, Jessica; Doblas, Ana; Sears, Patrick R.; Ostrowski, Lawrence E.; Oldenburg, Amy L.

    2017-02-01

    While traditional, flying-spot, spectral domain OCT systems can achieve MHz line rates, they are limited by the need for mechanical scanning to produce a B-mode image. Line-field OCT (LF OCT) removes the need for mechanical scanning by simultaneously recording all A-lines on a 2D CMOS sensor. Our LF OCT system operates at the highest A-line rate of any spectral domain (SD) LF OCT system reported to date (1,024,000 A-lines/s). This is comparable with the fastest flying-spot SDOCT system reported. Additionally, all OCT systems face a tradeoff between imaging speed and sensitivity. Long exposure times improve sensitivity but can lead to undesirable motion artifacts. LF OCT has the potential to relax this tradeoff between sensitivity and imaging speed because all A-lines are exposed during the entire frame acquisition time. However, this advantage has not yet been realized due to the loss of power-per-A-line from spreading the illumination light across all A-lines on the sample. Here we use a supercontinuum source to illuminate the sample with 500 mW of light in the 605-950 nm wavelength band, effectively providing 480 µW of power-per-A-line, with axial and lateral resolutions of 1.8 µm and 14 µm, respectively. With this system we achieve the highest reported sensitivity (113 dB) of any LF OCT system. We then demonstrate the capability of this system by capturing the rapidly beating cilia of human bronchial-epithelial cells in vitro. The combination of high speed and high sensitivity offered by supercontinuum-based LF SD OCT offers new opportunities for studying cell and tissue dynamics.

  5. Body composition and metabolic outcomes after 96 weeks of treatment with ritonavir-boosted lopinavir plus either nucleoside or nucleotide reverse transcriptase inhibitors or raltegravir in patients with HIV with virological failure of a standard first-line antiretroviral therapy regimen: a substudy of the randomised, open-label, non-inferiority SECOND-LINE study.

    PubMed

    Boyd, Mark A; Amin, Janaki; Mallon, Patrick W G; Kumarasamy, Nagalingeswaran; Lombaard, Johan; Wood, Robin; Chetchotisakd, Ploenchan; Phanuphak, Praphan; Mohapi, Lerato; Azwa, Iskandar; Belloso, Waldo H; Molina, Jean-Michel; Hoy, Jennifer; Moore, Cecilia L; Emery, Sean; Cooper, David A

    2017-01-01

    Lipoatrophy is one of the most feared complications associated with the use of nucleoside or nucleotide reverse transcriptase inhibitors (N[t]RTIs). We aimed to assess soft-tissue changes in participants with HIV who had virological failure of a first-line antiretroviral (ART) regimen containing a non-nucleoside reverse transcriptase inhibitor plus two N(t)RTIs and were randomly assigned to receive a second-line regimen containing a boosted protease inhibitor given with either N(t)RTIs or raltegravir. Of the 37 sites that participated in the randomised, open-label, non-inferiority SECOND-LINE study, eight sites from five countries (Argentina, India, Malaysia, South Africa, and Thailand) participated in the body composition substudy. All sites had a dual energy x-ray absorptiometry (DXA) scanner and all participants enrolled in SECOND-LINE were eligible for inclusion in the substudy. Participants were randomly assigned (1:1), via a computer-generated allocation schedule, to receive either ritonavir-boosted lopinavir plus raltegravir (raltegravir group) or ritonavir-boosted lopinavir plus two or three N(t)RTIs (N[t]RTI group). Randomisation was stratified by site and screening HIV-1 RNA. Participants and investigators were not masked to group assignment, but allocation was concealed until after interventions were assigned. DXA scans were done at weeks 0, 48, and 96. The primary endpoint was mean percentage and absolute change in peripheral limb fat from baseline to week 96. We did intention-to-treat analyses of available data. This substudy is registered with ClinicalTrials.gov, number NCT01513122. Between Aug 1, 2010, and July 10, 2011, we recruited 211 participants into the substudy. The intention-to-treat population comprised 102 participants in the N(t)RTI group and 108 participants in the raltegravir group, of whom 91 and 105 participants, respectively, reached 96 weeks. Mean percentage change in limb fat from baseline to week 96 was 16·8% (SD 32·6) in the N

  6. Hypersonic Boost Glider

    NASA Image and Video Library

    1957-04-15

    Hypersonic Boost Glider in 11 Inch Hypersonic Tunnel L57-1681 In 1957 Langley tested its HYWARDS design in the 11 Inch Hypersonic Tunnel. Photograph published in Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958 by James R. Hansen. Page 369.

  7. Can you boost your metabolism?

    MedlinePlus

    ... can ward off weight gain as you age. Alternative Names Weight-loss boost metabolism; Obesity - boost metabolism; ... Does glycogen availability influence the motivation to eat, energy intake or food choice? Sports Med . 2011;41( ...

  8. Boosted apparent horizons

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp

    Boosted black holes play an important role in General Relativity (GR), especially in relation to the binary black hole problem. Solving the Einstein vacuum equations in the strong-field regime had long been the holy grail of numerical relativity until the significant breakthroughs made in 2005 and 2006. Numerical relativity plays a crucial role in gravitational wave detection by providing numerically generated gravitational waveforms that help search for actual signatures of gravitational radiation exciting laser interferometric detectors such as LIGO, VIRGO and GEO600 here on Earth. Binary black holes orbit each other in an ever tightening adiabatic inspiral caused by energy loss due to gravitational radiation emission. As the orbits shrink, the holes speed up and eventually move at relativistic speeds in the vicinity of each other (separated by ~ 10M or so, where 2M is the Schwarzschild radius). As such, one must abandon the Newtonian notion of a point mass on a circular orbit with tangential velocity and replace it with the concept of black holes, cloaked behind spheroidal event horizons, that become distorted due to strong gravity and further appear distorted because of Lorentz effects from the high orbital velocity. Apparent horizons (AHs) are 2-dimensional boundaries that are trapped surfaces. Conceptually, one can think of them as 'quasi-local' definitions of a black hole horizon. This will be explained in more detail in chapter 2. Apparent horizons are especially important in numerical relativity as they provide a computationally efficient way of describing and locating a black hole horizon. For a stationary spacetime, apparent horizons are 2-dimensional cross-sections of the event horizon, which is itself a 3-dimensional null surface in spacetime. Because an AH is a 2-dimensional cross-section of an event horizon, its area remains invariant under distortions due to Lorentz boosts, although its shape changes. This fascinating property of the AH can be
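
The area-invariance property described in this abstract can be stated compactly for the Schwarzschild case (a sketch in geometric units G = c = 1; the general boosted-slice computation is what the thesis develops). The intrinsic area computed from the induced 2-metric q_AB on the apparent horizon surface S is

```latex
A_{\rm AH} \;=\; \oint_{\mathcal{S}} \sqrt{\det q_{AB}}\; d^2x \;=\; 16\pi M^2 ,
```

and this value is unchanged under a Lorentz boost of the slicing, even though the coordinate shape of the horizon flattens along the boost direction.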

  9. On-line electrochemistry-bioaffinity screening with parallel HR-LC-MS for the generation and characterization of modified p38α kinase inhibitors.

    PubMed

    Falck, David; de Vlieger, Jon S B; Giera, Martin; Honing, Maarten; Irth, Hubertus; Niessen, Wilfried M A; Kool, Jeroen

    2012-04-01

    In this study, an integrated approach is developed for the formation, identification and biological characterization of electrochemical conversion products of p38α mitogen-activated protein kinase inhibitors. This work demonstrates the hyphenation of an electrochemical reaction cell with a continuous-flow bioaffinity assay and parallel LC-HR-MS. Competition of the formed products with a tracer (SKF-86002) that shows fluorescence enhancement in the orthosteric binding site of the p38α kinase is the readout for bioaffinity. Parallel HR-MS(n) experiments provided information on the identity of binders and non-binders. Finally, the data produced with this on-line system were compared to electrochemical conversion products generated off-line. The electrochemical conversion of 1-{6-chloro-5-[(2R,5S)-4-(4-fluorobenzyl)-2,5-dimethylpiperazine-1-carbonyl]-3aH-indol-3-yl}-2-morpholinoethane-1,2-dione resulted in eight products, three of which showed bioaffinity in the continuous-flow p38α bioaffinity assay used. Electrochemical conversion of BIRB796 resulted, amongst others, in the formation of the reactive quinoneimine structure and its corresponding hydroquinone. Both products were detected in the p38α bioaffinity assay, which indicates binding to the p38α kinase.

  10. StructBoost: Boosting Methods for Predicting Structured Output Variables.

    PubMed

    Chunhua Shen; Guosheng Lin; van den Hengel, Anton

    2014-10-01

    Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision. Inspired by structured support vector machines (SSVM), here we propose a new boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, our StructBoost generalizes standard boosting approaches such as AdaBoost or LPBoost to structured learning. The resulting optimization problem of StructBoost is more challenging than SSVM in the sense that it may involve exponentially many variables and constraints. By contrast, SSVM typically involves only an exponential number of constraints, which are handled with a cutting-plane method. In order to efficiently solve StructBoost, we formulate an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a range of problems such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking and learning conditional random field parameters for image segmentation.
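
The opening definition of boosting (a weighted linear combination of weak learners) can be made concrete with a minimal AdaBoost loop over decision stumps. This is a sketch of the standard scalar-output algorithm that StructBoost generalizes, not StructBoost itself; all names and the toy data are illustrative.

```python
# Minimal AdaBoost on 1-D data with decision-stump weak learners.
# Illustrative sketch only; not the StructBoost algorithm.
import math

def stump_predict(threshold, polarity, x):
    # Weak learner: +polarity for x > threshold, -polarity otherwise.
    return polarity if x > threshold else -polarity

def adaboost(xs, ys, n_rounds=5):
    n = len(xs)
    w = [1.0 / n] * n                       # example weights
    ensemble = []                           # (alpha, threshold, polarity)
    candidates = sorted(set(xs))
    for _ in range(n_rounds):
        # Pick the stump with the lowest weighted error.
        best = None
        for t in candidates:
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # weak-learner weight
        ensemble.append((alpha, t, pol))
        # Re-weight examples: misclassified points gain weight.
        w = [wi * math.exp(-alpha * y * stump_predict(t, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    # Final predictor: sign of the weighted vote of all stumps.
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data, separable by a single stump at x = 3.
xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
ens = adaboost(xs, ys)
```

Structured-output boosting replaces the scalar stump with a weak structured learner and the sign with an argmax over structured outputs, which is where the exponentially many variables mentioned in the abstract arise.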

  11. Stability of arsenic peptides in plant extracts: off-line versus on-line parallel elemental and molecular mass spectrometric detection for liquid chromatographic separation.

    PubMed

    Bluemlein, Katharina; Raab, Andrea; Feldmann, Jörg

    2009-01-01

    The instability of metal and metalloid complexes during analytical processes has always been a source of uncertainty in their speciation in plant extracts. Two different speciation protocols were compared for the analysis of arsenic phytochelatin (As(III)PC) complexes in fresh plant material. As the final separation/detection step, both methods used RP-HPLC simultaneously coupled to ICP-MS and ES-MS. However, one method was the often-used off-line approach employing two-dimensional separation, i.e. a pre-cleaning step using size-exclusion chromatography with subsequent fraction collection and freeze-drying prior to analysis by RP-HPLC-ICP-MS and/or ES-MS. This approach revealed that less than 2% of the total arsenic was bound to peptides such as phytochelatins in the root extract of an arsenate-exposed Thunbergia alata, whereas the direct on-line method showed that 83% of arsenic was bound to peptides, mainly as As(III)PC(3) and (GS)As(III)PC(2). Key analytical factors that destabilise the As(III)PCs were identified. The low pH of the mobile phase (0.1% formic acid) used in RP-HPLC-ICP-MS/ES-MS stabilises the arsenic peptide complexes in the plant extract as well as the free peptide concentration, as shown by a kinetic disintegration study of the model compound As(III)(GS)(3) at pH 2.2 and 3.8. However, half-lives of only a few hours were determined for the arsenic glutathione complex. Although As(III)PC(3) showed a tenfold longer half-life (23 h) in a plant extract, the pre-cleaning step with subsequent fractionation in a mobile phase of pH 5.6 contributes to the destabilisation of the arsenic peptides in the off-line method. Furthermore, it was found that during freeze-drying more than 90% of an As(III)PC(3) complex and smaller free peptides such as PC(2) and PC(3) can be lost.
Although the two-dimensional off-line method has been used successfully for other metal complexes, it is concluded here that the fractionation and

  12. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    NASA Astrophysics Data System (ADS)

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a worldwide search was initiated for detection technologies that could provide a feasible short-term alternative to 3He gas. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter-Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  13. Collisional Line Mixing in Parallel and Perpendicular Bands of Linear Molecules by a Non-Markovian Approach

    NASA Astrophysics Data System (ADS)

    Buldyreva, Jeanna

    2013-06-01

    Reliable modeling of radiative transfer in planetary atmospheres requires accounting for collisional line-mixing effects in the regions of closely spaced vibrotational lines as well as in the spectral wings. Because of the prohibitively high CPU cost of calculations from ab initio potential energy surfaces (when available), the relaxation matrix describing the influence of collisions is usually built from dynamical scaling laws, such as the Energy-Corrected Sudden (ECS) law. Theoretical approaches currently used for calculation of absorption near the band center are based on the impact approximation (Markovian collisions without memory effects), and wings are modeled by introducing empirical parameters [1,2]. Operating with the traditional non-symmetric metric in the Liouville space, these approaches need corrections of the ECS-modeled relaxation matrix elements ("relaxation times" and a "renormalization procedure") in order to ensure the fundamental relations of detailed balance and sum rules. We present an extension to the infrared absorption case of the ECS-type non-Markovian approach previously developed [3] for rototranslational Raman scattering spectra of linear molecules. Owing to the specific choice of a symmetrized metric in the Liouville space, the relaxation matrix is corrected for initial bath-molecule correlations and satisfies non-Markovian sum rules and detailed balance. A few standard ECS parameters determined by fitting to experimental linewidths of the isotropic Q-branch enable i) retrieval of these isolated-line parameters for other spectroscopies (IR absorption and anisotropic Raman scattering); and ii) reproduction of experimental intensities of these spectra. Besides including vibrational angular momenta in the IR bending shapes, Coriolis effects are also accounted for. The efficiency of the method is demonstrated on OCS-He and CO_2-CO_2 spectra up to 300 and 60 atm, respectively. F. Niro, C. Boulet, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 88, 483
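
For reference, the two constraints invoked in this abstract are commonly written as follows (a sketch in standard line-mixing notation, where W is the relaxation matrix, \rho_k the population associated with line k, and d_k its amplitude; sign and normalization conventions vary between authors):

```latex
\rho_l \, W_{kl} \;=\; \rho_k \, W_{lk}
\quad \text{(detailed balance)},
\qquad
\sum_k d_k \, W_{kl} \;=\; 0
\quad \text{(sum rule)} .
```

The point of the symmetrized Liouville-space metric is that an ECS-built W can satisfy both relations directly, without the ad hoc corrections needed in the non-symmetric formulation.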

  14. Line Mixing in Parallel and Perpendicular Bands of CO2: A Further Test of the Refined Robert-Bonamy Formalism

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Qiancheng; Tipping, R. H.

    2015-01-01

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modeling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modeling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (sigma yields sigma and sigma yields pi) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  15. Line mixing in parallel and perpendicular bands of CO2: A further test of the refined Robert-Bonamy formalism.

    PubMed

    Boulet, C; Ma, Q; Tipping, R H

    2015-09-28

    Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modelling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modelling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.

  17. Exercise boosts immune response.

    PubMed

    Sander, Ruth

    2012-06-29

    Ageing is associated with a decline in normal functioning of the immune system described as 'immunosenescence'. This contributes to poorer vaccine response and increased incidence of infection and malignancy seen in older people. Regular exercise can enhance vaccination response, increase T-cells and boost the function of the natural killer cells in the immune system. Exercise also lowers levels of the inflammatory cytokines that cause the 'inflamm-ageing' that is thought to play a role in conditions including cardiovascular disease, type 2 diabetes, Alzheimer's disease, osteoporosis and some cancers.

  18. Observation of hole injection boost via two parallel paths in Pentacene thin-film transistors by employing Pentacene: 4, 4″-tris(3-methylphenylphenylamino) triphenylamine: MoO{sub 3} buffer layer

    SciTech Connect

    Yan, Pingrui; Liu, Ziyang; Liu, Dongyang; Wang, Xuehui; Yue, Shouzhen; Zhao, Yi; Zhang, Shiming

    2014-11-01

    Pentacene organic thin-film transistors (OTFTs) were prepared by introducing 4, 4″-tris(3-methylphenylphenylamino) triphenylamine (m-MTDATA): MoO{sub 3}, Pentacene: MoO{sub 3}, and Pentacene: m-MTDATA: MoO{sub 3} as buffer layers. These OTFTs all showed significant performance improvement compared with the reference device. Significantly, we observe that the device employing the Pentacene: m-MTDATA: MoO{sub 3} buffer layer can take advantage both of the charge-transfer complexes formed in the m-MTDATA: MoO{sub 3} device and of the suitable energy-level alignment present in the Pentacene: MoO{sub 3} device. These two parallel paths led to a high mobility of 0.72 cm{sup 2}/V s, a low threshold voltage of −13.4 V, and a contact resistance of 0.83 kΩ at V{sub ds} = −100 V. This work enriches the understanding of MoO{sub 3}-doped organic materials for applications in OTFTs.

  19. An open-source, massively parallel code for non-LTE synthesis and inversion of spectral lines and Zeeman-induced Stokes profiles

    NASA Astrophysics Data System (ADS)

    Socas-Navarro, H.; de la Cruz Rodríguez, J.; Asensio Ramos, A.; Trujillo Bueno, J.; Ruiz Cobo, B.

    2015-05-01

    With the advent of a new generation of solar telescopes and instrumentation, interpreting chromospheric observations (in particular, spectropolarimetry) requires new, suitable diagnostic tools. This paper describes a new code, NICOLE, that has been designed for Stokes non-LTE radiative transfer, for synthesis and inversion of spectral lines and Zeeman-induced polarization profiles, spanning a wide range of atmospheric heights from the photosphere to the chromosphere. The code offers a number of unique features and capabilities and has been built from scratch with a powerful parallelization scheme that makes it suitable for application to massive datasets using large supercomputers. The source code is written entirely in Fortran 90/2003 and complies strictly with the ANSI standards to ensure maximum compatibility and portability. It is being publicly released, with the idea of facilitating future branching by other groups to augment its capabilities. The source code is currently hosted at the following repository: https://github.com/hsocasnavarro/NICOLE

  20. Quantification of crotamine, a small basic myotoxin, in South American rattlesnake (Crotalus durissus terrificus) venom by enzyme-linked immunosorbent assay with parallel-lines analysis.

    PubMed

    Oguiur, N; Camargo, M E; da Silva, A R; Horton, D S

    2000-03-01

    Intraspecific variation in Crotalus durissus terrificus venom composition was studied in relation to crotamine activity. Crotamine induces paralysis with extension of the hind legs in mice and myonecrosis in skeletal muscle cells. To determine whether the venom of crotamine-negative rattlesnakes contains a quantity of myotoxin incapable of inducing paralysis, we developed a highly sensitive immunological assay, an enzyme-linked immunosorbent assay (ELISA), capable of detecting 0.6 ng of purified crotamine. Parallel-lines analysis of the ELISA data proved useful because it demonstrates the reliability of the experimental conditions. A variation in the amount of myotoxin in crotamine-positive venom was observed, but never less than 0.1 mg of crotamine per mg of venom. Crotamine could not be detected in crotamine-negative venom, even at high venom concentrations.
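
The parallel-lines analysis mentioned in this abstract rests on fitting the standard and the test sample with a common slope against log dose and reading the relative potency off the horizontal offset between the two lines. A minimal sketch of that fit follows (made-up numbers for illustration; the actual ELISA analysis in the paper is more involved and includes validity tests for parallelism):

```python
# Parallel-lines (common-slope) fit for a dilution assay.
# The horizontal shift between the two fitted lines is the log relative potency.

def parallel_lines_fit(std, unk):
    """std, unk: lists of (log10_dose, response) pairs.
    Returns (slope, intercept_std, intercept_unk, log10_relative_potency)."""
    def stats(pts):
        xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
        mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
        sxy = sum((x - mx) * (y - my) for x, y in pts)   # cross-products
        sxx = sum((x - mx) ** 2 for x in xs)             # sum of squares
        return mx, my, sxy, sxx
    mx1, my1, sxy1, sxx1 = stats(std)
    mx2, my2, sxy2, sxx2 = stats(unk)
    b = (sxy1 + sxy2) / (sxx1 + sxx2)   # pooled common slope
    a1 = my1 - b * mx1                  # standard intercept
    a2 = my2 - b * mx2                  # unknown intercept
    # Horizontal shift between the lines, in log10 dose units.
    log_rel_potency = (a2 - a1) / b
    return b, a1, a2, log_rel_potency

# Example: the unknown behaves like the standard at a 10x lower dose,
# so the expected relative potency is 10 (log10 shift of 1).
std = [(0.0, 0.10), (1.0, 0.40), (2.0, 0.70)]
unk = [(-1.0, 0.10), (0.0, 0.40), (1.0, 0.70)]
b, a1, a2, logR = parallel_lines_fit(std, unk)
```

In practice a parallelism check (comparing separate-slope and common-slope fits) precedes the potency estimate, which is part of what the paper means by the analysis showing "the reliability of the experimental conditions".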

  1. Multifunctionalization of cetuximab with bioorthogonal chemistries and parallel EGFR profiling of cell-lines using imaging, FACS and immunoprecipitation approaches.

    PubMed

    Reschke, Melanie L; Uprety, Rajendra; Bodhinayake, Imithri; Banu, Matei; Boockvar, John A; Sauve, Anthony A

    2014-12-01

    The ability to derivatize antibodies is currently limited by the chemical structure of antibodies as polypeptides. Modern methods of bioorthogonal and biocompatible chemical modification could make antibody functionalization more predictable and easier, without compromising the functions of the antibody. To explore this concept, we modified the well-known anti-epidermal growth factor receptor (EGFR) drug, cetuximab (Erbitux®), with 5-azido-2-nitro-benzoyl (ANB) modifications by optimization of an acylation protocol. We then show that the resulting ANB-cetuximab can be reliably modified with dyes (TAMRA and carboxyrhodamine) or a novel synthesized cyclooctyne-modified biotin. The resulting dye- and biotin-modified cetuximabs were then tested across several assay platforms with several cell lines, including U87, LN229, F98EGFR, F98WT and HEK293 cells. The assay platforms included fluorescence microscopy, FACS and biotin-avidin based immunoprecipitation methods. The modified antibody performs consistently in all of these assay platforms, reliably determining relative abundances of EGFR expression on EGFR-expressing cells (LN229 and F98EGFR) and failing to cross-react with weakly or non-EGFR-expressing cells (U87, F98WT and HEK293). The ease of achieving diverse and assay-relevant functionalizations, as well as the consequent rapid construction of highly correlated antigen expression data sets, highlights the power of bioorthogonal and biocompatible methods to conjugate macromolecules. These data provide a proof of concept for a multifunctionalization strategy that leverages the biochemical versatility and antigen specificity of antibodies.

  2. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGES

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a worldwide search was initiated for detection technologies that could provide a feasible short-term alternative to 3He gas. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter-Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  3. Analytic boosted boson discrimination

    SciTech Connect

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  4. Analytic boosted boson discrimination

    DOE PAGES

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.

  5. Boosted Beta Regression

    PubMed Central

    Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas

    2013-01-01

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established, yet unstable, approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706

  6. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:

    o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.

    o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.

    o How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.

    o How to adapt boosting to regression problems

  7. PORTA: A three-dimensional multilevel radiative transfer code for modeling the intensity and polarization of spectral lines with massively parallel computers

    NASA Astrophysics Data System (ADS)

    Štěpán, Jiří; Trujillo Bueno, Javier

    2013-09-01

    The interpretation of the intensity and polarization of the spectral line radiation produced in the atmosphere of the Sun and of other stars requires solving a radiative transfer problem that can be very complex, especially when the main interest lies in modeling the spectral line polarization produced by scattering processes and the Hanle and Zeeman effects. One of the difficulties is that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of the generation and transfer of polarized radiation in realistic three-dimensional (3D) stellar atmospheric models. Here we present PORTA, an efficient multilevel radiative transfer code we have developed for the simulation of the spectral line polarization caused by scattering processes and the Hanle and Zeeman effects in 3D models of stellar atmospheres. The numerical method of solution is based on the non-linear multigrid iterative method and on a novel short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bézier interpolation. Therefore, with PORTA the computing time needed to obtain at each spatial grid point the self-consistent values of the atomic density matrix (which quantifies the excitation state of the atomic system) scales linearly with the total number of grid points. Another crucial feature of PORTA is its parallelization strategy, which allows us to speed up the numerical solution of complicated 3D problems by several orders of magnitude with respect to sequential radiative transfer approaches, given its excellent linear scaling with the number of available processors. The PORTA code can also be conveniently applied to solve the simpler 3D radiative transfer problem of unpolarized radiation in multilevel systems.

  8. Lines

    ERIC Educational Resources Information Center

    Mires, Peter B.

    2006-01-01

    National Geography Standards for the middle school years generally stress the teaching of latitude and longitude. There are many creative ways to explain the great grid that encircles our planet, but the author has found that students in his college-level geography courses especially enjoy human-interest stories associated with lines of latitude…

  10. At-line nanofractionation with parallel mass spectrometry and bioactivity assessment for the rapid screening of thrombin and factor Xa inhibitors in snake venoms.

    PubMed

    Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen

    2016-02-01

    Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, the bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological parts of this approach are linked using at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze drying of the plates. The nanofractionation and bioassay conditions were optimized for maintaining LC resolution and achieving good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of observed bioactivity to peptide/protein masses. This resulted in identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. These protease fingerprint chromatograms were found to be similar for evolutionary closely related species and as such might serve as generic snake protease

  11. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
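
    The vector-quantization half of the combination can be illustrated with a minimal serial sketch (the function names and the flat-list block format are hypothetical; the abstract does not show the MPP implementation):

```python
def quantize(block, codebook):
    """Return the index of the codeword closest to `block`
    (squared-error distance): one vector-quantization step."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(block, codebook[i]))

def compress(blocks, codebook):
    """Map each image block to its codeword index; the index stream
    (plus the shared codebook) is the compressed representation."""
    return [quantize(b, codebook) for b in blocks]
```

    In a massively parallel setting each processor can quantize its own block independently, which is what makes the scheme a natural fit for a machine like the MPP.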

  12. Long-term effectiveness of initiating non-nucleoside reverse transcriptase inhibitor- versus ritonavir-boosted protease inhibitor-based antiretroviral therapy: implications for first-line therapy choice in resource-limited settings

    PubMed Central

    Lima, Viviane D; Hull, Mark; McVea, David; Chau, William; Harrigan, P Richard; Montaner, Julio SG

    2016-01-01

    Introduction In many resource-limited settings, combination antiretroviral therapy (cART) failure is diagnosed clinically or immunologically. As such, there is a high likelihood that patients may stay on a virologically failing regimen for a substantial period of time. Here, we compared the long-term impact of initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)- versus boosted protease inhibitor (bPI)-based cART in British Columbia (BC), Canada. Methods We prospectively followed 3925 ART-naïve patients who started NNRTIs (N=1963, 50%) or bPIs (N=1962; 50%) from 1 January 2000 until 30 June 2013 in BC. At six months, we assessed whether patients virologically failed therapy (a plasma viral load (pVL) >50 copies/mL), and we stratified them based on the pVL at the time of failure ≤500 versus >500 copies/mL. We then followed these patients for another six months and calculated their probability of achieving subsequent viral suppression (pVL <50 copies/mL twice consecutively) and of developing drug resistance. These probabilities were adjusted for fixed and time-varying factors, including cART adherence. Results At six months, virologic failure rates were 9.5 and 14.3 cases per 100 person-months for NNRTI and bPI initiators, respectively. NNRTI initiators who failed with a pVL ≤500 copies/mL had a 16% higher probability of achieving subsequent suppression at 12 months than bPI initiators (0.81 (25th–75th percentile 0.75–0.83) vs. 0.72 (0.61–0.75)). However, if failing NNRTI initiators had a pVL >500 copies/mL, they had a 20% lower probability of suppressing at 12 months than pVL-matched bPI initiators (0.37 (0.29–0.45) vs. 0.46 (0.38–0.54)). In terms of evolving HIV drug resistance, those who failed on NNRTIs fared worse than those on bPIs in all scenarios, especially if they failed with a viral load >500 copies/mL. Conclusions Our results show that patients who virologically failed at six months on NNRTI and continued on the same regimen had a

  13. Project BOOST implementation: lessons learned.

    PubMed

    Williams, Mark V; Li, Jing; Hansen, Luke O; Forth, Victoria; Budnitz, Tina; Greenwald, Jeffrey L; Howell, Eric; Halasyamani, Lakshmi; Vidyarthi, Arpana; Coleman, Eric A

    2014-07-01

    Enhancing care coordination and reducing hospital readmissions have been a focus of multiple quality improvement (QI) initiatives. Project BOOST (Better Outcomes by Optimizing Safe Transitions) aims to enhance the discharge transition from hospital to home. Previous research indicates that QI initiatives originating externally often face difficulties gaining momentum or effecting lasting change in a hospital. We performed a qualitative evaluation of Project BOOST implementation by examining the successes and failures experienced by six pilot sites. We also evaluated the unique physician mentoring component of this program. Finally, we examined the impact of intensification of the physician mentoring model on adoption of BOOST interventions in two later Illinois cohorts (27 hospitals). Qualitative analysis of six pilot hospitals used a process of methodological triangulation and analysis of the BOOST enrollment applications, the listserv, and content from telephone interviews. Evaluation of BOOST implementation at Illinois hospitals occurred via mid-year and year-end surveys. The identified common barriers included inadequate understanding of the current discharge process, insufficient administrative support, lack of protected time or dedicated resources, and lack of frontline staff buy-in. Facilitators of implementation included the mentor, a small beginning, teamwork, and proactive engagement of the patient. Notably, hospitals viewed their mentors as essential facilitators of change. Sites consistently commented that the individualized mentoring was extremely helpful and provided significant accountability and stimulated creativity. In the Illinois cohorts, the improved mentoring model showed more complete implementation of BOOST interventions. The implementation of Project BOOST was well received by hospitals, although sites faced substantial barriers consistent with other QI research reports. The unique mentorship element of Project BOOST proved extremely

  14. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  15. Rapid screening of bioactive compounds from natural products by integrating 5-channel parallel chromatography coupled with on-line mass spectrometry and microplate based assays.

    PubMed

    Zhang, Yufeng; Xiao, Shun; Sun, Lijuan; Ge, Zhiwei; Fang, Fengkai; Zhang, Wen; Wang, Yi; Cheng, Yiyu

    2013-05-13

    A high throughput method was developed for rapid screening and identification of bioactive compounds from traditional Chinese medicine, marine products and other natural products. The system, integrated with five-channel chromatographic separation and dual UV-MS detection, is compatible with in vitro 96-well microplate based bioassays. The stability and applicability of the proposed method were validated by testing radical scavenging capability of a mixture of seven known compounds (rutin, dihydroquercetin, salvianolic acid A, salvianolic acid B, glycyrrhizic acid, rubescensin A and tangeretin). Moreover, the proposed method was successfully applied to the crude extracts of traditional Chinese medicine and a marine sponge from which 12 bioactive compounds were screened and characterized based on their anti-oxidative or anti-tumor activities. In particular, two diterpenoid derivatives, agelasine B and (-)-agelasine D, were identified for the first time as anti-tumor compounds from the sponge Agelas mauritiana, showing a considerable activity toward MCF-7 cells (IC50 values of 7.84±0.65 and 10.48±0.84 μM, respectively). Our findings suggested that the integrated system of 5-channel parallel chromatography coupled with on-line mass spectrometry and microplate based assays can be a versatile and highly efficient approach for the discovery of active compounds from natural products. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads us to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data both as originally supplied and with added label noise: a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than the original data.
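
    The orthogonality property referenced above can be illustrated with AdaBoost's standard multiplicative reweighting step (a generic sketch of the textbook update, not the AveBoost2 code; the function name is illustrative):

```python
import math

def adaboost_reweight(dist, mistakes):
    """One AdaBoost distribution update.

    dist     -- current distribution over examples (positive, sums to 1)
    mistakes -- 0/1 flags: did the latest base model err on example i?
    Assumes the weighted error eps lies strictly between 0 and 1.

    Mistaken examples are up-weighted by exp(+alpha), correct ones
    down-weighted by exp(-alpha), then weights are renormalized.
    """
    eps = sum(d for d, m in zip(dist, mistakes) if m)   # weighted error
    alpha = 0.5 * math.log((1 - eps) / eps)
    new = [d * math.exp(alpha if m else -alpha) for d, m in zip(dist, mistakes)]
    z = sum(new)
    return [w / z for w in new]
```

    Under the returned distribution the previous model's weighted error is exactly 1/2, i.e. the new distribution carries no information about that model's mistakes; AveBoost instead averages such distributions over all previous models.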

  17. Yoga May Boost Aging Brains

    MedlinePlus

    Changes seen in areas involved with attention and … may have greater "thickness" in areas of the brain involved in memory and attention, a small study …

  18. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  19. Improved semi-supervised online boosting for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Yicui; Qi, Lin; Tan, Shukun

    2016-10-01

    The advantage of an online semi-supervised boosting method, which treats object tracking as a classification problem, is that it trains a binary classifier from labeled and unlabeled examples. Appropriate object features are selected based on real time changes in the object. However, the online semi-supervised boosting method faces one key problem: the traditional self-training, which uses the classification results to update the classifier itself, often leads to drifting or tracking failure, due to the accumulated error during each update of the tracker. To overcome the disadvantages of semi-supervised online boosting for object tracking, the contribution of this paper is an improved online semi-supervised boosting method, in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, the classification is analyzed by the P-N constraints, which are used to verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In the experiments, we demonstrate real-time tracking of our tracker on several challenging test sequences where our tracker outperforms other related on-line tracking methods and achieves promising tracking performance.

  20. Early Boost and Slow Consolidation in Motor Skill Learning

    ERIC Educational Resources Information Center

    Hotermans, Christophe; Peigneux, Philippe; de Noordhout, Alain Maertens; Moonen, Gustave; Maquet, Pierre

    2006-01-01

    Motor skill learning is a dynamic process that continues covertly after training has ended and eventually leads to delayed increments in performance. Current theories suggest that this off-line improvement takes time and appears only after several hours. Here we show an early transient and short-lived boost in performance, emerging as early as…

  2. Magnetic shielding investigation for a 6 MV in-line linac within the parallel configuration of a linac-MR system.

    PubMed

    Santos, D M; St Aubin, J; Fallone, B G; Steciw, S

    2012-02-01

    In our current linac-magnetic resonance (MR) design, a 6 MV in-line linac is placed along the central axis of the MR's magnet where the MR's fringe magnetic fields are parallel to the overall electron trajectories in the linac waveguide. Our previous study of this configuration, comprising a linac-MR SAD of 100 cm and a 0.5 T superconducting (open, split) MR imager, showed the presence of longitudinal magnetic fields of 0.011 T at the electron gun, which caused a reduction in target current to 84% of nominal. In this study, passive and active magnetic shielding was investigated to recover the linac output losses caused by magnetic deflections of electron trajectories in the linac within a parallel linac-MR configuration. Magnetic materials and complex shield structures were used in a 3D finite element method (FEM) magnetic field model, which emulated the fringe magnetic fields of the MR imagers. The effects of passive magnetic shielding were studied by surrounding the electron gun and its casing with a series of capped steel cylinders of various inner lengths (26.5-306.5 mm) and thicknesses (0.75-15 mm) in the presence of the fringe magnetic fields from a commercial MR imager. In addition, the effects of a shield of fixed length (146.5 mm) with varying thicknesses were studied against a series of larger homogeneous magnetic fields (0-0.2 T). The effects of active magnetic shielding were studied by adding current loops around the electron gun and its casing. The loop currents, separation, and location were optimized to minimize the 0.011 T longitudinal magnetic fields in the electron gun. The magnetic field solutions from the FEM model were added to a validated linac simulation, consisting of a 3D electron gun (using OPERA-3d/scala) and 3D waveguide (using comsol Multiphysics and PARMELA) simulations. PARMELA's target current and output phase-space were analyzed to study the linac's output performance within the magnetic shields.
The FEM model above agreed within 1

  3. Early boost and slow consolidation in motor skill learning.

    PubMed

    Hotermans, Christophe; Peigneux, Philippe; Maertens de Noordhout, Alain; Moonen, Gustave; Maquet, Pierre

    2006-01-01

    Motor skill learning is a dynamic process that continues covertly after training has ended and eventually leads to delayed increments in performance. Current theories suggest that this off-line improvement takes time and appears only after several hours. Here we show an early transient and short-lived boost in performance, emerging as early as 5-30 min after training but no longer observed 4 h later. This early boost is predictive of the performance achieved 48 h later, suggesting its functional relevance for memory processes.

  4. Resolving boosted jets with XCone

    NASA Astrophysics Data System (ADS)

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm [1] smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies — dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs — that demonstrate the physics applications of XCone over a wide kinematic range.

  5. SemiBoost: boosting for semi-supervised learning.

    PubMed

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

    Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.

  6. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  7. Parallel computation

    NASA Astrophysics Data System (ADS)

    Huberman, Bernardo A.

    1989-11-01

    This paper reviews three different aspects of parallel computation which are useful for physics. The first part deals with special architectures for parallel computing (SIMD and MIMD machines) and their differences, with examples of their uses. The second section discusses the speedup that can be achieved in parallel computation and the constraints generated by the issues of communication and synchrony. The third part describes computation by distributed networks of powerful workstations without global controls and the issues involved in understanding their behavior.

  8. Representing Arbitrary Boosts for Undergraduates.

    ERIC Educational Resources Information Center

    Frahm, Charles P.

    1979-01-01

    Presented is a derivation for the matrix representation of an arbitrary boost, a Lorentz transformation without rotation, suitable for undergraduate students with modest backgrounds in mathematics and relativity. The derivation uses standard vector and matrix techniques along with the well-known form for a special Lorentz transformation. (BT)
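
    The matrix the abstract derives is the standard one; it can be written out directly as a sketch (units with c = 1, nonzero velocity assumed; the function names are illustrative):

```python
import math

def boost_matrix(bx, by, bz):
    """4x4 Lorentz boost (no rotation) for velocity beta = (bx, by, bz), c = 1.

    Time-time:    gamma
    Time-space:  -gamma * beta_i
    Space-space:  delta_ij + (gamma - 1) * beta_i * beta_j / beta^2
    Assumes 0 < |beta| < 1.
    """
    b2 = bx * bx + by * by + bz * bz
    g = 1.0 / math.sqrt(1.0 - b2)
    beta = (bx, by, bz)
    L = [[g, -g * bx, -g * by, -g * bz]]
    for i in range(3):
        row = [-g * beta[i]]
        for j in range(3):
            row.append((1.0 if i == j else 0.0)
                       + (g - 1.0) * beta[i] * beta[j] / b2)
        L.append(row)
    return L

def apply(L, x):
    """Apply a 4x4 matrix to a four-vector (t, x, y, z)."""
    return [sum(L[i][j] * x[j] for j in range(4)) for i in range(4)]
```

    A quick sanity check is that any such boost leaves the Minkowski interval t² − x² − y² − z² unchanged, which is the defining property of a Lorentz transformation.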

  9. High Efficient Universal Buck Boost Solar Array Regulator SAR Module

    NASA Astrophysics Data System (ADS)

    Kimmelmann, Stefan; Knorr, Wolfgang

    2014-08-01

    The highly efficient universal Buck Boost Solar Array Regulator (SAR) module concept is applicable for a wide range of input and output voltages. The single-point-failure-tolerant SAR module contains 3 power converters for the transfer of the SAR power to the battery dominated power bus. The converters operate in parallel in a 2-out-of-3 redundancy and are driven by two different controllers. The output power of one module can be adjusted up to 1 kW depending on the requirements. The maximum power point tracker (MPPT) is placed on a separate small printed circuit board and can be used if no external tracker signal is delivered. Depending on the mode and load conditions an efficiency of more than 97% is achievable. The stable control performance is achieved by implementing magnetic current sense detection. The sensed power coil current is used in Buck and Boost control mode.

  10. Interferometric resolution boosting for spectrographs

    SciTech Connect

    Erskine, D J; Edelstein, J

    2004-05-25

    Externally dispersed interferometry (EDI) is a technique for enhancing the performance of spectrographs for wide bandwidth high resolution spectroscopy and Doppler radial velocimetry. By placing a small angle-independent interferometer near the slit of a spectrograph, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducials creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to detectably low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry. Previous demonstrations of ~2.5× resolution boost used an interferometer having a single fixed delay. We report new data indicating ~6× Gaussian resolution boost (140,000 from a spectrograph with 25,000 native resolving power), taken by using multiple exposures at widely different interferometer delays.
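
    The heterodyning described above rests on the product-to-sum identity: multiplying a high-frequency feature by a sinusoidal fiducial produces a component at the low difference frequency (the moiré/beat term). A quick numerical check, with illustrative frequencies rather than real EDI parameters:

```python
import math

# cos(a) * cos(b) = 0.5 * (cos(a - b) + cos(a + b)):
# the (a - b) term is the low-frequency beat that survives the
# spectrograph blur; the (a + b) term is filtered out by it.
f1, f2 = 103.0, 100.0     # illustrative spatial frequencies
for k in range(200):
    t = k / 1000.0
    a, b = 2 * math.pi * f1 * t, 2 * math.pi * f2 * t
    product = math.cos(a) * math.cos(b)
    hetero = 0.5 * (math.cos(a - b) + math.cos(a + b))
    assert abs(product - hetero) < 1e-9
```

    Here the beat at f1 − f2 = 3 carries the same information as the feature at 103, but at a spatial frequency thirty-odd times lower.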

  11. Reweighting with Boosted Decision Trees

    NASA Astrophysics Data System (ADS)

    Rogozhnikov, Alex

    2016-10-01

    Machine learning tools are commonly used in modern high energy physics (HEP) experiments. Different models, such as boosted decision trees (BDT) and artificial neural networks (ANN), are widely used in analyses and even in the software triggers [1]. In most cases, these are classification models used to select the “signal” events from data. Monte Carlo simulated events typically take part in training of these models. While the results of the simulation are expected to be close to real data, in practical cases there is notable disagreement between simulated and observed data. In order to use available simulation in training, corrections must be introduced to generated data. One common approach is reweighting — assigning weights to the simulated events. We present a novel method of event reweighting based on boosted decision trees. The problem of checking the quality of reweighting step in analyses is also discussed.
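
    The "common approach" the abstract contrasts with, per-bin histogram reweighting, can be sketched as a one-variable toy (the function names are illustrative; the paper's BDT method generalizes this to many variables at once):

```python
def histogram_weights(mc, data, edges):
    """Per-event weights making the MC histogram of one variable match data.

    mc, data -- lists of values of some discriminating variable
    edges    -- ascending bin edges; all values assumed to fall inside
    Each simulated event gets weight n_data(bin) / n_mc(bin).
    """
    def bin_of(x):
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                return i
        raise ValueError("value outside binning")
    nb = len(edges) - 1
    n_mc = [0] * nb
    n_data = [0] * nb
    for x in mc:
        n_mc[bin_of(x)] += 1
    for x in data:
        n_data[bin_of(x)] += 1
    return [n_data[bin_of(x)] / n_mc[bin_of(x)] for x in mc]
```

    After weighting, the MC histogram of the binned variable reproduces the data histogram exactly; the BDT-based method achieves the analogous agreement in many dimensions without an explicit binning.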

  12. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A.

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  13. Study on reduction in electric field, charged voltage, ion current and ion density under HVDC transmission lines by parallel shield wires

    SciTech Connect

    Amano, Y.; Sunaga, Y.

    1989-04-01

    An important problem in the design and operation of HVDC transmission lines is to reduce electrical field effects such as ion flow electrification of objects, electric field, ion current and ion density at ground level in the vicinity of HVDC lines. Several models of shield wire were tested with the Shiobara HVDC test line. The models contain typical stranded wires that are generally used to reduce field effects at ground level, neutral conductors placed at lower parts of the DC line, and an "earth corona model" to cancel positive or negative ions intentionally by generating ions having opposite polarity to ions flowing into the wire. This report describes the experimental results of the effects of these shield wires and a method to predict shielding effects.

  14. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
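
    The closing estimate can be evaluated directly; a worked example with illustrative radii (not values from the paper):

```python
# Number of small pipes of radius r delivering the same oil flux as one
# pipe of radius R: N = (R / r) ** alpha, with alpha = 4 when the
# lubricating water flow is laminar and alpha = 19/7 when turbulent.
def pipes_needed(R, r, turbulent=False):
    alpha = 19.0 / 7.0 if turbulent else 4.0
    return (R / r) ** alpha

# e.g. replacing one 50 cm radius pipe by 5 cm radius pipes:
n_lam = pipes_needed(0.50, 0.05)             # laminar: 10**4 = 10000 pipes
n_turb = pipes_needed(0.50, 0.05, True)      # turbulent: 10**(19/7), ~518
```

    The turbulent exponent is much gentler: the same ten-fold reduction in radius needs roughly 500 parallel pipes instead of 10,000.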

  15. Where boosted significances come from

    NASA Astrophysics Data System (ADS)

    Plehn, Tilman; Schichtel, Peter; Wiegand, Daniel

    2014-03-01

    In an era of increasingly advanced experimental analysis techniques it is crucial to understand which phase space regions contribute to a signal extraction from backgrounds. Based on the Neyman-Pearson lemma, we compute the maximum significance for a signal extraction as an integral over phase space regions. We then study to what degree boosted Higgs strategies benefit ZH and tt¯H searches and which transverse momenta of the Higgs are most promising. We find that Higgs and top taggers are the appropriate tools, but would profit from a targeted optimization towards smaller transverse momenta. MadMax is available as an add-on to MadGraph 5.
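
    In the Gaussian (large-background) limit, the idea of significance as an integral over phase-space regions reduces to combining per-bin significances in quadrature. A simplified sketch of that limit, not the MadMax computation itself:

```python
import math

def combined_significance(signal, background):
    """Combine per-bin significances s_i / sqrt(b_i) in quadrature.

    In the Gaussian limit the best achievable significance over a
    binned phase space is sqrt(sum_i s_i**2 / b_i), so bins with large
    s/b dominate: the sense in which particular phase-space regions
    'carry' the signal extraction.
    """
    return math.sqrt(sum(s * s / b for s, b in zip(signal, background)))
```

    Two bins each contributing unit significance combine to sqrt(2), not 2, which is why concentrating signal in a few clean regions (e.g. via boosted-Higgs selections) pays off.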

  16. Electric rockets get a boost

    SciTech Connect

    Ashley, S.

    1995-12-01

    This article reports that xenon-ion thrusters are expected to replace conventional chemical rockets in many nonlaunch propulsion tasks, such as controlling satellite orbits and sending space probes on long exploratory missions. The space age dawned some four decades ago with the arrival of powerful chemical rockets that could propel vehicles fast enough to escape the grasp of earth's gravity. Today, chemical rocket engines still provide the only means to boost payloads into orbit and beyond. The less glamorous but equally important job of moving vessels around in space, however, may soon be assumed by a fundamentally different rocket engine technology that has long been in development: electric propulsion.

  17. Stochastic approximation boosting for incomplete data problems.

    PubMed

    Sexton, Joseph; Laake, Petter

    2009-12-01

    Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.

  18. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre-Andre; Matzner-Lober, Eric

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm, providing a new statistical interpretation of L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, and show that it depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
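
The bias-correction view of L2 Boosting can be sketched in a few lines: repeatedly smooth the current residuals and add the correction back, so that after m steps the fit is (I - (I - S)^m) y and convergence is governed by the spectrum of I - S. This is an illustrative sketch with made-up names, not the paper's code:

```python
import numpy as np

def l2_boost(S, y, steps):
    """L2 Boosting with a linear smoother matrix S: each step fits S to the residuals."""
    yhat = np.zeros_like(y, dtype=float)
    for _ in range(steps):
        yhat = yhat + S @ (y - yhat)
    return yhat

# A ridge-type smoother whose eigenvalues lie in (0, 1], so boosting converges;
# a smoother with eigenvalues of I - S outside [0, 1) would diverge instead.
n = 40
D = np.diff(np.eye(n), 2, axis=0)               # second-difference operator
S = np.linalg.inv(np.eye(n) + 10.0 * D.T @ D)
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 3, n)) + rng.normal(0, 0.1, n)
r10 = np.linalg.norm(y - l2_boost(S, y, 10))
r100 = np.linalg.norm(y - l2_boost(S, y, 100))
print(r10 > r100)   # residual (bias) shrinks as boosting proceeds
```

Without a stopping rule the procedure eventually interpolates the noise, which is exactly why the paper pairs the iteration with a data-driven stopping criterion.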

  19. Comparing Two Non-parallel Regression Lines with the Parametric Alternative to Analysis of Covariance Using SPSS-X or SAS--the Johnson-Neyman Technique.

    ERIC Educational Resources Information Center

    Karpman, Mitchell

    1986-01-01

    The Johnson-Neyman (JN) technique is a parametric alternative to analysis of covariance that permits nonparallel regression lines. This article presents computer programs for the JN technique using the transformational languages of SPSS-X and SAS. The programs are designed for two groups and one covariate. (Author/JAZ)
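
The JN idea, finding the covariate values at which two non-parallel regression lines differ significantly, can be approximated numerically. The sketch below uses a grid search rather than the exact quadratic solution, a fixed critical value of 2.0 in place of the proper t quantile, and invented names throughout; it is not the SPSS-X/SAS programs described in the article:

```python
import numpy as np

def ols(x, y):
    """Slope/intercept fit plus the coefficient covariance matrix."""
    X = np.column_stack([np.ones_like(x), x])
    beta, (rss,), *_ = np.linalg.lstsq(X, y, rcond=None)
    cov = rss / (len(x) - 2) * np.linalg.inv(X.T @ X)
    return beta, cov

def jn_region(x1, y1, x2, y2, grid, crit=2.0):
    """Grid points where the two groups' predicted values differ significantly."""
    b1, c1 = ols(x1, y1)
    b2, c2 = ols(x2, y2)
    Z = np.column_stack([np.ones_like(grid), grid])
    diff = Z @ (b1 - b2)
    se = np.sqrt(np.einsum('ij,jk,ik->i', Z, c1 + c2, Z))
    return grid[np.abs(diff / se) >= crit]

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
g1 = 1.0 + 0.5 * x + rng.normal(0, 0.5, 60)   # slopes differ, so the lines cross
g2 = 3.0 + 0.1 * x + rng.normal(0, 0.5, 60)
sig = jn_region(x, g1, x, g2, np.linspace(0, 10, 101))
print(sig.size > 0)   # some covariate region shows a significant group difference
```

Because the lines cross, the difference is significant on one or both sides of the crossing point but not near it, which is the regions-of-significance picture that standard ANCOVA (which assumes parallel slopes) cannot provide.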

  20. Parallel induction of tetrahydrobiopterin biosynthesis and indoleamine 2,3-dioxygenase activity in human cells and cell lines by interferon-gamma.

    PubMed Central

    Werner, E R; Werner-Felmayer, G; Fuchs, D; Hausen, A; Reibnegger, G; Wachter, H

    1989-01-01

    In all of eight tested human cells and cell lines with inducible indoleamine 2,3-dioxygenase (EC 1.13.11.17), tetrahydrobiopterin biosynthesis was activated by interferon-gamma. This was demonstrated by GTP cyclohydrolase I (EC 3.5.4.16) activities and intracellular neopterin and biopterin concentrations. Pteridine synthesis was influenced by extracellular tryptophan. In T24-cell extracts, submillimolar concentrations of tetrahydrobiopterin stimulated the indoleamine 2,3-dioxygenase reaction. PMID:2511835

  1. GPU-based parallel clustered differential pulse code modulation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Li, Wenze; Kong, Wanqiu

    2015-10-01

    Hyperspectral remote sensing technology is widely used in marine remote sensing, geological exploration, and atmospheric and environmental remote sensing. Owing to its rapid development, the resolution of hyperspectral imagery has increased greatly, and with it the data size; to reduce storage and transmission costs, lossless compression of hyperspectral images has become an important research topic. In recent years, many algorithms have been proposed to reduce the redundancy between different spectra. Among them, the most classical and extensible is the Clustered Differential Pulse Code Modulation (C-DPCM) algorithm. The algorithm has three stages: first, cluster all spectral lines and train linear predictors for each band; second, use these predictors to predict pixels and obtain the residual image by subtracting the predicted image from the original image; finally, encode the residual image. However, calculating the predictors is time-consuming. To improve the processing speed, we propose a parallel C-DPCM based on CUDA (Compute Unified Device Architecture) on the GPU. General-purpose computing on GPUs has developed rapidly in recent years, with GPU capacity growing through additional processing units and storage control units. CUDA is a parallel computing platform and programming model created by NVIDIA; it gives developers direct access to the virtual instruction set and memory of the parallel computational elements in GPUs. Our core idea is to compute the predictors in parallel. By adopting global memory, shared memory, and register memory as appropriate, we obtain a decent speedup.
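
The three C-DPCM stages can be illustrated with a toy NumPy sketch (a stand-in for the paper's CUDA code: the "clustering" is a crude brightness split rather than real k-means, and each predictor is a single least-squares slope predicting a band from the previous band):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.normal(size=(1000, 8)).cumsum(axis=1)   # 1000 spectra, 8 bands

# Stage 1: cluster the spectral vectors (crude 2-way split on mean brightness).
labels = (pixels.mean(axis=1) > 0).astype(int)

# Stage 2: per (cluster, band), fit a linear predictor from the previous band
# and keep only the prediction residuals.
residual = np.zeros_like(pixels)
residual[:, 0] = pixels[:, 0]                        # first band stored as-is
for c in (0, 1):
    m = labels == c
    for b in range(1, pixels.shape[1]):
        x, y = pixels[m, b - 1], pixels[m, b]
        a = (x * y).sum() / (x * x).sum()            # least-squares slope
        residual[m, b] = y - a * x

# Stage 3 would entropy-code the residuals; their reduced variance is the
# whole point of the prediction step.
print(residual[:, 1:].var() < pixels[:, 1:].var())
```

In the real algorithm the per-cluster, per-band least-squares fits are independent of one another, which is exactly why they map so naturally onto parallel GPU threads.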

  2. Series Connected Buck-Boost Regulator

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G. (Inventor)

    2006-01-01

    A Series Connected Buck-Boost Regulator (SCBBR) that switches only a fraction of the input power, resulting in relatively high efficiencies. The SCBBR has multiple operating modes including a buck, a boost, and a current limiting mode, so that an output voltage of the SCBBR ranges from below the source voltage to above the source voltage.
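
The buck and boost modes span output voltages below and above the source because of their ideal steady-state conversion ratios. The sketch below shows the textbook relations for ideal converters; it is an illustration of the operating modes, not the patented SCBBR circuit itself:

```python
# Ideal continuous-conduction-mode conversion ratios:
#   buck:  Vout = D * Vin        (output below the source)
#   boost: Vout = Vin / (1 - D)  (output above the source)
# where D is the switch duty cycle, 0 <= D < 1.
def buck(vin: float, duty: float) -> float:
    return duty * vin

def boost(vin: float, duty: float) -> float:
    return vin / (1.0 - duty)

vin = 28.0                  # e.g. a 28 V spacecraft bus (illustrative value)
print(buck(vin, 0.5))       # 14.0 V, below the source
print(boost(vin, 0.5))      # 56.0 V, above the source
```

Sweeping the duty cycle therefore moves the regulated output continuously from below the source voltage to above it, which is the behavior the abstract attributes to the SCBBR's multiple operating modes.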

  3. Boost-phase discrimination research

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen R.; Feiereisen, William J.

    1993-01-01

    The final report describes the combined work of the Computational Chemistry and Aerothermodynamics branches within the Thermosciences Division at NASA Ames Research Center directed at understanding the signatures of shock-heated air. Considerable progress was made in determining accurate transition probabilities for the important band systems of NO that account for much of the emission in the ultraviolet region. Research carried out under this project showed that in order to reproduce the observed radiation from the bow shock region of missiles in their boost phase it is necessary to include the Burnett terms in the constitutive equations, account for the non-Boltzmann energy distribution, correctly model the NO formation and rotational excitation processes, and use accurate transition probabilities for the NO band systems. This work resulted in significant improvements in the computer code NEQAIR, which models both the radiation and fluid dynamics in the shock region.

  4. Boosted KZ and LLL Algorithms

    NASA Astrophysics Data System (ADS)

    Lyu, Shanxiang; Ling, Cong

    2017-09-01

    There exist two issues among popular lattice reduction (LR) algorithms that should cause concern. The first is that the Korkine-Zolotarev (KZ) and Lenstra-Lenstra-Lovász (LLL) algorithms may increase the lengths of basis vectors. The other is that KZ reduction suffers much worse performance than Minkowski reduction in terms of providing short basis vectors, despite its superior theoretical upper bounds. To address these limitations, we improve the size-reduction steps in KZ and LLL to obtain two new efficient algorithms, referred to as boosted KZ and boosted LLL, for solving the shortest basis problem (SBP) with exponential and polynomial complexity, respectively. Both offer better practical performance than their classic counterparts, and the performance bounds for KZ are also improved. We apply them to the design of integer-forcing (IF) linear receivers for multi-input multi-output (MIMO) communications. Our simulations confirm their rate and complexity advantages.

  5. Boosting human learning by hypnosis.

    PubMed

    Nemeth, Dezso; Janacsek, Karolina; Polner, Bertalan; Kovacs, Zoltan Ambrus

    2013-04-01

    Human learning and memory depend on multiple cognitive systems related to dissociable brain structures. These systems interact not only in cooperative but also sometimes competitive ways in optimizing performance. Previous studies showed that manipulations reducing the engagement of frontal lobe-mediated explicit attentional processes could lead to improved performance in striatum-related procedural learning. In our study, hypnosis was used as a tool to reduce the competition between these 2 systems. We compared learning in hypnosis and in the alert state and found that hypnosis boosted striatum-dependent sequence learning. Since frontal lobe-dependent processes are primarily affected by hypnosis, this finding could be attributed to the disruption of the explicit attentional processes. Our result sheds light not only on the competitive nature of brain systems in cognitive processes but also could have important implications for training and rehabilitation programs, especially for developing new methods to improve human learning and memory performance.

  6. Rapid bioanalysis of vancomycin in serum and urine by high-performance liquid chromatography tandem mass spectrometry using on-line sample extraction and parallel analytical columns.

    PubMed

    Cass, R T; Villa, J S; Karr, D E; Schmidt, D E

    2001-01-01

    A novel high-performance liquid chromatography tandem mass spectrometry (LC/MS/MS) method is described for the determination of vancomycin in serum and urine. After the addition of internal standard (teicoplanin), serum and urine samples were directly injected onto an HPLC system consisting of an extraction column and dual analytical columns. The columns are plumbed through two switching valves. A six-port valve directs extraction column effluent either to waste or to an analytical column. A ten-port valve simultaneously permits equilibration of one analytical column while the other is used for sample analysis. Thus, off-line analytical column equilibration does not require mass spectrometer time, freeing the detector for increased sample throughput. The on-line sample extraction step takes 15 seconds, followed by gradient chromatography taking another 90 seconds. With minimal sample pretreatment, the method is both simple and fast. This system has been used to successfully develop a validated positive-ion electrospray bioanalytical method for the quantitation of vancomycin. Detection of vancomycin was accurate and precise, with a limit of detection of 1 ng/mL in serum and urine. The calibration curves for vancomycin in rat, dog and primate were linear in a concentration range of 0.001-10 microg/mL for serum and urine. This method has been successfully applied to determine the concentration of vancomycin in rat, dog and primate serum and urine samples from pharmacokinetic and urinary excretion studies.

  7. Speeding up Boosting decision trees training

    NASA Astrophysics Data System (ADS)

    Zheng, Chao; Wei, Zhenzhong

    2015-10-01

    Boosted decision trees are fast at test time, but their training is too slow to meet the requirements of applications with real-time learning. To overcome this drawback, we propose a fast decision-tree training method that prunes ineffective features in advance, and based on it we design a fast Boosting decision-tree training algorithm. First, we analyze the structure of each decision-tree node and prove, by derivation, that the classification error of each node is bounded. Then, by using this error bound to prune ineffective features at an early stage, we greatly accelerate decision-tree training without affecting the training results at all. Finally, the accelerated training method is integrated into the general Boosting process, forming a fast Boosting decision-tree training algorithm. This algorithm is not a new variant of Boosting; on the contrary, it should be used in conjunction with existing Boosting algorithms to achieve further training acceleration. To test its speedup alone and in combination with other acceleration algorithms, the original AdaBoost and two typical acceleration algorithms, LazyBoost and StochasticBoost, were each combined with this algorithm into fast versions, and their classification performance was tested on the Lsis face database of 12788 images. Experimental results reveal that the fast algorithm achieves more than double the training speed without affecting the trained classifier, and can be combined with other acceleration algorithms. Key words: Boosting algorithm, decision trees, classifier training, preliminary classification error, face detection
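
For context, the general Boosting process the paper accelerates can be sketched as plain AdaBoost over decision stumps (one-split trees). This is a generic textbook sketch; the paper's error bound and feature pruning are not implemented here:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with decision stumps; y must be labeled in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # per-example weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):             # exhaustive stump search: the slow part
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)  # up-weight the misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score > 0, 1, -1)

# Tiny sanity check on separable data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y, rounds=5)
print((predict(model, X) == y).all())
```

The inner loop over every (feature, threshold) pair is exactly where the paper's bound pays off: features whose best achievable node error provably cannot win are skipped before the full scan.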

  8. Orthodontics Align Crooked Teeth and Boost Self-Esteem

    MedlinePlus


  9. Riemann curvature of a boosted spacetime geometry

    NASA Astrophysics Data System (ADS)

    Battista, Emmanuele; Esposito, Giampiero; Scudellaro, Paolo; Tramontano, Francesco

    2016-10-01

    The ultrarelativistic boosting procedure had been applied in the literature to map the metric of Schwarzschild-de Sitter spacetime into a metric describing de Sitter spacetime plus a shock-wave singularity located on a null hypersurface. This paper evaluates the Riemann curvature tensor of the boosted Schwarzschild-de Sitter metric by means of numerical calculations, which make it possible to reach the ultrarelativistic regime gradually by letting the boost velocity approach the speed of light. Thus, for the first time in the literature, the singular limit of curvature, through Dirac’s δ distribution and its derivatives, is numerically evaluated for this class of spacetimes. Moreover, the analysis of the Kretschmann invariant and the geodesic equation shows that the spacetime possesses a “scalar curvature singularity” within a 3-sphere and it is possible to define what we here call a “boosted horizon”, a sort of elastic wall where all particles are surprisingly pushed away, as numerical analysis demonstrates. This seems to suggest that such “boosted geometries” are ruled by a sort of “antigravity effect” since all geodesics seem to refuse to enter the “boosted horizon” and are “reflected” by it, even though their initial conditions are aimed at driving the particles toward the “boosted horizon” itself. Eventually, the equivalence with the coordinate shift method is invoked in order to demonstrate that all δ² terms appearing in the Riemann curvature tensor give a vanishing contribution in the distributional sense.

  10. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence

    PubMed Central

    Vadakkan, Kunjumon I.

    2011-01-01

    The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation – namely, the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation, features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of the inter-postsynaptic functional LINKs to meet the expectations of Minsky’s K-lines, basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system. PMID:21845180

  11. Processing Semblances Induced through Inter-Postsynaptic Functional LINKs, Presumed Biological Parallels of K-Lines Proposed for Building Artificial Intelligence.

    PubMed

    Vadakkan, Kunjumon I

    2011-01-01

    The internal sensation of memory, which is available only to the owner of an individual nervous system, is difficult to analyze for its basic elements of operation. We hypothesize that associative learning induces the formation of a functional LINK between postsynapses. During memory retrieval, the activation of either postsynapse re-activates the functional LINK, evoking a semblance of sensory activity arriving at its opposite postsynapse, the nature of which defines the basic unit of internal sensation - namely, the semblion. In neuronal networks that undergo continuous oscillatory activity at certain levels of their organization, re-activation of functional LINKs is expected to induce semblions, enabling the system to continuously learn, self-organize, and demonstrate instantiation, features that can be utilized for developing artificial intelligence (AI). This paper also explains the suitability of the inter-postsynaptic functional LINKs to meet the expectations of Minsky's K-lines, basic elements of a memory theory generated to develop AI, and methods to replicate semblances outside the nervous system.

  12. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    SciTech Connect

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (the High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems, although slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  13. Boosting Wigner's nj-symbols

    NASA Astrophysics Data System (ADS)

    Speziale, Simone

    2017-03-01

    We study the SL(2,ℂ) Clebsch-Gordan coefficients appearing in the Lorentzian EPRL spin foam amplitudes for loop quantum gravity. We show how the amplitudes decompose into SU(2) nj-symbols at the vertices and integrals over boosts at the edges. The integrals define edge amplitudes that can be evaluated analytically, using and adapting results in the literature, leading to a pure state sum model formulation. This procedure introduces virtual representations which, in a manner reminiscent of virtual momenta in Feynman amplitudes, are off-shell of the simplicity constraints present in the theory, but with integrands that peak at the on-shell values. We point out some properties of the edge amplitudes which are helpful for numerical and analytical evaluations of spin foam amplitudes, and suggest among other things a simpler model useful for calculations of certain lowest-order amplitudes. As an application, we estimate the large-spin scaling behaviour of the simpler model, on a closed foam with all 4-valent edges and Euler characteristic χ, to be N^(χ - 5E + V/2). The paper contains a review and an extension of the results on SL(2,ℂ) Clebsch-Gordan coefficients among unitary representations of the principal series that can be useful beyond their application to quantum gravity considered here.

  14. Nonlinear program based optimization of boost and buck-boost converter designs

    NASA Technical Reports Server (NTRS)

    Rahman, S.; Lee, F. C.

    1981-01-01

    The facility of an Augmented Lagrangian (ALAG) multiplier based nonlinear programming technique is demonstrated for minimum-weight design optimizations of boost and buck-boost power converters. Certain important features of ALAG are presented in the framework of a comprehensive design example for buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents information such as the weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  15. Nonlinear program based optimization of boost and buck-boost converter designs

    NASA Technical Reports Server (NTRS)

    Rahman, S.; Lee, F. C.

    1981-01-01

    The facility of an Augmented Lagrangian (ALAG) multiplier based nonlinear programming technique is demonstrated for minimum-weight design optimizations of boost and buck-boost power converters. Certain important features of ALAG are presented in the framework of a comprehensive design example for buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents information such as the weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  16. Boosting Manufacturing through Modular Chemical Process Intensification

    ScienceCinema

    None

    2017-01-06

    Manufacturing USA's Rapid Advancement in Process Intensification Deployment Institute will focus on developing breakthrough technologies to boost domestic energy productivity and energy efficiency by 20 percent in five years through manufacturing processes.

  17. Boosting Manufacturing through Modular Chemical Process Intensification

    SciTech Connect

    2016-12-09

    Manufacturing USA's Rapid Advancement in Process Intensification Deployment Institute will focus on developing breakthrough technologies to boost domestic energy productivity and energy efficiency by 20 percent in five years through manufacturing processes.

  18. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-01-01

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  19. Relativistic projection and boost of solitons

    SciTech Connect

    Wilets, L.

    1991-12-31

    This report discusses the following topics on the relativistic projection and boost of solitons: The center of mass problem; momentum eigenstates; variation after projection; and the nucleon as a composite. (LSP).

  20. Capillary zone electrophoresis-electrospray ionization-tandem mass spectrometry for quantitative parallel reaction monitoring of peptide abundance and single-shot proteomic analysis of a human cell line

    PubMed Central

    Sun, Liangliang; Zhu, Guijie; Mou, Si; Zhao, Yimeng; Champion, Matthew M.; Dovichi, Norman J.

    2014-01-01

    We coupled capillary zone electrophoresis (CZE) with an ultrasensitive electrokinetically pumped nanospray ionization source for tandem mass spectrometry (MS/MS) analysis of complex proteomes. We first used the system for the parallel reaction monitoring (PRM) analysis of angiotensin II spiked into 0.45 mg/mL of bovine serum albumin (BSA) digest. A calibration curve was generated between the loading amount of angiotensin II and the intensity of angiotensin II fragment ions. CZE-PRM generated a linear calibration curve across over 4.5 orders of magnitude of dynamic range, corresponding to angiotensin II loading amounts from 2 amol to 150 fmol. The relative standard deviations (RSDs) of migration time were <4%, and the RSDs of fragment ion intensity were ~20% or less, except at the 150 fmol loading amount (~36% RSD). We further applied the system for the first bottom-up proteomic analysis of a human cell line using CZE-MS/MS. We generated 283 protein identifications from a 1 hour long, single-shot CZE-MS/MS analysis of the MCF7 breast cancer cell line digest, corresponding to ~80 ng loading amount. The MCF7 digest was fractionated using a C18 solid phase extraction column; single-shot analysis of a single fraction resulted in 468 protein identifications, which is by far the largest number of protein identifications reported for a mammalian proteomic sample using CZE. PMID:25082526

  1. Centaur liquid oxygen boost pump vibration test

    NASA Technical Reports Server (NTRS)

    Tang, H. M.

    1975-01-01

    The Centaur LOX boost pump was subjected to both the simulated Titan Centaur proof flight and confidence demonstration vibration test levels. For each test level, both sinusoidal and random vibration tests were conducted along each of the three orthogonal axes of the pump and turbine assembly. In addition to these tests, low frequency longitudinal vibration tests for both levels were conducted. All tests were successfully completed without damage to the boost pump.

  2. Vasopressin Boosts Placebo Analgesic Effects in Women: A Randomized Trial

    PubMed Central

    Colloca, Luana; Pine, Daniel S.; Ernst, Monique; Miller, Franklin G.; Grillon, Christian

    2015-01-01

    Background Social cues and interpersonal interactions strongly contribute to evoke placebo effects that are pervasive in medicine and depend upon the activation of endogenous modulatory systems. Here we explore the possibility to boost placebo effects by targeting pharmacologically the vasopressin system, characterized by a sexually dimorphic response and involved in the regulation of human and nonhuman social behaviors. Methods We enrolled 109 healthy participants and studied the effects of intranasal administration of Avp1a and Avp1b arginine vasopressin receptor agonists against 1) no-treatment, 2) oxytocin, and 3) saline, in a randomized, placebo-controlled, double-blind, parallel design trial using a well-established model of placebo analgesia while controlling for sex differences. Results Vasopressin agonists boosted placebo effects in women but had no effect in men. The effects of vasopressin on expectancy-induced analgesia were significantly larger than those observed in the no-treatment (p<0.004), oxytocin (p<0.001) and saline (p<0.015) groups. Moreover, women with lower dispositional anxiety and cortisol levels showed the largest vasopressin-induced modulation of placebo effects, suggesting a moderating interplay between pre-existing psychological factors and cortisol changes. Conclusions This is the first study that demonstrates that arginine vasopressin boosts placebo effects and that the effect of vasopressin depends upon a significant sex by treatment interaction. These findings are novel and might open up new avenues for clinically relevant research due to the therapeutic potentials of vasopressin and oxytocin as well as the possibility to systematically control for influences of placebo responses in clinical trials. PMID:26321018

  3. Boosted Jets at the LHC

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew

    2015-04-01

    Jets are collimated streams of high-energy particles ubiquitous at any particle collider experiment and serve as proxy for the production of elementary particles at short distances. As the Large Hadron Collider at CERN continues to extend its reach to ever higher energies and luminosities, an increasingly important aspect of any particle physics analysis is the study and identification of jets, electroweak bosons, and top quarks with large Lorentz boosts. In addition to providing a unique insight into potential new physics at the tera-electron volt energy scale, high energy jets are a sensitive probe of emergent phenomena within the Standard Model of particle physics and can teach us an enormous amount about quantum chromodynamics itself. Jet physics is also invaluable for lower-level experimental issues including triggering and background reduction. It is especially important for the removal of pile-up, which is radiation produced by secondary proton collisions that contaminates every hard proton collision event in the ATLAS and CMS experiments at the Large Hadron Collider. In this talk, I will review the myriad ways that jets and jet physics are being exploited at the Large Hadron Collider. This will include a historical discussion of jet algorithms and the requirements that these algorithms must satisfy to be well-defined theoretical objects. I will review how jets are used in searches for new physics and ways in which the substructure of jets is being utilized for discriminating backgrounds from both Standard Model and potential new physics signals. Finally, I will discuss how jets are broadening our knowledge of quantum chromodynamics and how particular measurements performed on jets manifest the universal dynamics of weakly-coupled conformal field theories.

  4. Aerodynamics of a turbojet-boosted launch vehicle concept

    NASA Technical Reports Server (NTRS)

    Small, W. J.; Riebe, G. D.; Taylor, A. H.

    1980-01-01

    Results from analytical and experimental studies of the aerodynamic characteristics of a turbojet-boosted launch vehicle are presented. The success of this launch vehicle concept depends upon several novel applications of aerodynamic technology, particularly in the area of takeoff lift and minimum transonic drag requirements. The take-off mode stresses leading edge vortex lift generated in parallel by a complex arrangement of low aspect ratio booster and orbiter wings. Wind-tunnel tests on a representative model showed that this low-speed lift is sensitive to geometric arrangements of the booster-orbiter combination and is not predictable by standard analytic techniques. Transonic drag was also experimentally observed to be very sensitive to booster location; however, these drag levels were accurately predicted by standard farfield wave drag theory.

  5. Features in Continuous Parallel Coordinates.

    PubMed

    Lehmann, Dirk J; Theisel, Holger

    2011-12-01

    Continuous Parallel Coordinates (CPC) are a contemporary visualization technique for combining several scalar fields given over a common domain. They facilitate a continuous view for parallel coordinates by considering a smooth scalar field instead of a finite number of straight lines. We show that there are feature curves in CPC which appear as the dominant structures of a CPC. We present methods to extract and classify them and demonstrate their usefulness in enhancing the visualization of CPCs. In particular, we show that these feature curves are related to discontinuities in Continuous Scatterplots (CSP). We show this by exploiting a curve-curve duality between parallel and Cartesian coordinates, which is a generalization of the well-known point-line duality, and we illustrate the theoretical considerations with examples. Concluding, we discuss relations and aspects of the CPC/CSP features concerning data analysis.

  6. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.

  7. Tracking down hyper-boosted top quarks

    SciTech Connect

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.
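The core of the tagging strategy, jet radii that shrink with the transverse boost, can be illustrated with a toy relation (our assumption for illustration, not the paper's exact definition): R ~ 2m/pT, the characteristic opening angle of a two-body decay of a particle of mass m at transverse momentum pT.

```python
# Illustrative boost-dependent jet radius; the constant and the clamp are
# assumptions, not taken from the paper.

M_TOP = 173.0  # GeV, approximate top-quark mass

def jet_radius(pt_gev, mass_gev=M_TOP):
    """Jet radius scaling inversely with transverse boost, clamped to a
    minimum size representing a single calorimeter cell."""
    return max(2.0 * mass_gev / pt_gev, 0.01)

# At LHC-like boosts the decay fits in a conventional fat jet; at 20 TeV
# the decay products collapse toward the scale of individual cells,
# which is why track-based observables become essential.
assert abs(jet_radius(500.0) - 0.692) < 1e-3
assert jet_radius(20000.0) < 0.02
```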

  8. Tracking down hyper-boosted top quarks

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-01

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  9. Tracking down hyper-boosted top quarks

    DOE PAGES

    Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele

    2015-06-05

    The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.

  10. Long-term outcome and toxicity of hypofractionated stereotactic body radiotherapy as a boost treatment for head and neck cancer: the importance of boost volume assessment

    PubMed Central

    2012-01-01

    Background The aim of this study was to report the long-term clinical outcomes of patients who received stereotactic body radiotherapy (SBRT) as a boost treatment for head and neck cancer. Materials and methods Between March 2004 and July 2007, 26 patients with locally advanced, medically inoperable head and neck cancer or gross residual tumors in close proximity to critical structures following head and neck surgery were treated with SBRT as a boost treatment. All patients were initially treated with standard external beam radiotherapy (EBRT). SBRT boost was prescribed to the median 80% isodose line with a median dose of 21 (range 10–25) Gy in 2–5 (median, 5) fractions. Results The median follow-up after SBRT was 56 (range 27.6 − 80.2) months. The distribution of treatment sites in 26 patients was as follows: the nasopharynx, including the base of the skull in 10 (38.5%); nasal cavity or paranasal sinus in 8 (30.8%); periorbit in 4 (15.4%); tongue in 3 (11.5%); and oropharyngeal wall in 1 (3.8%). The median EBRT dose before SBRT was 50.4 Gy (range 39.6 − 70.2). The major response rate was 100% with 21 (80.8%) complete responses (CR). Severe (grade ≥ 3) late toxicities developed in 9 (34.6%) patients, and SBRT boost volume was a significant parameter predicting severe late complication. Conclusions The present study demonstrates that a modern SBRT boost is a highly efficient tool for local tumor control. However, we observed a high frequency of serious late complications. More optimized dose fractionation schedule and patient selection are required to achieve excellent local control without significant late morbidities in head and neck boost treatment. PMID:22691266

  11. Centrifugal compressor design for electrically assisted boost

    NASA Astrophysics Data System (ADS)

    Y Yang, M.; Martinez-Botas, R. F.; Zhuge, W. L.; Qureshi, U.; Richards, B.

    2013-12-01

    Electrically assisted boost is a prominent method for solving turbocharger transient lag, and it allows the compressor to remain at an optimized operating condition because it is decoupled from the turbine. A centrifugal compressor for gasoline engine boosting is usually operated at a rotational speed beyond the capability of electric motors on the market. In this paper a centrifugal compressor with a rotational speed of 120k RPM and a pressure ratio of 2.0 is specially developed for electrically assisted boost. The compressor, including the impeller, vaneless diffuser and the volute, is designed by the meanline method followed by detailed 3D design. CFD is then employed to predict and analyse the performance of the designed compressor. The results show that the pressure ratio and efficiency at the design point are 2.07 and 78%, respectively.

  12. Boost breaking in the EFT of inflation

    NASA Astrophysics Data System (ADS)

    Delacrétaz, Luca V.; Noumi, Toshifumi; Senatore, Leonardo

    2017-02-01

    If time-translations are spontaneously broken, so are boosts. This symmetry breaking pattern can be non-linearly realized by either just the Goldstone boson of time translations, or by four Goldstone bosons associated with time translations and boosts. In this paper we extend the Effective Field Theory of Multifield Inflation to consider the case in which the additional Goldstone bosons associated with boosts are light and coupled to the Goldstone boson of time translations. The symmetry breaking pattern forces a coupling to curvature so that the mass of the additional Goldstone bosons is predicted to be equal to √2H in the vast majority of the parameter space where they are light. This pattern therefore offers a natural way of generating self-interacting particles with Hubble mass during inflation. After constructing the general effective Lagrangian, we study how these particles mix and interact with the curvature fluctuations, generating potentially detectable non-Gaussian signals.

  13. Behavior of Werner states under relativistic boosts

    NASA Astrophysics Data System (ADS)

    Palge, Veiko; Dunningham, Jacob

    2015-12-01

    We study the structure of maps that Lorentz boosts induce on the spin degree of freedom of a system consisting of two massive spin- 1 / 2 particles. We consider the case where the spin state is described by the Werner state and the momenta are discrete. Transformations on the spins are systematically investigated in various boost scenarios by calculating the orbit and concurrence of the bipartite spin state with different kinds of product and entangled momenta. We confirm the general conclusion that Lorentz boosts cause non-trivial behavior of bipartite spin entanglement. Visualization of the evolution of the spin state is shown to be valuable in explaining the pattern of concurrence. The idealized model provides a basis of explanation in terms of which phenomena in systems involving continuous momenta can be understood.
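As a non-relativistic baseline for the quantities tracked in the paper (this check is ours, not the paper's computation): the concurrence of a two-qubit Werner state rho = p|psi-><psi-| + (1-p)I/4 has the closed form max(0, (3p-1)/2), which can be verified against Wootters' spin-flip formula.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                              # spin-flip operator
    r = rho @ yy @ rho.conj() @ yy                    # rho * rho~
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def werner(p):
    """Werner state: singlet fraction p mixed with the maximally mixed state."""
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

for p in (0.2, 0.5, 0.8, 1.0):
    expected = max(0.0, (3 * p - 1) / 2)              # closed form
    assert abs(concurrence(werner(p)) - expected) < 1e-9
```

A Lorentz boost acts on the momentum-dependent spin state and generally moves the computed concurrence away from this baseline, which is the non-trivial behavior the paper maps out.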

  14. Boosting Access to Government Rocket Science

    DTIC Science & Technology

    2014-10-01

    From Defense AT&L, September–October 2014, by John F. Rice: the article describes work with MSFC, through an SAA signed in 2012, using Marshall’s expertise and resources to perform wind tunnel testing on various ... (remainder of the indexed excerpt is report-form boilerplate; dates covered 2014).

  15. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concept detailing with parallel processing.

  16. Modeling self-priming circuits for dielectric elastomer generators towards optimum voltage boost

    NASA Astrophysics Data System (ADS)

    Zanini, Plinio; Rossiter, Jonathan; Homer, Martin

    2016-04-01

    One of the main challenges for the practical implementation of dielectric elastomer generators (DEGs) is supplying high voltages. To address this issue, systems using self-priming circuits (SPCs) — which exploit the DEG voltage swing to increase its supplied voltage — have been used with success. A self-priming circuit consists of a charge pump implemented in parallel with the DEG circuit. At each energy harvesting cycle, the DEG receives a low voltage input and, through an almost constant charge cycle, generates a high voltage output. SPCs receive the high voltage output at the end of the energy harvesting cycle and supply it back as input for the following cycle, using the DEG as a voltage multiplier element. Although rules for designing self-priming circuits for dielectric elastomer generators exist, they have been obtained from intuitive observation of simulation results and lack a solid theoretical foundation. Here we report the development of a mathematical model to predict voltage boost using self-priming circuits. The voltage on the DEG attached to the SPC is described as a function of its initial conditions, circuit parameters/layout, and the DEG capacitance. Our mathematical model has been validated on an existing DEG implementation from the literature, and successfully predicts the voltage boost for each cycle. Furthermore, it allows us to understand the conditions for the boost to exist, and obtain the design rules that maximize the voltage boost.
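The cycle-by-cycle voltage growth described above can be captured by a toy recurrence (an assumption for illustration; the paper derives the exact map from circuit parameters and layout): at roughly constant charge the DEG multiplies its voltage by the capacitance swing Cmax/Cmin, and the self-priming circuit feeds a fraction eta of the output back as the next cycle's input, so the per-cycle gain is g = eta * Cmax/Cmin and the voltage boost exists iff g > 1.

```python
# Toy model of self-priming voltage boost; eta and the constant-charge
# assumption are ours, not the paper's validated model.

def voltage_after(n_cycles, v0, c_max, c_min, eta):
    """Voltage after n harvesting cycles under the per-cycle gain
    g = eta * c_max / c_min."""
    g = eta * c_max / c_min
    return v0 * g ** n_cycles

v = voltage_after(10, v0=100.0, c_max=2.0, c_min=1.0, eta=0.75)
assert abs(v - 100.0 * 1.5 ** 10) < 1e-6     # g = 1.5 > 1: voltage grows
v_flat = voltage_after(10, 100.0, 2.0, 1.0, eta=0.5)
assert abs(v_flat - 100.0) < 1e-9            # g = 1: no boost
```

The condition g > 1 mirrors the paper's finding that the boost only exists for certain combinations of circuit parameters and capacitance swing.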

  17. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg.
    Catalogue identifier: AEOM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License version 3
    No. of lines in distributed program, including test data, etc.: 54975
    No. of bytes in distributed program, including test data, etc.: 974015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, cluster, supercomputer
    Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes
    RAM: 10 MB-10 GB per process
    Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20
    External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]
    Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and

  18. The Attentional Boost Effect with Verbal Materials

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Spataro, Pietro; Picklesimer, Milton

    2014-01-01

    Study stimuli presented at the same time as unrelated targets in a detection task are better remembered than stimuli presented with distractors. This attentional boost effect (ABE) has been found with pictorial (Swallow & Jiang, 2010) and more recently verbal materials (Spataro, Mulligan, & Rossi-Arnaud, 2013). The present experiments…

  19. Cleanouts boost Devonian shale gas flow

    SciTech Connect

    Not Available

    1991-02-04

    Cleaning shale debris from the well bores is an effective way to boost flow rates from old open hole Devonian shale gas wells, research on six West Virginia wells begun in 1985 has shown. Officials involved with the study say the Appalachian basin could see 20 year recoverable gas reserves hiked by 315 bcf if the process is used on a wide scale.

  20. Schools Enlisting Defense Industry to Boost STEM

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Defense contractors Northrop Grumman Corp. and Lockheed Martin Corp. are joining forces in an innovative partnership to develop high-tech simulations to boost STEM--or science, technology, engineering, and mathematics--education in the Baltimore County schools. The Baltimore County partnership includes the local operations of two major military…

  1. Energy Boost. Q & A with Steve Kiesner.

    ERIC Educational Resources Information Center

    Schneider, Jay W.

    2002-01-01

    Presents an interview with the director of national accounts for the Edison Electric Institute in Washington, DC about the association, its booklet on energy conservation within education facilities, and ways in which educational facilities can reduce costs by boosting energy conservation. (EV)

  2. The Attentional Boost Effect and Context Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro

    2016-01-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…

  3. Niacin to Boost Your HDL "Good" Cholesterol

    MedlinePlus

    Niacin can boost 'good' cholesterol. Niacin is a B vitamin that may raise your HDL ("good") cholesterol. But side effects might outweigh benefits for most ... been used to increase high-density lipoprotein (HDL) cholesterol — the "good" cholesterol that helps remove low-density ...

  4. The Attentional Boost Effect with Verbal Materials

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Spataro, Pietro; Picklesimer, Milton

    2014-01-01

    Study stimuli presented at the same time as unrelated targets in a detection task are better remembered than stimuli presented with distractors. This attentional boost effect (ABE) has been found with pictorial (Swallow & Jiang, 2010) and more recently verbal materials (Spataro, Mulligan, & Rossi-Arnaud, 2013). The present experiments…

  5. Boosting fire drill participation in hospital settings.

    PubMed

    Prosper, Darryl

    2015-01-01

    In a health system with over 100 sites in a geographically dispersed region, boosting fire drill participation to meet government requirements, according to the author, is a constant effort, both to achieve and to maintain. In this article, he describes a comprehensive approach that entails engagement of executive, site committees and local fire authorities, as well as comprehensive training and awareness campaigns.

  6. The Attentional Boost Effect and Context Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro

    2016-01-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…

  7. Concomitant GRID boost for Gamma Knife radiosurgery

    SciTech Connect

    Ma Lijun; Kwok, Young; Chin, Lawrence S.; Simard, J. Marc; Regine, William F.

    2005-11-15

    We developed an integrated GRID boost technique for Gamma Knife radiosurgery. The technique generates an array of high dose spots within the target volume via a grid of 4-mm shots. These high dose areas were placed over a conventional Gamma Knife plan where a peripheral dose covers the full target volume. The beam weights of the 4-mm shots were optimized iteratively to maximize the integral dose inside the target volume. To investigate the target volume coverage and the dose to the adjacent normal brain tissue for the technique, we compared the GRID boosted treatment plans with conventional Gamma Knife treatment plans using physical and biological indices such as dose-volume histogram (DVH), DVH-derived indices, equivalent uniform dose (EUD), tumor control probabilities (TCP), and normal tissue complication probabilities (NTCP). We found significant increase in the target volume indices such as mean dose (5%-34%; average 14%), TCP (4%-45%; average 21%), and EUD (2%-22%; average 11%) for the GRID boost technique. No significant change in the peripheral dose coverage for the target volume was found per RTOG protocol. In addition, the EUD and the NTCP for the normal brain adjacent to the target (i.e., the near region) were decreased for the GRID boost technique. In conclusion, we demonstrated a new technique for Gamma Knife radiosurgery that can escalate the dose to the target while sparing the adjacent normal brain tissue.
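The plan-comparison index EUD used above follows the standard generalized formula (the dose-volume values below are illustrative, not from the study): EUD = (sum_i v_i * D_i^a)^(1/a), where v_i is the fractional volume receiving dose D_i and the exponent a is tissue-specific.

```python
# Sketch of the generalized equivalent uniform dose (gEUD); all numbers
# here are made-up illustrations, not data from this paper.

def eud(doses_gy, volume_fractions, a):
    """Generalized EUD from a differential dose-volume histogram."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(doses_gy, volume_fractions)) ** (1 / a)

doses = [18.0, 21.0, 24.0]   # Gy
vols = [0.2, 0.5, 0.3]       # fractional volumes

# a = 1 reduces to the mean dose; large negative a approaches the minimum
# dose (appropriate for tumors), large positive a the maximum dose
# (appropriate for serial normal structures).
mean_dose = sum(v * d for d, v in zip(doses, vols))
assert abs(eud(doses, vols, a=1) - mean_dose) < 1e-9
assert eud(doses, vols, a=-20) < eud(doses, vols, a=1) < eud(doses, vols, a=20)
```

Raising the mean dose inside the target via the 4-mm GRID shots raises tumor EUD (and hence TCP) even when the peripheral prescription isodose is unchanged, which is the effect the study quantifies.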

  8. Schools Enlisting Defense Industry to Boost STEM

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Defense contractors Northrop Grumman Corp. and Lockheed Martin Corp. are joining forces in an innovative partnership to develop high-tech simulations to boost STEM--or science, technology, engineering, and mathematics--education in the Baltimore County schools. The Baltimore County partnership includes the local operations of two major military…

  9. Boost Converters for Gas Electric and Fuel Cell Hybrid Electric Vehicles

    SciTech Connect

    McKeever, JW

    2005-06-16

    Hybrid electric vehicles (HEVs) are driven by at least two prime energy sources, such as an internal combustion engine (ICE) and propulsion battery. For a series HEV configuration, the ICE drives only a generator, which maintains the state-of-charge (SOC) of propulsion and accessory batteries and drives the electric traction motor. For a parallel HEV configuration, the ICE is mechanically connected to directly drive the wheels as well as the generator, which likewise maintains the SOC of propulsion and accessory batteries and drives the electric traction motor. Today the prime energy source is an ICE; tomorrow it will very likely be a fuel cell (FC). Use of the FC eliminates a direct drive capability accentuating the importance of the battery charge and discharge systems. In both systems, the electric traction motor may use the voltage directly from the batteries or from a boost converter that raises the voltage. If low battery voltage is used directly, some special control circuitry, such as dual mode inverter control (DMIC) which adds a small cost, is necessary to drive the electric motor above base speed. If high voltage is chosen for more efficient motor operation or for high speed operation, the propulsion battery voltage must be raised, which would require some type of two-quadrant bidirectional chopper with an additional cost. Two common direct current (dc)-to-dc converters are: (1) the transformer-based boost or buck converter, which inverts a dc voltage, feeds the resulting alternating current (ac) into a transformer to raise or lower the voltage, and rectifies it to complete the conversion; and (2) the inductor-based switch mode boost or buck converter [1]. The switch-mode boost and buck features are discussed in this report as they operate in a bi-directional chopper. A benefit of the transformer-based boost converter is that it isolates the high voltage from the low voltage. Usually the transformer is large, further increasing the cost. 
A useful feature
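The ideal transfer functions of the switch-mode converters discussed above are standard textbook relations (generic, not this report's specific design): in continuous conduction mode a boost stage gives Vout = Vin / (1 - D) and a buck stage gives Vout = Vin * D, where D is the switch duty cycle.

```python
# Minimal sketch of ideal (lossless, continuous-conduction) switch-mode
# converter relations; the example voltages are illustrative.

def boost_vout(v_in, duty):
    """Ideal boost converter output for duty cycle 0 <= D < 1."""
    assert 0.0 <= duty < 1.0
    return v_in / (1.0 - duty)

def buck_vout(v_in, duty):
    """Ideal buck converter output for duty cycle 0 <= D <= 1."""
    assert 0.0 <= duty <= 1.0
    return v_in * duty

# Raising a 200 V propulsion battery to a 400 V dc link needs D = 0.5;
# the same bidirectional chopper run in buck mode steps 400 V back to
# 200 V at D = 0.5, e.g. for regenerative charging.
assert abs(boost_vout(200.0, 0.5) - 400.0) < 1e-9
assert abs(buck_vout(400.0, 0.5) - 200.0) < 1e-9
```

A real two-quadrant chopper adds switching and conduction losses, so the achievable ratio is somewhat below these ideal values.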

  10. Mediterranean Diet Plus Olive Oil a Boost to Heart Health?

    MedlinePlus

    Mediterranean Diet Plus Olive Oil a Boost to Heart Health? It enhances ... HealthDay News) -- A Mediterranean diet high in virgin olive oil may boost the protective effects of "good" ...

  11. Occurrence of perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) in N.E. Spanish surface waters and their removal in a drinking water treatment plant that combines conventional and advanced treatments in parallel lines.

    PubMed

    Flores, Cintia; Ventura, Francesc; Martin-Alonso, Jordi; Caixach, Josep

    2013-09-01

    Perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) are two emerging contaminants that have been detected in all environmental compartments. However, while most of the studies in the literature deal with their presence or removal in wastewater treatment, few of them are devoted to their detection in treated drinking water and fate during drinking water treatment. In this study, analyses of PFOS and PFOA have been carried out in river water samples and in the different stages of a drinking water treatment plant (DWTP) which has recently improved its conventional treatment process by adding ultrafiltration and reverse osmosis in a parallel treatment line. Conventional and advanced treatments have been studied in several pilot plants and in the DWTP, which offers the opportunity to compare both treatments operating simultaneously. From the results obtained, neither preoxidation, sand filtration, nor ozonation, removed both perfluorinated compounds. As advanced treatments, reverse osmosis has proved more effective than reverse electrodialysis to remove PFOA and PFOS in the different configurations of pilot plants assayed. Granular activated carbon with an average elimination efficiency of 64±11% and 45±19% for PFOS and PFOA, respectively and especially reverse osmosis, which was able to remove ≥99% of both compounds, were the sole effective treatment steps. Trace levels of PFOS (3.0-21 ng/L) and PFOA (<4.2-5.5 ng/L) detected in treated drinking water were significantly lowered in comparison to those measured in precedent years. These concentrations represent overall removal efficiencies of 89±22% for PFOA and 86±7% for PFOS.
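The elimination efficiencies quoted above follow the standard definition (not specific to this study): percent removal across a treatment step computed from influent and effluent concentrations.

```python
# Standard removal-efficiency calculation; the concentrations used in the
# example are illustrative, not measurements from this study.

def removal_pct(c_in_ng_l, c_out_ng_l):
    """Percent removal across a treatment step."""
    return 100.0 * (c_in_ng_l - c_out_ng_l) / c_in_ng_l

# A step taking PFOS from 21 ng/L down to 0.2 ng/L removes about 99%,
# consistent with the reverse-osmosis performance reported above.
assert removal_pct(21.0, 0.2) > 99.0
assert abs(removal_pct(50.0, 18.0) - 64.0) < 1e-9
```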

  12. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  13. Conformal pure radiation with parallel rays

    NASA Astrophysics Data System (ADS)

    Leistner, Thomas; Nurowski, Paweł

    2012-03-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves.

  14. Factorization for substructures of boosted Higgs jets

    NASA Astrophysics Data System (ADS)

    Isaacson, Joshua; Li, Hsiang-nan; Li, Zhao; Yuan, C.-P.

    2017-08-01

    We present a perturbative QCD factorization formula for substructures of an energetic Higgs jet, taking the energy profile resulting from the H → b b bar decay as an example. The formula is written as a convolution of a hard Higgs decay kernel with two b-quark jet functions and a soft function that links the colors of the two b quarks. We derive an analytical expression to approximate the energy profile within a boosted Higgs jet, which significantly differs from those of ordinary QCD jets. This formalism also extends to boosted W and Z bosons in their hadronic decay modes, allowing an easy and efficient discrimination of fat jets produced from different processes.

  15. Gradient Boosting for Conditional Random Fields

    DTIC Science & Technology

    2014-09-23


  16. Voltage-Boosting Driver For Switching Regulator

    NASA Technical Reports Server (NTRS)

    Trump, Ronald C.

    1990-01-01

    Driver circuit assures availability of 10- to 15-V gate-to-source voltage needed to turn on n-channel metal oxide/semiconductor field-effect transistor (MOSFET) acting as switch in switching voltage regulator. Includes voltage-boosting circuit efficiently providing gate voltage 10 to 15 V above supply voltage. Contains no exotic parts and does not require additional power supply. Consists of NAND gate and dual voltage booster operating in conjunction with pulse-width modulator part of regulator.

  18. Kill: boosting HIV-specific immune responses.

    PubMed

    Trautmann, Lydie

    2016-07-01

    Increasing evidence suggests that purging the latent HIV reservoir in virally suppressed individuals will require both the induction of viral replication from its latent state and the elimination of these reactivated HIV-infected cells ('Shock and Kill' strategy). Boosting potent HIV-specific CD8 T cells is a promising way to achieve an HIV cure. Recent studies provided the rationale for developing immune interventions to increase the numbers, function and location of HIV-specific CD8 T cells to purge HIV reservoirs. Multiple approaches are being evaluated including very early suppression of HIV replication in acute infection, adoptive cell transfer, therapeutic vaccination or use of immunomodulatory molecules. New assays to measure the killing and antiviral function of induced HIV-specific CD8 T cells have been developed to assess the efficacy of these new approaches. The strategies combining HIV reactivation and immunobased therapies to boost HIV-specific CD8 T cells can be tested in in-vivo and in-silico models to accelerate the design of new clinical trials. New immunobased strategies are explored to boost HIV-specific CD8 T cells able to purge the HIV-infected cells with the ultimate goal of achieving spontaneous control of viral replication without antiretroviral treatment.

  19. Boosted Random Ferns for Object Detection.

    PubMed

    Villamizar, Michael; Andrade-Cetto, Juan; Sanfeliu, Alberto; Moreno-Noguer, Francesc

    2017-03-01

    In this paper we introduce the Boosted Random Ferns (BRFs) to rapidly build discriminative classifiers for learning and detecting object categories. At the core of our approach we use standard random ferns, but we introduce four main innovations that let us bring ferns from an instance to a category level, and still retain efficiency. First, we define binary features in the histogram-of-oriented-gradients domain (as opposed to the intensity domain), allowing for a better representation of intra-class variability. Second, both the positions where ferns are evaluated within the sliding window and the locations of the binary features for each fern are not chosen completely at random; instead we use a boosting strategy to pick the most discriminative combination of them. This is further enhanced by our third contribution, which is to adapt the boosting strategy to enable sharing of binary features among different ferns, yielding high recognition rates at a low computational cost. And finally, we show that training can be performed online, for sequentially arriving images. Overall, the resulting classifier can be very efficiently trained, densely evaluated for all image locations in about 0.1 seconds, and provides detection rates similar to competing approaches that require expensive and significantly slower processing. We demonstrate the effectiveness of our approach by thorough experimentation on publicly available datasets in which we compare against the state of the art, for tasks of both 2D detection and 3D multi-view estimation.
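    The mechanics of a boosted fern ensemble can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (not the authors' implementation): each fern hashes a small set of binary features into a histogram bin, and a boosting loop greedily picks, from a random candidate pool, the fern minimizing the weighted exponential loss. Feature counts, pool size, and the toy data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class Fern:
    """A fern: S binary tests whose outcomes index one of 2^S histogram bins."""
    def __init__(self, feature_idx):
        self.idx = np.asarray(feature_idx)
        self.log_ratio = None

    def bins(self, X):
        # interpret the S binary feature values as an S-bit integer
        return X[:, self.idx] @ (1 << np.arange(len(self.idx)))

    def fit(self, X, y, w):
        b = self.bins(X)
        n_bins = 1 << len(self.idx)
        pos = np.bincount(b[y == 1], weights=w[y == 1], minlength=n_bins) + 1e-3
        neg = np.bincount(b[y == 0], weights=w[y == 0], minlength=n_bins) + 1e-3
        self.log_ratio = 0.5 * np.log(pos / neg)  # per-bin class evidence
        return self

    def score(self, X):
        return self.log_ratio[self.bins(X)]

def train_brf(X, y, n_ferns=20, fern_size=4, pool=30):
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ferns = []
    for _ in range(n_ferns):
        # boosting step: pick the candidate fern minimizing weighted exp-loss
        cands = [Fern(rng.choice(d, fern_size, replace=False)).fit(X, y, w)
                 for _ in range(pool)]
        losses = [np.sum(w * np.exp(-(2 * y - 1) * f.score(X))) for f in cands]
        best = cands[int(np.argmin(losses))]
        ferns.append(best)
        # reweight samples: misclassified ones gain weight
        w *= np.exp(-(2 * y - 1) * best.score(X))
        w /= w.sum()
    return ferns

def predict(ferns, X):
    return (sum(f.score(X) for f in ferns) > 0).astype(int)

# toy binary data: the label depends only on features 0 and 1
X = rng.integers(0, 2, size=(400, 16))
y = (X[:, 0] & X[:, 1]).astype(int)
ferns = train_brf(X, y)
acc = (predict(ferns, X) == y).mean()
```

    The boosted selection is what lifts this above plain random ferns: uninformative candidate ferns are simply never added to the ensemble.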

  20. Boost breaking in the EFT of inflation

    DOE PAGES

    Delacrétaz, Luca V.; Noumi, Toshifumi; Senatore, Leonardo

    2017-02-17

    If time-translations are spontaneously broken, so are boosts. This symmetry breaking pattern can be non-linearly realized either by just the Goldstone boson of time translations, or by four Goldstone bosons associated with time translations and boosts. In this paper we extend the Effective Field Theory of Multifield Inflation to the case in which the additional Goldstone bosons associated with boosts are light and coupled to the Goldstone boson of time translations. The symmetry breaking pattern forces a coupling to curvature, so that the mass of the additional Goldstone bosons is predicted to be equal to √2H in the vast majority of the parameter space where they are light. This pattern therefore offers a natural way of generating self-interacting particles with Hubble mass during inflation. After constructing the general effective Lagrangian, we study how these particles mix and interact with the curvature fluctuations, generating potentially detectable non-Gaussian signals.

  1. Reversal of trauma-induced coagulopathy using first-line coagulation factor concentrates or fresh frozen plasma (RETIC): a single-centre, parallel-group, open-label, randomised trial.

    PubMed

    Innerhofer, Petra; Fries, Dietmar; Mittermayr, Markus; Innerhofer, Nicole; von Langen, Daniel; Hell, Tobias; Gruber, Gottfried; Schmid, Stefan; Friesenecker, Barbara; Lorenz, Ingo H; Ströhle, Mathias; Rastner, Verena; Trübsbach, Susanne; Raab, Helmut; Treml, Benedikt; Wally, Dieter; Treichl, Benjamin; Mayr, Agnes; Kranewitter, Christof; Oswald, Elgar

    2017-06-01

    Effective treatment of trauma-induced coagulopathy is important; however, the optimal therapy is still not known. We aimed to compare the efficacy of first-line therapy using fresh frozen plasma (FFP) or coagulation factor concentrates (CFC) for the reversal of trauma-induced coagulopathy, the arising transfusion requirements, and consequently the development of multiple organ failure. This single-centre, parallel-group, open-label, randomised trial was done at the Level 1 Trauma Center in Innsbruck Medical University Hospital (Innsbruck, Austria). Patients with trauma aged 18-80 years, with an Injury Severity Score (ISS) greater than 15, bleeding signs, and plasmatic coagulopathy identified by abnormal fibrin polymerisation or prolonged coagulation time using rotational thromboelastometry (ROTEM) were eligible. Patients with injuries that were judged incompatible with survival, cardiopulmonary resuscitation on the scene, isolated brain injury, burn injury, avalanche injury, or prehospital coagulation therapy other than tranexamic acid were excluded. We used a computer-generated randomisation list, stratification for brain injury and ISS, and closed opaque envelopes to randomly allocate patients to treatment with FFP (15 mL/kg of bodyweight) or CFC (primarily fibrinogen concentrate [50 mg/kg of bodyweight]). Bleeding management began immediately after randomisation and continued until 24 h after admission to the intensive care unit. The primary clinical endpoint was multiple organ failure in the modified intention-to-treat population (excluding patients who discontinued treatment). Reversal of coagulopathy and need for massive transfusions were important secondary efficacy endpoints that were the reason for deciding the continuation or termination of the trial. This trial is registered with ClinicalTrials.gov, number NCT01545635. 
Between March 3, 2012, and Feb 20, 2016, 100 out of 292 screened patients were included and randomly allocated to FFP (n=48) and CFC (n

  2. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  3. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  4. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
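    As a toy illustration of the master/slave pattern described above (not the actual ITS code, which uses true message passing), the sketch below uses Python threads and a shared work queue; self-scheduling slaves pull tasks as they finish, which is one simple form of load balancing. The task list and the `work` function are hypothetical stand-ins.

```python
import queue
import threading

def run_master_slave(tasks, n_workers=4, work=lambda t: t * t):
    """Master/slave pattern: the master posts tasks to a queue and slaves pull
    them as they become free (self-scheduling gives automatic load balancing)."""
    todo, done = queue.Queue(), queue.Queue()
    for t in tasks:
        todo.put(t)

    def slave():
        while True:
            try:
                t = todo.get_nowait()
            except queue.Empty:
                return  # no work left: this slave terminates
            done.put((t, work(t)))

    threads = [threading.Thread(target=slave) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

    results = {}
    while not done.empty():
        t, r = done.get()
        results[t] = r
    return results

results = run_master_slave(range(10))
```

    In a real Monte Carlo transport code the "tasks" would be batches of particle histories, each slave holding its own independent random number stream.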

  5. Introduction to parallel programming

    SciTech Connect

    Brawer, S.

    1989-01-01

    This book describes parallel programming and all the basic concepts illustrated by examples in a simplified FORTRAN. Concepts covered include: The parallel programming model; The creation of multiple processes; Memory sharing; Scheduling; Data dependencies. In addition, a number of parallelized applications are presented, including a discrete-time, discrete-event simulator, numerical integration, Gaussian elimination, and parallelized versions of the traveling salesman problem and the exploration of a maze.
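    Numerical integration, one of the parallelized applications mentioned above, makes a compact example of the multiple-process model. The sketch below (Python rather than the book's simplified FORTRAN, and purely an assumed illustration) splits the interval among workers, each computing an independent partial midpoint-rule sum with no data dependencies between them.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def integrate_chunk(f, a, b, n):
    """Midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def parallel_integrate(f, a, b, n=100_000, workers=4):
    # split [a, b] into one subinterval per worker; partial sums are independent
    edges = [a + (b - a) * k / workers for k in range(workers + 1)]
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(integrate_chunk, [f] * workers,
                       edges[:-1], edges[1:], [n // workers] * workers)
    return sum(parts)  # the only sequential step: combining partial results

approx = parallel_integrate(math.sin, 0.0, math.pi)  # exact value is 2
```

    Because each chunk touches disjoint data, there is no memory-sharing hazard; the pattern is the same whether the workers are threads, processes, or nodes.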

  6. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  7. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  8. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  9. Boost matrix converters in clean energy systems

    NASA Astrophysics Data System (ADS)

    Karaman, Ekrem

    This dissertation describes an investigation of novel power electronic converters, based on the ultra-sparse matrix topology and characterized by the minimum number of semiconductor switches. The Z-source, Quasi Z-source, Series Z-source and Switched-inductor Z-source networks were originally proposed for boosting the output voltage of power electronic inverters. These ideas are extended here to three-phase to three-phase and three-phase to single-phase indirect matrix converters. For the three-phase to three-phase matrix converters, the Z-source networks are placed between the three-switch input rectifier stage and the output six-switch inverter stage. A brief shoot-through state produces the voltage boost. An optimal pulse width modulation technique was developed to achieve high boosting capability and minimum switching losses in the converter. For the three-phase to single-phase matrix converters, those networks are placed similarly. For control purposes, a new modulation technique has been developed. As an example application, the proposed converters constitute a viable alternative to the existing solutions in residential wind-energy systems, where a low-voltage variable-speed generator feeds power to the higher-voltage fixed-frequency grid. Comprehensive analytical derivations and simulations were carried out to investigate the operation of the proposed converters. The performance of the proposed converters was then compared with one another as well as with conventional converters. The operation of the converters was experimentally validated using a laboratory prototype.

  10. Boosting family income to promote child development.

    PubMed

    Duncan, Greg J; Magnuson, Katherine; Votruba-Drzal, Elizabeth

    2014-01-01

    Families who live in poverty face disadvantages that can hinder their children's development in many ways, write Greg Duncan, Katherine Magnuson, and Elizabeth Votruba-Drzal. As they struggle to get by economically, and as they cope with substandard housing, unsafe neighborhoods, and inadequate schools, poor families experience more stress in their daily lives than more affluent families do, with a host of psychological and developmental consequences. Poor families also lack the resources to invest in things like high-quality child care and enriched learning experiences that give more affluent children a leg up. Often, poor parents also lack the time that wealthier parents have to invest in their children, because poor parents are more likely to be raising children alone or to work nonstandard hours and have inflexible work schedules. Can increasing poor parents' incomes, independent of any other sort of assistance, help their children succeed in school and in life? The theoretical case is strong, and Duncan, Magnuson, and Votruba-Drzal find solid evidence that the answer is yes--children from poor families that see a boost in income do better in school and complete more years of schooling, for example. But if boosting poor parents' incomes can help their children, a crucial question remains: Does it matter when in a child's life the additional income appears? Developmental neurobiology strongly suggests that increased income should have the greatest effect during children's early years, when their brains and other systems are developing rapidly, though we need more evidence to prove this conclusively. The authors offer examples of how policy makers could incorporate the findings they present to create more effective programs for families living in poverty. And they conclude with a warning: if a boost in income can help poor children, then a drop in income--for example, through cuts to social safety net programs like food stamps--can surely harm them.

  11. Can role models boost entrepreneurial attitudes?

    PubMed Central

    Fellnhofer, Katharina; Puumalainen, Kaisu

    2017-01-01

    This multi-country study used role models to boost perceptions of entrepreneurial feasibility and desirability. The results of a structural equation model based on a sample comprising 426 individuals who were primarily from Austria, Finland and Greece revealed a significant positive influence on perceived entrepreneurial desirability and feasibility. These findings support the argument for embedding entrepreneurial role models in entrepreneurship education courses to promote entrepreneurial activities. This direction is not only relevant for the academic community but also essential for nascent entrepreneurs, policymakers and society at large. PMID:28458611

  12. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, along with issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains.
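    The replicated-data decomposition mentioned above is the simplest of the three to sketch: every task holds all coordinates but computes only its own slice of the pair interactions. The following Python toy (hypothetical; production codes use MPI, cutoffs, and neighbor lists) computes a Lennard-Jones energy this way. Note that equal index ranges carry unequal pair counts, which is exactly the load-balance issue that spatial and force decompositions address.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(200, 3))  # toy atom coordinates

def lj_energy_slice(i0, i1):
    """Energy of pairs (i, j > i) for i in [i0, i1): a replicated-data
    decomposition -- every task sees all coordinates, works on its own rows.
    Row i holds n - i - 1 pairs, so equal index slices are load-imbalanced."""
    e = 0.0
    for i in range(i0, i1):
        d = pos[i + 1:] - pos[i]
        r2 = np.einsum('ij,ij->i', d, d)
        inv6 = 1.0 / r2 ** 3
        e += np.sum(4.0 * (inv6 ** 2 - inv6))  # 4[(1/r)^12 - (1/r)^6]
    return e

def parallel_lj(workers=4):
    n = len(pos)
    bounds = np.linspace(0, n, workers + 1).astype(int)
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lj_energy_slice, bounds[:-1], bounds[1:])
    return sum(parts)

e_par = parallel_lj()
e_ser = lj_energy_slice(0, len(pos))  # serial reference
```

    The partial energies are independent, so the parallel and serial totals agree up to floating-point summation order.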

  13. A comparative approach between heterologous prime-boost vaccination strategy and DNA vaccinations for rabies.

    PubMed

    Borhani, Kiandokht; Ajorloo, Mehdi; Bamdad, Taravat; Mozhgani, Sayed Hamid Reza; Ghaderi, Mostafa; Gholami, Ali Reza

    2015-04-01

    Rabies is a widespread neurological zoonotic disease causing significant mortality rates, especially in developing countries. Although a vaccine for rabies is available, its production and scheduling are costly in such countries. Advances in recombinant DNA technology have made it a good candidate for an affordable vaccine. Among the proteins of rabies virus, the glycoprotein (RVG) has been the major target for new vaccine development, as it plays the principal role in providing complete protection against RV challenge. The aim of this study is to produce recombinant RVG as a DNA vaccine candidate and to evaluate the efficiency of this construct in a prime-boost vaccination regimen, compared to a commercial vaccine. The rabies virus glycoprotein gene was cloned into pcDNA3.1(+) and expressed in the BSR cell line, followed by SDS-PAGE and Western blot analysis of the expressed glycoprotein. The resulting genetic construct was used as a DNA vaccine by injecting 80 µg of the plasmid into NMRI mice twice. The prime-boost vaccination strategy used the 80 µg plasmid construct as the priming dose and an inactivated rabies virus vaccine as the second dose. Rabies virus neutralizing antibody (RVNA) titers of the serum samples were determined by RFFIT. In comparisons between the heterologous prime-boost vaccination strategy and DNA vaccination, the potency of group D, which received the prime-boost vaccine with pcDNA3.1(+)-Gp as the second dose, was enhanced significantly compared to group C, which had received pcDNA3.1(+)-Gp as the first injection. In this study, the RVGP-expressing construct was used in a comparative approach between the prime-boost vaccination strategy and DNA vaccination, and compared with the standard method of rabies vaccination. It was concluded that this strategy could lead to the induction of acceptable humoral immunity.

  14. Presentation of antigen in immune complexes is boosted by soluble bacterial immunoglobulin binding proteins.

    PubMed

    Léonetti, M; Galon, J; Thai, R; Sautès-Fridman, C; Moine, G; Ménez, A

    1999-04-19

    Using a snake toxin as a proteic antigen (Ag), two murine toxin-specific monoclonal antibodies (mAbs), splenocytes, and two murine Ag-specific T cell hybridomas, we showed that soluble protein A (SpA) from Staphylococcus aureus and protein G from Streptococcus subspecies, two Ig binding proteins (IBPs), not only abolish the capacity of the mAbs to decrease Ag presentation but also increase Ag presentation 20-100-fold. Five lines of evidence suggest that this phenomenon results from binding of an IBP-Ab-Ag complex to B cells possessing IBP receptors. First, we showed that SpA is likely to boost presentation of a free mAb, suggesting that the IBP-boosted presentation of an Ag in an immune complex results from the binding of IBP to the mAb. Second, FACS analyses showed that an Ag-Ab complex is preferentially targeted by SpA to a subpopulation of splenocytes mainly composed of B cells. Third, SpA-dependent boosted presentation of an Ag-Ab complex is further enhanced when splenocytes are enriched in cells containing SpA receptors. Fourth, the boosting effect largely diminishes when splenocytes are depleted of cells containing SpA receptors. Fifth, the boosting effect occurs only when IBP simultaneously contains a Fab and an Fc binding site. Altogether, our data suggest that soluble IBPs can bridge immune complexes to APCs containing IBP receptors, raising the possibility that during an infection process by bacteria secreting these IBPs, Ag-specific T cells may activate IBP receptor-containing B cells by a mechanism of intermolecular help, thus leading to a nonspecific immune response.

  15. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or
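    The independent-streams-per-process model that SPRNG and GASPRNG provide can be imitated with NumPy's `SeedSequence.spawn`, shown below purely as an illustration of the concept (it is not the GASPRNG API): each task gets a statistically independent, reproducible stream derived from one root seed.

```python
import numpy as np

# One SeedSequence per task: spawn() derives statistically independent child
# streams, mirroring the independent-streams-per-process model of SPRNG.
root = np.random.SeedSequence(2013)
children = root.spawn(4)
streams = [np.random.default_rng(s) for s in children]

# each "task" draws from its own stream; order of tasks does not matter
draws = [g.random(5) for g in streams]
flat = np.concatenate(draws)

# reproducibility: respawning from the same root seed regenerates stream 0
again = np.random.default_rng(np.random.SeedSequence(2013).spawn(4)[0]).random(5)
```

    Reproducibility across runs and process counts is the key property here, just as GASPRNG's bit-identical agreement with SPRNG is the basis for trusting its output.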

  16. Parallel Climate Analysis Toolkit (ParCAT)

    SciTech Connect

    Smith, Brian Edward

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sums/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated into scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.
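    The core ParCAT operations are simple reductions over the time axis. The NumPy sketch below (an illustration with invented toy arrays, not ParCAT's netCDF-based interface) computes point-wise averages, a frequency distribution, and both difference-of-average and average-of-difference; for two datasets on the same time sampling the last two coincide, since the mean is linear.

```python
import numpy as np

rng = np.random.default_rng(7)
# two toy "model output" arrays with shape (time, lat, lon)
a = rng.normal(size=(24, 8, 16))
b = rng.normal(size=(24, 8, 16))

point_avg_a = a.mean(axis=0)                   # point-wise time average
diff = a - b                                   # point-wise difference
diff_of_avg = a.mean(axis=0) - b.mean(axis=0)  # difference of averages
avg_of_diff = (a - b).mean(axis=0)             # average of differences

hist, edges = np.histogram(a, bins=20)         # frequency distribution
```

    In ParCAT itself these reductions are distributed across processes; the point-wise nature of each operation is what makes that parallelization straightforward.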

  17. Precision Jet Substructure from Boosted Event Shapes

    NASA Astrophysics Data System (ADS)

    Feige, Ilya; Schwartz, Matthew D.; Stewart, Iain W.; Thaler, Jesse

    2012-08-01

    Jet substructure has emerged as a critical tool for LHC searches, but studies so far have relied heavily on shower Monte Carlo simulations, which formally approximate QCD at the leading-log level. We demonstrate that systematic higher-order QCD computations of jet substructure can be carried out by boosting global event shapes by a large momentum Q and accounting for effects due to finite jet size, initial-state radiation (ISR), and the underlying event (UE) as 1/Q corrections. In particular, we compute the 2-subjettiness substructure distribution for boosted Z→qq¯ events at the LHC at next-to-next-to-next-to-leading-log order. The calculation is greatly simplified by recycling known results for the thrust distribution in e+e- collisions. The 2-subjettiness distribution quickly saturates, becoming Q independent for Q≳400GeV. Crucially, the effects of jet contamination from ISR/UE can be subtracted out analytically at large Q without knowing their detailed form. Amusingly, the Q=∞ and Q=0 distributions are related by a scaling by e up to next-to-leading-log order.

  18. A multiview boosting approach to tissue segmentation

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Xu, Sheng; Pinto, Peter A.; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter L.; Wood, Bradford J.

    2014-04-01

    Digitized histopathology images have great potential for improving or facilitating current assessment tools in cancer pathology. In order to develop accurate and robust automated methods, the precise segmentation of histologic objects such as epithelium, stroma, and nucleus is necessary, in the hope of extracting information not otherwise obvious to the subjective eye. Here, we propose a multiview boosting approach to segment histology objects of prostate tissue. Tissue specimen images are first represented at different scales using a Gaussian kernel and converted into several forms such as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen. The method attempts to integrate the information from multiple scales (or views). 18 prostate tissue specimens from 4 patients were employed to evaluate the new method. The method was trained on 11 tissue specimens including 75,832 epithelial and 103,453 stroma pixels and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens. The technique showed 96.7% accuracy, and as summarized in a receiver operating characteristic (ROC) plot, an area under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984) was achieved.

  19. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.
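    A heavily simplified sketch of the idea, with invented details throughout (the paper's clustering and validation-set machinery are omitted): run AdaBoost with decision stumps on the target data, and at each round add the source samples the current ensemble already classifies correctly before fitting the next weak learner. This is not the authors' algorithm, only an illustration of iterative source-sample selection inside a boosting loop.

```python
import numpy as np

rng = np.random.default_rng(3)

def stump_predict(X, j, thr, pol):
    return np.where(pol * (X[:, j] - thr) > 0, 1, -1)

def best_stump(X, y, w):
    """Exhaustive weighted decision stump: feature, threshold, polarity."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                err = np.sum(w[stump_predict(X, j, thr, pol) != y])
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def dab(Xt, yt, Xs, ys, rounds=10):
    """Simplified domain-adaptive boosting: each round, add the source samples
    the current ensemble already classifies correctly (they look 'target-like'),
    then fit the next weak learner on the combined, reweighted set."""
    selected = np.zeros(len(ys), dtype=bool)
    learners = []

    def ensemble(X):
        if not learners:
            return np.zeros(len(X))
        return sum(a * stump_predict(X, j, t, p) for a, j, t, p in learners)

    for _ in range(rounds):
        selected |= ys * ensemble(Xs) > 0          # source-sample selection
        X = np.vstack([Xt, Xs[selected]])
        y = np.concatenate([yt, ys[selected]])
        w = np.exp(-y * ensemble(X))               # AdaBoost weights from margins
        w /= w.sum()
        err, j, thr, pol = best_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-9))
        learners.append((alpha, j, thr, pol))
    return learners, ensemble

# target: small labeled set; source: same concept under a shifted distribution
Xt = rng.normal(0.0, 1.0, (40, 2))
yt = np.where(Xt[:, 0] + Xt[:, 1] > 0, 1, -1)
Xs = rng.normal(0.5, 1.0, (200, 2))
ys = np.where(Xs[:, 0] + Xs[:, 1] > 0, 1, -1)

learners, ensemble = dab(Xt, yt, Xs, ys)
acc = np.mean(np.sign(ensemble(Xt)) == yt)
```

    The selection step is the domain-adaptive ingredient: source samples only enter training once the target-driven model vouches for them.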

  20. Non-boost-invariant dissipative hydrodynamics

    NASA Astrophysics Data System (ADS)

    Florkowski, Wojciech; Ryblewski, Radoslaw; Strickland, Michael; Tinti, Leonardo

    2016-12-01

    The one-dimensional non-boost-invariant evolution of the quark-gluon plasma, presumably produced during the early stages of heavy-ion collisions, is analyzed within the frameworks of viscous and anisotropic hydrodynamics. We neglect transverse dynamics and assume homogeneous conditions in the transverse plane but, differently from Bjorken expansion, we relax longitudinal boost invariance in order to study the rapidity dependence of various hydrodynamical observables. We compare the results obtained using several formulations of second-order viscous hydrodynamics with a recent approach to anisotropic hydrodynamics, which treats the large initial pressure anisotropy in a nonperturbative fashion. The results obtained with second-order viscous hydrodynamics depend on the particular choice of the second-order terms included, which suggests that the latter should be included in the most complete way. The results of anisotropic hydrodynamics and viscous hydrodynamics agree for the central hot part of the system, however, they differ at the edges where the approach of anisotropic hydrodynamics helps to control the undesirable growth of viscous corrections observed in standard frameworks.

  1. Centaur boost pump turbine icing investigation

    NASA Technical Reports Server (NTRS)

    Rollbuhler, R. J.

    1976-01-01

    An investigation was conducted to determine if ice formation in the Centaur vehicle liquid oxygen boost pump turbine could prevent rotation of the pump and whether or not this phenomenon could have been the failure mechanism for the Titan/Centaur vehicle TC-1. The investigation consisted of a series of tests done in the LeRC Space Power Chamber Facility to evaluate evaporative cooling behavior patterns in a turbine as a function of the quantity of water trapped in the turbine and as a function of the vehicle ascent pressure profile. It was found that evaporative freezing of water in the turbine housing, due to rapid depressurization within the turbine during vehicle ascent, could result in the formation of ice that would block the turbine and prevent rotation of the boost pump. But for such icing conditions to exist it would be necessary to have significant quantities of water in the turbine and/or its components, and the turbine housing temperature would have to be colder than 40 F at vehicle liftoff.

  2. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.

  3. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
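    A reliability question of this kind can be approximated by simple Monte Carlo sampling. The sketch below is a hypothetical stand-in for the report's analysis: loss coefficients of two parallel lines are sampled, flows are split so both lines see the same head loss, the operating point is found from an assumed quadratic pump curve, and the probability that line 1 falls below a chosen minimum flow is estimated. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# uncertain loss coefficients for two parallel lines (lognormal around nominal)
K1 = rng.lognormal(mean=np.log(2.0), sigma=0.15, size=N)
K2 = rng.lognormal(mean=np.log(3.0), sigma=0.15, size=N)
H0, a = 50.0, 0.8   # assumed pump curve H = H0 - a*Q^2
q_min = 3.0         # assumed minimum acceptable flow in line 1

# equal head loss across parallel lines: K1*Q1^2 = K2*Q2^2, with Q = Q1 + Q2
# => Q2 = r*Q1 with r = sqrt(K1/K2), so Q1 = Q / (1 + r)
r = np.sqrt(K1 / K2)
K_eq = K1 / (1.0 + r) ** 2          # equivalent network: h = K_eq * Q^2

# operating point from pump balance: H0 - a*Q^2 = K_eq*Q^2
Q = np.sqrt(H0 / (a + K_eq))
Q1 = Q / (1.0 + r)
Q2 = Q - Q1

p_fail = np.mean(Q1 < q_min)        # estimated probability of low flow
```

    More refined reliability methods (FORM/SORM, importance sampling) trade this brute-force sampling for analytic approximations of the same failure probability.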

  4. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  5. Languages for parallel architectures

    SciTech Connect

    Bakker, J.W.

    1989-01-01

    This book presents mathematical methods for modelling parallel computer architectures, based on the results of ESPRIT project 415 on computer languages for parallel architectures. Presented are investigations incorporating a wide variety of programming styles, including functional, logic, and object-oriented paradigms. Topics covered include Philips's parallel object-oriented language POOL, lazy functional languages, the languages IDEAL, K-LEAF, FP2, and Petri-net semantics for the AADL language.

  6. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    (Snippet; a table listing languages — C, Ada, C++, data-parallel FORTRAN, FORTRAN-90 (late 1992) — a 2D mesh of node boards with one application processor per board, and development tools was garbled in extraction.) As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool, with three important goals in mind: to be portable, efficient, and easy to use.
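    Linda's coordination model rests on a handful of tuple-space primitives: out() deposits a tuple, in() withdraws a matching tuple (blocking until one exists), and rd() reads one without removing it. A minimal single-process sketch of that model (hypothetical and greatly simplified — real Linda coordinates distributed processes and also provides eval()):

```python
from threading import Condition

class TupleSpace:
    """Minimal Linda-style tuple space: out() adds a tuple, in_() removes
    a matching tuple (blocking), rd() reads one without removing it."""

    def __init__(self):
        self._tuples = []
        self._cond = Condition()

    def out(self, *tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # None in the pattern acts as a wildcard ("formal") field.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, *pattern):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def rd(self, *pattern):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        return tup
                self._cond.wait()
```

    A producer might call `ts.out("task", 7)` while a worker blocks in `ts.in_("task", None)`; the uncoupled, associative matching is what makes the model portable across architectures.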

  7. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
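    The single-cluster construction referred to above can be sketched serially for the 2D Ising model; parallelizing this irregular cluster growth is precisely the paper's subject. Lattice size, temperature, and the dictionary representation below are illustrative choices, not the paper's implementation.

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """Grow and flip one Wolff cluster on an L x L Ising lattice
    with periodic boundaries. Returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        x, y = stack.pop()
        for n in (((x + 1) % L, y), ((x - 1) % L, y),
                  (x, (y + 1) % L), (x, (y - 1) % L)):
            if n not in cluster and spins[n] == s0 and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:                      # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

def run(L=8, beta=0.5, steps=200, seed=0):
    rng = random.Random(seed)
    spins = {(x, y): rng.choice((-1, 1)) for x in range(L) for y in range(L)}
    sizes = [wolff_step(spins, L, beta, rng) for _ in range(steps)]
    return spins, sizes
```

    The parallelization difficulty is visible here: each cluster's size, shape, and position depend on the random bonds, so work cannot be statically partitioned across processors.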

  8. Simulation Exploration through Immersive Parallel Planes: Preprint

    SciTech Connect

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems-dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
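    The two mappings contrasted in the abstract — one axis per dimension versus one plane per pair of dimensions — can be sketched as simple coordinate transforms. The normalization and layout conventions here are assumptions for illustration, not the paper's exact scheme.

```python
def parallel_coordinates_polyline(obs, ranges, spacing=1.0):
    """Classic parallel coordinates: axis i is a vertical line at
    x = i * spacing, and the value is normalized to [0, 1] along it.
    Returns the 2D polyline vertices for one observation."""
    pts = []
    for i, (v, (lo, hi)) in enumerate(zip(obs, ranges)):
        t = (v - lo) / (hi - lo) if hi > lo else 0.5
        pts.append((i * spacing, t))
    return pts

def parallel_planes_vertices(obs, ranges):
    """Parallel-planes variant: consecutive dimension pairs map to a
    point on plane k, so an n-dimensional observation needs n/2 planes.
    Returns (plane index, u, v) triples on the unit rectangles."""
    pts = []
    for k in range(0, len(obs) - 1, 2):
        (lo_x, hi_x), (lo_y, hi_y) = ranges[k], ranges[k + 1]
        u = (obs[k] - lo_x) / (hi_x - lo_x)
        v = (obs[k + 1] - lo_y) / (hi_y - lo_y)
        pts.append((k // 2, u, v))
    return pts
```

    In both cases the observation becomes a polyline through the resulting vertices; halving the number of stations is what makes the planes variant practical in an immersive 3D layout.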

  9. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves it from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collection of networked computers.) Written in C language, with one FORTRAN 77 subroutine for UNIX-based computers, and callable from application programs written in C language or FORTRAN 77.

  11. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  12. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
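    One of the patterns named above, the prefix scan, can be sketched with the Hillis-Steele scheme: each of the ceil(log2 n) sweeps could run over all elements simultaneously, since it reads only the previous sweep's values. This is a generic illustration, not material from the presentation.

```python
def inclusive_scan(xs, op=lambda a, b: a + b):
    """Hillis-Steele inclusive prefix scan. The inner loop is serial
    here, but every iteration of it is independent (it reads only the
    snapshot from the previous sweep), so on a parallel machine each
    sweep is one parallel step."""
    out = list(xs)
    d = 1
    while d < len(out):
        prev = out[:]                      # snapshot = previous parallel step
        for i in range(d, len(out)):
            out[i] = op(prev[i - d], prev[i])
        d *= 2
    return out
```

    The same skeleton computes running sums, running maxima, or any associative reduction, which is why the scan pattern recurs across so many parallel algorithms.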

  13. Parallel methods for the flight simulation model

    SciTech Connect

    Xiong, Wei Zhong; Swietlik, C.

    1994-06-01

    The Advanced Computer Applications Center (ACAC) has been involved in evaluating advanced parallel architecture computers and the applicability of these machines to computer simulation models. The advanced systems investigated include parallel machines with shared-memory and distributed architectures, consisting of an eight-processor Alliant FX/8, a twenty-four-processor Sequent Symmetry, a Cray XMP, an IBM RISC 6000 model 550, and the Intel Touchstone eight-processor Gamma and 512-processor Delta machines. Since parallelizing a truly efficient application program for a parallel machine is a difficult task, implementation on these machines in a realistic setting has been largely overlooked. The ACAC has developed considerable expertise in optimizing and parallelizing application models on a collection of advanced multiprocessor systems. One such application model is the Flight Simulation Model, which uses a set of differential equations to describe the flight characteristics of a launched missile by means of a trajectory. The Flight Simulation Model was written in the FORTRAN language, with approximately 29,000 lines of source code. Depending on the number of trajectories, the computation can require several hours to a full day of CPU time on a DEC/VAX 8650 system. There is an impetus to reduce the execution time and utilize the advanced parallel architecture computing environment available. ACAC researchers developed a parallel method that allows the Flight Simulation Model to run in parallel on a multiprocessor system. For the benchmark data tested, the parallel Flight Simulation Model implemented on the Alliant FX/8 achieved nearly linear speedup. In this paper, we describe this parallel method. We believe it provides a general concept for the design of parallel applications that, in most cases, can be adapted to many other sequential application programs.
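    Because individual trajectories are mutually independent, the natural decomposition — and a plausible reason for the near-linear speedup reported — is one task per trajectory. The toy point-mass dynamics, parameters, and thread-based workers below are illustrative stand-ins for the report's FORTRAN model and multiprocessor hardware, not its actual method.

```python
from concurrent.futures import ThreadPoolExecutor

def fly(params, dt=0.01, steps=1000):
    """Toy point-mass trajectory under gravity with drag; stands in for
    integrating one trajectory's differential equations."""
    vx, vy, drag = params
    x = y = 0.0
    for _ in range(steps):
        vx -= drag * vx * dt
        vy -= (9.81 + drag * vy) * dt
        x += vx * dt
        y += vy * dt
        if y < 0.0:                       # stop at impact
            break
    return x                              # downrange distance

def fly_all(param_sets, workers=4):
    # Trajectories are independent, so each one is a separate task;
    # a process pool or MPI ranks would replace threads for real speedup.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fly, param_sets))
```

    Since tasks share no state, results are identical to a serial run, and speedup is limited only by load balance across trajectories of different lengths.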

  14. Behavior Analysis in Distance Education by Boosting Algorithms

    ERIC Educational Resources Information Center

    Zang, Wei; Lin, Fuzong

    2006-01-01

    Student behavior analysis is an active research topic in distance education in recent years. In this article, we propose a new method called Boosting to investigate students' behaviors. The Boosting Algorithm can be treated as a data mining method, trying to infer from a large amount of training data the essential factors and their relations that…

  15. Line Creek improves efficiency

    SciTech Connect

    Harder, P.

    1988-04-01

    Boosting the coal recovery rate by 8% and reducing fuel expense by $18,000 annually by replacing two tractors are two tangible benefits that Crows Nest Resources of British Columbia has achieved since overseas coal markets weakened in 1985. Though coal production at the 4-million-tpy Line Creek open pit mine has been cut 25% from its 1984 level, morale among the pit crew remains high. More efficient pit equipment, innovative use of existing equipment, and encouragement of multiple skill development among workers - so people can be assigned to different jobs in the operation as situations demand - contribute to a successful operation.

  16. Boosted one dimensional fermionic superfluids on a lattice

    NASA Astrophysics Data System (ADS)

    Ray, Sayonee; Mukerjee, Subroto; Shenoy, Vijay B.

    2017-09-01

    We study the effect of a boost (Fermi sea displaced by a finite momentum) on one dimensional systems of lattice fermions with short-ranged interactions. In the absence of a boost such systems with attractive interactions possess algebraic superconducting order. Motivated by physics in higher dimensions, one might naively expect a boost to weaken and ultimately destroy superconductivity. However, we show that for one dimensional systems the effect of the boost can be to strengthen the algebraic superconducting order by making correlation functions fall off more slowly with distance. This phenomenon can manifest in interesting ways, for example, a boost can produce a Luther-Emery phase in a system with both charge and spin gaps by engendering the destruction of the former.

  17. Orbiter aborts from boost: Presimulation report

    NASA Technical Reports Server (NTRS)

    Backman, H. D.; Brechka, K. G.

    1972-01-01

    A description of a hybrid simulation of the 040C orbiter aborting from boost to a specified landing site is provided. The simulation starts when the abort is initiated and continues until a terminal energy state (associated with the selected landing site) is reached. At abort it is assumed that all SRMs are jettisoned, with the external tank remaining with the orbiter. The simulation has six degrees of freedom, with the vehicle simulated as a rigid body. A conventional form of autopilot is provided to control engine gimbaling during powered flight. An ideal form of autopilot is provided to test the conventional autopilot function and to provide a pseudo RCS function during coasting flight. The simulation is proposed to provide a means for studies of the abort guidance function and to gain information concerning the ability to control the abort trajectory.

  18. Inflammation boosts bacteriophage transfer between Salmonella spp.

    PubMed

    Diard, Médéric; Bakkeren, Erik; Cornuault, Jeffrey K; Moor, Kathrin; Hausmann, Annika; Sellin, Mikael E; Loverdo, Claude; Aertsen, Abram; Ackermann, Martin; De Paepe, Marianne; Slack, Emma; Hardt, Wolf-Dietrich

    2017-03-17

    Bacteriophage transfer (lysogenic conversion) promotes bacterial virulence evolution. There is limited understanding of the factors that determine lysogenic conversion dynamics within infected hosts. A murine Salmonella Typhimurium (STm) diarrhea model was used to study the transfer of SopEΦ, a prophage from STm SL1344, to STm ATCC14028S. Gut inflammation and enteric disease triggered >55% lysogenic conversion of ATCC14028S within 3 days. Without inflammation, SopEΦ transfer was reduced by up to 10(5)-fold. This was because inflammation (e.g., reactive oxygen species, reactive nitrogen species, hypochlorite) triggers the bacterial SOS response, boosts expression of the phage antirepressor Tum, and thereby promotes free phage production and subsequent transfer. Mucosal vaccination prevented a dense intestinal STm population from inducing inflammation and consequently abolished SopEΦ transfer. Vaccination may be a general strategy for blocking pathogen evolution that requires disease-driven transfer of temperate bacteriophages.

  19. Boosting jet power in black hole spacetimes.

    PubMed

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W; Liebling, Steven L; Motl, Patrick M; Garrett, Travis

    2011-08-02

    The extraction of rotational energy from a spinning black hole via the Blandford-Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux.

  20. Giving top quark effective operators a boost

    NASA Astrophysics Data System (ADS)

    Englert, Christoph; Moore, Liam; Nordström, Karl; Russell, Michael

    2016-12-01

    We investigate the prospects to systematically improve generic effective field theory-based searches for new physics in the top sector during LHC run 2 as well as the high luminosity phase. In particular, we assess the benefits of high momentum transfer final states for the top EFT fit as a function of systematic uncertainties, in comparison with the sensitivity expected from fully-resolved analyses focusing on t t-bar production. We find that constraints are typically driven by fully-resolved selections, while boosted top quarks can serve to break degeneracies in the global fit. This demystifies and clarifies the importance of high momentum transfer final states for global fits to new interactions in the top sector from direct measurements.

  1. Boosting jet power in black hole spacetimes

    PubMed Central

    Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W.; Liebling, Steven L.; Motl, Patrick M.; Garrett, Travis

    2011-01-01

    The extraction of rotational energy from a spinning black hole via the Blandford–Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux. PMID:21768341

  2. Boosted top quarks and jet structure

    NASA Astrophysics Data System (ADS)

    Schätzel, Sebastian

    2015-09-01

    The Large Hadron Collider is the first particle accelerator that provides high enough energy to produce large numbers of boosted top quarks. The decay products of these top quarks are confined to a cone in the top quark flight direction and can be clustered into a single jet. Top quark reconstruction then amounts to analysing the structure of the jet and looking for subjets that are kinematically compatible with top quark decay. Many techniques have been developed in this context to identify top quarks in a large background of non-top jets. This article reviews the results obtained using data recorded in the years 2010-2012 by the experiments ATLAS and CMS. Studies of Standard Model top quark production and searches for new massive particles that decay to top quarks are presented.

  3. Hydrodynamic approach to boost invariant free streaming

    NASA Astrophysics Data System (ADS)

    Calzetta, E.

    2015-08-01

    We consider a family of exact boost invariant solutions of the transport equation for free-streaming massless particles, where the one-particle distribution function is defined in terms of a function of a single variable. The evolution of the second and third moments of the one-particle distribution function [the second moment being the energy momentum tensor (EMT) and the third moment the nonequilibrium current (NEC)] depends only on two moments of that function. Given those two moments, we show how to build a nonlinear hydrodynamic theory which reproduces the early time evolution of the EMT and the NEC. The structure of these theories may give insight into nonlinear hydrodynamic phenomena on short time scales.

  4. Boosting for multi-graph classification.

    PubMed

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Cai, Zhihua

    2015-03-01

    In this paper, we formulate a novel graph-based learning problem, multi-graph classification (MGC), which aims to learn a classifier from a set of labeled bags each containing a number of graphs inside the bag. A bag is labeled positive if at least one graph in the bag is positive, and negative otherwise. Such a multi-graph representation can be used for many real-world applications, such as webpage classification, where a webpage can be regarded as a bag with texts and images inside the webpage being represented as graphs. This problem is a generalization of multi-instance learning (MIL) but with vital differences, mainly because instances in MIL share a common feature space whereas no feature is available to represent graphs in a multi-graph bag. To solve the problem, we propose a boosting-based multi-graph classification framework (bMGC). Given a set of labeled multi-graph bags, bMGC employs dynamic weight adjustment at both bag and graph levels to select one subgraph in each iteration as a weak classifier. In each iteration, bag and graph weights are adjusted such that an incorrectly classified bag will receive a higher weight because its predicted bag label conflicts with the genuine label, whereas an incorrectly classified graph will receive a lower weight value if the graph is in a positive bag (or a higher weight if the graph is in a negative bag). Accordingly, bMGC is able to differentiate graphs in positive and negative bags to derive effective classifiers that form a boosting model for MGC. Experiments and comparisons on real-world multi-graph learning tasks demonstrate the algorithm's performance.
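    The reweighting described above follows the familiar boosting recipe: raise the weight of misclassified examples, lower the weight of correctly classified ones, and combine the weak learners by weighted vote. A generic AdaBoost sketch with threshold stumps on 1-D data (illustrative only; bMGC's subgraph weak learners and two-level bag/graph weights are considerably more involved):

```python
import math

def adaboost_stumps(xs, ys, rounds=10):
    """AdaBoost with threshold stumps on 1-D data, labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []                                # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                pred = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, pol))
        # Reweight: mistakes go up, correct predictions go down; normalize.
        w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, pred, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in model)
    return 1 if score >= 0 else -1
```

    bMGC replaces the stump search with subgraph selection and maintains the coupled bag-level and graph-level weights described in the abstract, but the exponential up/down reweighting step is the same core mechanism.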

  5. The attentional boost effect and context memory.

    PubMed

    Mulligan, Neil W; Smith, S Adam; Spataro, Pietro

    2016-04-01

    Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors-the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is unclear if context memory is likewise affected. Some accounts suggest enhanced perceptual encoding or associative binding, predicting an ABE on context memory, whereas other evidence suggests a more abstract, amodal basis of the effect. In Experiment 1, context memory was assessed in terms of an intramodal perceptual detail, the font and color of the study word. Experiment 2 examined context memory cross-modally, assessing memory for the modality (visual or auditory) of the study word. Experiments 3 and 4 assessed context memory with list discrimination, in which 2 study lists are presented and participants must later remember which list (if either) a test word came from. In all experiments, item (recognition) memory was also assessed and consistently displayed a robust ABE. In contrast, the attentional-boost manipulation did not enhance context memory, whether defined in terms of visual details, study modality, or list membership. There was some evidence that the mode of responding on the detection task (motoric response as opposed to covert counting of targets) may impact context memory, but there was no evidence of an effect of target detection per se. In sum, the ABE did not occur in context memory with verbal materials.

  6. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

    Left ventricular status, reflected in ejection fraction or end-systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage is 2 boosted decision trees, trained using features such as gray-level statistics (e.g. median brightness) and image geometry (e.g. coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels with smoothness. ED and ES volumes, and ejection fraction, are estimated from the border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.

  7. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    … program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a … Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986, and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  8. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  9. Parallels in History.

    ERIC Educational Resources Information Center

    Mugleston, William F.

    2000-01-01

    Believes that by focusing on the recurrent situations and problems, or parallels, throughout history, students will understand the relevance of history to their own times and lives. Provides suggestions for parallels in history that may be introduced within lectures or as a means to class discussions. (CMK)

  10. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
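    A common way to parallelize the Sieve, in the spirit of the scattered decomposition mentioned above, is segmented sieving: compute the base primes up to sqrt(n) serially, then sieve disjoint segments of the range independently. In this sketch, thread workers stand in for hypercube processing elements, and the segment sizing is an arbitrary choice.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def simple_sieve(limit):
    """Serial Sieve of Eratosthenes; used here for the base primes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b'\x00\x00'
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = b'\x00' * len(is_prime[p * p :: p])
    return [i for i, f in enumerate(is_prime) if f]

def sieve_segment(args):
    """Mark composites in [lo, hi) using the shared base primes."""
    lo, hi, primes = args
    flags = bytearray([1]) * (hi - lo)
    for p in primes:
        # First multiple of p in the segment, never below p*p.
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i, f in enumerate(flags) if f]

def parallel_sieve(n, segments=4):
    primes = simple_sieve(math.isqrt(n))
    size = (n + segments) // segments
    tasks = [(lo, min(lo + size, n + 1), primes)
             for lo in range(2, n + 1, size)]
    with ThreadPoolExecutor(max_workers=segments) as pool:
        out = []
        for part in pool.map(sieve_segment, tasks):   # order-preserving
            out.extend(part)
    return out
```

    Each segment's work is independent once the (small) base-prime list is broadcast, which is why the decomposition scales to large ensembles; balancing segment sizes is the main tuning knob.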

  12. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  13. The reach for charged Higgs bosons with boosted bottom and boosted top jets

    NASA Astrophysics Data System (ADS)

    Sullivan, Zack; Pedersen, Keith

    2017-01-01

    At moderate values of tan(β), a supersymmetric charged Higgs boson H+/- is expected to be difficult to find due to its small cross section and large backgrounds. Using the new μx boosted bottom jet tag, and measured boosted top tagging rates from the CERN LHC, we examine the reach for TeV-scale charged Higgs bosons at 14 TeV and 100 TeV colliders in top-Higgs associated production, where the charged Higgs decays to a boosted top and bottom quark pair. We conclude that the cross section for charged Higgs bosons is indeed too small to observe at the LHC in the moderate tan(β) ``wedge region,'' but it will be possible to probe charged Higgs bosons at nearly all tan(β) up to 6 TeV at a 100 TeV collider. This work was supported by the U.S. Department of Energy under award No. DE-SC0008347.

  14. Development of cassava periclinal chimera may boost production.

    PubMed

    Bomfim, N; Nassar, N M A

    2014-02-10

    Plant periclinal chimeras are genotypic mosaics arranged concentrically. Attempts to produce them to combine different species have been made, but practical results have not been achieved. We report for the second time the development of a very productive interspecific periclinal chimera in cassava. It has very large edible roots, up to 14 kg per plant at one year old, compared to 2-3 kg in common varieties. The epidermal tissue formed was from Manihot esculenta cultivar UnB 032, and the subepidermal and internal tissue from the wild species Manihot fortalezensis. We determined the origin of tissues by meiotic and mitotic chromosome counts, plant anatomy, and morphology. Epidermal features displayed useful traits for deducing tissue origin: cell shape and size, trichome density, and stomatal length. Chimera roots had a wholly tuberous and edible constitution, with smaller starch granule size and similar distribution compared to cassava. Root size enlargement might have been due to an epigenetic effect. These results suggest a new line of improved crops based on the development of interspecific chimeras composed of different combinations of wild and cultivated species. It promises to boost cassava production through exceptional root enlargement.

  15. Linked-View Parallel Coordinate Plot Renderer

    SciTech Connect

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  16. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for improving the computer processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action for speeding up the execution of any computer program: optimizing the data structures, optimizing the application architecture, and improving the hardware. For the first two, the data structures usually employed in SAR processing are studied, the use of parallel structures is proposed, and the way the parallelization of the algorithms is implemented is described. The parallel application architecture classifies processes as fine- or coarse-grained; these are assigned to individual processors or divided among processors, each in its corresponding architecture. For the hardware, the platforms on which the parallel SAR process is implemented are studied: shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them gives some guidelines for achieving maximum throughput with minimum latency and maximum effectiveness with minimum cost, all with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development in the coming years for computationally demanding Synthetic Aperture Radar applications.

  17. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seems to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  18. First results of the Los Alamos polyphase boost converter-modulator

    SciTech Connect

    Doss, James D.; Gribble, R. F.; Lynch, M. T.; Rees, D. E.; Tallerico, P. J.; Reass, W. A.

    2001-01-01

    This paper describes the first full-scale electrical test results of the Los Alamos polyphase boost converter-modulator being developed for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. The converter-modulator provides 140 kV, 1.2 ms, 60 Hz pulses to a 5 MW, 805 MHz klystron. The system, which has 1 MW average power, derives its +/- 1250 Volt DC bus link voltages from a standard 3-phase utility 13.8 kV to 2100 volt transformer. An SCR pre-regulator provides a soft-start function in addition to correction of line and load variations, from no-load to full-load. Energy storage is provided by low-inductance self-clearing metallized hazy polypropylene traction capacitors. Each of the 3-phase H-bridge Insulated Gate Bipolar Transistor (IGBT) Pulse-Width Modulation (PWM) drivers is resonated with the amorphous nanocrystalline boost transformer and associated peaking circuits to provide zero-voltage-switching characteristics for the IGBTs. This design feature minimizes IGBT switching losses. By PWM of individual IGBT conduction angles, output pulse regulation with adaptive feedforward and feedback techniques is used to improve the klystron voltage pulse shape. In addition to the first operational results, this paper discusses the relevant design techniques associated with the boost converter-modulator topology.

  19. Series-Connected Buck Boost Regulators

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2005-01-01

    A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltages. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power
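The efficiency argument above (only the make-up power between input and output bus voltages is switched) can be put into a small worked example; the voltages, load, and the fixed 90% converter efficiency are illustrative assumptions, not the SCBBR's measured figures:

```python
def scbbr_power_budget(p_out, v_in, v_out, converter_eff=0.90):
    """Illustrative SCBBR power budget (assumed numbers, not the NASA design).

    Only the make-up power corresponding to the input/output voltage
    difference passes through the DC-to-DC converter; the remainder is
    passed straight through and incurs no switching loss in this model.
    """
    frac_switched = abs(v_out - v_in) / v_out     # fraction of power switched
    p_switched = p_out * frac_switched
    switching_loss = p_switched * (1.0 - converter_eff)
    return frac_switched, switching_loss

# Regulating a 1 kW load from a 110 V bus up to 120 V: only ~8% of the
# power is switched, so even a 90%-efficient converter loses little.
frac, loss = scbbr_power_budget(1000.0, v_in=110.0, v_out=120.0)
```

With these assumed numbers roughly 83 W is switched and about 8 W is lost, versus about 100 W lost if the same converter had to switch the full kilowatt, which is the point the abstract makes about small input/output voltage ratios.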

  20. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  1. Climbazole boosts activity of retinoids in skin.

    PubMed

    Adamus, J; Feng, L; Hawkins, S; Kalleberg, K; Lee, J-M

    2017-08-01

    To explore whether climbazole enhances retinoid-associated biological activities in vitro and in vivo. Primary human dermal fibroblasts (HDFs) were treated from six to 48 h with either retinoids (retinol, retinyl propionate, retinyl palmitate) alone or in combination with climbazole, and then assessed for cellular retinoic acid-binding protein 2 (CRABP2) mRNA expression by RT-qPCR. Next, skin equivalent (SE) cultures were topically treated with retinol or retinyl propionate, with or without climbazole, and then measured for biological changes in retinoid biomarkers. Lastly, an IRB-approved clinical study was conducted on the outer forearm of 16 subjects to ascertain the effects of low (0.02%) or high (0.1%) levels of retinol, retinyl propionate (0.5%), climbazole (0.5%) or a combination of retinol (0.02%)/climbazole (0.5%). Indicators of retinoid activities were measured after 3 weeks. Treatment of HDFs with retinol or retinyl propionate was unaffected by climbazole but, on its own, resulted in significantly (P < 0.01) higher sustained CRABP2 mRNA expression than treatment with retinyl palmitate or vehicle control. In SEs, climbazole combined with either retinol or retinyl propionate boosted retinoid-related activity beyond that of the retinoid alone, reflected by a dose-responsive downregulation of loricrin (LOR) and induction of keratin 4 (KRT4) proteins. In vivo, retinol (0.1%) and retinyl propionate (0.5%) significantly increased most evaluated biomarkers, as expected. Low-dose retinol or climbazole alone did not increase these biomarkers; in combination, however, significant (P < 0.05) increases in retinoid and ageing biomarkers were detected. Climbazole boosted retinoid activity both in the SE model, after combined topical treatment with either retinol or retinyl propionate, and in vivo, in combination with a low level of retinol. Based upon the evidence presented here, we suggest that the topical skin application of climbazole in combination with retinoids could

  2. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  3. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  4. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  6. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  7. 39% access time improvement, 11% energy reduction, 32 kbit 1-read/1-write 2-port static random-access memory using two-stage read boost and write-boost after read sensing scheme

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi

    2016-04-01

    We propose novel circuit techniques for 1 clock (1CLK) 1 read/1 write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. Two-stage read boost (TSR-BST) and write word line boost (WWL-BST) after read sensing schemes have been proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed using a 40 nm CMOS.

  8. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth
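The striping idea above (one application's data spread over several physical channels and reassembled in order) can be sketched with in-process queues standing in for the channels; this is a toy illustration of space division multiplexing with sequence-numbered blocks, not the paper's simulated protocol stack:

```python
import queue
import threading

NCHAN = 4
channels = [queue.Queue() for _ in range(NCHAN)]   # stand-ins for physical links

def sender(data, block=4):
    # Tag each block with a sequence number and round-robin it onto a channel.
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    for seq, blk in enumerate(blocks):
        channels[seq % NCHAN].put((seq, blk))
    for ch in channels:                            # end-of-stream marker per channel
        ch.put(None)

def receiver():
    # Drain every channel, then reassemble the stream by sequence number.
    got = {}
    for ch in channels:
        while True:
            item = ch.get()
            if item is None:
                break
            seq, blk = item
            got[seq] = blk
    return b"".join(got[s] for s in sorted(got))

msg = b"coarse-grain parallelism over several physical channels"
t = threading.Thread(target=sender, args=(msg,))
t.start()
result = receiver()
t.join()
```

The sequence numbers make reassembly independent of per-channel timing, which is also what gives the scheme its graceful degradation: a failed channel loses only the blocks assigned to it.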

  9. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
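The Ax = b step can be illustrated with a plain conjugate-gradient solver; the small dense 3x3 system below is a stand-in for the sparse SHAKE constraint matrix, and the point is that each iteration consists only of matrix-vector and vector operations, which parallelize, unlike the constraint-by-constraint iteration CG replaces:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain conjugate gradient for a symmetric positive-definite A.

    Each iteration needs one matrix-vector product plus dot products and
    vector updates -- all data-parallel operations.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual r = b - A x, with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# A small SPD system standing in for the SHAKE constraint matrix.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```

For an n-by-n SPD system, CG converges in at most n iterations in exact arithmetic, so the 3x3 example is solved essentially exactly.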

  10. Novel Control for Voltage Boosted Matrix Converter based Wind Energy Conversion System with Practicality

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod; Joshi, Raghuveer Raj; Yadav, Dinesh Kumar; Garg, Rahul Kumar

    2017-04-01

    This paper presents the implementation and investigation of a novel voltage boosted matrix converter (MC) based permanent magnet wind energy conversion system (WECS). In this paper, an on-line tuned adaptive fuzzy control algorithm cooperating with a reversed MC is proposed to yield maximum energy. The control system is implemented on a dSPACE DS1104 real-time board. Feasibility of the proposed system has been experimentally verified using a laboratory 1.2 kW prototype of the WECS under steady-state and dynamic conditions.

  11. Parallel and serial search in haptics.

    PubMed

    Overvliet, K E; Smeets, J B J; Brenner, E

    2007-10-01

    We propose a model that distinguishes between parallel and serial search in haptics. To test this model, participants performed three haptic search experiments in which a target and distractors were presented to their fingertips. The participants indicated a target's presence by lifting the corresponding finger, or its absence by lifting all fingers. In one experiment, the target was a cross and the distractors were circles. In another, the target was a vertical line and the distractors were horizontal lines. In both cases, we found a serial search pattern. In a final experiment, the target was a horizontal line and the distractors were surfaces without any contours. In this case, we found a parallel search pattern. We conclude that the model can describe our data very well.
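The distinction the model tests can be sketched as a toy reaction-time model: serial, self-terminating search predicts response times that grow with the number of fingers stimulated (with the target-absent slope twice the target-present slope), while parallel search predicts a flat function. The parameter values here are illustrative assumptions, not the paper's fitted haptic data:

```python
def predicted_rt(set_size, mode, target_present=True,
                 base=400.0, per_item=50.0):
    """Toy reaction-time (ms) model separating parallel from serial search.

    Parameter values (base RT, per-item cost) are illustrative assumptions.
    """
    if mode == "parallel":
        # All fingers are interrogated at once: RT is flat in set size.
        return base
    # Self-terminating serial scan: on average half the items are examined
    # before a present target is found; all items when it is absent.
    examined = set_size / 2.0 if target_present else set_size
    return base + per_item * examined

slope_present = predicted_rt(6, "serial") - predicted_rt(2, "serial")
slope_absent = predicted_rt(6, "serial", False) - predicted_rt(2, "serial", False)
```

Plotting predicted RT against set size for the two modes reproduces the signature patterns the experiments look for: a flat line for the contour-free surfaces and rising lines (absent steeper than present) for the cross/circle and line-orientation conditions.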

  12. Boosted Regression Tree Models to Explain Watershed ...

    EPA Pesticide Factsheets

    Boosted regression tree (BRT) models were developed to quantify the nonlinear relationships between landscape variables and nutrient concentrations in a mesoscale mixed land cover watershed during base-flow conditions. Factors that affect instream biological components, based on the Index of Biotic Integrity (IBI), were also analyzed. Seasonal BRT models at two spatial scales (watershed and riparian buffered area [RBA]) for nitrite-nitrate (NO2-NO3), total Kjeldahl nitrogen, and total phosphorus (TP) and annual models for the IBI score were developed. Two primary factors — location within the watershed (i.e., geographic position, stream order, and distance to a downstream confluence) and percentage of urban land cover (both scales) — emerged as important predictor variables. Latitude and longitude interacted with other factors to explain the variability in summer NO2-NO3 concentrations and IBI scores. BRT results also suggested that location might be associated with indicators of sources (e.g., land cover), runoff potential (e.g., soil and topographic factors), and processes not easily represented by spatial data indicators. Runoff indicators (e.g., Hydrological Soil Group D and Topographic Wetness Indices) explained a substantial portion of the variability in nutrient concentrations as did point sources for TP in the summer months. The results from our BRT approach can help prioritize areas for nutrient management in mixed-use and heavily impacted watershed
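The BRT idea (each small tree is fitted to the residual of the ensemble built so far, and added with a shrinking learning rate) can be sketched in miniature with depth-1 trees; the data below are synthetic and the model is not the watershed study's:

```python
def fit_stump(x, residual):
    """Best single-threshold split (depth-1 tree) minimizing squared error."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi, t=t, a=lmean, b=rmean: a if xi <= t else b

def boost(x, y, n_trees=50, lr=0.1):
    """Gradient boosting for squared error: each stump fits the residual."""
    pred = [0.0] * len(y)
    trees = []
    for _ in range(n_trees):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residual)
        trees.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in trees)

# A sharply nonlinear predictor/response relationship (synthetic step).
x = [i / 10.0 for i in range(20)]
y = [0.0 if xi < 1.0 else 1.0 for xi in x]
model = boost(x, y)
```

The ensemble recovers the step without any prespecified functional form, which is the property that makes BRT attractive for the nonlinear landscape-to-nutrient relationships described above.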

  13. Boosting nitrification by membrane-attached biofilm.

    PubMed

    Wu, C Y; Ushiwaka, S; Horii, H; Yamagiwa, K

    2006-01-01

    Nitrification is a key step for reliable biological nitrogen removal. In order to enhance nitrification in the activated sludge (AS) process, membrane-attached biofilm (MAB) was incorporated in a conventional activated sludge tank. Simultaneous organic carbon removal and nitrification of the MAB-incorporated activated sludge (AS + MAB) process was investigated with continuous wastewater treatment. The effluent TOC concentrations of the AS and AS + MAB processes were about 6.3 mg/L and 7.9 mg/L, respectively. The TOC removal efficiencies of both AS and AS + MAB were above 95% during the wastewater treatment, indicating excellent organic carbon removal performance in both processes. Little nitrification occurred in the AS process. On the contrary, successful nitrification was obtained with the AS + MAB process, with a nitrification efficiency of about 90%. The volumetric and surface nitrification rates were about 0.14 g/Ld and 6.5 g/m2d, respectively. The results clearly demonstrated that nitrification in the conventional AS process was boosted by MAB. Furthermore, the microfaunal population in the AS + MAB process was different from that in the AS process. The high concentration of rotifers in the AS + MAB process was expected to decrease the generation of excess sludge in the process.

  14. Designing boosting ensemble of relational fuzzy systems.

    PubMed

    Scherer, Rafał

    2010-10-01

    A method frequently used in classification systems for improving classification accuracy is to combine the outputs of several classifiers. Among various types of classifiers, fuzzy ones are tempting because they use intelligible fuzzy if-then rules. In the paper we build an AdaBoost ensemble of relational neuro-fuzzy classifiers. Relational fuzzy systems bond input and output fuzzy linguistic values by a binary relation; thus, compared to traditional fuzzy systems, fuzzy rules have additional weights - elements of a fuzzy relation matrix. Thanks to this, the system is better adjustable to the data during learning. In the paper an ensemble of relational fuzzy systems is proposed. The problem is that such an ensemble contains separate rule bases which cannot be directly merged. As the systems are separate, we cannot treat fuzzy rules coming from different systems as rules from the same (single) system. In the paper, the problem is addressed by a novel design of the fuzzy systems constituting the ensemble, resulting in normalization of individual rule bases during learning. The method described in the paper is tested on several known benchmarks and compared with other machine learning solutions from the literature.
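The AdaBoost machinery the ensemble relies on (reweight the training samples after each round so that the next base classifier concentrates on the mistakes) can be sketched with plain threshold classifiers standing in for the relational neuro-fuzzy systems; the data set is a toy example:

```python
import math

def stumps(xs):
    """Threshold classifiers: toy stand-ins for the paper's fuzzy classifiers."""
    hs = []
    for t in sorted(set(xs)):
        hs.append(lambda x, t=t: 1 if x > t else -1)
        hs.append(lambda x, t=t: -1 if x > t else 1)
    return hs

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                              # per-sample weights
    ensemble = []
    for _ in range(rounds):
        # Choose the weak classifier with the smallest weighted error.
        def werr(h):
            return sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
        best = min(stumps(xs), key=werr)
        err = max(werr(best), 1e-12)
        if err >= 0.5:
            break                                  # no weak learner helps
        alpha = 0.5 * math.log((1.0 - err) / err)  # vote weight of this round
        ensemble.append((alpha, best))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * y * best(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, 1, 1, -1, -1]      # an interval: no single threshold separates it
clf = adaboost(xs, ys)
```

No single stump classifies the interval correctly, but the weighted vote of a few of them does, which is the same leverage the paper seeks by combining fuzzy rule bases that individually cannot be merged.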

  15. Exploiting tRNAs to Boost Virulence.

    PubMed

    Albers, Suki; Czech, Andreas

    2016-01-19

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage-either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5'- and 3'-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification.

  16. Acetonitrile boosts conductivity of imidazolium ionic liquids.

    PubMed

    Chaban, Vitaly V; Voroshylova, Iuliia V; Kalugin, Oleg N; Prezhdo, Oleg V

    2012-07-05

    We apply a new methodology in force field generation (Phys. Chem. Chem. Phys. 2011, 13, 7910) to study binary mixtures of five imidazolium-based room-temperature ionic liquids (RTILs) with acetonitrile (ACN). Each RTIL is composed of the tetrafluoroborate (BF(4)) anion and dialkylimidazolium (MMIM) cations. The first alkyl group of MIM is methyl, and the other group is ethyl (EMIM), butyl (BMIM), hexyl (HMIM), octyl (OMIM), and decyl (DMIM). Upon addition of ACN, the ionic conductivity of RTILs increases by more than 50 times, significantly exceeding the impact of most known solvents. Unexpectedly, long-tailed imidazolium cations demonstrate the sharpest conductivity boost. This finding motivates us to revisit the application of RTIL/ACN binary systems as advanced electrolyte solutions. The conductivity correlates with the composition of ion aggregates, simplifying its predictability. Addition of ACN exponentially increases diffusion and decreases viscosity of the RTIL/ACN mixtures. Large amounts of ACN stabilize ion pairs, although they ruin greater ion aggregates.

  17. New ways to boost molecular dynamics simulations.

    PubMed

    Krieger, Elmar; Vriend, Gert

    2015-05-15

    We describe a set of algorithms that allow dihydrofolate reductase (DHFR, a common benchmark) to be simulated with the AMBER all-atom force field at 160 nanoseconds/day on a single Intel Core i7 5960X CPU (no graphics processing unit (GPU), 23,786 atoms, particle mesh Ewald (PME), 8.0 Å cutoff, correct atom masses, reproducible trajectory, CPU with 3.6 GHz, no turbo boost, 8 AVX registers). The new features include a mixed multiple time-step algorithm (reaching 5 fs), a tuned version of LINCS to constrain bond angles, the fusion of pair list creation and force calculation, pressure coupling with a "densostat," and exploitation of new CPU instruction sets like AVX2. The impact of Intel's new transactional memory, atomic instructions, and sloppy pair lists is also analyzed. The algorithms map well to GPUs and can automatically handle most Protein Data Bank (PDB) files including ligands. An implementation is available as part of the YASARA molecular modeling and simulation program from www.YASARA.org. © 2015 The Authors Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
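The mixed multiple-time-step idea (stiff, cheap forces integrated with a small inner step; slow forces applied as kicks at the outer step) can be sketched on a toy oscillator; this is an r-RESPA-style illustration under assumed force constants, not the paper's tuned algorithm:

```python
def respa_step(x, v, dt_slow, n_inner, f_fast, f_slow, m=1.0):
    """One multiple-time-step integration step (r-RESPA-style sketch).

    The slow force is applied as half-kicks around an inner velocity-Verlet
    loop that integrates the stiff (fast) force with dt_slow / n_inner.
    """
    dt_fast = dt_slow / n_inner
    v += 0.5 * dt_slow * f_slow(x) / m          # outer half-kick (slow force)
    for _ in range(n_inner):                    # inner loop: fast force only
        v += 0.5 * dt_fast * f_fast(x) / m
        x += dt_fast * v
        v += 0.5 * dt_fast * f_fast(x) / m
    v += 0.5 * dt_slow * f_slow(x) / m          # outer half-kick (slow force)
    return x, v

# A stiff "bond" force plus a weak, slowly varying force on one coordinate.
k_fast, k_slow = 100.0, 1.0
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10,
                      f_fast=lambda q: -k_fast * q,
                      f_slow=lambda q: -k_slow * q)
# Total energy of the combined oscillator should stay near its initial 50.5.
energy = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
```

The expensive slow force is evaluated ten times less often than the cheap stiff force, yet the splitting is symplectic, so the energy stays bounded; this is the trade that lets the outer step reach several femtoseconds in practice.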

  18. Exploiting tRNAs to Boost Virulence

    PubMed Central

    Albers, Suki; Czech, Andreas

    2016-01-01

    Transfer RNAs (tRNAs) are powerful small RNA entities that are used to translate nucleotide language of genes into the amino acid language of proteins. Their near-uniform length and tertiary structure as well as their high nucleotide similarity and post-transcriptional modifications have made it difficult to characterize individual species quantitatively. However, due to the central role of the tRNA pool in protein biosynthesis as well as newly emerging roles played by tRNAs, their quantitative assessment yields important information, particularly relevant for virus research. Viruses which depend on the host protein expression machinery have evolved various strategies to optimize tRNA usage—either by adapting to the host codon usage or encoding their own tRNAs. Additionally, several viruses bear tRNA-like elements (TLE) in the 5′- and 3′-UTR of their mRNAs. There are different hypotheses concerning the manner in which such structures boost viral protein expression. Furthermore, retroviruses use special tRNAs for packaging and initiating reverse transcription of their genetic material. Since there is a strong specificity of different viruses towards certain tRNAs, different strategies for recruitment are employed. Interestingly, modifications on tRNAs strongly impact their functionality in viruses. Here, we review those intersection points between virus and tRNA research and describe methods for assessing the tRNA pool in terms of concentration, aminoacylation and modification. PMID:26797637

  19. Parallel Unsteady Turbopump Flow Simulations for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2000-01-01

    An efficient solution procedure for time-accurate solutions of Incompressible Navier-Stokes equation is obtained. Artificial compressibility method requires a fast convergence scheme. Pressure projection method is efficient when small time-step is required. The number of sub-iteration is reduced significantly when Poisson solver employed with the continuity equation. Both computing time and memory usage are reduced (at least 3 times). Other work includes Multi Level Parallelism (MLP) of INS3D, overset connectivity for the validation case, experimental measurements, and computational model for boost pump.

  20. Breakdown of Spatial Parallel Coding in Children's Drawing

    ERIC Educational Resources Information Center

    De Bruyn, Bart; Davis, Alyson

    2005-01-01

    When drawing real scenes or copying simple geometric figures young children are highly sensitive to parallel cues and use them effectively. However, this sensitivity can break down in surprisingly simple tasks such as copying a single line where robust directional errors occur despite the presence of parallel cues. Before we can conclude that this…

  1. A Lengthy, Stable Marriage May Boost Stroke Survival

    MedlinePlus

    ... 162542.html A Lengthy, Stable Marriage May Boost Stroke Survival Lifelong singles fared the worst, study finds ... 14, 2016 WEDNESDAY, Dec. 14, 2016 (HealthDay News) -- Stroke patients may have better odds of surviving if ...

  2. Adding in Prescription for Partner Boosts STD Care

    MedlinePlus

    ... in Prescription for Partner Boosts STD Care Lower chlamydia, gonorrhea rates seen when one person can obtain ... States that let doctors prescribe drugs to treat chlamydia or gonorrhea in both partners when only one ...

  3. Cutting Salt a Health Boost for Kidney Patients

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_163628.html Cutting Salt a Health Boost for Kidney Patients Blood pressure ... Encouraging people with kidney disease to reduce their salt intake may help improve blood pressure and cut ...

  4. Xanax, Valium May Boost Pneumonia Risk in Alzheimer's Patients

    MedlinePlus

    ... html Xanax, Valium May Boost Pneumonia Risk in Alzheimer's Patients Researchers suspect people may breathe saliva or ... 10, 2017 MONDAY, April 10, 2017 (HealthDay News) -- Alzheimer's patients given sedatives such as Valium or Xanax ...

  5. Does a Low-Fat Dairy Habit Boost Parkinson's Risk?

    MedlinePlus

    ... html Does a Low-Fat Dairy Habit Boost Parkinson's Risk? Study showed 3 or more servings daily ... a slight rise in the risk of developing Parkinson's disease. Experts who reviewed the study stressed that ...

  6. Did El Nino Weather Give Zika a Boost?

    MedlinePlus

    ... fullstory_162611.html Did El Nino Weather Give Zika a Boost? Climate phenomenon could have helped infection- ... might have aided the explosive spread of the Zika virus throughout South America, a new study reports. ...

  7. Lung-Sparing Surgery May Boost Mesothelioma Survival

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_162720.html Lung-Sparing Surgery May Boost Mesothelioma Survival Treatment nearly ... 23, 2016 (HealthDay News) -- Surgery that preserves the lung, when combined with other therapies, appears to extend ...

  8. Kidney Disease May Boost Risk of Abnormal Heartbeat

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_167715.html Kidney Disease May Boost Risk of Abnormal Heartbeat And, the ... abnormal heart rhythm, a new report suggests. Chronic kidney disease can as much as double a patient's risk ...

  9. High-temperature alloys: Single-crystal performance boost

    NASA Astrophysics Data System (ADS)

    Schütze, Michael

    2016-08-01

    Titanium aluminide alloys are lightweight and have attractive properties for high-temperature applications. A new growth method that enables single-crystal production now boosts their mechanical performance.

  10. '12-Step' Strategy Boosts Success of Teen Drug Abuse Program

    MedlinePlus

    ... medlineplus.gov/news/fullstory_167650.html '12-Step' Strategy Boosts Success of Teen Drug Abuse Program Messages from recovering peers made an impact, study finds To use the sharing features on this ...

  11. Autism Greatly Boosts Kids' Injury Risk, Especially for Drowning

    MedlinePlus

    ... Boosts Kids' Injury Risk, Especially for Drowning Swimming lessons are essential -- even before other therapies, researcher says ... 2 and 3 years of age -- need swimming lessons as soon as possible, even before they start ...

  12. Solid state light source driver establishing buck or boost operation

    DOEpatents

    Palmer, Fred

    2017-08-29

    A solid state light source driver circuit that operates in either a buck convertor or a boost convertor configuration is provided. The driver circuit includes a controller, a boost switch circuit and a buck switch circuit, each coupled to the controller, and a feedback circuit, coupled to the light source. The feedback circuit provides feedback to the controller, representing a DC output of the driver circuit. The controller controls the boost switch circuit and the buck switch circuit in response to the feedback signal, to regulate current to the light source. The controller places the driver circuit in its boost converter configuration when the DC output is less than a rectified AC voltage coupled to the driver circuit at an input node. The controller places the driver circuit in its buck converter configuration when the DC output is greater than the rectified AC voltage at the input node.
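The controller's mode selection as described in the abstract reduces to comparing the fed-back DC output with the rectified AC voltage at the input node; a minimal sketch (the behavior at exact equality is our assumption, and the numbers are illustrative):

```python
def converter_mode(dc_output, rectified_ac):
    """Select the driver configuration per the abstract's description."""
    if dc_output < rectified_ac:
        return "boost"   # DC output below the rectified AC input
    if dc_output > rectified_ac:
        return "buck"    # DC output above the rectified AC input
    return "hold"        # exact equality: neither branch (assumption)

# As the rectified line voltage swings over a half-cycle, the controller
# moves between configurations while regulating current to the light source.
modes = [converter_mode(48.0, v) for v in (0.0, 30.0, 60.0, 90.0)]
```

In a real driver this comparison would run inside the current-regulation loop fed by the feedback circuit, with the boost and buck switch circuits driven accordingly.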

  13. Mapping robust parallel multigrid algorithms to scalable memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than line relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. The parallel implementation of a V-cycle multiple semi-coarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers is addressed. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. A mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited is described. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  14. Modulating Calcium Signals to Boost AON Exon Skipping for DMD

    DTIC Science & Technology

    2016-10-01

changing DMD treatment. We have assessed whether dantrolene, an already FDA-approved drug, can boost efficacy of AON exon skipping in the context of ... RNA Seq analysis to identify mechanisms of activity and specificity in order to guide discovery of second-generation skipping drugs or combinations ... for DMD. Here, we will assess whether dantrolene, an FDA-approved drug already demonstrated to boost efficacy of AON exon skipping in the context of

  15. SCALING OF THE ANOMALOUS BOOST IN RELATIVISTIC JET BOUNDARY LAYER

    SciTech Connect

    Zenitani, Seiji; Hesse, Michael; Klimas, Alex

    2010-04-01

    We investigate the one-dimensional interaction of a relativistic jet and an external medium. Relativistic magnetohydrodynamic simulations show an anomalous boost of the jet fluid in the boundary layer, as previously reported. We describe the boost mechanism using an ideal relativistic fluid and magnetohydrodynamic theory. The kinetic model is also examined for further understanding. Simple scaling laws for the maximum Lorentz factor are derived, and verified by the simulations.

  16. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching Yuen

    1991-01-01

A new Lagrangian formulation of the Euler equation is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.

  17. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  18. Parallel time integration software

    SciTech Connect

    2014-07-01

This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  19. Parallel architectures for vision

    SciTech Connect

    Maresca, M. ); Lavin, M.A. ); Li, H. )

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  20. Flow cytometric application of helper adenovirus (HAd) containing GFP gene flanked by two parallel loxP sites to evaluation of 293 cre-complementing cell line and monitoring of HAd in Gutless Ad production.

    PubMed

    Park, Min Tae; Hwang, Su-Jeong; Lee, Gyun Min

    2004-01-01

Gutless adenoviruses (GAds), i.e., adenoviruses with all viral genes deleted, were developed to minimize immune responses and toxic effects, making them a promising gene delivery tool in gene therapy. The Cre/loxP system has been widely used for GAd production. To produce GAd with a low amount of helper adenovirus (HAd) as a byproduct, it is indispensable to use 293Cre cells expressing a high level of Cre for GAd production. In this study, we constructed the HAd containing an enhanced green fluorescent protein gene flanked by two parallel loxP sites (HAd/GFP). The use of HAd/GFP with flow cytometry allows one to select 293Cre cells expressing a high level of Cre without using conventional Western blot analysis. Unlike conventional HAd titration methods such as the plaque assay and end-point dilution assay, it also allows one to rapidly monitor the HAd byproduct in earlier stages of GAd amplification. Taken together, the use of HAd/GFP with flow cytometry facilitates bioprocess development for efficient GAd production.

  1. Our intraoperative boost radiotherapy experience and applications

    PubMed Central

    Günay, Semra; Alan, Ömür; Yalçın, Orhan; Türkmen, Aygen; Dizdar, Nihal

    2016-01-01

Objective: To present our experience since November 2013, and case selection criteria for intraoperative boost radiotherapy (IObRT) that significantly reduces the local recurrence rate after breast conserving surgery in patients with breast cancer. Material and Methods: Patients who were suitable for IObRT were identified within the group of patients who were selected for breast conserving surgery at our breast council. A MOBETRON (mobile linear accelerator for IObRT) was used for IObRT during surgery. Results: Patients younger than 60 years old with <3 cm invasive ductal cancer in one focus (or two foci within 2 cm), with a histologic grade of 2–3, and a high possibility of local recurrence were admitted for IObRT application. Informed consent was obtained from all participants. Lumpectomy and sentinel lymph node biopsy were performed, and advancement flaps were prepared according to the size and inclination of the conus following evaluation of tumor size and surgical margins by pathology. Distance to the thoracic wall was measured, and a radiation oncologist and radiation physicist calculated the required dose. Anesthesia was regulated with slower ventilation frequency, without causing hypoxia. The skin and incision edges were protected, the field was radiated (with a 6 MeV electron beam of 10 Gy) and the incision was closed. In our cases, there were no major postoperative surgical or early radiotherapy related complications. Conclusion: The completion of another stage of local therapy with IObRT during surgery positively affects sequencing of other treatments like chemotherapy, hormonotherapy and radiotherapy, if required. IObRT increases disease free and overall survival, as well as quality of life in breast cancer patients. PMID:26985156

  2. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  3. Collisionless parallel shocks

    SciTech Connect

    Khabibrakhmanov, I.K. ); Galeev, A.A.; Galinsky, V.L. )

    1993-02-01

A collisionless parallel shock model is presented which is based on solitary-type solutions of the modified derivative nonlinear Schrodinger equation (MDNLS) for parallel Alfven waves. We generalize the standard derivative nonlinear Schrodinger equation in order to include the possible anisotropy of the plasma distribution function and higher-order Korteweg-de Vries type dispersion. Stationary solutions of MDNLS are discussed. The new mechanism, which can be called 'adiabatic', of ion reflection from the magnetic mirror of the parallel shock structure is the natural and essential feature of the parallel shock that introduces the irreversible properties into the nonlinear wave structure and may significantly contribute to the plasma heating upstream as well as downstream of the shock. The anisotropic nature of 'adiabatic' reflections leads to an asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, a nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization. The number of adiabatically reflected ions defines the threshold conditions of the fire-hose and mirror-type instabilities in the downstream and upstream regions and thus determines a parameter region in which the described laminar parallel shock structure can exist. 29 refs., 4 figs.

  4. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of their development status shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, currently speculative architectures such as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  5. Ion parallel closures

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Lee, Hankyu Q.; Held, Eric D.

    2017-02-01

    Ion parallel closures are obtained for arbitrary atomic weights and charge numbers. For arbitrary collisionality, the heat flow and viscosity are expressed as kernel-weighted integrals of the temperature and flow-velocity gradients. Simple, fitted kernel functions are obtained from the 1600 parallel moment solution and the asymptotic behavior in the collisionless limit. The fitted kernel parameters are tabulated for various temperature ratios of ions to electrons. The closures can be used conveniently without solving the kinetic equation or higher order moment equations in closing ion fluid equations.
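Schematically, a kernel-weighted closure of the kind described above takes the following generic form (an illustrative sketch only; the kernel symbols and coefficients here are placeholders, not the fitted kernels of the paper):

```latex
q_\parallel(\ell) \;=\; -n \int K_{qT}(\ell - \ell')\,
  \frac{\partial T}{\partial \ell'}\, \mathrm{d}\ell',
\qquad
\pi_\parallel(\ell) \;=\; -p \int K_{\pi u}(\ell - \ell')\,
  \frac{\partial u_\parallel}{\partial \ell'}\, \mathrm{d}\ell'
```

The nonlocal integrals reduce to the familiar local (Braginskii-type) gradient closures in the collisional limit, where the kernels become sharply peaked.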

  6. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  7. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  8. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
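The tension between Amdahl's law and the Sandia results can be made concrete with the two standard speedup models (an illustrative aside, not part of the original abstract; s is the serial fraction, n the processor count):

```python
# Amdahl's fixed-size speedup vs. the scaled (Gustafson-style) speedup
# that explains the Sandia results.

def amdahl(s: float, n: int) -> float:
    """Fixed-size speedup: bounded above by 1/s no matter how large n is."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson(s: float, n: int) -> float:
    """Scaled speedup: grows linearly in n when the problem grows with n."""
    return n - s * (n - 1)

# With a serial fraction of 0.4% on 1024 processors, the fixed-size model
# caps speedup near 200, while the scaled model stays above 1000 -- the
# regime in which the scalable Sandia problems were run.
```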

  9. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). Crunch is a general purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  10. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  11. Bridging the gap between parallel file systems and local file systems : a case study with PVFS.

    SciTech Connect

    Gu, P.; Wang, J.; Ross, R.; Mathematics and Computer Science; Univ. of Central Florida

    2008-09-01

    Parallel I/O plays an increasingly important role in today's data intensive computing applications. While much attention has been paid to parallel read performance, most of this work has focused on the parallel file system, middleware, or application layers, ignoring the potential for improvement through more effective use of local storage. In this paper, we present the design and implementation of segment-structured on-disk data grouping and prefetching (SOGP), a technique that leverages additional local storage to boost the local data read performance for parallel file systems, especially for those applications with partially overlapped access patterns. Parallel virtual file system (PVFS) is chosen as an example. Our experiments show that an SOGP-enhanced PVFS prototype system can outperform a traditional Linux-Ext3-based PVFS for many applications and benchmarks, in some tests by as much as 230% in terms of I/O bandwidth.

  12. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  13. Parallel Total Energy

    SciTech Connect

    Wang, Lin-Wang

    2004-10-21

This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  14. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, Michael

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  15. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  16. [The parallel saw blade].

    PubMed

    Mühldorfer-Fodor, M; Hohendorff, B; Prommersberger, K-J; van Schoonhoven, J

    2011-04-01

    For shortening osteotomy, two exactly parallel osteotomies are needed to assure a congruent adaption of the shortened bone after segment resection. This is required for regular bone healing. In addition, it is difficult to shorten a bone to a precise distance using an oblique segment resection. A mobile spacer between two saw blades keeps the distance of the blades exactly parallel during an osteotomy cut. The parallel saw blades from Synthes® are designed for 2, 2.5, 3, 4, and 5 mm shortening distances. Two types of blades are available (e.g., for transverse or oblique osteotomies) to assure precise shortening. Preoperatively, the desired type of osteotomy (transverse or oblique) and the shortening distance has to be determined. Then, the appropriate parallel saw blade is chosen, which is compatible to Synthes® Colibri with an oscillating saw attachment. During the osteotomy cut, the spacer should be kept as close to the bone as possible. Excessive force that may deform the blades should be avoided. Before manipulating the bone ends, it is important to determine that the bone is completely dissected by both saw blades to prevent fracturing of the corticalis with bony spurs. The shortening osteotomy is mainly fixated by plate osteosynthesis. For compression of the bone ends, the screws should be placed eccentrically in the plate holes. For an oblique osteotomy, an additional lag screw should be used.

  17. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…

  18. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  19. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
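The baseline being accelerated is the direct O(N^2) evaluation of the discrete Gauss transform. A minimal one-dimensional sketch (function and parameter names are illustrative, not from the paper):

```python
import math

def gauss_transform_direct(sources, targets, weights, h):
    """Direct O(N^2) sum: G(y_j) = sum_i w_i * exp(-|y_j - x_i|^2 / h^2)."""
    return [
        sum(w * math.exp(-((y - x) ** 2) / h ** 2)
            for x, w in zip(sources, weights))
        for y in targets
    ]
```

Every target visits every source, which is exactly the quadratic cost that the tree-based, plane-wave algorithms of the paper reduce to near-linear work per processor.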

  20. Progress in parallelizing XOOPIC

    NASA Astrophysics Data System (ADS)

    Mardahl, Peter; Verboncoeur, J. P.

    1997-11-01

XOOPIC (Object-Oriented Particle-in-Cell code for X11-based Unix workstations) is presently a serial 2-D 3v particle-in-cell plasma simulation (J.P. Verboncoeur, A.B. Langdon, and N.T. Gladd, ``An object-oriented electromagnetic PIC code,'' Computer Physics Communications 87 (1995) 199-211). The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems, and improved performance for processor-intensive problems. The MPI library is used to enable the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by 'virtual boundaries', objects which contain the local data and algorithms to perform the local field solve and particle communication between regions. This implementation will reduce the changes required in the rest of the program by parallelization. Specific implementation details such as the hiding of communication latency behind local computation will also be discussed.

  1. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  2. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  3. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity, on the ASCI Blue Pacific and ASCI Red machines.
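Prometheus itself targets unstructured 3-D elasticity, but the geometric multigrid idea it builds on can be shown in miniature. The following is a toy two-grid cycle for the 1-D Poisson problem -u'' = f with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation; all sizes and parameters are illustrative.

```python
import numpy as np

n = 63                                     # fine-grid interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)           # exact solution is sin(pi x)

def apply_A(u):
    # tridiagonal (-1, 2, -1)/h^2 operator with zero Dirichlet boundaries
    Au = np.empty_like(u)
    Au[0] = (2*u[0] - u[1]) / h**2
    Au[-1] = (2*u[-1] - u[-2]) / h**2
    Au[1:-1] = (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2
    return Au

def smooth(u, f, iters=3, w=2.0/3.0):
    # weighted Jacobi: u <- u + w D^{-1} (f - A u), with D = 2/h^2
    for _ in range(iters):
        u = u + w * (f - apply_A(u)) * h**2 / 2.0
    return u

nc = (n - 1) // 2                          # coarse grid: every other fine point
H = 2 * h
Ac = (2*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2

def restrict(r):                           # full weighting
    return 0.25 * (r[0:-2:2] + 2*r[1:-1:2] + r[2::2])

def prolong(e):                            # linear interpolation back to fine grid
    u = np.zeros(n)
    u[1::2] = e
    u[0] = 0.5 * e[0]
    u[-1] = 0.5 * e[-1]
    u[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    return u

u = np.zeros(n)
r0 = np.linalg.norm(f - apply_A(u))
for _ in range(5):                         # five two-grid V(3,3) cycles
    u = smooth(u, f)
    r = f - apply_A(u)
    u = u + prolong(np.linalg.solve(Ac, restrict(r)))   # coarse-grid correction
    u = smooth(u, f)
reduction = np.linalg.norm(f - apply_A(u)) / r0
assert reduction < 1e-2                    # residual drops fast, independent of n
```

The mesh-independent residual reduction per cycle is what makes multigrid attractive at the problem sizes the record quotes.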

  4. Parallel Incremental Compilation

    DTIC Science & Technology

    1990-06-01

    8320136. The Government has certain rights in this material. This work was also supported by ONR/DARPA Research Contract number N00014-82-K-0193. ...for a deletion at line k of m lines: return a fragment list with two fragments: (1) carry-over: position 1 and length k-1 (if k=1 then discard this fragment); (2) carry-over: position k+m and length infinity (rest of file). (b) for an insertion at line k of m lines: return a list with three
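The fragment-list rules quoted in this record can be made concrete. A hedged reconstruction follows: the record is truncated, so the three-fragment form of the insertion case is an assumption, and the `(kind, start, length)` tuple encoding and `apply_fragments` helper are illustrative, not the report's data structures.

```python
import math

INF = math.inf

def delete_fragments(k, m):
    """Fragment list after deleting m lines starting at line k."""
    frags = []
    if k > 1:                               # discard the leading carry-over when k == 1
        frags.append(("carry", 1, k - 1))
    frags.append(("carry", k + m, INF))     # rest of file
    return frags

def insert_fragments(k, m):
    """Fragment list after inserting m lines before line k (assumed 3-fragment form)."""
    frags = []
    if k > 1:
        frags.append(("carry", 1, k - 1))
    frags.append(("new", k, m))             # the freshly inserted text
    frags.append(("carry", k, INF))         # rest of the old file
    return frags

def apply_fragments(lines, frags, new_text=()):
    """Materialize a fragment list against the old file contents."""
    out = []
    for kind, start, length in frags:
        if kind == "carry":
            stop = len(lines) if length is INF else start - 1 + length
            out.extend(lines[start - 1:stop])
        else:
            out.extend(new_text)
    return out

old = ["a", "b", "c", "d", "e"]
assert apply_fragments(old, delete_fragments(2, 2)) == ["a", "d", "e"]
assert apply_fragments(old, insert_fragments(3, 1), ["X"]) == ["a", "b", "X", "c", "d", "e"]
```

Representing edits as carry-over fragments lets an incremental compiler reuse everything outside the touched span, which is the mechanism the record is describing.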

  5. High-dose, high-precision treatment options for boosting cancer of the nasopharynx.

    PubMed

    Levendag, Peter C; Lagerwaard, Frank J; de Pan, Connie; Noever, Inge; van Nimwegen, Arent; Wijers, Oda; Nowak, Peter J C M

    2002-04-01

    The aim of the study is to define the role and type of high-dose, high-precision radiation therapy for boosting early staged T1,2a, but in particular locally advanced, T2b-4, nasopharyngeal cancer (NPC). Ninety-one patients with primary stage I-IVB NPC, were treated between 1991 and 2000 with 60-70Gy external beam radiation therapy (ERT) followed by 11-18Gy endocavitary brachytherapy (ECBT) boost. In 1996, for stage III-IVB disease, cisplatinum (CDDP)-based neoadjuvant chemotherapy (CHT) was introduced per protocol. Patients were analyzed for local control and overall survival. For a subset of 18 patients, a magnetic resonance imaging (MRI) scan at 46Gy was obtained. After matching with pre-treatment computed tomogram, patients (response) were graded into four categories; i.e. LD (T1,2a, with limited disease, i.e. disease confined to nasopharynx), LRD (T2b, with limited residual disease), ERD (T2b, with extensive residual disease), or patients initially diagnosed with T3,4 tumors. Dose distributions for ECBT (Plato-BPS v. 13.3, Nucletron) were compared to parallel-opposed three-dimensional conformal radiation therapy (Cadplan, Varian Dosetek v. 3.1), intensity modulated radiation therapy (IMRT) (Helios, Varian) and stereotactic radiotherapy (SRT) (X-plan, Radionics v. 2.02). For stage T1,2N0,1 tumors, at 2 years local control of 96% and overall survival of 80% were observed. For the poorest subset of patients, well/moderate/poorly differentiated T3,4 tumors, local control and overall survival at 2 years with CHT were 67 and 67%, respectively, vs. local control of 20% and overall survival of 12% without CHT. For LD and LRD, conformal target coverage and optimal sparing can be obtained with brachytherapy. For T2b-ERD and T3,4 tumors, these planning goals are better achieved with SRT and/or IMRT. 
The dosimetric findings, the ease of application of the brachytherapy procedure, and the clinical results in early staged NPC necessitate ERT combined with brachytherapy boost.

  6. How Vein Sealing Boosts Fracture Opening

    NASA Astrophysics Data System (ADS)

    Nüchter, Jens-Alexander

    2015-04-01

    an increase in the fracture opening rates. (4) At constant strain rates, the rate of fracture opening increases with increasing strain. These results suggest that vein sealing boosts the rate of fracture opening, and contributes to development of low-aspect ratio veins.

  7. Boosted Fast Flux Loop Final Report

    SciTech Connect

    Boosted Fast Flux Loop Project Staff

    2009-09-01

    The Boosted Fast Flux Loop (BFFL) project was initiated to determine basic feasibility of designing, constructing, and installing in a host irradiation facility, an experimental vehicle that can replicate with reasonable fidelity the fast-flux test environment needed for fuels and materials irradiation testing for advanced reactor concepts. Originally called the Gas Test Loop (GTL) project, the activity included (1) determination of requirements that must be met for the GTL to be responsive to potential users, (2) a survey of nuclear facilities that may successfully host the GTL, (3) conceptualizing designs for hardware that can support the needed environments for neutron flux intensity and energy spectrum, atmosphere, flow, etc. needed by the experimenters, and (4) examining other aspects of such a system, such as waste generation and disposal, environmental concerns, needs for additional infrastructure, and requirements for interfacing with the host facility. A revised project plan included requesting an interim decision, termed CD-1A, that had objectives of establishing the site for the project at the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL), deferring the CD-1 application, and authorizing a research program that would resolve the most pressing technical questions regarding GTL feasibility, including issues relating to the use of booster fuel in the ATR. Major research tasks were (1) hydraulic testing to establish flow conditions through the booster fuel, (2) mini-plate irradiation tests and post-irradiation examination to alleviate concerns over corrosion at the high heat fluxes planned, (3) development and demonstration of booster fuel fabrication techniques, and (4) a review of the impact of the GTL on the ATR safety basis. A revised cooling concept for the apparatus was conceptualized, which resulted in renaming the project to the BFFL. Before the subsequent CD-1 approval request could be made, a decision was made in April 2006

  8. An optimized posterior axillary boost technique in radiation therapy to supraclavicular and axillary lymph nodes: a comparative study.

    PubMed

    Hernandez, Victor; Arenas, Meritxell; Müller, Katrin; Gomez, David; Bonet, Marta

    2013-01-01

    To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes. © 2013 American Association of Medical Dosimetrists.

  9. An optimized posterior axillary boost technique in radiation therapy to supraclavicular and axillary lymph nodes: A comparative study

    SciTech Connect

    Hernandez, Victor; Arenas, Meritxell; Müller, Katrin; Gomez, David; Bonet, Marta

    2013-01-01

    To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated and a statistical analysis comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques originated excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.

  10. Finite parallel wavelengths and ionospheric structuring

    SciTech Connect

    Sperling, J.L.

    1983-05-01

    Much large-scale fluid structuring in the ionosphere has been attributed to the flutelike Rayleigh-Taylor and E x B gradient drift instabilities. The finite extent of the ionosphere and the spatial variation of plasma parameters within the ionosphere suggest that these instabilities can be expected to vary along magnetic field lines. The variations are taken into account by assuming a nonzero component of wave number parallel to the ambient magnetic field. The accompanying electric fields are not purely electrostatic but imply mode magnetic fields that may permit plasma transport across density gradients that are larger than classical cross-field diffusion. This enhanced diffusion, which is most effective for sufficiently large and tenuous plasma clouds, can limit the minimum size of striations to a larger value than classical considerations alone permit. Finite parallel wave number has the additional effect of allowing ion free energy to be transferred to parallel electron motion and so the Rayleigh-Taylor and E x B gradient drift instabilities can contribute to structuring at conjugate points along magnetic field lines where electron energy is deposited. Also, the transfer of free energy indicates that long-term structural persistence requires a continuous source of ion free energy. Some finite parallel wavelength effects, particularly those relating to transport, can be included in present two-dimensional striation simulations.

  11. Effects of Nasal Corticosteroids on Boosts of Systemic Allergen-Specific IgE Production Induced by Nasal Allergen Exposure

    PubMed Central

    Egger, Cornelia; Lupinek, Christian; Ristl, Robin; Lemell, Patrick; Horak, Friedrich; Zieglmayer, Petra; Spitzauer, Susanne; Valenta, Rudolf; Niederberger, Verena

    2015-01-01

    Background Allergen exposure via the respiratory tract and in particular via the nasal mucosa boosts systemic allergen-specific IgE production. Intranasal corticosteroids (INCS) represent a first line treatment of allergic rhinitis but their effects on this boost of allergen-specific IgE production are unclear. Aim Here we aimed to determine in a double-blind, placebo-controlled study whether therapeutic doses of an INCS preparation, i.e., nasal fluticasone propionate, have effects on boosts of allergen-specific IgE following nasal allergen exposure. Methods Subjects (n = 48) suffering from grass and birch pollen allergy were treated with daily fluticasone propionate or placebo nasal spray for four weeks. After two weeks of treatment, subjects underwent nasal provocation with either birch pollen allergen Bet v 1 or grass pollen allergen Phl p 5. Bet v 1 and Phl p 5-specific IgE, IgG1–4, IgM and IgA levels were measured in serum samples obtained at the time of provocation and one, two, four, six and eight weeks thereafter. Results Nasal allergen provocation induced a median increase to 141.1% of serum IgE levels to allergens used for provocation but not to control allergens 4 weeks after provocation. There were no significant differences regarding the boosts of allergen-specific IgE between INCS- and placebo-treated subjects. Conclusion In conclusion, the application of fluticasone propionate had no significant effects on the boosts of systemic allergen-specific IgE production following nasal allergen exposure. Trial Registration http://clinicaltrials.gov/ NCT00755066 PMID:25705889

  12. Maximizing boosted top identification by minimizing N-subjettiness

    NASA Astrophysics Data System (ADS)

    Thaler, Jesse; van Tilburg, Ken

    2012-02-01

    N-subjettiness is a jet shape designed to identify boosted hadronic objects such as top quarks. Given N subjet axes within a jet, N-subjettiness sums the angular distances of jet constituents to their nearest subjet axis. Here, we generalize and improve on N-subjettiness by minimizing over all possible subjet directions, using a new variant of the k-means clustering algorithm. On boosted top benchmark samples from the BOOST2010 workshop, we demonstrate that a simple cut on the 3-subjettiness to 2-subjettiness ratio yields 20% (50%) tagging efficiency for a 0.23% (4.1%) fake rate, making N-subjettiness a highly effective boosted top tagger. N-subjettiness can be modified by adjusting an angular weighting exponent, and we find that the jet broadening measure is preferred for boosted top searches. We also explore multivariate techniques, and show that additional improvements are possible using a modified Fisher discriminant. Finally, we briefly mention how our minimization procedure can be extended to the entire event, allowing the event shape N-jettiness to act as a fixed-N cone jet algorithm.
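The axis minimization above can be sketched with a pT-weighted Lloyd (k-means) iteration. This toy uses the beta = 2 angular weighting (squared distances), for which each Lloyd step provably lowers tau; the paper's preferred broadening measure (beta = 1) needs the modified update it describes. Phi periodicity is ignored for brevity, and all names, seeds, and the synthetic "jet" are illustrative.

```python
import numpy as np

R0 = 1.0  # characteristic jet radius used for normalization

def tau(pt, eta, phi, axes):
    # tau_N = sum_i pT_i * min_k dR(i,k)^2 / (sum_i pT_i * R0^2)   (beta = 2)
    d2 = np.min([(eta - ae)**2 + (phi - ap)**2 for ae, ap in axes], axis=0)
    return float(np.sum(pt * d2) / (np.sum(pt) * R0**2))

def minimize_axes(pt, eta, phi, axes, iters=20):
    axes = [tuple(a) for a in axes]
    for _ in range(iters):
        d2 = np.array([(eta - ae)**2 + (phi - ap)**2 for ae, ap in axes])
        nearest = np.argmin(d2, axis=0)          # assign constituents to axes
        for k in range(len(axes)):
            sel = nearest == k
            if np.any(sel):                      # keep old axis if cluster empties
                w = pt[sel]                      # pT-weighted centroid update
                axes[k] = (np.average(eta[sel], weights=w),
                           np.average(phi[sel], weights=w))
    return axes

# Synthetic two-prong "jet": two collimated clusters of constituents.
rng = np.random.default_rng(42)
m = 60
pt = rng.uniform(0.5, 2.0, m)
eta = np.concatenate([rng.normal(-0.4, 0.1, m // 2), rng.normal(0.4, 0.1, m - m // 2)])
phi = rng.normal(0.0, 0.15, m)

axes2 = minimize_axes(pt, eta, phi, [(-0.4, 0.0), (0.4, 0.0)])
tau2 = tau(pt, eta, phi, axes2)
# Seed the 3-axis search with the 2-axis solution plus the farthest constituent,
# so tau3 starts no higher than tau2 and Lloyd steps only lower it.
far = int(np.argmax(np.min([(eta - ae)**2 + (phi - ap)**2 for ae, ap in axes2], axis=0)))
axes3 = minimize_axes(pt, eta, phi, axes2 + [(eta[far], phi[far])])
tau3 = tau(pt, eta, phi, axes3)

assert 0.0 <= tau3 <= tau2 < 1.0
ratio = tau3 / tau2    # the tau3/tau2 discriminant: small for 2-prong jets
```

A boosted top (three-prong) jet would give a tau3/tau2 ratio much closer to one than this two-prong example, which is exactly what the cut in the abstract exploits.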

  13. Parallel Memory Addressing Using Coincident Optical Pulses

    DTIC Science & Technology

    1989-09-15

    Donald M. Chiarulli, Rami G. Melhem, and Steven P. Levitan, University of Pittsburgh. Common-bus, shared-memory multiprocessors are the most widely used parallel processing systems, but a decoder can process only a single encoded address, thus limiting memory access to a single location. Memory interleaving tech... the case reduces to a more manageable 2 ln n lines controlling processing units at the interface between the electronic memory structure and the optical system...

  14. Managing first-line failure.

    PubMed

    Cooper, David A

    2014-01-01

    WHO standard of care for failure of a first regimen, usually two N(t)RTIs and an NNRTI, consists of a ritonavir-boosted protease inhibitor with a change in N(t)RTIs. Until recently, there was no evidence to support these recommendations, which were based on expert opinion. Two large randomized clinical trials, SECOND-LINE and EARNEST, both showed excellent response rates (>80%) for the WHO standard of care and indicated that a novel regimen of a boosted protease inhibitor with an integrase inhibitor had equal efficacy with no difference in toxicity. In EARNEST, a third arm consisting of induction with the combined protease and integrase inhibitor followed by protease inhibitor monotherapy maintenance was inferior and led to substantial (20%) protease inhibitor resistance. These studies confirm the validity of the current WHO recommendations and point to a novel public health approach of using two new classes for second-line therapy when standard first-line therapy has failed, which avoids resistance genotyping. Notwithstanding, adherence must be stressed in those failing first-line treatments. Protease inhibitor monotherapy is not suitable for a public health approach in low- and middle-income countries.

  15. Detection of multiple sinusoids using a parallel ALE

    SciTech Connect

    David, R.A.

    1984-01-01

    This paper introduces an Adaptive Line Enhancer (ALE) whose parallel structure enables the detection and enhancement of multiple sinusoids. A function describing the performance surface is derived for the case where several line signals are buried in white noise. A steepest descent adaptive algorithm is derived, and simulations are used to demonstrate its performance.
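A single ALE branch of the kind the paper replicates in parallel can be sketched as a delay-and-predict filter. This stand-in uses a normalized-LMS update rather than the paper's steepest-descent algorithm, and all parameters (delay, tap count, step size, test frequencies) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6000
t = np.arange(N)
clean = np.sin(2*np.pi*0.05*t) + 0.5*np.sin(2*np.pi*0.12*t)   # two buried lines
x = clean + rng.normal(0.0, 0.5, N)                           # white noise added

delay, L, mu = 1, 32, 0.05      # decorrelation delay, filter taps, NLMS step size
w = np.zeros(L)
y = np.zeros(N)
for i in range(L + delay, N):
    u = x[i - delay - L + 1 : i - delay + 1][::-1]   # delayed reference tap vector
    y[i] = w @ u                                     # ALE output: predicted line content
    e = x[i] - y[i]                                  # broadband prediction error
    w += mu * e * u / (u @ u + 1e-8)                 # normalized LMS update

# After convergence the output tracks the sinusoids, not the white noise,
# because the delay decorrelates the noise but not the periodic components.
tail = slice(-1000, None)
corr = np.corrcoef(y[tail], clean[tail])[0, 1]
```

The parallel structure in the record would split this into one such branch per detected line, so each branch's shorter filter can lock onto a single sinusoid.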

  16. 10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. UNDERSIDE, VIEW PARALLEL TO BRIDGE, SHOWING FLOOR SYSTEM AND SOUTH PIER. LOOKING SOUTHEAST. - Route 31 Bridge, New Jersey Route 31, crossing disused main line of Central Railroad of New Jersey (C.R.R.N.J.) (New Jersey Transit's Raritan Valley Line), Hampton, Hunterdon County, NJ

  17. Digital parallel-to-series pulse-train converter

    NASA Technical Reports Server (NTRS)

    Hussey, J.

    1971-01-01

    Circuit converts number represented as two level signal on n-bit lines to series of pulses on one of two lines, depending on sign of number. Converter accepts parallel binary input data and produces number of output pulses equal to number represented by input data.
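The converter's behavior can be modeled in a few lines. This is a software analogy of the circuit, not its implementation: the function name, the MSB-first bit-list input, and the list-of-pulses encoding are all illustrative.

```python
def parallel_to_pulse_train(bits, negative=False):
    """Model the converter: n parallel bit lines (MSB first) plus a sign
    become a train of pulses on one of two output lines."""
    value = 0
    for b in bits:                      # weigh the parallel bit lines
        value = (value << 1) | (1 if b else 0)
    line = "minus" if negative else "plus"
    return line, [1] * value            # one pulse per unit of magnitude

line, pulses = parallel_to_pulse_train([1, 0, 1])            # binary 101 = 5
assert (line, len(pulses)) == ("plus", 5)
line, pulses = parallel_to_pulse_train([0, 1, 1], negative=True)
assert (line, len(pulses)) == ("minus", 3)
```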

  18. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  19. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  20. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  2. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
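The two-phase scheme described above can be sketched sequentially. In this toy, objects are intervals in [0, 1), the grid has n equal portions, and each loop iteration stands in for one processor; the interval representation and function names are illustrative, not the patent's.

```python
def overlapped_portions(obj, n):
    """Indices of grid portions [i/n, (i+1)/n) that interval (a, b) touches."""
    a, b = obj
    return [i for i in range(n) if max(a, i / n) < min(b, (i + 1) / n)]

def parallel_populate(objects, n):
    # Phase 1: divide the objects into n distinct sets; "processor" p determines
    # by which portions each object in its set is at least partially bounded.
    object_sets = [objects[p::n] for p in range(n)]
    pairs = []                                    # (portion, object) messages
    for p in range(n):                            # each p models one processor
        for obj in object_sets[p]:
            pairs.extend((i, obj) for i in overlapped_portions(obj, n))
    # Phase 2: processor p owns portion p and populates it with every object
    # determined (possibly by another processor) to be bounded by it.
    return [[obj for i, obj in pairs if i == p] for p in range(n)]

objects = [(0.05, 0.30), (0.20, 0.55), (0.70, 0.72), (0.40, 0.95)]
grid = parallel_populate(objects, 4)
# Brute-force check: every portion holds exactly the intervals overlapping it.
for i in range(4):
    expect = [o for o in objects if max(o[0], i / 4) < min(o[1], (i + 1) / 4)]
    assert sorted(grid[i]) == sorted(expect)
```

Splitting the work this way means no processor ever writes into another's portion, which is what makes both phases embarrassingly parallel.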

  3. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  4. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
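The DFT-IDFT overlap-and-save method underlying these architectures can be shown in a minimal form: the input is cut into overlapping blocks, each block is filtered by multiplication with the filter's DFT, and the first M-1 (circularly wrapped) output samples of every block are discarded. The block size and test signals here are illustrative; the sub-convolution splitting of long filters is omitted.

```python
import numpy as np

def overlap_save(x, h, N=64):
    M = len(h)
    assert N > 2 * (M - 1), "block must comfortably exceed the filter overlap"
    step = N - (M - 1)                      # new input samples consumed per block
    H = np.fft.fft(h, N)                    # filter fixed in the frequency domain
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(N)])
    out = []
    pos = 0
    while pos + N <= len(xp):
        blk = np.fft.ifft(np.fft.fft(xp[pos:pos + N]) * H)
        out.append(blk[M - 1:].real)        # drop the circularly wrapped samples
        pos += step
    return np.concatenate(out)[:len(x) + M - 1]

rng = np.random.default_rng(3)
x = rng.normal(size=500)
h = rng.normal(size=17)
# The block-wise frequency-domain result matches direct linear convolution.
assert np.allclose(overlap_save(x, h), np.convolve(x, h))
```

The report's point about efficiency follows from the same structure: the DFT size N is chosen for the desired processing-rate reduction, not by the filter order.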

  5. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  6. Parallel Computing in Optimization.

    DTIC Science & Technology

    1984-10-01

    include: Heller [1978] and Sameh [1977] (surveys of algorithms), Duff [1983], Fong and Jordan [1977], Jordan [1979], and Rodrigue [1982] (all mainly...constrained concave function by partition of feasible domain", Mathematics of Operations Research 8, pp. ... A. Sameh [1977], "Numerical parallel algorithms: a survey", in High Speed Computer and Algorithm Organization, D. Kuck, D. Lawrie, and A. Sameh, eds., Academic Press, pp. 207-228. ... J. Siegel

  7. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

    Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan. September 2013. Approved for public release... ERDC TR-13-8, September 2013: Development of Parallel GSSHA. Paul R. Eller and Jing-Ru C. Cheng, Information Technology Laboratory, US Army Engineer...

  8. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

    A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  9. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  10. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  11. Boosted Fast Flux Loop Alternative Cooling Assessment

    SciTech Connect

    Glen R. Longhurst; Donna Post Guillen; James R. Parry; Douglas L. Porter; Bruce W. Wallace

    2007-08-01

    The Gas Test Loop (GTL) Project was instituted to develop the means for conducting fast neutron irradiation tests in a domestic radiation facility. It made use of booster fuel to achieve the high neutron flux, a hafnium thermal neutron absorber to attain the high fast-to-thermal flux ratio, a mixed gas temperature control system for maintaining experiment temperatures, and a compressed gas cooling system to remove heat from the experiment capsules and the hafnium thermal neutron absorber. This GTL system was determined to provide a fast (E > 0.1 MeV) flux greater than 1.0E+15 n/cm2-s with a fast-to-thermal flux ratio in the vicinity of 40. However, the estimated system acquisition cost from earlier studies was deemed to be high. That cost was strongly influenced by the compressed gas cooling system for experiment heat removal. Designers were challenged to find a less expensive way to achieve the required cooling. This report documents the results of the investigation leading to an alternatively cooled configuration, referred to now as the Boosted Fast Flux Loop (BFFL). This configuration relies on a composite material comprised of hafnium aluminide (Al3Hf) in an aluminum matrix to transfer heat from the experiment to pressurized water cooling channels while at the same time providing absorption of thermal neutrons. Investigations into the performance this configuration might achieve showed that it should perform at least as well as its gas-cooled predecessor. Physics calculations indicated that the fast neutron flux averaged over the central 40 cm (16 inches) relative to ATR core mid-plane in irradiation spaces would be about 1.04E+15 n/cm2-s. The fast-to-thermal flux ratio would be in excess of 40. Further, the particular configuration of cooling channels was relatively unimportant compared with the total amount of water in the apparatus in determining performance. 
Thermal analyses conducted on a candidate configuration showed the design of the water coolant and

  12. (In)Direct detection of boosted dark matter

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Cui, Yanou; Necib, Lina; Thaler, Jesse

    2016-05-01

    We present a new multi-component dark matter model with a novel experimental signature that mimics neutral current interactions at neutrino detectors. In our model, the dark matter is composed of two particles, a heavier dominant component that annihilates to produce a boosted lighter component that we refer to as boosted dark matter. The lighter component is relativistic and scatters off electrons in neutrino experiments to produce Cherenkov light. This model combines the indirect detection of the dominant component with the direct detection of the boosted dark matter. Directionality can be used to distinguish the dark matter signal from the atmospheric neutrino background. We discuss the viable region of parameter space in current and future experiments.

  13. Noise reduction effect and analysis through serial multiple sampling in a CMOS image sensor with floating diffusion boost-driving

    NASA Astrophysics Data System (ADS)

    Wakabayashi, Hayato; Yamaguchi, Keiji; Yamagata, Yuuki

    2017-04-01

    We have developed a 1/2.3-in. 10.3 mega pixel back-illuminated CMOS image sensor utilizing serial multiple sampling. This sensor achieves an RMS random noise of 1.3e- and row temporal noise (RTN) of 0.19e-. Serial multiple sampling is realized with a column inline averaging technique without the need for additional processing circuitry. Pixel readout is accomplished utilizing a 4-shared-pixel floating diffusion (FD) boost-driving architecture. RTN caused by column parallel readout was analyzed considering the transfer function at the system level and the developed model was verified by measurement data taken at each sampling time. This model demonstrates the RTN improvement of -1.6 dB in a parallel multiple readout architecture.
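The noise benefit of serial multiple sampling follows from averaging uncorrelated read noise, which falls as the square root of the number of samples averaged; a minimal sketch (assuming independent Gaussian read noise, a simplification of the sensor's actual noise spectrum):

```python
import random
import statistics

def rms_noise(n_samples, read_noise_e=1.3, trials=20000, seed=1):
    """RMS noise of the mean of n_samples independent Gaussian reads."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, read_noise_e)
                              for _ in range(n_samples))
             for _ in range(trials)]
    return statistics.pstdev(means)

single = rms_noise(1)
averaged = rms_noise(4)
print(round(single / averaged, 1))  # averaging 4 samples roughly halves RMS noise
```

With four averaged samples the RMS noise of the mean is about half that of a single read, the 1/sqrt(N) scaling that column inline averaging exploits.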

  14. Conditional Random Field (CRF)-Boosting: Constructing a Robust Online Hybrid Boosting Multiple Object Tracker Facilitated by CRF Learning

    PubMed Central

    Yang, Ehwa; Gwak, Jeonghwan; Jeon, Moongu

    2017-01-01

Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue in realizing online MOT is how to associate noisy object detection results on a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, the learned CRF is used to generate reliable low-level tracklets, which are then used as the input of the hybrid boosting. Whereas existing data association methods based on boosting algorithms require training data with ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to its synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we conclude that the benefit of the proposed hybrid approach over other competitive MOT systems is noticeable. PMID:28304366

  15. Self-boosting vaccines and their implications for herd immunity.

    PubMed

    Arinaminpathy, Nimalan; Lavine, Jennie S; Grenfell, Bryan T

    2012-12-04

    Advances in vaccine technology over the past two centuries have facilitated far-reaching impact in the control of many infections, and today's emerging vaccines could likewise open new opportunities in the control of several diseases. Here we consider the potential, population-level effects of a particular class of emerging vaccines that use specific viral vectors to establish long-term, intermittent antigen presentation within a vaccinated host: in essence, "self-boosting" vaccines. In particular, we use mathematical models to explore the potential role of such vaccines in situations where current immunization raises only relatively short-lived protection. Vaccination programs in such cases are generally limited in their ability to raise lasting herd immunity. Moreover, in certain cases mass vaccination can have the counterproductive effect of allowing an increase in severe disease, through reducing opportunities for immunity to be boosted through natural exposure to infection. Such dynamics have been proposed, for example, in relation to pertussis and varicella-zoster virus. In this context we show how self-boosting vaccines could open qualitatively new opportunities, for example by broadening the effective duration of herd immunity that can be achieved with currently used immunogens. At intermediate rates of self-boosting, these vaccines also alleviate the potential counterproductive effects of mass vaccination, through compensating for losses in natural boosting. Importantly, however, we also show how sufficiently high boosting rates may introduce a new regime of unintended consequences, wherein the unvaccinated bear an increased disease burden. Finally, we discuss important caveats and data needs arising from this work.
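The boosting dynamics discussed can be illustrated with a toy susceptible-infectious-recovered-waning (SIRWS) compartment model in which re-exposure returns waning individuals to full immunity; the parameter values below are illustrative assumptions, not those of the paper:

```python
def sirws(boost, beta=300.0, gamma=52.0, wane=1.0, years=100.0, dt=1e-3):
    """Return the fully immune fraction R, time-averaged over the last
    20 simulated years, for a toy SIRWS model (Euler integration)."""
    S, I, R, W = 0.99, 0.01, 0.0, 0.0
    acc, n = 0.0, 0
    for step in range(int(years / dt)):
        inf = beta * S * I            # new infections
        bst = boost * beta * W * I    # re-exposure boosts W back to R
        dS = -inf + wane * W
        dI = inf - gamma * I
        dR = gamma * I - wane * R + bst
        dW = wane * R - wane * W - bst
        S += dS * dt; I += dI * dt; R += dR * dt; W += dW * dt
        if step * dt > years - 20.0:
            acc += R
            n += 1
    return acc / n

low = sirws(boost=0.0)
high = sirws(boost=3.0)
print(high > low)  # stronger boosting keeps more of the population fully immune
```

Raising the boosting coefficient holds a larger fraction of the population in the fully immune class, the qualitative effect by which self-boosting broadens herd immunity.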

  16. Augmenting antitumor T-cell responses to mimotope vaccination by boosting with native tumor antigens.

    PubMed

    Buhrman, Jonathan D; Jordan, Kimberly R; U'ren, Lance; Sprague, Jonathan; Kemmler, Charles B; Slansky, Jill E

    2013-01-01

    Vaccination with antigens expressed by tumors is one strategy for stimulating enhanced T-cell responses against tumors. However, these peptide vaccines rarely result in efficient expansion of tumor-specific T cells or responses that protect against tumor growth. Mimotopes, or peptide mimics of tumor antigens, elicit increased numbers of T cells that crossreact with the native tumor antigen, resulting in potent antitumor responses. Unfortunately, mimotopes may also elicit cells that do not crossreact or have low affinity for tumor antigen. We previously showed that one such mimotope of the dominant MHC class I tumor antigen of a mouse colon carcinoma cell line stimulates a tumor-specific T-cell clone and elicits antigen-specific cells in vivo, yet protects poorly against tumor growth. We hypothesized that boosting the mimotope vaccine with the native tumor antigen would focus the T-cell response elicited by the mimotope toward high affinity, tumor-specific T cells. We show that priming T cells with the mimotope, followed by a native tumor-antigen boost, improves tumor immunity compared with T cells elicited by the same prime with a mimotope boost. Our data suggest that the improved tumor immunity results from the expansion of mimotope-elicited tumor-specific T cells that have increased avidity for the tumor antigen. The enhanced T cells are phenotypically distinct and enriched for T-cell receptors previously correlated with improved antitumor immunity. These results suggest that incorporation of native antigen into clinical mimotope vaccine regimens may improve the efficacy of antitumor T-cell responses.

  17. Dielectric Nonlinear Transmission Line (Postprint)

    DTIC Science & Technology

    2011-12-01

    Technical Paper 3. DATES COVERED (From - To) 2011 4. TITLE AND SUBTITLE Dielectric Nonlinear Transmission Line (POSTPRINT) 5a. CONTRACT NUMBER...14. ABSTRACT A parallel plate nonlinear transmission line (NLTL) was constructed. Periodic loading of nonlinear dielectric slabs provides the...846-9101 Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std. 239.18 Dielectric Nonlinear Transmission Line David M. French, Brad W. Hoff

  18. A methodology for boost-glide transport technology planning

    NASA Technical Reports Server (NTRS)

    Repic, E. M.; Olson, G. A.; Milliken, R. J.

    1974-01-01

    A systematic procedure is presented by which the relative economic value of technology factors affecting design, configuration, and operation of boost-glide transport can be evaluated. Use of the methodology results in identification of first-order economic gains potentially achievable by projected advances in each of the definable, hypersonic technologies. Starting with a baseline vehicle, the formulas, procedures and forms which are integral parts of this methodology are developed. A demonstration of the methodology is presented for one specific boost-glide system.

  19. Quantum AdaBoost algorithm via cluster state

    NASA Astrophysics Data System (ADS)

    Li, Yuan

    2017-03-01

The principles and theory of quantum computation have been investigated by researchers for many years and further applied to improve the efficiency of classical machine learning algorithms. Based on this physical mechanism, a quantum version of the AdaBoost (Adaptive Boosting) training algorithm is proposed in this paper, whose purpose is to construct a strong classifier. In the proposed scheme, a cluster state in quantum mechanics is used to realize the weak learning algorithm and then update the corresponding weights of the examples. As a result, a final classifier can be obtained by efficiently combining weak hypotheses, based on measurements of the cluster state, to reweight the distribution of examples.
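For reference, the classical AdaBoost loop that the quantum scheme mirrors, train a weak learner on the weighted examples and then upweight the misclassified ones, can be sketched with threshold stumps on one-dimensional data (a toy illustration of the classical algorithm, not the quantum version):

```python
import math

def adaboost(X, y, rounds=5):
    """Classical AdaBoost with threshold stumps on 1-D data, labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                     # example weights, initially uniform
    ensemble = []                         # (alpha, threshold, sign) triples
    for _ in range(rounds):
        # weak learner: stump h(x) = s if x > t else -s with least weighted error
        err, t, s = min(
            (sum(wi for wi, xi, yi in zip(w, X, y)
                 if (s if xi > t else -s) != yi), t, s)
            for t in X for s in (1, -1))
        alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-12))
        ensemble.append((alpha, t, s))
        # reweight: misclassified examples gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * yi * (s if xi > t else -s))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * (s if x > t else -s)
                              for a, t, s in ensemble) > 0 else -1

X = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]
y = [-1, -1, 1, -1, 1, 1]
clf = adaboost(X, y)
print(sum(clf(x) == yi for x, yi in zip(X, y)))  # fits all 6 training points
```

No single stump separates these labels, but the weighted combination of five stumps does, which is the "strong classifier from weak hypotheses" construction the abstract describes.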

  20. Buck-boost converter feedback controller design via evolutionary search

    NASA Astrophysics Data System (ADS)

    Sundareswaran, K.; Devi, V.; Nadeem, S. K.; Sreedevi, V. T.; Palani, S.

    2010-11-01

    Buck-boost converters are switched power converters. The model of the converter system varies from the ON state to the OFF state and hence traditional methods of controller design based on approximate transfer function models do not yield good dynamic response at different operating points of the converter system. This article attempts to design a feedback controller for a buck-boost type dc-dc converter using a genetic algorithm. The feedback controller design is perceived as an optimisation problem and a robust controller is estimated through an evolutionary search. Extensive simulation and experimental results provided in the article show the effectiveness of the new approach.
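The idea of treating controller design as an optimisation problem solved by evolutionary search can be sketched with a toy search over PI gains for a discrete first-order plant; the plant model, gain ranges, and search settings below are all illustrative assumptions, not the article's converter model:

```python
import random

def step_cost(kp, ki, a=0.9, b=0.1, steps=200):
    """Sum of absolute tracking error for a unit step on a toy first-order plant."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e
        u = kp * e + ki * integ          # PI control law
        y = a * y + b * u                # illustrative discrete plant model
        cost += abs(e)
        if cost > 1e6:                   # unstable gains: stop early
            break
    return cost

rng = random.Random(0)
pop = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 0.5)) for _ in range(20)]
for _ in range(40):                      # evolutionary loop: select, then mutate
    pop.sort(key=lambda g: step_cost(*g))
    parents = pop[:10]                   # keep the fitter half (elitism)
    pop = parents + [(kp + rng.gauss(0.0, 0.1), ki + rng.gauss(0.0, 0.02))
                     for kp, ki in parents]
best = min(pop, key=lambda g: step_cost(*g))
print(step_cost(*best) < step_cost(1.0, 0.0))  # beats proportional-only baseline
```

Because fitness is evaluated by simulating the closed loop rather than by an approximate transfer function, the same search works unchanged at any operating point, which is the article's motivation for the evolutionary approach.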

  1. Boosted Objects: A Probe of Beyond the Standard Model Physics

    SciTech Connect

    Abdesselam, A.; Kuutmann, E.Bergeaas; Bitenc, U.; Brooijmans, G.; Butterworth, J.; Bruckman de Renstrom, P.; Buarque Franzosi, D.; Buckingham, R.; Chapleau, B.; Dasgupta, M.; Davison, A.; Dolen, J.; Ellis, S.; Fassi, F.; Ferrando, J.; Frandsen, M.T.; Frost, J.; Gadfort, T.; Glover, N.; Haas, A.; Halkiadakis, E.; /more authors..

    2012-06-12

    We present the report of the hadronic working group of the BOOST2010 workshop held at the University of Oxford in June 2010. The first part contains a review of the potential of hadronic decays of highly boosted particles as an aid for discovery at the LHC and a discussion of the status of tools developed to meet the challenge of reconstructing and isolating these topologies. In the second part, we present new results comparing the performance of jet grooming techniques and top tagging algorithms on a common set of benchmark channels. We also study the sensitivity of jet substructure observables to the uncertainties in Monte Carlo predictions.

  2. Broken boost invariance in the Glasma via finite nuclei thickness

    NASA Astrophysics Data System (ADS)

    Ipp, Andreas; Müller, David

    2017-08-01

We simulate the creation and evolution of non-boost-invariant Glasma in the early stages of heavy ion collisions within the color glass condensate framework. This is accomplished by extending the McLerran-Venugopalan model to include a parameter for the Lorentz-contracted but finite width of the nucleus in the beam direction. We determine the rapidity profile of the Glasma energy density, which shows deviations from the boost-invariant result. By varying the parameters, both broad and narrow profiles can be produced. We compare our results to experimental data from RHIC and find surprising agreement.

  3. EARLY CHILDHOOD INVESTMENTS SUBSTANTIALLY BOOST ADULT HEALTH

    PubMed Central

    Campbell, Frances; Conti, Gabriella; Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-01-01

High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report the long-term health impacts of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143 mm Hg, while it is only 126 mm Hg among the treated. One in four males in the control group is affected by metabolic syndrome, while none in the treatment group is. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health. PMID:24675955

  4. Early childhood investments substantially boost adult health.

    PubMed

    Campbell, Frances; Conti, Gabriella; Heckman, James J; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-03-28

    High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report on the long-term health effects of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143 millimeters of mercury (mm Hg), whereas it is only 126 mm Hg among the treated. One in four males in the control group is affected by metabolic syndrome, whereas none in the treatment group are affected. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health.
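An exact permutation test of the kind used here enumerates every possible assignment of subjects to treatment and control and asks how extreme the observed difference is among them; a minimal sketch on invented numbers (not the ABC data):

```python
from itertools import combinations

def permutation_test(treated, control):
    """Exact one-sided permutation p-value for a lower mean in the treated group."""
    pooled = treated + control
    n, k = len(pooled), len(treated)
    observed = sum(treated) / k - sum(control) / (n - k)
    total = sum(pooled)
    hits = ways = 0
    for idx in combinations(range(n), k):      # every possible treatment group
        t = sum(pooled[i] for i in idx)
        diff = t / k - (total - t) / (n - k)
        hits += diff <= observed + 1e-9
        ways += 1
    return hits / ways

# Invented blood-pressure-like readings: treated values are clearly lower.
p = permutation_test([126, 124, 128], [143, 141, 145])
print(p)  # 1 of C(6,3) = 20 assignments is as extreme, so p = 0.05
```

Because the reference distribution is the finite set of reassignments rather than an asymptotic approximation, the p-value is exact even at very small sample sizes, which is why the study relies on it.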

  5. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
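The reported speedup of about 7 on 16 processors can be related to an implied serial fraction through Amdahl's law (a back-of-the-envelope interpretation, not an analysis from the paper):

```python
def amdahl_speedup(f, p):
    """Amdahl's law: speedup on p processors with serial fraction f."""
    return 1.0 / (f + (1.0 - f) / p)

def serial_fraction(speedup, p):
    """Invert Amdahl's law: serial fraction implied by an observed speedup."""
    return (p / speedup - 1.0) / (p - 1.0)

f = serial_fraction(7.0, 16)
print(round(f, 3))  # ~8.6% of the work behaving serially explains a 7x speedup
```

This is consistent with the authors' suggestion that parallelizing further (in the state domain) would improve performance: shrinking the effective serial fraction is the only way to push the speedup closer to 16.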

  6. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT
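The length bookkeeping behind the boosted-frame speedup can be checked directly: the plasma column is Lorentz-contracted by gamma while a counter-propagating laser pulse is stretched by roughly gamma(1 + beta); the lengths and gamma below are illustrative numbers, not taken from the dissertation:

```python
import math

def boosted_lengths(plasma_m, laser_m, gamma):
    """Lab-frame lengths transformed to a frame moving with Lorentz factor gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return plasma_m / gamma, laser_m * gamma * (1.0 + beta)

# Lab frame: 0.3 m plasma vs 30 um pulse -- four orders of magnitude apart.
plasma, laser = boosted_lengths(0.3, 30e-6, 70.0)
print(round(plasma / laser, 2))  # the two lengths are now comparable
```

Since the mismatch shrinks as 1/(2 gamma^2), a gamma of order sqrt(ratio/2) equalizes the two scales, which is the origin of the quoted four-orders-of-magnitude reduction in computational cost.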

  7. Quality improvement principles boost SCADA system reliability

    SciTech Connect

Boling, J.E.

    1994-08-01

A major section of Chevron Pipe Line Co.'s SCADA system was recently brought up to the industry-standard 99.5% data-reporting reliability by an intercompany team applying quality improvement (QI) principles. To make the study manageable, the scope was limited to only half the CPL SCADA system, southeast Texas. The study concentrated on 20% of these remote sites, which all happened to operate below 90% reliability. The team surveyed 21 sites and recorded data on reliability problem root causes. The data were categorized and formed into a Pareto chart. This chart indicated that the root cause of 80% of problems was related to lack of maintenance on both radio equipment and RTU/PLCs. These results were presented to management along with recommendations for forming a quality improvement team to work on developing a preventative maintenance system, a task to be performed jointly between the radio technicians and the pipe line technicians. The goal was to allow the technicians to develop a working relationship with one another and to facilitate a better knowledge of the physical interfaces involved.
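The Pareto step, ranking root-cause categories so the vital few stand out, is straightforward to reproduce; the category names and counts below are invented for illustration, not CPL's survey data:

```python
def pareto(counts):
    """Rank categories by frequency with cumulative percentages, Pareto style."""
    total = sum(counts.values())
    running, rows = 0, []
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        running += n
        rows.append((cause, n, round(100.0 * running / total)))
    return rows

causes = {"radio maintenance": 9, "RTU/PLC maintenance": 7,
          "power supply": 2, "antenna alignment": 1, "config error": 1}
for row in pareto(causes):
    print(row)
# Here the two maintenance categories together account for 80% of the failures.
```

Sorting by frequency and tracking the cumulative share is all a Pareto chart is; the 80% threshold then points the improvement team at the few categories worth a preventive-maintenance program.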

  8. OKVAR-Boost: a novel boosting algorithm to infer nonlinear dynamics and interactions in gene regulatory networks.

    PubMed

    Lim, Néhémy; Senbabaoglu, Yasin; Michailidis, George; d'Alché-Buc, Florence

    2013-06-01

    Reverse engineering of gene regulatory networks remains a central challenge in computational systems biology, despite recent advances facilitated by benchmark in silico challenges that have aided in calibrating their performance. A number of approaches using either perturbation (knock-out) or wild-type time-series data have appeared in the literature addressing this problem, with the latter using linear temporal models. Nonlinear dynamical models are particularly appropriate for this inference task, given the generation mechanism of the time-series data. In this study, we introduce a novel nonlinear autoregressive model based on operator-valued kernels that simultaneously learns the model parameters, as well as the network structure. A flexible boosting algorithm (OKVAR-Boost) that shares features from L2-boosting and randomization-based algorithms is developed to perform the tasks of parameter learning and network inference for the proposed model. Specifically, at each boosting iteration, a regularized Operator-valued Kernel-based Vector AutoRegressive model (OKVAR) is trained on a random subnetwork. The final model consists of an ensemble of such models. The empirical estimation of the ensemble model's Jacobian matrix provides an estimation of the network structure. The performance of the proposed algorithm is first evaluated on a number of benchmark datasets from the DREAM3 challenge and then on real datasets related to the In vivo Reverse-Engineering and Modeling Assessment (IRMA) and T-cell networks. The high-quality results obtained strongly indicate that it outperforms existing approaches. The OKVAR-Boost Matlab code is available as the archive: http://amis-group.fr/sourcecode-okvar-boost/OKVARBoost-v1.0.zip. Supplementary data are available at Bioinformatics online.
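The L2-boosting ingredient, fitting each new weak model to the residuals of the current ensemble, can be shown in miniature with stump regressors standing in for the operator-valued kernel models (a simplified sketch, not OKVAR-Boost itself):

```python
def l2_boost(X, y, rounds=100, shrink=0.5):
    """Gradient (L2) boosting: each round fits a stump to current residuals."""
    pred = [0.0] * len(X)
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        best = None
        for t in X:  # best threshold split minimizing squared error on residuals
            left = [r for xi, r in zip(X, resid) if xi <= t]
            right = [r for xi, r in zip(X, resid) if xi > t]
            lm = sum(left) / len(left) if left else 0.0
            rm = sum(right) / len(right) if right else 0.0
            sse = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, t, lm, rm)
        _, t, lm, rm = best
        # take a shrunken step toward the residuals
        pred = [p + shrink * (lm if xi <= t else rm) for p, xi in zip(pred, X)]
    return pred

X = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 4.0, 9.0, 16.0]
fit = l2_boost(X, y)
err = sum((a - b) ** 2 for a, b in zip(fit, y))
print(err < 1.0)  # the squared residual shrinks as boosting proceeds
```

OKVAR-Boost replaces the stump at each iteration with a regularized operator-valued kernel autoregressive model trained on a random subnetwork, but the residual-fitting ensemble structure is the same.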

  9. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
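Such tables are easy to generate: two resistors in parallel combine as R1*R2/(R1+R2), so whole-number totals are exactly the pairs where R1+R2 divides R1*R2. A short sketch:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Pairs up to 100 ohms whose parallel combination is a whole number.
table = [(r1, r2, (r1 * r2) // (r1 + r2))
         for r1 in range(1, 101) for r2 in range(r1, 101)
         if (r1 * r2) % (r1 + r2) == 0]
print((3, 6, 2) in table, (20, 30, 12) in table)  # 3||6 = 2, 20||30 = 12
```

Any pair in the 1:2 ratio (3 and 6, 20 and 40) gives a whole-number total of one third of the larger value, which is the kind of pattern such classroom tables make visible.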

  10. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

The PPPL TRANSP code suite has been used successfully over many years to carry out time dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Not all of TRANSP needs to be parallelized: some parts will run sequentially while other parts run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared "parallel service" on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  11. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  12. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  14. The Structure of Parallel Algorithms.

    DTIC Science & Technology

    1979-08-01

For surveys of parallel architectures and parallel algorithms, see [Anderson and Jensen 75, Stone 75, Kung 76, Enslow 77, Kuck 77, Ramamoorthy and Li 77, Sameh 77, Heller ...]. Cited works include "... the Routing Time on a Parallel Computer with a Fixed Interconnection Network" (in Kuck, D.J., Lawrie, D.H., and Sameh, A.H., editors, High Speed Computer and Algorithm Organization); Letters 5(4):107-112, October 1976; and Sameh, A.H., "Numerical Parallel Algorithms -- A Survey," in High Speed Computer and Algorithm Organization.

  15. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker program. Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs.

  16. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  18. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  19. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code.
It can be applied to any
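The pattern described, parse a feature manifest and then fetch every plug-in through a bounded thread pool, can be sketched with Python's standard library; the checkout function and plug-in names below are stand-ins, not PEPC's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def checkout(plugin):
    """Stand-in for a network-bound version-control checkout of one plug-in."""
    time.sleep(0.05)                     # simulate I/O latency
    return plugin

# Plug-in list as it might be parsed from a feature description (xml) file.
plugins = ["org.example.plugin%02d" % i for i in range(16)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:   # configurable thread count
    done = list(pool.map(checkout, plugins))
elapsed = time.perf_counter() - start

# 16 tasks of 50 ms on 8 workers finish in ~2 waves instead of ~0.8 s serially.
print(done == plugins and elapsed < 0.5)
```

Because each checkout is I/O-bound, threads overlap the network waits; this is the same reason PEPC's parallel checkouts can saturate the available bandwidth.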

  20. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  1. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
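The compositing step, merging depth-tagged partial images rendered from independent data partitions, can be sketched as a per-pixel z-buffer merge (a simplified stand-in for the paper's optimal compositing method on the CM-5 and T3D):

```python
INF = float("inf")

def composite(partials):
    """Per-pixel z-buffer merge: keep the sample nearest the viewer."""
    merged = [(INF, 0)] * len(partials[0])        # (depth, color) per pixel
    for image in partials:
        merged = [min(a, b) for a, b in zip(merged, image)]
    return [color for _, color in merged]

# Two processors each rendered their spheres into a full-size partial image.
p1 = [(2.0, 10), (INF, 0), (1.0, 30)]
p2 = [(1.5, 99), (3.0, 7), (4.0, 5)]
print(composite([p1, p2]))  # -> [99, 7, 30]
```

Since the min-by-depth merge is associative, partial images can be combined pairwise in a tree, which is what makes the divide-and-conquer rendering scale on MIMD machines.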

  2. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  3. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  4. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
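    The direct-execution idea can be illustrated with a minimal sketch, assuming a simple latency-only network model (the class and constant names here are hypothetical, not LAPSE's API): application code runs for real, while a per-process virtual clock accounts for the modeled compute cost and message latency of the presumed target machine.

```python
# Minimal direct-execution simulation sketch (hypothetical names).
# Real work executes immediately; only *time* is simulated.
LATENCY = 5.0  # modeled per-message network latency (assumption)

class SimProcess:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0.0   # virtual time on the modeled target machine
        self.inbox = []    # (arrival_time, payload)

    def compute(self, work_units, cost_per_unit=1.0):
        # The real computation would run here; its modeled cost
        # advances this process's virtual clock.
        self.clock += work_units * cost_per_unit

    def send(self, dest, payload):
        # Message arrives at the destination after the modeled latency.
        dest.inbox.append((self.clock + LATENCY, payload))

    def recv(self):
        arrival, payload = self.inbox.pop(0)
        # A receive completes no earlier than the message's arrival time.
        self.clock = max(self.clock, arrival)
        return payload

p0, p1 = SimProcess(0), SimProcess(1)
p0.compute(10)          # p0 works 10 units: clock = 10
p0.send(p1, "halo")     # arrives at p1 at virtual time 15
p1.compute(3)           # p1 works 3 units: clock = 3
msg = p1.recv()         # p1 waits on the message: clock jumps to 15
```

    Parallelizing this simulator, rather than the application, is the hard part the abstract refers to: the per-process clocks must be kept causally consistent across simulator processes.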

  5. Elf Atochem boosts production of CFC substitutes

    SciTech Connect

    Not Available

    1992-05-01

    To carve out a larger share of the market for acceptable chlorofluorocarbon substitutes, Elf Atochem (Paris) is expanding its production of HFC-134a, HCFC-141b and HCFC-142b in the U.S. and in France. This paper reports that the company is putting the finishing touches on a plant at its Pierre-Benite (France) facility, to bring 9,000 m.t./yr (19.8 million lb) of HFC-134a capacity on-line by September. Construction is scheduled to begin next year at the company's Calvert City, Ky., plant, where a 15,000-m.t./yr (33-million-lb) unit for HFC-134a will come onstream by 1995.

  6. Lock-in-detection-free line-scan stimulated Raman scattering microscopy for near video-rate Raman imaging.

    PubMed

    Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2016-09-01

    We report on the development of a unique lock-in-detection-free line-scan stimulated Raman scattering (SRS) microscopy technique based on a linear detector with a large full well capacity controlled by a field-programmable gate array (FPGA) for near video-rate Raman imaging. With the use of a parallel excitation and detection scheme, line-scan SRS imaging at 20 frames per second can be acquired with a ∼5-fold lower excitation power density compared to conventional point-scan SRS imaging. The rapid data communication between the FPGA and the linear detector allows a high line-scanning rate to boost the SRS imaging speed without the need for lock-in detection. We demonstrate this lock-in-detection-free line-scan SRS imaging technique using 0.5 μm polystyrene and 1.0 μm poly(methyl methacrylate) beads mixed in water, as well as living gastric cancer cells.

  7. Heterologous Prime-Boost Immunisation Regimens Against Infectious Diseases

    DTIC Science & Technology

    2006-08-01

    Enterotoxigenic Escherichia coli [92], human T-cell leukemia/lymphoma virus type 1 (HTLV-1) [93], and Chlamydophila abortus [94]. 6. Conclusion... T-cell leukemia/lymphoma virus type 1 (HTLV-1) NYVAC DNA HTLV-1 env prime NYVAC-HTLV-1 env+gag boost Specific Ab, challenge, lymphocyte

  8. Real-World Connections Can Boost Journalism Program.

    ERIC Educational Resources Information Center

    Schrier, Kathy; Bott, Don; McGuire, Tim

    2001-01-01

    Describes various ways scholastic journalism advisers have attempted to make real-world connections to boost their journalism programs: critiques of student publications by invited guest speakers (professional journalists); regional workshops where professionals offer short presentations; local media offering programming or special sections aimed…

  9. Boost glycemic control in teen diabetics through 'family focused teamwork'.

    PubMed

    2003-09-01

    While family conflict during the teenaged years is typical, it can have long-term health consequences when it involves an adolescent with diabetes. However, researchers at Joslin Diabetes Center in Boston have developed a low-cost intervention that aims to remove conflict from disease management responsibilities--and a new study shows that it can boost glycemic control as well.

  10. Repetitive peptide boosting progressively enhances functional memory CTLs

    USDA-ARS?s Scientific Manuscript database

    Induction of functional memory CTLs holds promise for fighting critical infectious diseases through vaccination, but so far, no effective regime has been identified. We show here that memory CTLs can be enhanced progressively to high levels by repetitive intravenous boosting with peptide and adjuvan...

  11. Graph ensemble boosting for imbalanced noisy graph stream classification.

    PubMed

    Pan, Shirui; Wu, Jia; Zhu, Xingquan; Zhang, Chengqi

    2015-05-01

    Many applications involve stream data with structural dependency, graph representations, and continuously increasing volumes. For these applications, it is very common that their class distributions are imbalanced, with minority (or positive) samples being only a small portion of the population, which imposes significant challenges for learning models to accurately identify minority samples. This problem is further complicated by the presence of noise, because noisy samples resemble minority samples and any treatment for the class imbalance may falsely focus on the noise, resulting in deterioration of accuracy. In this paper, we propose a classification model to tackle imbalanced graph streams with noise. Our method, graph ensemble boosting, employs an ensemble-based framework to partition the graph stream into chunks, each containing a number of noisy graphs with imbalanced class distributions. For each individual chunk, we propose a boosting algorithm to combine discriminative subgraph pattern selection and model learning as a unified framework for graph classification. To tackle concept drifting in graph streams, an instance-level weighting mechanism is used to dynamically adjust the instance weights, through which the boosting framework can emphasize difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-life imbalanced graph streams demonstrate clear benefits of our boosting design for handling imbalanced noisy graph streams.
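    The instance-weighting mechanism at the heart of boosting can be sketched generically (this is a standard AdaBoost-style update on toy 1-D data, not the paper's subgraph-based algorithm): samples the weak learner misclassifies have their weights increased, so the next round emphasizes the difficult samples.

```python
# Generic AdaBoost-style instance reweighting on toy 1-D data.
import math

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1, 1, -1, -1, 1, 1]
w = [1.0 / len(X)] * len(X)      # start with uniform instance weights

def best_stump(X, y, w):
    """Weak learner: threshold classifier minimizing weighted error."""
    best = None
    for thr in X:
        for sign in (1, -1):
            pred = [sign if x <= thr else -sign for x in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign, pred)
    return best

err, thr, sign, pred = best_stump(X, y, w)
alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))  # learner's vote weight
# Reweight: misclassified samples up, correct ones down, then normalize.
w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
total = sum(w)
w = [wi / total for wi in w]
```

    After one round the two misclassified samples each carry twice the weight of a correctly classified one, which is exactly how the ensemble comes to emphasize difficult instances.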

  12. Culture First: Boosting Program Strength through Cultural Instruction

    ERIC Educational Resources Information Center

    Windham, Scott

    2017-01-01

    In recent years, cultural instruction has been touted as a way to help foreign language programs boost student learning outcomes, enrollments, and many other measures of program strength. In order to investigate the relationship between cultural instruction and program strength in a university-level German program, students in first- and…

  13. Mimotope vaccine efficacy gets a "boost" from native tumor antigens.

    PubMed

    Buhrman, Jonathan D; Slansky, Jill E

    2013-04-01

    Tumor-associated antigen (TAA)-targeting mimotope peptides exert more prominent immunostimulatory functions than unmodified TAAs, with the caveat that some T-cell clones exhibit a relatively low affinity for TAAs. Combining mimotope-based vaccines with native TAAs in a prime-boost setting significantly improves antitumor immunity.

  14. Congress OKs $2 Billion Boost for the NIH.

    PubMed

    2017-07-01

    President Donald Trump last week signed a $1.1 trillion spending bill for fiscal year 2017, including a welcome $2 billion boost for the NIH that will support former Vice President Joe Biden's Cancer Moonshot initiative, among other priorities. However, researchers who rely heavily on NIH grant funding remain concerned about proposed cuts for 2018. ©2017 American Association for Cancer Research.

  15. Balance-Boosting Footwear Tips for Older People

    MedlinePlus

    Balance in all aspects of life is a good ... mental equilibrium isn't the only kind of balance that's important in life. Good physical balance can ...

  16. Boost compensator for use with internal combustion engine with supercharger

    SciTech Connect

    Asami, T.

    1988-04-12

    A boost compensator for controlling the position of a control rack of a fuel injection pump, which supplies fuel to a supercharged internal combustion engine in response to the boost pressure applied to the engine, is described. The control rack is movable in a first direction, increasing the amount of fuel supplied by the fuel injection pump to the engine, and in a second direction, opposite to the first, decreasing the amount of fuel. The boost compensator comprises: a push rod disposed for forward and rearward movement in response to the boost pressure; a main lever disposed for angular movement about a first pivot; an auxiliary lever disposed for angular movement about a second pivot; return spring means associated with the first portion of the auxiliary lever for resiliently biasing same in one direction about the second pivot; and abutment means mounted on the second portion of the auxiliary lever and engageable with the second portion of the main lever.

  17. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  18. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-01-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders of magnitude computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail, the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  19. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.
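    The deterministic-walk idea can be illustrated on a toy state space (the step function and target below are invented, unrelated to Clifford+T synthesis): once a start point is chosen the walk's path is fully determined, so workers can explore disjoint sets of start points independently, with no communication until a hit is found.

```python
# Deterministic ("pseudorandom") walks over a toy state space:
# the path is a pure function of the start point, so the search
# parallelizes trivially by partitioning start points.
MOD = 101     # toy state-space size (assumption)
TARGET = 17   # distinguished state we are searching for (assumption)

def step(state):
    """Deterministic step function: the whole walk follows from the start."""
    return (state * state + 1) % MOD

def walk(start, max_steps=200):
    """Follow the walk from `start`; return steps taken to hit TARGET."""
    state = start
    for steps in range(max_steps):
        if state == TARGET:
            return steps
        state = step(state)
    return None  # walk never reached the target within the budget

# "Parallel" search: each worker would own a disjoint slice of starts.
hits = {s: walk(s) for s in range(MOD) if walk(s) is not None}
```

    Because `walk` is deterministic, re-running a hit is enough to reproduce it, which is what makes results from independent workers easy to verify and combine.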

  20. Parallel Eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.

    1989-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is utilized in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. Assembly, elimination and back-substitution of degrees of freedom are performed concurrently, using a number of fronts. All fronts converge to and diverge from a predefined global front during elimination and back-substitution, respectively. In the meantime, reduction of the stiffness and mass matrices required by the modified subspace method can be completed during the convergence/divergence cycle and an estimate of the required eigenpairs obtained. Successive cycles of convergence and divergence are repeated until the desired accuracy of calculations is achieved. The advantages of this new algorithm in parallel computer architecture are discussed.

  1. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
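    The locality argument can be illustrated with a toy 1-D lattice (the update rule and decomposition here are illustrative, not actual lattice QCD): the lattice is split into per-node blocks, and because each site update needs only nearest neighbors, nodes exchange single-site "halo" boundaries rather than whole blocks.

```python
# Toy 1-D lattice decomposition with halo exchange (illustrative only).
def split(lattice, nodes):
    """Divide the lattice evenly into one contiguous block per node."""
    n = len(lattice) // nodes
    return [lattice[i * n:(i + 1) * n] for i in range(nodes)]

def halo_update(blocks):
    """One nearest-neighbor averaging sweep with periodic boundaries."""
    new_blocks = []
    for i, block in enumerate(blocks):
        left = blocks[i - 1][-1]                  # halo from left neighbor
        right = blocks[(i + 1) % len(blocks)][0]  # halo from right neighbor
        padded = [left] + block + [right]
        new_blocks.append([(padded[j - 1] + padded[j + 1]) / 2
                           for j in range(1, len(padded) - 1)])
    return new_blocks

lattice = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
blocks = split(lattice, 4)       # 4 "nodes", 2 sites each
blocks = halo_update(blocks)     # each node needs only 2 boundary values
flat = [x for b in blocks for x in b]
```

    The per-sweep communication volume is two sites per node regardless of block size, which is the locality property that lets a lattice map so naturally onto a mesh of compute nodes like BlueGene's.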

  2. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  3. Parallel ptychographic reconstruction

    SciTech Connect

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  4. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
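    The per-user table design can be sketched with an in-memory stand-in (the real backend uses MongoDB behind FUSE; every name below is hypothetical): each user's table ingests only records that user may read, and indexes every attribute so metadata searches avoid walking the file system.

```python
# In-memory stand-in for the per-user, per-attribute metadata index.
from collections import defaultdict

class UserMetadataTable:
    def __init__(self, user):
        self.user = user
        self.records = []
        # attr -> value -> list of record ids (one index per attribute)
        self.index = defaultdict(lambda: defaultdict(list))

    def ingest(self, record):
        """Store a record only if this user is allowed to read it."""
        if self.user not in record.get("readers", ()):
            return False
        rid = len(self.records)
        self.records.append(record)
        for attr, value in record.items():
            if attr != "readers":
                self.index[attr][value].append(rid)
        return True

    def search(self, **query):
        """Return records matching all attr=value pairs via the indexes."""
        ids = None
        for attr, value in query.items():
            matches = set(self.index[attr][value])
            ids = matches if ids is None else ids & matches
        return [self.records[i] for i in sorted(ids or ())]

table = UserMetadataTable("alice")
table.ingest({"path": "/archive/run1.dat", "project": "turquoise",
              "readers": {"alice", "bob"}})
table.ingest({"path": "/archive/run2.dat", "project": "turquoise",
              "readers": {"bob"}})   # not readable by alice: rejected
hits = table.search(project="turquoise")
```

    Because unauthorized records are rejected at ingest time, the query path never needs an access-control check, which mirrors the design goal of keeping each database table readable by exactly one user.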

  5. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
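    The second study's method, conjugate gradient with Jacobi preconditioning, can be sketched on a small symmetric positive-definite system (a serial toy with point Jacobi, not the paper's line Jacobi or its Cray X-MP4 parallelization):

```python
# Preconditioned conjugate gradient with a (point) Jacobi preconditioner:
# M = diag(A), so applying M^-1 is just a per-component scaling.
def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # r = b - A*0
    M_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi: invert the diagonal
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)   # exact solution is (2/9, 1/9, 13/9)
```

    The inner products and the matrix-vector product are the kernels that parallelize across processors in studies like this one; the preconditioner application is embarrassingly parallel.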

  6. Benefit of Radiation Boost After Whole-Breast Radiotherapy

    SciTech Connect

    Livi, Lorenzo; Borghesi, Simona; Saieva, Calogero; Fambrini, Massimiliano; Iannalfi, Alberto; Greto, Daniela; Paiar, Fabiola; Scoccianti, Silvia; Simontacchi, Gabriele; Bianchi, Simonetta; Cataliotti, Luigi; Biti, Giampaolo

    2009-11-15

    Purpose: To determine whether a boost to the tumor bed after breast-conserving surgery (BCS) and radiotherapy (RT) to the whole breast affects local control and disease-free survival. Methods and Materials: A total of 1,138 patients with pT1 to pT2 breast cancer underwent adjuvant RT at the University of Florence. We analyzed only patients with a minimum follow-up of 1 year (range, 1-20 years) and negative surgical margins. The median age of the patient population was 52.0 years (±7.9 years). The breast cancer relapse incidence probability was estimated by the Kaplan-Meier method, and differences between patient subgroups were compared by the log-rank test. Cox regression models were used to evaluate the risk of breast cancer relapse. Results: On univariate survival analysis, boost to the tumor bed reduced breast cancer recurrence (p < 0.0001). Age and tamoxifen also significantly reduced breast cancer relapse (p = 0.01 and p = 0.014, respectively). On multivariate analysis, the boost and middle age (45-60 years) were found to be inversely related to breast cancer relapse (hazard ratio [HR], 0.27; 95% confidence interval [95% CI], 0.14-0.52; and HR, 0.61; 95% CI, 0.37-0.99, respectively). The effect of the boost was more evident in younger patients (HR, 0.15; 95% CI, 0.03-0.66 for patients <45 years of age; and HR, 0.31; 95% CI, 0.13-0.71 for patients 45-60 years) on multivariate analyses stratified by age, although it was not a significant predictor in women older than 60 years. Conclusion: Our results suggest that boost to the tumor bed reduces breast cancer relapse and is more effective in younger patients.

  7. Boosted lopinavir- versus boosted atazanavir-containing regimens and immunologic, virologic, and clinical outcomes: a prospective study of HIV-infected individuals in high-income countries.

    PubMed

    Cain, Lauren E; Phillips, Andrew; Olson, Ashley; Sabin, Caroline; Jose, Sophie; Justice, Amy; Tate, Janet; Logan, Roger; Robins, James M; Sterne, Jonathan A C; van Sighem, Ard; Reiss, Peter; Young, James; Fehr, Jan; Touloumi, Giota; Paparizos, Vasilis; Esteve, Anna; Casabona, Jordi; Monge, Susana; Moreno, Santiago; Seng, Rémonie; Meyer, Laurence; Pérez-Hoyos, Santiago; Muga, Roberto; Dabis, François; Vandenhende, Marie-Anne; Abgrall, Sophie; Costagliola, Dominique; Hernán, Miguel A

    2015-04-15

    Current clinical guidelines consider regimens consisting of either ritonavir-boosted atazanavir or ritonavir-boosted lopinavir and a nucleoside reverse transcriptase inhibitor (NRTI) backbone among their recommended and alternative first-line antiretroviral regimens. However, these guidelines are based on limited evidence from randomized clinical trials and clinical experience. We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004-2013. Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the 'intention-to-treat' effect for atazanavir vs lopinavir regimens on each of the outcomes. A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53-.91) for death, 0.67 (95% CI, .55-.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84-.99) for virologic failure at 12 months. The mean 12-month increase in CD4 count was 8.15 (95% CI, -.13 to 16.43) cells/µL higher in the atazanavir group. Estimates differed by NRTI backbone. Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Xyce parallel electronic simulator : reference guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms, but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  9. Integration of cell line and process development to overcome the challenge of a difficult to express protein.

    PubMed

    Alves, Christina S; Gilbert, Alan; Dalvi, Swati; St Germain, Bryan; Xie, Wenqi; Estes, Scott; Kshirsagar, Rashmi; Ryll, Thomas

    2015-01-01

    This case study addresses the difficulty in achieving high level expression and production of a small, very positively charged recombinant protein. The novel challenges with this protein include the protein's adherence to the cell surface and its inhibitory effects on Chinese hamster ovary (CHO) cell growth. To overcome these challenges, we utilized a multi-prong approach. We identified dextran sulfate as a way to simultaneously extract the protein from the cell surface and boost cellular productivity. In addition, host cells were adapted to grow in the presence of this protein to improve growth and production characteristics. To achieve an increase in productivity, new cell lines from three different CHO host lines were created and evaluated in parallel with new process development workflows. Instead of a traditional screen of only four to six cell lines in bioreactors, over 130 cell lines were screened by utilization of 15 mL automated bioreactors (AMBR) in an optimal production process specifically developed for this protein. Using the automation, far less manual intervention is required than in traditional bench-top bioreactors, and much more control is achieved than typical plate or shake flask based screens. By utilizing an integrated cell line and process development incorporating medium optimized for this protein, we were able to increase titer more than 10-fold while obtaining desirable product quality. Finally, Monte Carlo simulations were performed to predict the optimal number of cell lines to screen in future cell line development work with the goal of systematically increasing titer through enhanced cell line screening. © 2015 American Institute of Chemical Engineers.

  10. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, parallel imaging with very high acceleration can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290

  11. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of building a systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  12. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Power boost and power-operated control...

  13. 14 CFR 29.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Systems § 29.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated control...

  14. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power boost and power-operated control...

  15. 14 CFR 27.695 - Power boost and power-operated control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Systems § 27.695 Power boost and power-operated control system. (a) If a power boost or power-operated... failure of all engines. (b) Each alternate system may be a duplicate power portion or a manually operated... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Power boost and power-operated control...

  16. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  17. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
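
    As a toy illustration of the sum-of-matrices idea (a hypothetical two-component setup, not the paper's DMD experiment), intensity-weighting two fixed, spatially separated polarization components and summing the fields synthesizes a new state of polarization:

```python
import numpy as np

# Fixed Jones vectors of the spatially separated beam components
# (hypothetical choice: horizontal, and vertical with a static quarter-wave delay).
H = np.array([1.0, 0.0], dtype=complex)
Vq = np.array([0.0, 1j], dtype=complex)   # vertical, quarter-wave delayed

def combine(weights, components):
    """Parallel synthesis: the output field is the weighted SUM of components,
    each weight set only by an intensity modulator."""
    return sum(w * c for w, c in zip(weights, components))

# Equal weights on H and the delayed V yield right-circular polarization,
# E = (1, i)/sqrt(2), without any serial chain of waveplates.
E = combine([1 / np.sqrt(2), 1 / np.sqrt(2)], [H, Vq])
```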

  18. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  19. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle inequality obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.
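
    The distance-calculation savings come from the triangle inequality: with distances to a pivot precomputed, |d(p,x) − d(p,q)| is a lower bound on d(x,q), so many candidates can be rejected without ever computing their distance to the query. A minimal sketch of that pruning (illustrative names, not the Anchors Hierarchy itself):

```python
import numpy as np

def range_search_with_pivot(points, pivot_dists, pivot, q, r):
    """Return indices of points within r of q, skipping distance computations
    whose triangle-inequality lower bound already exceeds r."""
    dpq = np.linalg.norm(pivot - q)
    hits, evaluated = [], 0
    for i, x in enumerate(points):
        # |d(p,x) - d(p,q)| <= d(x,q): if even the bound exceeds r, x cannot qualify.
        if abs(pivot_dists[i] - dpq) > r:
            continue
        evaluated += 1
        if np.linalg.norm(x - q) <= r:
            hits.append(i)
    return hits, evaluated

rng = np.random.default_rng(1)
pts = rng.standard_normal((500, 16))
pivot = pts[0]
pd = np.linalg.norm(pts - pivot, axis=1)   # computed once per pivot
q = rng.standard_normal(16)
hits, n_eval = range_search_with_pivot(pts, pd, pivot, q, 4.0)
```

Because the bound never overestimates the true distance, the pruned search returns exactly the same hits as a brute-force scan, only cheaper.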

  20. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
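
    For reference, cyclic odd-even reduction repeatedly eliminates the odd-indexed unknowns, halving the system at each level; all eliminations within a level are independent, which is what makes the method attractive on array machines. A serial sketch for systems of size 2^k − 1 (illustrative code, not the ILLIAC 4 implementation):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d) of size n = 2**k - 1 by odd-even cyclic reduction.
    a[0] and c[-1] must be zero."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    odd = np.arange(1, n, 2)       # unknowns kept at this level
    alpha = -a[odd] / b[odd - 1]   # chosen to cancel x[odd-1]
    gamma = -c[odd] / b[odd + 1]   # chosen to cancel x[odd+1]
    na = alpha * a[odd - 1]
    nb = b[odd] + alpha * c[odd - 1] + gamma * a[odd + 1]
    nc = gamma * c[odd + 1]
    nd = d[odd] + alpha * d[odd - 1] + gamma * d[odd + 1]
    x = np.empty(n)
    x[odd] = cyclic_reduction(na, nb, nc, nd)  # half-size tridiagonal system
    even = np.arange(0, n, 2)
    # Back-substitute the eliminated unknowns (boundary neighbours count as 0).
    left = np.where(even > 0, x[np.clip(even - 1, 0, n - 1)], 0.0)
    right = np.where(even < n - 1, x[np.clip(even + 1, 0, n - 1)], 0.0)
    x[even] = (d[even] - a[even] * left - c[even] * right) / b[even]
    return x

rng = np.random.default_rng(3)
n = 15                               # 2**4 - 1
b = 4 + rng.random(n)                # diagonally dominant for stability
a = rng.random(n); a[0] = 0.0
c = rng.random(n); c[-1] = 0.0
d = rng.random(n)
x = cyclic_reduction(a, b, c, d)
```

On a parallel machine each level's `alpha`/`gamma` eliminations run concurrently, giving O(log n) parallel steps at the cost of the higher operation count noted in the abstract.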

  1. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  2. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  3. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

    Seevers, B.K.; Quinn, M.J. ); Hatcher, P.J. )

    1992-10-01

    We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data demonstrating that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.

  4. Stereotactic Body Radiotherapy: A Promising Treatment Option for the Boost of Oropharyngeal Cancers Not Suitable for Brachytherapy: A Single-Institutional Experience

    SciTech Connect

    Al-Mamgani, Abrahim; Tans, Lisa; Teguh, David N.; Rooij, Peter van; Zwijnenburg, Ellen M.; Levendag, Peter C.

    2012-03-15

    Purpose: To prospectively assess the outcome and toxicity of frameless stereotactic body radiotherapy (SBRT) as a treatment option for boosting primary oropharyngeal cancers (OPC) in patients who are not suitable for the standard brachytherapy boost (BTB). Methods and Materials: Between 2005 and 2010, 51 patients with Stage I to IV biopsy-proven OPC who were not suitable for BTB received boosts by means of SBRT (3 times 5.5 Gy, prescribed to the 80% isodose line), after 46 Gy of IMRT to the primary tumor and neck (when indicated). Endpoints of the study were local control (LC), disease-free survival (DFS), overall survival (OS), and acute and late toxicity. Results: After a median follow-up of 18 months (range, 6-65 months), the 2-year actuarial rates of LC, DFS, and OS were 86%, 80%, and 82%, respectively, and the 3-year rates were 70%, 66%, and 54%, respectively. The treatment was well tolerated, as there were no treatment breaks and no Grade 4 or 5 toxicity reported, either acute or chronic. The overall 2-year cumulative incidence of Grade ≥2 late toxicity was 28%. Of the patients with no evidence of disease at 2 years (n = 20), only 1 patient was still feeding tube dependent and 2 patients had Grade 3 xerostomia. Conclusions: To our knowledge, this study is the first report of patients with primary OPC who received boosts by means of SBRT. Patients with OPC who are not suitable for the standard BTB can safely and effectively receive boosts by SBRT. With this radiation technique, an excellent outcome was achieved. Furthermore, the SBRT boost did not have a negative impact regarding acute and late side effects.

  5. Combinatorial parallel and scientific computing.

    SciTech Connect

    Pinar, Ali; Hendrickson, Bruce Alan

    2005-04-01

    Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.

  6. LINE-ABOVE-GROUND ATTENUATOR

    DOEpatents

    Wilds, R.B.; Ames, J.R.

    1957-09-24

    The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of the line-above-ground plane type transmission line, a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line, and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control which heretofore has been difficult to achieve.

  7. Spacecraft boost and abort guidance and control systems requirement study, boost dynamics and control analysis study. Exhibit A: Boost dynamics and control analysis

    NASA Technical Reports Server (NTRS)

    Williams, F. E.; Price, J. B.; Lemon, R. S.

    1972-01-01

    The simulation developments for use in dynamics and control analysis during boost from liftoff to orbit insertion are reported. Also included are wind response studies of the NR-GD 161B/B9T delta wing booster/delta wing orbiter configuration, the MSC 036B/280-inch solid rocket motor configuration, the MSC 040A/LOX-propane liquid injection TVC configuration, the MSC 040C/dual solid rocket motor configuration, and the MSC 049/solid rocket motor configuration. All of the latest math models (rigid and flexible body) developed for the MSC/GD Space Shuttle Functional Simulator are included.

  8. Externally Dispersed Interferometry for Resolution Boosting and Doppler Velocimetry

    SciTech Connect

    Erskine, D J

    2003-12-01

    Externally dispersed interferometry (EDI) is a rapidly advancing technique for wide bandwidth spectroscopy and radial velocimetry. By placing a small angle-independent interferometer near the slit of an existing spectrograph system, periodic fiducials are embedded on the recorded spectrum. The multiplication of the stellar spectrum by the sinusoidal fiducial creates a moiré pattern, which manifests highly detailed spectral information heterodyned down to low spatial frequencies. The latter can more accurately survive the blurring, distortions and CCD Nyquist limitations of the spectrograph. Hence lower resolution spectrographs can be used to perform high resolution spectroscopy and radial velocimetry (under a Doppler shift the entire moiré pattern shifts in phase). A demonstration of ~2x resolution boosting (100,000 from 50,000) on the Lick Observatory echelle spectrograph is shown. Preliminary data indicating a ~8x resolution boost (170,000 from 20,000) using multiple delays have been taken on a linear grating spectrograph.
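
    The heterodyning at the heart of EDI is the product identity sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)]: multiplying a fine spectral feature by a sinusoidal fiducial creates a moiré (beat) component at the difference frequency, which a low-resolution instrument can still record. A numerical sketch with illustrative frequencies:

```python
import numpy as np

# A high-frequency "spectral feature" times a sinusoidal fiducial produces
# components at the sum and difference frequencies; the difference (moiré)
# term carries the fine detail down to low spatial frequency.
N = 4096
x = np.arange(N) / N
f_feature, f_fiducial = 220.0, 200.0
product = np.sin(2 * np.pi * f_feature * x) * np.sin(2 * np.pi * f_fiducial * x)

spectrum = np.abs(np.fft.rfft(product))
peaks = np.sort(np.argsort(spectrum)[-2:])   # the two strongest frequency bins
```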

  9. IMM tracking of a theater ballistic missile during boost phase

    NASA Astrophysics Data System (ADS)

    Hutchins, Robert G.; San Jose, Anthony

    1998-09-01

    Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interacting multiple model (IMM) tracker. In addition, a variation on the IMM that includes prior knowledge of a booster cutoff parameter is examined. Simulated data is used to compare algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show significant advantage in tracking through the model transition represented by booster cutoff.
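
    The core of an IMM tracker is the per-cycle mixing of model probabilities under a Markov transition matrix, followed by a Bayesian update using each model's measurement likelihood. A minimal sketch of just that bookkeeping (two hypothetical models, boost and ballistic; the per-model Kalman filters are omitted):

```python
import numpy as np

def imm_mixing(mu, P):
    """Mixing step: predicted model probabilities c and mixing weights W,
    where c[j] = sum_i P[i, j] * mu[i] and W[i, j] = P[i, j] * mu[i] / c[j]."""
    c = P.T @ mu
    W = (P * mu[:, None]) / c
    return c, W

def imm_update(c, likelihoods):
    """Combine predicted model probabilities with measurement likelihoods."""
    mu = c * likelihoods
    return mu / mu.sum()

# Two models: boost (thrusting) and ballistic flight, with sticky transitions.
P = np.array([[0.95, 0.05],
              [0.02, 0.98]])
mu = np.array([0.9, 0.1])                     # currently confident in boost
c, W = imm_mixing(mu, P)
mu_new = imm_update(c, np.array([0.2, 2.5]))  # measurement favours ballistic
```

After booster cutoff, measurements repeatedly favour the ballistic model, so its probability climbs and the combined estimate follows the transition smoothly.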

  10. Enhanced solvent recovery process boosts PTA production, saves energy

    SciTech Connect

    1996-12-16

    Two producers of purified terephthalic acid (PTA) have licensed a new enhanced solvent-recovery process. The process uses a proprietary solvent that boosts the capacity of the acetic acid recovery system while reducing energy usage. Because acid recovery is usually the limiting step in PTA production, increasing it can boost PTA capacity 5-10%. The solvent recovery enhancement (SRE) technology is licensed by Glitsch Technology Corp. (GTC), Houston. The GT-SRE process can be applied in grassroots or existing plants. It replaces the water used in high- and low-pressure absorbers with a phosphine oxide-based solvent. The solvent is most selective to acetic acid, but is also selective to methyl acetate and, to a lesser extent, paraxylene. This selectivity increases recovery of these components from vent streams.

  11. Boosting bonsai trees for handwritten/printed text discrimination

    NASA Astrophysics Data System (ADS)

    Ricquebourg, Yann; Raymond, Christian; Poirriez, Baptiste; Lemaitre, Aurélie; Coüasnon, Bertrand

    2013-12-01

    Boosting over decision stumps has proved its efficiency in Natural Language Processing, essentially with symbolic features, and its good properties (fast, few and non-critical parameters, not sensitive to over-fitting) could be of great interest in the numeric world of pixel images. In this article we investigated the use of boosting over small decision trees for image classification, specifically the discrimination of handwritten/printed text. We then conducted experiments comparing it to the usual SVM-based classification, revealing very close performance but with faster predictions and far less black-box behaviour. Those promising results make this classifier attractive for more complex recognition tasks such as multiclass problems.
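
    A compact version of the underlying idea is AdaBoost over depth-1 trees (decision stumps) with exhaustive threshold search; this is a generic sketch, not the authors' bonsai-tree setup:

```python
import numpy as np

def train_stump(X, y, w):
    """Best single-feature threshold stump under sample weights w (y in {-1,+1})."""
    best = (0, 0.0, 1, np.inf)   # (feature, threshold, polarity, weighted error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(X[:, f] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        f, t, pol, err = train_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, f] <= t, pol, -pol)
        w *= np.exp(-alpha * y * pred)   # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, f] <= t, pol, -pol)
                for a, f, t, pol in ensemble)
    return np.sign(score)

# Toy 2-D problem no single stump can solve: label +1 inside a band of feature 0.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(np.abs(X[:, 0]) < 0.5, 1, -1)
model = adaboost(X, y, rounds=20)
acc = (predict(model, X) == y).mean()
```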

  12. Boost capacity, slash LWBS rate with POD triage system.

    PubMed

    2011-04-01

    With bottlenecks boosting ED wait times as well as the LWBS rate, Methodist Hospital of Sacramento decided to boost its triage capacity by taking over six beds that were being used for fast-track patients, and by taking advantage of waiting-room space for patients who don't need to be placed in beds. Within a month of implementing the new approach, the LWBS rate dropped to less than 2%, and door-to-doc time was slashed by 20 minutes. Under the POD system, providers have 15 minutes to determine whether patients should be discharged, sent back to the waiting room while tests are conducted, or placed in an ED bed where they can be monitored. To implement the approach, no alterations in physician staffing were needed, but the hospital added a triage nurse and a task nurse to manage patient flow of the triage POD.

  13. High Temperature Boost (HTB) Power Processing Unit (PPU) Formulation Study

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Bradley, Arthur T.; Iannello, Christopher J.; Carr, Gregory A.; Mohammad, Mojarradi M.; Hunter, Don J.; DelCastillo, Linda; Stell, Christopher B.

    2013-01-01

    This technical memorandum is to summarize the Formulation Study conducted during fiscal year 2012 on the High Temperature Boost (HTB) Power Processing Unit (PPU). The effort is authorized and supported by the Game Changing Technology Division, NASA Office of the Chief Technologist. NASA center participation during the formulation includes LaRC, KSC and JPL. The Formulation Study continues into fiscal year 2013. The formulation study has focused on the power processing unit. The team has proposed a modular, power scalable, and new technology enabled High Temperature Boost (HTB) PPU, which has 5-10X improvement in PPU specific power/mass and over 30% in-space solar electric system mass saving.

  14. Black brane entropy and hydrodynamics: The boost-invariant case

    SciTech Connect

    Booth, Ivan; Heller, Michal P.; Spalinski, Michal

    2009-12-15

    The framework of slowly evolving horizons is generalized to the case of black branes in asymptotically anti-de Sitter spaces in arbitrary dimensions. The results are used to analyze the behavior of both event and apparent horizons in the gravity dual to boost-invariant flow. These considerations are motivated by the fact that at second order in the gradient expansion the hydrodynamic entropy current in the dual Yang-Mills theory appears to contain an ambiguity. This ambiguity, in the case of boost-invariant flow, is linked with a similar freedom on the gravity side. This leads to a phenomenological definition of the entropy of black branes. Some insights on fluid/gravity duality and the definition of entropy in a time-dependent setting are elucidated.

  15. Perception of straightness and parallelism with minimal distance information.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2016-07-01

    The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface.

  16. Parallel Computation for Developing Nonlinear Control Procedures.

    DTIC Science & Technology

    1981-07-01

    optimal and suboptimal control systems. The early work in the area of parameter identification can be attributed to Nyquist [1] and Bode [2] in which...line in an adaptive fashion. It should be emphasized that the goal of this chapter is to develop algorithms which possess a high degree of parallelism...control in an adaptive fashion. Note that the major goal is to utilize these parallel algorithms in an explicit adaptive controller of the type shown in

  17. Massively-Parallel Dislocation Dynamics Simulations

    SciTech Connect

    Cai, W; Bulatov, V V; Pierce, T G; Hiratani, M; Rhee, M; Bartelt, M; Tang, M

    2003-06-18

    Prediction of the plastic strength of single crystals based on the collective dynamics of dislocations has been a challenge for computational materials science for a number of years. The difficulty lies in the inability of the existing dislocation dynamics (DD) codes to handle a sufficiently large number of dislocation lines, in order to be statistically representative and to reproduce experimentally observed microstructures. A new massively-parallel DD code is developed that is capable of modeling million-dislocation systems by employing thousands of processors. We discuss the general aspects of this code that make such large scale simulations possible, as well as a few initial simulation results.

  18. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  19. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  20. The Voltage Boost Enabled by Luminescence Extraction in Solar Cells

    SciTech Connect

    Ganapati, Vidya; Steiner, Myles A.; Yablonovitch, Eli

    2016-11-21

    A new physical principle has emerged to produce record voltages and efficiencies in photovoltaic cells: 'luminescence extraction.' This is exemplified by the mantra 'a good solar cell should also be a good LED.' Luminescence extraction is the escape of internal photons out of the front surface of a solar cell. Basic thermodynamics says that the voltage boost should be related to the concentration ratio, C, of a resource by ΔV = (kT/q)ln{C}. In light trapping (i.e., when the solar cell is textured and has a perfect back mirror) the concentration ratio of photons is C = 4n², so one would expect a voltage boost of ΔV = (kT/q)ln{4n²} over a solar cell with no texture and zero back reflectivity, where n is the refractive index. Nevertheless, there has been ambiguity over the voltage benefit to be expected from perfect luminescence extraction. Do we gain an open-circuit voltage boost of ΔV = (kT/q)ln{n²}, ΔV = (kT/q)ln{2n²}, or ΔV = (kT/q)ln{4n²}? What is responsible for this voltage ambiguity ΔV = (kT/q)ln{4} = 36 mV? We show that different results come about depending on whether the photovoltaic cell is optically thin or thick to its internal luminescence. In realistic intermediate cases of optical thickness the voltage boost falls in between: ln{n²} < (qΔV/kT) < ln{4n²}.
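
    The arithmetic behind the 36 mV figure checks out directly: at room temperature kT/q ≈ 25.9 mV, so the gap between the ln{n²} and ln{4n²} limits is (kT/q)ln 4 ≈ 36 mV, independent of n. A quick check (the refractive index value is an assumed illustrative one):

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # assumed cell temperature, K
kT_q = k_B * T / q   # thermal voltage, ~25.9 mV

n = 3.5              # refractive index, typical of GaAs (assumed)
dV_thin = kT_q * np.log(n**2)        # optically thin limit
dV_thick = kT_q * np.log(4 * n**2)   # light-trapping (4n^2) limit
ambiguity = dV_thick - dV_thin       # = (kT/q) ln 4, the 36 mV gap
```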

  1. The Voltage Boost Enabled by Luminescence Extraction in Solar Cells

    DOE PAGES

    Ganapati, Vidya; Steiner, Myles A.; Yablonovitch, Eli

    2016-07-01

    Over the past few years, the application of the physical principle of 'luminescence extraction' has produced record voltages and efficiencies in photovoltaic cells. Luminescence extraction is the use of optical design, such as a back mirror or textured surfaces, to help internal photons escape out of the front surface of a solar cell. The principle of luminescence extraction is exemplified by the mantra 'a good solar cell should also be a good LED.' Basic thermodynamics says that the voltage boost should be related to the concentration ratio C of a resource by ΔV = (kT/q) ln(C). In light trapping (i.e., when the solar cell is textured and has a perfect back mirror), the concentration ratio of photons is C = 4n²; therefore, one would expect a voltage boost of ΔV = (kT/q) ln(4n²) over a solar cell with no texture and zero back reflectivity, where n is the refractive index. Nevertheless, there has been ambiguity over the voltage benefit to be expected from perfect luminescence extraction. Do we gain an open-circuit voltage boost of ΔV = (kT/q) ln(n²), ΔV = (kT/q) ln(2n²), or ΔV = (kT/q) ln(4n²)? What is responsible for this voltage ambiguity ΔV = (kT/q) ln(4) ≈ 36 mV? Finally, we show that different results come about, depending on whether the photovoltaic cell is optically thin or thick to its internal luminescence. In realistic intermediate cases of optical thickness, the voltage boost falls in between: ln(n²) < qΔV/kT < ln(4n²).

  2. G and C boost and abort study summary, exhibit B

    NASA Technical Reports Server (NTRS)

    Backman, H. D.

    1972-01-01

    A six degree of freedom simulation of rigid vehicles was developed to study space shuttle vehicle boost-abort guidance and control techniques. The simulation was described in detail as an all digital program and as a hybrid program. Only the digital simulation was implemented. The equations verified in the digital simulation were adapted for use in the hybrid simulation. Study results were obtained from four abort cases using the digital program.

  3. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve an optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works in parallel with the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of the auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space-variant systems and reduces their condition numbers from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third, space-invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+13 to 34,526.
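The conditioning idea can be illustrated with a toy numpy sketch (not the authors' optical implementation): given an ill-conditioned PSF matrix H, use its singular value decomposition to design an auxiliary response B that lifts the smallest singular values, so that H + B has a far smaller condition number. All matrix sizes and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# An ill-conditioned "PSF matrix" H with singular values spanning 5 decades.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -5, n)
H = U @ np.diag(s) @ V.T

# Auxiliary target: lift every singular value to at least s_min while
# leaving the dominant ones untouched, then realize B = H_target - H.
s_min = 0.05
B = U @ np.diag(np.maximum(s, s_min) - s) @ V.T

print("cond(H)   =", np.linalg.cond(H))      # ~1e5
print("cond(H+B) =", np.linalg.cond(H + B))  # ~20
```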

  4. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges, including the need for faster processing of such increased data volumes and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
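As a rough sketch of the kernel being parallelized (dimensions and data are synthetic; the parallel implementation distributes the mean and covariance accumulation across cluster nodes and combines them with a reduction), a serial band-space PCA looks like:

```python
import numpy as np

def pca(X, k):
    """Project (pixels x bands) data X onto its top-k principal components.
    A parallel version accumulates the mean and covariance sums over row
    blocks on each node and combines them with a reduction."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)              # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]
    return Xc @ V[:, order], w[order]

# Synthetic "hyperspectral" cube: 1000 pixels, 50 bands, ~3 latent sources.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 50))
X += 0.01 * rng.standard_normal((1000, 50))

scores, variances = pca(X, 5)
print(variances)  # the first three components dominate
```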

  6. A parallel cholinergic brainstem pathway for enhancing locomotor drive

    PubMed Central

    Smetana, Roy; Juvin, Laurent; Dubuc, Réjean; Alford, Simon

    2010-01-01

    The brainstem locomotor system is believed to be organized serially from the mesencephalic locomotor region (MLR) to reticulospinal neurons, which in turn, project to locomotor neurons in the spinal cord. In contrast, we now identify in lampreys, brainstem muscarinoceptive neurons receiving parallel inputs from the MLR and projecting back to reticulospinal cells to amplify and extend durations of locomotor output. These cells respond to muscarine with extended periods of excitation, receive direct muscarinic excitation from the MLR, and project glutamatergic excitation to reticulospinal neurons. Targeted block of muscarine receptors over these neurons profoundly reduces MLR-induced excitation of reticulospinal neurons and markedly slows MLR-evoked locomotion. Their presence forces us to rethink the organization of supraspinal locomotor control, to include a sustained feedforward loop that boosts locomotor output. PMID:20473293

  7. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches, including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation, in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois, based on these ideas, for exploiting amorphous data-parallelism on multicores and GPUs.
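A minimal sketch of the operator formulation (not the Galois implementation): an operator is applied to an active node and may activate neighbors, and activities touching disjoint nodes could run in parallel. Here the "operator" is edge relaxation in single-source shortest paths:

```python
from collections import deque

def worklist_sssp(graph, source):
    """Amorphous data-parallel sketch: the operator relaxes one active
    node's outgoing edges; newly improved neighbors become active.
    A scheduler is free to pick any active node, and activities on
    disjoint nodes could execute concurrently."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    work = deque([source])
    while work:
        u = work.popleft()
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                work.append(v)  # activate the improved neighbor
    return dist

g = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2)],
    "c": [("d", 1)],
    "d": [],
}
print(worklist_sssp(g, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```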

  8. The influence of the boost in breast-conserving therapy on cosmetic outcome in the EORTC "boost versus no boost" trial. EORTC Radiotherapy and Breast Cancer Cooperative Groups. European Organization for Research and Treatment of Cancer.

    PubMed

    Vrieling, C; Collette, L; Fourquet, A; Hoogenraad, W J; Horiot, J C; Jager, J J; Pierart, M; Poortmans, P M; Struikmans, H; Van der Hulst, M; Van der Schueren, E; Bartelink, H

    1999-10-01

    To evaluate the influence of a radiotherapy boost on the cosmetic outcome after 3 years of follow-up in patients treated with breast-conserving therapy (BCT). In EORTC trial 22881/10882, 5569 Stage I and II breast cancer patients were treated with tumorectomy and axillary dissection, followed by tangential irradiation of the breast to a dose of 50 Gy in 5 weeks, at 2 Gy per fraction. Patients having a microscopically complete tumor excision were randomized between no boost and a boost of 16 Gy. The cosmetic outcome was evaluated by a panel, scoring photographs of 731 patients taken soon after surgery and 3 years later, and by digitizer measurements, measuring the displacement of the nipple of 3000 patients postoperatively and of 1141 patients 3 years later. There was no difference in the cosmetic outcome between the two treatment arms after surgery, before the start of radiotherapy. At 3-year follow-up, both the panel evaluation and the digitizer measurements showed that the boost had a significant adverse effect on the cosmetic result. The panel evaluation at 3 years showed that 86% of patients in the no-boost group had an excellent or good global result, compared to 71% of patients in the boost group (p = 0.0001). The digitizer measurements at 3 years showed a relative breast retraction assessment (pBRA) of 7.6 pBRA in the no-boost group, compared to 8.3 pBRA in the boost group, indicating a worse cosmetic result in the boost group at follow-up (p = 0.04). These results showed that a boost dose of 16 Gy had a negative, but limited, impact on the cosmetic outcome after 3 years.

  9. Boosted learned kernels for data-driven vesselness measure

    NASA Astrophysics Data System (ADS)

    Grisan, E.

    2017-03-01

    Common vessel centerline extraction methods rely on the computation of a measure providing the likeness of the local appearance of the data to a curvilinear tube-like structure. The most popular techniques rely on empirically designed (hand-crafted) measures, such as the widely used Hessian vesselness, the recent oriented-flux tubeness, or filters (e.g., the Gaussian matched filter) developed to respond to local features, without exploiting any context information or the rich structural information embedded in the data. At variance with the previously proposed methods, we propose a completely data-driven approach for learning a vesselness measure from an expert-annotated dataset. For each data point (voxel or pixel), we extract the intensity values in a neighborhood region, and estimate the discriminative convolutional kernel yielding a positive response for vessel data and a negative response for non-vessel data. The process is iterated within a boosting framework, providing a set of linear filters whose combined response is the learned vesselness measure. We show the results of the proposed general-use method on the DRIVE retinal images dataset, comparing its performance against the Hessian-based vesselness, oriented flux antisymmetry tubeness, and vesselness learned with a probabilistic boosting tree or with a regression tree. We demonstrate the superiority of our approach, which yields a vessel detection accuracy of 0.95, with respect to 0.92 (Hessian), 0.90 (oriented flux) and 0.85 (boosting tree).

  10. Action Classification by Joint Boosting Using Spatiotemporal and Depth Information

    NASA Astrophysics Data System (ADS)

    Ikemura, Sho; Fujiyoshi, Hironobu

    This paper presents a method for action classification using Joint Boosting with depth information obtained by a TOF camera. Our goal is to classify the action of a customer who takes goods from the upper, middle, or lower shelf in supermarkets and convenience stores. Our method detects the human region by applying Pixel State Analysis (PSA) to the depth image stream obtained by the TOF camera, and extracts PSA features captured from human motion and depth features (the peak value of depth) captured from human height. We employ Joint Boosting, a multi-class boosting method, to perform the action classification. Since the proposed method employs spatiotemporal and depth features, it can simultaneously detect the action of taking goods and classify the height of the shelf. Experimental results show that our method using the PSA feature and the peak value of depth achieved a classification rate of 93.2%, a 3.1% higher performance than that of the CHLAC feature and 2.8% higher than that of the ST-patch feature.

  11. Stereotactic Body Radiation Therapy Boost in Locally Advanced Pancreatic Cancer

    SciTech Connect

    Seo, Young Seok; Kim, Mi-Sook; Yoo, Sung Yul; Cho, Chul Koo; Yang, Kwang Mo; Yoo, Hyung Jun; Choi, Chul Won; Lee, Dong Han; Kim, Jin; Kim, Min Suk; Kang, Hye Jin; Kim, YoungHan

    2009-12-01

    Purpose: To investigate the clinical application of a stereotactic body radiation therapy (SBRT) boost in locally advanced pancreatic cancer patients with a focus on local efficacy and toxicity. Methods and Materials: We retrospectively reviewed 30 patients with locally advanced and nonmetastatic pancreatic cancer who had been treated between 2004 and 2006. Follow-up duration ranged from 4 to 41 months (median, 14.5 months). A total dose of 40 Gy was delivered in 20 fractions using a conventional three-field technique, and then a single fraction of 14, 15, 16, or 17 Gy SBRT was administered as a boost without a break. Twenty-one patients received chemotherapy. Overall and local progression-free survival were calculated and prognostic factors were evaluated. Results: One-year overall survival and local progression-free survival rates were 60.0% and 70.2%, respectively. One patient (3%) developed Grade 4 toxicity. Carbohydrate antigen 19-9 response was found to be an independent prognostic factor for survival. Conclusions: Our findings indicate that a SBRT boost provides a safe means of increasing radiation dose. Based on the results of this study, we recommend that a well controlled Phase II study be conducted on locally advanced pancreatic cancer.

  12. Asia-Pacific Region Water Boosted Rocket Events

    NASA Astrophysics Data System (ADS)

    Oyama, K.-I.; Hidayat, A.; Sofyan, E.; Sinha, H. S. S.; Herudi, K.; Kubota, T.; Sukkarieh, S.; Arban, J. L.; Chung, D. M.; Medagangoda, I.; Mohd, Z. B.; Pitan, S.; Chin, C.; Sarkar, F. R.

    2010-05-01

    The Space Education and Awareness Working Group, one of four working groups of the Asia-Pacific Regional Space Agency Forum, organized water boosted rocket competitions in Japan in 2005, in Indonesia in 2006, and in India in 2007. One junior high school student (12-15 years old) and one leader from 9 and 13 Asian/Pacific countries attended the 1st, 2nd, and 3rd water rocket events, respectively. The 4th event is planned in Vietnam, in December 2008. The manuscript introduces the structure and activities of the Space Education and Awareness Working Group, which works under the Asia-Pacific Regional Space Agency Forum sponsored by the Ministry of Education, Culture, Sports, Science and Technology of Japan, and describes the application of the water boosted rocket to other fields. Details of the water boosted rocket events, such as the purpose, competition rules, and schedules, are provided. Finally, we discuss the issues to be taken into account for future events.

  13. Chagas Parasite Detection in Blood Images Using AdaBoost

    PubMed Central

    Uc-Cetina, Víctor; Brito-Loeza, Carlos; Ruiz-Piña, Hugo

    2015-01-01

    The Chagas disease is a potentially life-threatening illness caused by the protozoan parasite, Trypanosoma cruzi. Visual detection of such parasite through microscopic inspection is a tedious and time-consuming task. In this paper, we provide an AdaBoost learning solution to the task of Chagas parasite detection in blood images. We give details of the algorithm and our experimental setup. With this method, we get 100% and 93.25% of sensitivity and specificity, respectively. A ROC comparison with the method most commonly used for the detection of malaria parasites based on support vector machines (SVM) is also provided. Our experimental work shows mainly two things: (1) Chagas parasites can be detected automatically using machine learning methods with high accuracy and (2) AdaBoost + SVM provides better overall detection performance than AdaBoost or SVMs alone. Such results are the best ones known so far for the problem of automatic detection of Chagas parasites through the use of machine learning, computer vision, and image processing methods. PMID:25861375
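A minimal sketch of the AdaBoost core with decision stumps (this is the generic algorithm, not the authors' image-feature pipeline; the data here is synthetic):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=5):
    """Minimal AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # example weights
    model = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):         # exhaustive stump search
            for t in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] < t, -1, 1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, sign, pred)
        err, f, t, sign, pred = best
        err = max(err, 1e-12)               # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)      # re-weight toward mistakes
        w /= w.sum()
        model.append((alpha, f, t, sign))
    return model

def predict(model, X):
    score = sum(a * s * np.where(X[:, f] < t, -1, 1) for a, f, t, s in model)
    return np.sign(score)

X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_stumps(X, y)
print((predict(model, X) == y).all())  # True
```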

  14. Modeling of laser wakefield acceleration in Lorentz boosted frame using EM-PIC code with spectral solver

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng; Xu, Xinlu; Decyk, Viktor K.; An, Weiming; Vieira, Jorge; Tsung, Frank S.; Fonseca, Ricardo A.; Lu, Wei; Silva, Luis O.; Mori, Warren B.

    2014-06-01

    Simulating laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which the plasma drifts towards the laser with velocity vb, can speed up the simulation by factors of γb²(1+βb)². In these simulations the relativistic drifting plasma inevitably induces a high-frequency numerical instability that contaminates the physics of interest. Various approaches have been proposed to mitigate this instability. One approach is to solve Maxwell's equations in Fourier space (a spectral solver), as this has been shown to suppress the fastest growing modes of the instability in simple test problems using simple low-pass, "ring," or "shell" filters in Fourier space. We describe the development of a fully parallelized, multi-dimensional, particle-in-cell code that uses a spectral solver to solve Maxwell's equations and that includes the ability to launch a laser using a moving antenna. This new EM-PIC code is called UPIC-EMMA and it is based on the components of the UCLA PIC framework (UPIC). We show that by using UPIC-EMMA, LWFA simulations in boosted frames with arbitrary γb can be conducted without the presence of the numerical instability. We also compare the results of a few LWFA cases for several values of γb, including lab-frame simulations using OSIRIS, an EM-PIC code with a finite-difference time-domain (FDTD) Maxwell solver. These comparisons include cases in both linear and nonlinear regimes. We also investigate some issues associated with numerical dispersion in lab- and boosted-frame simulations and between FDTD and spectral solvers.
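Assuming the commonly cited boosted-frame scaling (1+βb)²γb², the expected step-count reduction can be computed directly (a sketch only; the exact prefactor depends on the simulated setup):

```python
import math

def boosted_frame_speedup(gamma_b):
    """Step-count reduction factor (1 + beta_b)^2 * gamma_b^2 for a frame
    boosted with Lorentz factor gamma_b; approaches 4 * gamma_b^2 for
    ultrarelativistic boosts."""
    beta_b = math.sqrt(1.0 - 1.0 / gamma_b**2)
    return (1.0 + beta_b) ** 2 * gamma_b**2

for g in (1, 5, 20):
    print(g, boosted_frame_speedup(g))
```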

  15. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  16. Parallel Computational Protein Design

    PubMed Central

    Zhou, Yichao; Donald, Bruce R.; Zeng, Jianyang

    2016-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that is guaranteed to find the global minimum energy conformation (GMEC) is to combine the dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program previously developed in the Donald lab [1] to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE [2] and DEEPer [3] to also consider continuous backbone and side-chain flexibility. PMID:27914056
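For reference, a textbook serial A* sketch (gOSPREY's contribution is the massively parallel GPU expansion of conformation-tree nodes; the toy graph and zero heuristic below are illustrative):

```python
import heapq

def a_star(graph, h, start, goal):
    """Serial A*: graph maps node -> [(neighbor, edge_cost)]; h must be an
    admissible heuristic (h = 0 reduces to Dijkstra's algorithm)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 3)], "B": [("D", 4)], "C": [("D", 1)], "D": []}
print(a_star(graph, lambda n: 0, "A", "D"))  # (4, ['A', 'C', 'D'])
```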

  17. CS-Studio Scan System Parallelization

    SciTech Connect

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of the experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  18. Sequential and Parallel Matrix Computations.

    DTIC Science & Technology

    1985-11-01

    Theory" published by the American Math Society. (C) Jointly with A. Sameh of University of Illinois, a parallel algorithm for the single-input pole...an M.Sc. thesis at Northern Illinois University by Ava Chun and, the results were compared with parallel Q-R algorithm of Sameh and Kuck and the

  19. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsh, Richard S.

    1988-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  20. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsch, Richard S.

    1989-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  1. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

    by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as example.
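A minimal PSO sketch (the generic algorithm, not the report's implementation): the per-iteration fitness evaluation of all particles is the embarrassingly parallel step that a parallel PSO distributes across processors. The coefficients and test function below are illustrative:

```python
import numpy as np

def pso(f, dim=2, particles=30, iters=100, seed=0):
    """Minimal particle swarm optimizer. The expensive step is evaluating
    f on every particle each iteration; in a parallel PSO that map is
    distributed across processors."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)   # the parallelizable map
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best_x, best_f = pso(lambda p: float((p ** 2).sum()))
print(best_f)  # near 0 for the sphere function
```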

  2. Parallelization of the Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
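For reference, the serial Thomas algorithm consists of the forward-elimination and back-substitution sweeps that the pipelined variants interleave across processors (a generic sketch, not the article's IB-PTA):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 5
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))  # True
```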

  3. ParCAT: Parallel Climate Analysis Toolkit

    SciTech Connect

    Smith, Brian E.; Steed, Chad A.; Shipman, Galen M.; Ricciuto, Daniel M.; Thornton, Peter E.; Wehner, Michael; Williams, Dean N.

    2013-01-01

    Climate science is employing increasingly complex models and simulations to analyze the past and predict the future of Earth's climate. This growth in complexity is creating a widening gap between the data being produced and the ability to analyze the datasets. Parallel computing tools are necessary to analyze, compare, and interpret the simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools to efficiently use parallel computing techniques to make analysis of these datasets manageable. The toolkit provides the ability to compute spatio-temporal means, differences between runs or differences between averages of runs, and histograms of the values in a dataset. ParCAT is implemented as a command-line utility written in C. This allows for easy integration in other tools and allows for use in scripts. This also makes it possible to run ParCAT on many platforms, from laptops to supercomputers. ParCAT outputs NetCDF files, so it is compatible with existing utilities such as Panoply and UV-CDAT. This paper describes ParCAT and presents results from some example runs on the Titan system at ORNL.
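The operations listed above map directly onto array reductions. A numpy sketch with synthetic stand-ins for two runs (ParCAT itself is a C command-line tool operating on NetCDF files; all names and values here are illustrative):

```python
import numpy as np

# Synthetic stand-ins for two model runs: (time, lat, lon) temperature fields.
rng = np.random.default_rng(0)
run_a = 288.0 + rng.standard_normal((120, 10, 20))
run_b = run_a + 0.5                     # a uniformly 0.5 K warmer run

temporal_mean = run_a.mean(axis=0)      # per-grid-cell climatology
spatial_mean = run_a.mean(axis=(1, 2))  # global-mean time series
diff = run_b.mean(axis=0) - run_a.mean(axis=0)  # difference of run averages
hist, edges = np.histogram(run_a, bins=50)      # distribution of the values

print(diff.mean())  # 0.5 by construction
```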

  4. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
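The parallel pattern, and why contingency tables resist ideal speed-up, can be sketched in a few lines: each processor tabulates its own chunk, and the reduction merges tables whose size grows with the number of distinct categories, unlike fixed-size moment reductions. A Python sketch, not the VTK/Titan C++ API:

```python
from collections import Counter
from functools import reduce

def local_contingency(pairs):
    """Each processor tabulates its own chunk of (x, y) observations."""
    return Counter(pairs)

def merge(t1, t2):
    """Reduction step. Unlike fixed-size reductions (means, covariances),
    the merged table grows with the number of distinct category pairs,
    which is what limits the parallel speed-up."""
    return t1 + t2

chunks = [                       # one list per "processor"
    [("a", 0), ("a", 1), ("b", 0)],
    [("a", 0), ("b", 1), ("b", 1)],
]
table = reduce(merge, map(local_contingency, chunks))
print(table[("a", 0)], table[("b", 1)])  # 2 2
```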

  5. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
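    The multi-disk declustering that such file systems rely on can be sketched as round-robin striping of fixed-size blocks. This is an illustrative toy in Python, not Galley's actual on-disk layout or API:

    ```python
    # Round-robin striping of a byte stream across simulated I/O servers,
    # and the inverse reassembly. Block size and server count are arbitrary.

    def stripe(data, nservers, block=4):
        """Distribute fixed-size blocks across nservers in round-robin order."""
        servers = [bytearray() for _ in range(nservers)]
        for i in range(0, len(data), block):
            servers[(i // block) % nservers] += data[i:i + block]
        return servers

    def unstripe(servers, block=4):
        """Reassemble the original byte stream from the striped blocks."""
        out = bytearray()
        cursors = [0] * len(servers)
        s = 0
        while any(c < len(srv) for c, srv in zip(cursors, servers)):
            out += servers[s][cursors[s]:cursors[s] + block]
            cursors[s] += block
            s = (s + 1) % len(servers)
        return bytes(out)

    payload = b"parallel file systems stripe data"
    parts = stripe(payload, 3)
    assert unstripe(parts) == payload
    ```

    A transparent Unix-like interface hides exactly this layout; Galley's argument is that exposing it lets I/O-aware libraries align their access patterns with the stripes.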

  6. Zone lines

    Treesearch

    Kevin T. Smith

    2001-01-01

    Zone lines are narrow, usually dark markings formed in decaying wood. Zone lines are found most frequently in advanced white rot of hardwoods, although they occasionally are associated both with brown rot and with softwoods.

  7. Measure Lines

    ERIC Educational Resources Information Center

    Crissman, Sally

    2011-01-01

    One tool for enhancing students' work with data in the science classroom is the measure line. As a coteacher and curriculum developer for The Inquiry Project, the author has seen how measure lines--a number line in which the numbers refer to units of measure--help students not only represent data but also analyze it in ways that generate…

  8. Vertical bloch line memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Stadler, Henry L. (Inventor); Wu, Jiin-chuan (Inventor)

    1995-01-01

    A new read gate design for the vertical Bloch line (VBL) memory is disclosed which offers larger operating margin than the existing read gate designs. In the existing read gate designs, a current is applied to all the stripes. The stripes that contain a VBL pair are chopped, while the stripes that do not contain a VBL pair are not chopped. The information is then detected by inspecting the presence or absence of the bubble. The margin of the chopping current amplitude is very small, and sometimes non-existent. A new method of reading vertical Bloch line memory is also disclosed. Instead of using the wall chirality to separate the two binary states, the spatial deflection of the stripe head is used. Also disclosed herein is a compact memory which uses vertical Bloch line (VBL) memory technology for providing data storage. A three-dimensional arrangement in the form of stacks of VBL memory layers is used to achieve high volumetric storage density. High data transfer rate is achieved by operating all the layers in parallel. Using Hall effect sensing and optical sensing via the Faraday effect to access the data from within the three-dimensional packages, an even higher data transfer rate can be achieved due to parallel operation within each layer.

  9. "With One Lip, With Two Lips"; Parallelism in Nahuatl.

    ERIC Educational Resources Information Center

    Bright, William

    1990-01-01

    Texts in Classical Nahuatl from 1524, in the genre of formal oratory, reveal extensive use of lines showing parallel morphosyntactic and semantic structure. Analysis and translation of a passage point to the applicability of structural analysis to "expressive" as well as "referential" texts; and the importance of understanding…

  11. Fast AdaBoost-Based Face Detection System on a Dynamically Coarse Grain Reconfigurable Architecture

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Zhang, Jinguo; Zhu, Min; Yang, Jun; Shi, Longxing

    An AdaBoost-based face detection system is proposed, on a Coarse Grain Reconfigurable Architecture (CGRA) named “REMUS-II”. Our work is quite distinguished from previous ones in three aspects. First, a new hardware-software partition method is proposed and the whole face detection system is divided into several parallel tasks implemented on two Reconfigurable Processing Units (RPU) and one micro Processor Unit (µPU) according to their relationships. These tasks communicate with each other by a mailbox mechanism. Second, a strong classifier is treated as the smallest phase of the detection system, and every phase needs to be executed by these tasks in order. A phase of the Haar classifier is dynamically mapped onto a Reconfigurable Cell Array (RCA) only when needed, which is quite different from traditional Field Programmable Gate Array (FPGA) methods in which all the classifiers are fabricated statically. Third, optimized data and configuration word pre-fetch mechanisms are employed to improve the whole system performance. Implementation results show that our approach under a 200MHz clock rate can process up to 17 frames per second on VGA size images, and the detection rate is over 95%. Our system consumes 194mW, and the die size of the fabricated chip is 23mm2 using TSMC 65nm standard cell based technology. To the best of our knowledge, this work is the first implementation of the cascade Haar classifier algorithm on a dynamically reconfigurable CGRA platform presented in the literature.
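    The cascade structure that makes stage-at-a-time mapping attractive can be sketched in a few lines. This is a toy in Python with invented "features", not the REMUS-II hardware mapping or real Haar features: each stage is a boosted strong classifier, and a window is rejected as soon as a stage's weighted score falls below that stage's threshold, so most windows exit after the cheap early stages:

    ```python
    # Toy AdaBoost cascade with early rejection.

    def stage_score(window, weak_classifiers):
        """Weighted vote of weak classifiers; each is (feature_fn, threshold, weight)."""
        return sum(w for fn, thr, w in weak_classifiers if fn(window) > thr)

    def cascade_detect(window, stages):
        """stages: list of (weak_classifiers, stage_threshold). Reject early."""
        for weak, stage_thr in stages:
            if stage_score(window, weak) < stage_thr:
                return False          # rejected: remaining stages never run
        return True                   # passed every stage: candidate face

    # Hypothetical "features": mean intensity and center-surround contrast.
    mean = lambda win: sum(win) / len(win)
    contrast = lambda win: win[len(win) // 2] - mean(win)

    stages = [
        ([(mean, 0.2, 1.0)], 0.5),                        # cheap first stage
        ([(mean, 0.2, 0.6), (contrast, 0.1, 0.7)], 1.0),  # stricter second stage
    ]
    print(cascade_detect([0.5, 0.5, 0.9, 0.5, 0.5], stages))  # True
    print(cascade_detect([0.0, 0.0, 0.0, 0.0, 0.0], stages))  # False
    ```

    Because only the currently active stage needs hardware resources at any moment, dynamically mapping one phase at a time onto the RCA trades reconfiguration latency for a much smaller fabric than a fully static FPGA pipeline.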

  12. Gut inflammation can boost horizontal gene transfer between pathogenic and commensal Enterobacteriaceae

    PubMed Central

    Stecher, Bärbel; Denzler, Rémy; Maier, Lisa; Bernet, Florian; Sanders, Mandy J.; Pickard, Derek J.; Barthel, Manja; Westendorf, Astrid M.; Krogfelt, Karen A.; Walker, Alan W.; Ackermann, Martin; Dobrindt, Ulrich; Thomson, Nicholas R.; Hardt, Wolf-Dietrich

    2012-01-01

    The mammalian gut harbors a dense microbial community interacting in multiple ways, including horizontal gene transfer (HGT). Pangenome analyses established particularly high levels of genetic flux between Gram-negative Enterobacteriaceae. However, the mechanisms fostering intraenterobacterial HGT are incompletely understood. Using a mouse colitis model, we found that Salmonella-inflicted enteropathy elicits parallel blooms of the pathogen and of resident commensal Escherichia coli. These blooms boosted conjugative HGT of the colicin-plasmid p2 from Salmonella enterica serovar Typhimurium to E. coli. Transconjugation efficiencies of ∼100% in vivo were attributable to high intrinsic p2-transfer rates. Plasmid-encoded fitness benefits contributed little. Under normal conditions, HGT was blocked by the commensal microbiota inhibiting contact-dependent conjugation between Enterobacteriaceae. Our data show that pathogen-driven inflammatory responses in the gut can generate transient enterobacterial blooms in which conjugative transfer occurs at unprecedented rates. These blooms may favor reassortment of plasmid-encoded genes between pathogens and commensals fostering the spread of fitness-, virulence-, and antibiotic-resistance determinants. PMID:22232693

  13. Pharmacodynamics of long-acting folic acid-receptor targeted ritonavir boosted atazanavir nanoformulations

    PubMed Central

    Puligujja, Pavan; Balkundi, Shantanu; Kendrick, Lindsey; Baldridge, Hannah; Hilaire, James; Bade, Aditya N.; Dash, Prasanta K.; Zhang, Gang; Poluektova, Larisa; Gorantla, Santhi; Liu, Xin-Ming; Ying, Tianlei; Feng, Yang; Wang, Yanping; Dimitrov, Dimiter S.; McMillan, JoEllyn M.; Gendelman, Howard E.

    2014-01-01

    Long-acting nanoformulated antiretroviral therapy (nanoART) that targets monocyte-macrophages could improve the drug’s half-life and protein-binding capacities while facilitating cell and tissue depots. To this end, ART nanoparticles that target the folic acid (FA) receptor and permit cell-based drug depots were examined using pharmacokinetic and pharmacodynamic (PD) tests. FA receptor-targeted poloxamer 407 nanocrystals, containing ritonavir-boosted atazanavir (ATV/r), significantly affected several therapeutic factors: drug bioavailability increased as much as 5 times and PD activity improved as much as 100 times. Drug particles administered to human peripheral blood lymphocyte reconstituted NOD.Cg-PrkdcscidIl2rgtm1Wjl/SzJ mice infected with HIV-1ADA at a tissue culture infective dose50 of 104 infectious viral particles/ml led to ATV/r drug concentrations that paralleled FA receptor beta staining in both the macrophage-rich parafollicular areas of spleen and lymph nodes. Drug levels were higher in these tissues than what could be achieved by either native drug or untargeted nanoART particles. The data also mirrored potent reductions in viral loads, tissue viral RNA and numbers of HIV-1p24+ cells in infected and treated animals. We conclude that FA-P407 coating of ART nanoparticles readily facilitates drug carriage and antiretroviral responses. PMID:25522973

  14. Pharmacodynamics of long-acting folic acid-receptor targeted ritonavir-boosted atazanavir nanoformulations.

    PubMed

    Puligujja, Pavan; Balkundi, Shantanu S; Kendrick, Lindsey M; Baldridge, Hannah M; Hilaire, James R; Bade, Aditya N; Dash, Prasanta K; Zhang, Gang; Poluektova, Larisa Y; Gorantla, Santhi; Liu, Xin-Ming; Ying, Tianlei; Feng, Yang; Wang, Yanping; Dimitrov, Dimiter S; McMillan, JoEllyn M; Gendelman, Howard E

    2015-02-01

    Long-acting nanoformulated antiretroviral therapy (nanoART) that targets monocyte-macrophages could improve the drug's half-life and protein-binding capacities while facilitating cell and tissue depots. To this end, ART nanoparticles that target the folic acid (FA) receptor and permit cell-based drug depots were examined using pharmacokinetic and pharmacodynamic (PD) tests. FA receptor-targeted poloxamer 407 nanocrystals, containing ritonavir-boosted atazanavir (ATV/r), significantly increased drug bioavailability and PD by five and 100 times, respectively. Drug particles administered to human peripheral blood lymphocyte reconstituted NOD.Cg-Prkdc(scid)Il2rg(tm1Wjl)/SzJ mice and infected with HIV-1ADA led to ATV/r drug concentrations that paralleled FA receptor beta staining in both the macrophage-rich parafollicular areas of spleen and lymph nodes. Drug levels were higher in these tissues than what could be achieved by either native drug or untargeted nanoART particles. The data also mirrored potent reductions in viral loads, tissue viral RNA and numbers of HIV-1p24+ cells in infected and treated animals. We conclude that FA-P407 coating of ART nanoparticles readily facilitates drug carriage and antiretroviral responses.

  15. Parallel Impurity Spreading During Massive Gas Injection

    NASA Astrophysics Data System (ADS)

    Izzo, V. A.

    2016-10-01

    Extended-MHD simulations of disruption mitigation in DIII-D demonstrate that both pre-existing islands (locked-modes) and plasma rotation can significantly influence toroidal spreading of impurities following massive gas injection (MGI). Given the importance of successful disruption mitigation in ITER and the large disparity in device parameters, empirical demonstrations of disruption mitigation strategies in present tokamaks are insufficient to inspire unreserved confidence for ITER. Here, MHD simulations elucidate how impurities injected as a localized jet spread toroidally and poloidally. Simulations with large pre-existing islands at the q = 2 surface reveal that the magnetic topology strongly influences the rate of impurity spreading parallel to the field lines. Parallel spreading is largely driven by rapid parallel heat conduction, and is much faster at low order rational surfaces, where a short parallel connection length leads to faster thermal equilibration. Consequently, the presence of large islands, which alter the connection length, can slow impurity transport; but the simulations also show that the appearance of a 4/2 harmonic of the 2/1 mode, which breaks up the large islands, can increase the rate of spreading. This effect is seen both for simulations with spontaneously growing and directly imposed 4/2 modes. Given the prevalence of locked-modes as a cause of disruptions, understanding the effect of large islands is of particular importance. Simulations with and without islands also show that rotation can alter impurity spreading, even reversing the predominant direction of spreading, which is toward the high-field-side in the absence of rotation. Given expected differences in rotation for ITER vs. DIII-D, rotation effects are another important consideration when extrapolating experimental results. Work supported by US DOE under DE-FG02-95ER54309.

  16. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  17. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
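    The reblocking effect described above follows from a simple bound: with block-level parallelism, speedup is limited by the most heavily loaded worker, so a few large blocks cap the achievable speedup regardless of worker count. A back-of-the-envelope sketch (block costs here are invented, not the F/A-18 data set):

    ```python
    # Speedup under greedy largest-first block assignment:
    # serial time / time of the most loaded worker.

    def speedup(block_costs, nworkers):
        """Greedy largest-first assignment; speedup = serial / parallel time."""
        loads = [0.0] * nworkers
        for cost in sorted(block_costs, reverse=True):
            loads[loads.index(min(loads))] += cost  # give block to lightest worker
        return sum(block_costs) / max(loads)

    coarse = [50, 20, 10, 10, 5, 5]   # one dominant block, total work 100
    fine = [10] * 10                  # same total work, evenly reblocked
    print(round(speedup(coarse, 4), 2))  # limited by the 50-unit block: 2.0
    print(round(speedup(fine, 4), 2))    # 3.33
    ```

    This is the same qualitative picture as the reported jump from 3.2 to 11.8 after reblocking: splitting the dominant blocks raises the ceiling on parallel speedup.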

  18. Parallel processing for control applications

    SciTech Connect

    Telford, J. W.

    2001-01-01

    Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one computer to control a process has many advantages that compensate for the additional cost. Initially, multiple computers were used to attain higher speeds. A single cpu could not perform all of the operations necessary for real time operation. As technology progressed and cpu's became faster, the speed issue became less significant. The additional processing capabilities however continue to make high speeds an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing include visions of hundreds of single computers networked to provide 'computing power'. Indeed our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single chip computers available. If one backs away from the PC based parallel computing model and considers the possibilities of a parallel control device based on multiple single chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single chip computers in a parallel configuration with emphasis placed on maximum reliability.
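    The reliability argument sketched above is the classic redundancy-and-voting pattern: run the same control computation on several independent processors and vote on the result, so one faulty unit cannot corrupt the output. A minimal illustration (a generic triple modular redundancy sketch, not a design from the report itself):

    ```python
    # Majority voting across redundant controllers.
    from collections import Counter

    def majority_vote(readings):
        """Return the value agreed on by a majority of processors, else None."""
        value, count = Counter(readings).most_common(1)[0]
        return value if count > len(readings) / 2 else None

    # Three single-chip controllers compute a valve setting; one glitches.
    assert majority_vote([42, 42, 17]) == 42
    # With no majority, a supervisor can fall back to a safe state.
    assert majority_vote([1, 2, 3]) is None
    ```

    With cheap single-chip computers, the cost of the extra units is small relative to the robustness gained, which is the study's central trade-off.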

  19. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
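    The template comparison described above can be sketched as an rsync-style block delta: each node checksums its checkpoint blocks and transmits only the blocks whose checksum differs from the broadcast template. This is an illustrative Python sketch (SHA-256 in place of the patent's rolling checksums; block size and names are invented):

    ```python
    # Rsync-style checkpoint delta against a template image.
    import hashlib

    def block_sums(data, block=8):
        """Checksum each fixed-size block of a checkpoint image."""
        return [hashlib.sha256(data[i:i + block]).hexdigest()
                for i in range(0, len(data), block)]

    def delta(template_sums, data, block=8):
        """Return (index, bytes) only for blocks that differ from the template."""
        return [(i, data[i * block:(i + 1) * block])
                for i, s in enumerate(block_sums(data, block))
                if i >= len(template_sums) or s != template_sums[i]]

    template = b"node state 0000 node state 0000 "
    current  = b"node state 0000 node state 0042 "
    changed = delta(block_sums(template), current)
    print(len(changed), "of", len(block_sums(current)), "blocks transmitted")
    # 1 of 4 blocks transmitted
    ```

    Since checkpoints of neighboring iterations share most of their state, shipping only the differing blocks is what yields the claimed reduction in transmitted and stored data.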

  20. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.
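    The communication-buffering and strobing techniques can be sketched as follows (a toy in Python, not the MPI-level implementation): sends are accumulated locally and only exchanged at periodic strobes, so scheduling decisions can be made with a global picture of pending communication rather than per-message:

    ```python
    # Buffered sends flushed in bulk at each strobe.

    class BufferedChannel:
        def __init__(self):
            self.pending = []     # messages buffered between strobes
            self.delivered = []   # messages exchanged at the last strobe

        def send(self, dest, payload):
            """Non-blocking send: just enqueue until the next strobe."""
            self.pending.append((dest, payload))

        def strobe(self):
            """Global exchange point: flush all buffered messages at once."""
            self.delivered, self.pending = self.pending, []
            return len(self.delivered)

    ch = BufferedChannel()
    ch.send(1, "halo row")
    ch.send(2, "halo col")
    assert ch.strobe() == 2        # both messages move in one bulk exchange
    assert ch.pending == []
    ```

    Batching communication at strobe boundaries is what enables the global optimizations the abstract describes, at the cost of added latency for individual messages.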