Science.gov

Sample records for accurate high performance

  1. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. PMID:9096089
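The reconstruction described above can be sketched as a conjugate-gradients solve of the normal equations for a linear acquisition model. The snippet below is a toy illustration of that idea only: the matrix sizes, random system response and iteration count are invented for the example, not the paper's 64x64 SPET geometry or its ten-iteration protocol.

```python
import numpy as np

# Toy sketch: A models the acquisition (projection) process; reconstruction
# solves the normal equations A^T A x = A^T b with conjugate gradients.
rng = np.random.default_rng(0)
n_proj, n_pix = 32, 16
A = rng.random((n_proj, n_pix))      # hypothetical system response model
x_true = rng.random(n_pix)           # activity distribution to recover
b = A @ x_true                       # simulated projection data

def cg_reconstruct(A, b, n_iter):
    """Conjugate gradients on the normal equations A^T A x = A^T b."""
    AtA, Atb = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    r = Atb - AtA @ x                # initial residual
    p = r.copy()                     # initial search direction
    for _ in range(n_iter):
        Ap = AtA @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# More iterations than the paper's ten, to fully converge this toy system.
x_rec = cg_reconstruct(A, b, n_iter=50)
```

For this small, consistent system the reconstruction recovers the simulated activity to high precision; in real ECT the same scheme is applied with a physically accurate (resolution/attenuation/scatter-aware) system matrix.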

  2. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    PubMed

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-01

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened. PMID:26278259
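The screening protocol above (flag ~10% of the library with a cheap classifier, then run expensive simulations only on the flagged subset, checking how many of the true top-1000 materials survive) can be simulated in a few lines. Everything below is invented for illustration: the library size, the uptake distribution and the noise model standing in for the QSPR classifier's error.

```python
import numpy as np

# Synthetic library: "true" CO2 uptakes plus a noisy surrogate score that
# stands in for the QSPR classifier's prediction.
rng = np.random.default_rng(1)
n_lib = 100_000
uptake = rng.gamma(2.0, 1.0, n_lib)          # hypothetical uptake, mmol/g
score = uptake + rng.normal(0.0, 0.3, n_lib) # noisy ML prediction

# Flag the top 10% of scores for compute-intensive screening.
flag_thresh = np.quantile(score, 0.90)
flagged = score >= flag_thresh

# How many of the truly best 1000 materials survive the filter?
top1000 = np.argsort(uptake)[-1000:]
recovery_rate = flagged[top1000].sum() / 1000
```

With a reasonably accurate surrogate, nearly all top materials survive while 90% of the library is skipped, which is the order-of-magnitude compute saving the abstract describes.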

  3. High resolution as a key feature to perform accurate ELISPOT measurements using Zeiss KS ELISPOT readers.

    PubMed

    Malkusch, Wolf

    2005-01-01

    The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used for the determination of the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins (IL-4, IL-5). ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity problems and separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. Number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round, but because of floating edges with the background, they have a nonsmooth borderline. Resolution is a key feature for a precise detection of ELISPOT. In standard applications, shape and edge steepness are essential parameters in addition to size and color for an accurate spot recognition. These parameters need a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera with 750 x 560 pixels will result in a resolution much too low to get all of the spots in a specimen. IFN-gamma spots may have diameters of only 25 microm, and TNF-alpha spots just 15 microm. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 microm, resulting in only 1 or 2 pixels for a spot. Using a precise microscope optic in combination with a high resolution (1300 x 1030 pixel) integrating digital color camera, and collecting at least 2 x 2 images per well, will result in a pixel size of 2.5 microm and, as a minimum, a 6-pixel diameter per spot. New approaches try to detect two cytokines per cell at the same time (i.e., IFN-gamma and IL-5). Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots
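The resolution argument in the abstract is simple arithmetic: a spot is only classifiable by shape and edge criteria if it spans at least 6 pixels, and pixels-per-spot is just spot diameter divided by pixel size. A quick check using the pixel sizes the abstract itself states (12 microm for a single low-resolution image, 2.5 microm for the high-resolution tiled setup):

```python
def pixels_per_spot(spot_diameter_um, pixel_size_um):
    # Number of pixels across a spot's diameter.
    return spot_diameter_um / pixel_size_um

# Single 750x560 image per well: ~12 um pixels (the abstract's figure).
low_res = pixels_per_spot(15, 12)    # TNF-alpha spots, ~15 um across
# 1300x1030 camera with 2x2 tiling per well: ~2.5 um pixels.
high_res = pixels_per_spot(15, 2.5)  # reaches the 6-pixel minimum
```

At 12 um pixels a 15 um TNF-alpha spot covers barely more than one pixel, well under the 6-pixel minimum; at 2.5 um pixels it covers exactly 6, and a 25 um IFN-gamma spot covers 10.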

  4. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters of an involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMM or gear measuring machines. Both calibration methods are based on the so-called closure technique which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
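The closure technique mentioned above can be illustrated with a toy simulation. The key facts are that pitch deviations around a full circle must sum to zero (closure), and that re-measuring the artifact in every rotated position lets the instrument's position errors be separated from the artifact's own errors by averaging. The error values and the 12-position setup below are invented; this is a generic rotation-averaging sketch, not either institute's actual procedure.

```python
import random

random.seed(3)
N = 12
a = [random.uniform(-1.0, 1.0) for _ in range(N)]   # artifact pitch errors
mean_a = sum(a) / N
a = [x - mean_a for x in a]                         # closure: sum to zero
m = [random.uniform(-1.0, 1.0) for _ in range(N)]   # instrument errors

# Measurement with the artifact rotated by k positions: each reading is
# the sum of an artifact error and the instrument error at that position.
y = [[a[(i + k) % N] + m[i] for i in range(N)] for k in range(N)]

# Averaging over all rotations cancels the artifact term (closure),
# leaving the instrument error; subtracting it recovers the artifact.
m_est = [sum(y[k][i] for k in range(N)) / N for i in range(N)]
a_est = [y[0][i] - m_est[i] for i in range(N)]      # from the k = 0 run
```

In this idealized, noise-free setting the separation is exact, which is why the closure technique underpins highly accurate pitch calibration.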

  5. A simple, accurate, time-saving and green method for the determination of 15 sulfonamides and metabolites in serum samples by ultra-high performance supercritical fluid chromatography.

    PubMed

    Zhang, Yuan; Zhou, Wei-E; Li, Shao-Hui; Ren, Zhi-Qin; Li, Wei-Qing; Zhou, Yu; Feng, Xue-Song; Wu, Wen-Jie; Zhang, Feng

    2016-02-01

    An analytical method based on ultra-high performance supercritical fluid chromatography (UHPSFC) with photo-diode array detection (PDA) has been developed to quantify 15 sulfonamides and their N4-acetylation metabolites in serum. Under the optimized gradient elution conditions, it took only 7 min to separate all 15 sulfonamides, and the critical pairs of each parent drug and metabolite were completely separated. Variables affecting the UHPSFC were optimized to get a better separation. The performance of the developed method was evaluated. The UHPSFC method allowed the baseline separation and determination of 15 sulfonamides and metabolites with limits of detection ranging from 0.15 to 0.35 μg/mL. Recoveries between 90.1 and 102.2% were obtained with satisfactory precision, since relative standard deviations were always below 3%. The proposed method is simple, accurate, time-saving and green, and it is applicable to the detection of a variety of sulfonamides in serum samples. PMID:26780846
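The figures of merit quoted above (recovery and relative standard deviation) are computed with standard formulas: recovery compares the measured concentration of a spiked sample to the nominal spike, and precision is the RSD of replicate measurements. The replicate values below are invented purely to show the bookkeeping.

```python
import statistics

def recovery_pct(measured_mean, spiked_conc):
    # Recovery: measured spiked concentration as a percentage of nominal.
    return 100.0 * measured_mean / spiked_conc

def rsd_pct(replicates):
    # Precision: relative standard deviation of replicate measurements.
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

reps = [0.99, 1.01, 1.00, 0.98, 1.02]   # hypothetical replicates, ug/mL
rec = recovery_pct(statistics.mean(reps), 1.00)
prec = rsd_pct(reps)
```

A method like the one reported would require recoveries near 100% (here 90.1-102.2%) and RSDs below a few percent (here <3%) across the validated concentration range.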

  6. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  7. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  8. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750±0.0049 amu and 270.0786±0.0064 amu, respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098±0.0061 amu and 314.1153±0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  9. Performance of a Micro-Strip Gas Chamber for event wise, high rate thermal neutron detection with accurate 2D position determination

    NASA Astrophysics Data System (ADS)

    Mindur, B.; Alimov, S.; Fiutowski, T.; Schulz, C.; Wilpert, T.

    2014-12-01

    A two-dimensional (2D) position sensitive detector for neutron scattering applications based on low-pressure gas amplification and micro-strip technology was built and tested with an innovative readout electronics and data acquisition system. This detector contains a thin solid neutron converter and was developed for time- and thus wavelength-resolved neutron detection in single-event counting mode, which improves the image contrast in comparison with integrating detectors. The prototype detector of a Micro-Strip Gas Chamber (MSGC) was built with a solid natGd/CsI thermal neutron converter for spatial resolutions of about 100 μm and counting rates up to 10^7 neutrons/s. For attaining very high spatial resolutions and counting rates via micro-strip readout with centre-of-gravity evaluation of the signal amplitude distributions, a fast, channel-wise, self-triggering ASIC was developed. The front-end chips (MSGCROCs), which are the very first signal-processing components, are read out into powerful ADC-FPGA boards for on-line data processing and thereafter via a Gigabit Ethernet link into the data-receiving PC. The workstation PC is controlled by a modular, high-performance dedicated software suite. Such a fast and accurate system is crucial for efficient radiography/tomography, diffraction or imaging applications based on a high-flux thermal neutron beam. In this paper, a brief description of the detector concept, its operation principles, the readout electronics requirements and design, and the signal processing stages performed in hardware and software is presented. In more detail, the neutron test-beam conditions and measurement results are reported. The focus of this paper is on the system integration, two-dimensional spatial resolution, the time resolution of the readout system and the imaging capabilities of the overall setup. The detection efficiency of the detector prototype is estimated as well.
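The centre-of-gravity evaluation mentioned above estimates the hit position as the amplitude-weighted mean of the strip positions, which is how sub-strip (here ~100 μm) resolution is achieved from a discrete strip readout. The strip pitch and pulse heights below are invented to illustrate the calculation.

```python
def centre_of_gravity(positions_um, amplitudes):
    # Hit coordinate = amplitude-weighted mean of the strip positions.
    total = sum(amplitudes)
    return sum(p * a for p, a in zip(positions_um, amplitudes)) / total

# Charge cloud spread over three neighbouring strips at 100 um pitch:
strips = [0.0, 100.0, 200.0]   # hypothetical strip centres, um
adc = [20.0, 60.0, 20.0]       # hypothetical pulse heights
hit_um = centre_of_gravity(strips, adc)
```

A symmetric charge distribution lands exactly on the central strip; any asymmetry in the measured amplitudes shifts the estimate continuously between strips, which is what makes the interpolation finer than the strip pitch.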

  10. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 140 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
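The sensitivity, specificity and accuracy figures quoted for the RAZ score follow from counting true positives and true negatives at the chosen cut-off. The scores below are invented; only the bookkeeping mirrors the abstract's cut-off-based classification.

```python
def sens_spec_acc(scores_patients, scores_controls, cutoff):
    # A score >= cutoff classifies the subject as a patient.
    tp = sum(s >= cutoff for s in scores_patients)   # true positives
    tn = sum(s < cutoff for s in scores_controls)    # true negatives
    sens = tp / len(scores_patients)
    spec = tn / len(scores_controls)
    acc = (tp + tn) / (len(scores_patients) + len(scores_controls))
    return sens, spec, acc

patients = [150, 180, 120, 200, 145]   # hypothetical RAZ scores
controls = [80, 130, 90, 60, 110]
sens, spec, acc = sens_spec_acc(patients, controls, cutoff=140)
```

Sweeping the cut-off and plotting sensitivity against (1 - specificity) traces the ROC curve whose area (0.91 in the study) summarizes discrimination independent of any single cut-off.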

  11. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  12. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  15. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  16. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  17. Feasibility of ultra-high performance liquid and gas chromatography coupled to mass spectrometry for accurate determination of primary and secondary phthalate metabolites in urine samples.

    PubMed

    Herrero, Laura; Calvarro, Sagrario; Fernández, Mario A; Quintanilla-López, Jesús Eduardo; González, María José; Gómara, Belén

    2015-01-01

    Phthalates (PAEs) are ubiquitous toxic chemical compounds. During the last few years, some phthalate metabolites (MPAEs) have been proposed as appropriate biomarkers in human urine samples to determine PAE human intake and exposure. So, it is necessary to have fast, easy, robust and validated analytical methods to determine selected MPAEs in human urine samples. Two different instrumental methods based on gas (GC) and ultra-high performance liquid (UHPLC) chromatography coupled to mass spectrometry (MS) have been optimized, characterized and validated for the simultaneous determination of nine primary and secondary phthalate metabolites in urine samples. Both instrumental methods have similar sensitivity (detection limits ranged from 0.03 to 8.89 pg μL(-1) and from 0.06 to 0.49 pg μL(-1) in GC-MS and UHPLC-MS(2), respectively), precision (repeatability, expressed as relative standard deviation, which was lower than 8.4% in both systems, except for 5OH-MEHP in the case of GC-MS) and accuracy. However, advantages of the UHPLC-MS(2) method, such as greater selectivity and shorter chromatographic runs (6.8 min vs. 28.5 min), led to the UHPLC-MS(2) method being chosen to analyze the twenty-one human urine samples from the general Spanish population. Regarding these samples, MEP showed the highest median concentration (68.6 μg L(-1)), followed by MiBP (23.3 μg L(-1)), 5cx-MEPP (22.5 μg L(-1)) and MBP (19.3 μg L(-1)). MMP (6.99 μg L(-1)), 5oxo-MEHP (6.15 μg L(-1)), 5OH-MEHP (5.30 μg L(-1)) and MEHP (4.40 μg L(-1)) showed intermediate levels. Finally, the lowest levels were found for MBzP (2.55 μg L(-1)). These data are within the same order of magnitude as those found in other similar populations. PMID:25467512

  18. An optimized method for neurotransmitters and their metabolites analysis in mouse hypothalamus by high performance liquid chromatography-Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry.

    PubMed

    Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying

    2016-02-15

    Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, there are many difficulties in the detection of NTs together with their metabolites in biological samples. A new method for the detection of NTs and their metabolites by high performance liquid chromatography coupled with Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. This method represents a significant advance in the application of Q Exactive MS to quantitative analysis. The method enabled a rapid quantification of ten compounds within 18 min. Good linearity was obtained with a correlation coefficient above 0.99. The concentration ranges of the limit of detection (LOD) and the limit of quantitation (LOQ) were 0.0008-0.05 nmol/mL and 0.002-25.0 nmol/mL, respectively. Precision (relative standard deviation, RSD) ranged from 0.36% to 12.70%. Recovery ranges were between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were determined with this method. PMID:26812177

  19. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms, with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
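The normalized gradient fields (NGF) distance minimized above rewards image pairs whose intensity gradients are parallel (or anti-parallel), which makes it robust to intensity differences between scans. A minimal sketch of the distance on a synthetic 2D image follows; the edge parameter eps, the test image and the discretization via np.gradient are all illustrative choices, not the paper's implementation.

```python
import numpy as np

def ngf_distance(r, t, eps=1e-2):
    # 1 - squared cosine of the angle between (regularized) gradients,
    # averaged over the image; 0 where gradients are perfectly aligned.
    gr, gt = np.gradient(r), np.gradient(t)
    dot = sum(a * b for a, b in zip(gr, gt))
    nr = np.sqrt(sum(a * a for a in gr) + eps ** 2)
    nt = np.sqrt(sum(a * a for a in gt) + eps ** 2)
    return float(np.mean(1.0 - (dot / (nr * nt)) ** 2))

x = np.linspace(0.0, 1.0, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
aligned = ngf_distance(img, img)                      # identical images
shifted = ngf_distance(img, np.roll(img, 16, axis=0)) # misaligned copy
```

Registration drives a deformation that reduces this distance, subject to curvature (smoothness) and volume (diffeomorphism) regularization as described in the abstract.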

  20. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
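Link correlation in this setting is commonly quantified as the conditional probability that one receiver gets a packet given that another did, estimated from synchronized packet reception traces. The sketch below shows that generic estimate only; LACE itself goes further by fusing long-term and short-term link behavior, and the traces here are invented.

```python
def link_correlation(trace_a, trace_b):
    # P(B receives | A receives), estimated from parallel reception
    # traces where 1 = packet received, 0 = packet lost.
    both = sum(a and b for a, b in zip(trace_a, trace_b))
    a_rx = sum(trace_a)
    return both / a_rx if a_rx else 0.0

# Hypothetical broadcast reception traces at two neighboring nodes:
a = [1, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 0, 1, 0, 1, 0]
corr = link_correlation(a, b)
```

Protocols exploiting link correlation (e.g. opportunistic flooding) use such estimates to decide which retransmissions are redundant, which is why estimation accuracy directly affects protocol performance.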

  2. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, so-called pulse-to-noise-ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
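The pulse-to-noise ratio idea can be sketched as the ratio, in dB, of a decomposed source's mean squared value at the detected discharge samples to its mean squared value elsewhere. This follows the general spirit of the paper's PNR; the exact estimator, as well as the signal, spike times and noise level below, should be taken as illustrative assumptions rather than the published definition.

```python
import numpy as np

def pnr_db(source, discharge_idx):
    # Ratio (dB) of mean squared value at discharges vs. elsewhere.
    source = np.asarray(source, dtype=float)
    mask = np.zeros(source.size, dtype=bool)
    mask[discharge_idx] = True
    p_pulse = np.mean(source[mask] ** 2)
    p_noise = np.mean(source[~mask] ** 2)
    return 10.0 * np.log10(p_pulse / p_noise)

rng = np.random.default_rng(2)
s = rng.normal(0.0, 0.02, 1000)       # baseline decomposition residual
spikes = np.arange(50, 1000, 100)     # hypothetical discharge samples
s[spikes] = 1.0                       # unit-amplitude pulses
val = pnr_db(s, spikes)
```

A cleanly decomposed source like this one scores well above the 30 dB threshold the study associates with >90% sensitivity and <2% false alarms, so the metric can gate which motor units are kept for analysis.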

  3. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to the refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus, Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349

  4. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    SciTech Connect

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D [1] has over 300,000 lines of coding for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increasing emphasis has been placed on developing automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions [2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
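    The method of manufactured solutions mentioned above can be illustrated in miniature: choose a solution, derive the source term it implies, and confirm the discrete operator reproduces it at the expected order of accuracy. This toy check (not RELAP5-3D code) picks u(x) = sin(x), for which the exact source of u'' + f = 0 is f = sin(x):

```python
import numpy as np

# Minimal manufactured-solutions check: the centered second difference
# applied to the manufactured u(x) = sin(x) should reproduce f = -u''
# with second-order convergence as the grid is refined.

def mms_error(n):
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    u = np.sin(x)
    f_exact = np.sin(x[1:-1])                      # -u'' for u = sin(x)
    f_num = -(u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return np.max(np.abs(f_num - f_exact))

e1, e2 = mms_error(101), mms_error(201)            # halving h
order = np.log(e1 / e2) / np.log(2.0)              # observed order, ~2
```

    An automated verification suite runs checks of this shape over every discretized operator and flags any whose observed order falls below its design order.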

  5. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

    BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of the calculated cross section data used in our model with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.

  6. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
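    The registration problem described here, aligning two images to a small fraction of a pixel, is commonly attacked by locating the cross-correlation peak and refining it below the pixel grid. The 1-D sketch below uses parabolic peak interpolation as a generic stand-in; the actual JWST algorithms are not specified in this abstract:

```python
import numpy as np

# Sub-pixel registration sketch: FFT cross-correlation plus parabolic
# interpolation of the correlation peak (an assumed, generic technique,
# not the specific algorithms evaluated in the paper).

def register(ref, img):
    """Return the shift (in pixels) that best aligns img to ref."""
    n = len(ref)
    corr = np.real(np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(img))))
    k = np.argmax(corr)                        # integer-pixel peak
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabolic refinement
    shift = k + frac
    return shift - n if shift > n / 2 else shift

x = np.arange(256)
ref = np.exp(-0.5 * ((x - 128.0) / 6.0) ** 2)    # reference PSF core
img = np.exp(-0.5 * ((x - 128.3) / 6.0) ** 2)    # same, shifted +0.3 px
est = register(ref, img)   # ~ -0.3: move img by this amount to align
```

    Recovering a 0.3-pixel offset to a few hundredths of a pixel, as here, is the regime the 1/20-pixel requirement demands; 2-D registration applies the same idea along both axes.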

  7. Accurate strain measurements in highly strained Ge microbridges

    NASA Astrophysics Data System (ADS)

    Gassenq, A.; Tardif, S.; Guilloy, K.; Osvaldo Dias, G.; Pauc, N.; Duchemin, I.; Rouchon, D.; Hartmann, J.-M.; Widiez, J.; Escalante, J.; Niquet, Y.-M.; Geiger, R.; Zabel, T.; Sigg, H.; Faist, J.; Chelnokov, A.; Rieutord, F.; Reboud, V.; Calvo, V.

    2016-06-01

    Ge under high strain is predicted to become a direct-bandgap semiconductor. Very large deformations can be introduced using microbridge devices. However, at the microscale, strain values are commonly deduced from Raman spectroscopy using empirical linear models established only up to ɛ100 = 1.2% for uniaxial stress. In this work, we calibrate the Raman-strain relation at higher strain using synchrotron-based microdiffraction. The Ge microbridges show unprecedented high tensile strain up to 4.9%, corresponding to an unexpected Δω = 9.9 cm-1 Raman shift. We demonstrate experimentally and theoretically that the Raman-strain relation is not linear, and we provide a more accurate expression.

  8. High energy laser testbed for accurate beam pointing control

    NASA Astrophysics Data System (ADS)

    Kim, Dojong; Kim, Jae Jun; Frist, Duane; Nagashima, Masaki; Agrawal, Brij

    2010-02-01

    Precision laser beam pointing is a key technology in High Energy Laser systems. In this paper, a laboratory High Energy Laser testbed developed at the Naval Postgraduate School is introduced. System identification is performed and a mathematical model is constructed to estimate system performance. New beam pointing control algorithms are designed based on this mathematical model. It is shown in both computer simulation and experiment that the adaptive filter algorithm can improve the pointing performance of the system.
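    The adaptive filter approach mentioned above can be sketched with a standard LMS (least-mean-squares) canceller: an FIR filter driven by a correlated reference signal adapts its weights to subtract a narrowband disturbance from the measured jitter. This is a generic illustration of the technique, not the testbed's actual controller:

```python
import numpy as np

# LMS adaptive cancellation sketch: a reference vibration signal is
# filtered to estimate (and subtract) the pointing disturbance it causes.
# Signal frequencies, amplitudes, and filter parameters are assumptions.

rng = np.random.default_rng(0)
n, taps, mu = 4000, 8, 0.01
t = np.arange(n)
ref = np.sin(2 * np.pi * 0.05 * t)                  # reference sensor
dist = 0.8 * np.sin(2 * np.pi * 0.05 * t - 0.7)     # disturbance on beam
meas = dist + 0.01 * rng.standard_normal(n)         # measured jitter

w = np.zeros(taps)
err = np.zeros(n)
for i in range(taps, n):
    x = ref[i - taps:i]
    y = w @ x                    # filter output: disturbance estimate
    err[i] = meas[i] - y         # residual jitter after cancellation
    w += 2 * mu * err[i] * x     # LMS weight update

residual = np.std(err[-500:])    # well below np.std(dist) ~ 0.57
```

    After convergence the residual is dominated by the sensor noise floor rather than the disturbance, which is the improvement the experiments demonstrate.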

  9. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super-high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, as demonstrated by its successful operation on low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  10. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    SciTech Connect

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.

  11. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289
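    The core robust-statistics idea, iteratively down-weighting pixels whose residuals mark them as outliers so that true change does not bias the fit, can be shown on a toy problem. The sketch below robustly estimates a global intensity scale between two images with an "outlier region"; it illustrates the principle only, not the registration tool itself:

```python
import numpy as np

# IRLS (iteratively reweighted least squares) with Huber weights:
# estimate intensity scale s in dst ~ s * src while ignoring a region
# that genuinely changed. All data are synthetic.

rng = np.random.default_rng(1)
src = rng.uniform(1.0, 2.0, 1000)
dst = 1.5 * src + 0.01 * rng.standard_normal(1000)
dst[:50] += 5.0            # "outlier region" (e.g. anatomical change)

s = np.sum(src * dst) / np.sum(src * src)    # plain least squares: biased
for _ in range(20):                          # IRLS refinement
    r = dst - s * src
    c = 1.345 * np.median(np.abs(r - np.median(r))) / 0.6745  # Huber cutoff
    w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))
    s = np.sum(w * src * dst) / np.sum(w * src * src)
# s converges to ~1.5; the outlier pixels end up with near-zero weight
```

    The same reweighting loop, wrapped around the transform parameters instead of a single scale, is what makes the registration insensitive to jaw movement, distortions, and true change.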

  12. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  13. A third-order-accurate upwind scheme for Navier-Stokes solutions at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Agarwal, R. K.

    1981-01-01

    A third-order-accurate upwind scheme is presented for solution of the steady two-dimensional Navier-Stokes equations in stream-function/vorticity form. The scheme is found to be accurate and stable at high Reynolds numbers. A series of test computations is performed on flows with large recirculating regions. In particular, highly accurate solutions are obtained for flow in a driven square cavity up to Reynolds numbers of 10,000. These computations are used to critically evaluate the accuracy of other existing first- and second-order-accurate upwind schemes. In addition, computations are carried out for flow in a channel with symmetric sudden expansion, flow in a channel with a symmetrically placed blunt base, and the flowfield of an impinging jet. Good agreement is obtained with the computations of other investigators as well as with the available experimental data.
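    A third-order upwind-biased first-derivative stencil of the kind this scheme is built on can be verified directly. The sketch below checks one such stencil (for positive advection velocity) on a smooth profile; it illustrates the order of accuracy, not the full stream-function/vorticity solver:

```python
import numpy as np

# Third-order upwind-biased stencil for du/dx (advection velocity a > 0):
#   du/dx ~ (u[i-2] - 6 u[i-1] + 3 u[i] + 2 u[i+1]) / (6 h)
# Taylor expansion gives a leading error of (h^3 / 12) u''''.

def d1_upwind3(u, h):
    return (u[:-3] - 6 * u[1:-2] + 3 * u[2:-1] + 2 * u[3:]) / (6.0 * h)

def err(n):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.sin(2 * np.pi * x)
    exact = 2 * np.pi * np.cos(2 * np.pi * x[2:-1])
    return np.max(np.abs(d1_upwind3(u, h) - exact))

order = np.log(err(101) / err(201)) / np.log(2.0)   # observed order, ~3
```

    The upwind bias (three points on the upstream side, one downstream) is what keeps the scheme stable at high Reynolds numbers while retaining third-order accuracy.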

  14. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpies are nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT-based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  15. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high-rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees yields ridge characteristics, which do not correspond to the underlying mode characteristics. We used sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe this modeling and its validation in detail. The modeling has been extensively reviewed and refined, including an iterative process to improve its input parameters to better match the observations. The contribution of the leakage matrix to the accuracy of the procedure has also been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies but also line widths, asymmetries, and amplitudes. We present and discuss
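    The multi-taper estimator named above can be sketched in a few lines: average periodograms computed with orthogonal DPSS (Slepian) tapers, trading spectral resolution for lower realization noise. This is a generic illustration, not the MDI pipeline; the synthetic signal and the taper parameters (NW = 4, 7 tapers) are assumptions:

```python
import numpy as np
from scipy.signal.windows import dpss

# Multi-taper power spectrum: average the periodograms obtained with
# several orthogonal Slepian (DPSS) tapers to reduce realization noise.

def multitaper_psd(x, nw=4.0, k=7):
    tapers = dpss(len(x), nw, k)                   # (k, n) DPSS tapers
    specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return specs.mean(axis=0)                       # average over tapers

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)
psd = multitaper_psd(x)
f_peak = np.argmax(psd) / n      # recovered peak, ~0.1 cycles/sample
```

    Averaging k tapered spectra reduces the variance of each spectral estimate by roughly 1/k, which is exactly the trade-off wanted when modes blend into broad ridges.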

  16. Developing accurate simulations for high-speed fiber links

    NASA Astrophysics Data System (ADS)

    Searcy, Steven; Stark, Andrew; Hsueh, Yu-Ting; Detwiler, Thomas; Tibuleac, Sorin; Chang, GK; Ralph, Stephen E.

    2011-01-01

    Reliable simulations of high-speed fiber optic links are necessary to understand, design, and deploy fiber networks. Laboratory experiments cannot explore all possible component variations and fiber environments that are found in today's deployed systems. Simulations typically depict relative penalties compared to a reference link. However, absolute performance metrics are required to assess actual deployment configurations. Here we detail the efforts within the Georgia Tech 100G Consortium towards achieving high absolute accuracy between simulation and experimental performance with a goal of +/-0.25 dB for back-to-back configuration, and +/-0.5 dB for transmission over multiple spans with different dispersion maps. We measure all possible component parameters including fiber length, loss, and dispersion for use in simulation. We also validate experimental methods of performance evaluation including OSNR assessment and DSP-based demodulation. We investigate a wide range of parameters including modulator chirp, polarization state, polarization dependent loss, transmit spectrum, laser linewidth, and fiber nonlinearity. We evaluate 56 Gb/s (single-polarization) and 112 Gb/s (dual-polarization) DQPSK and coherent QPSK within a 50 GHz DWDM environment with 10 Gb/s OOK adjacent channels for worst-case XPM effects. We demonstrate good simulation accuracy within linear and some nonlinear regimes for a wide range of OSNR in both back-to-back configuration and up to eight spans, over a range of launch powers. This allows us to explore a wide range of environments not available in the lab, including different fiber types, ROADM passbands, and levels of crosstalk. Continued exploration is required to validate robustness over various demodulation algorithms.

  17. CgWind: A high-order accurate simulation tool for wind turbines and wind farms

    SciTech Connect

    Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A

    2010-02-22

    CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.

  18. Cerebral cortical activity associated with non-experts' most accurate motor performance.

    PubMed

    Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W

    2014-10-01

    This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623
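    The spectral-power measures behind this comparison are band-limited integrals of the EEG power spectrum. The sketch below computes theta (taken here as 4-7 Hz) and low-beta power for one channel via Welch's method; the synthetic signal, sampling rate, and band edges are assumptions standing in for the study's recordings:

```python
import numpy as np
from scipy.signal import welch

# Band power from one EEG channel: integrate the Welch PSD over a band.

def bandpower(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    band = (f >= lo) & (f <= hi)
    return np.trapz(pxx[band], f[band])

fs = 250                                  # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)               # a 4 s preparatory epoch
rng = np.random.default_rng(3)
eeg = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(len(t))
theta = bandpower(eeg, fs, 4, 7)          # dominated by the 5 Hz rhythm
beta = bandpower(eeg, fs, 13, 30)         # comparison band
```

    Contrasting such band powers between the most and least accurate putts, per electrode and per band, is the analysis the study reports.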

  19. Extensive and Highly Accurate Line Lists for Hydrogen Halides

    NASA Astrophysics Data System (ADS)

    Li, G.; Bernath, P. F.; Gordon, I. E.; Rothman, L. S.; Richard, C.; Le Roy, R. J.; Coxon, J. A.; Hajigeorgiou, P.

    2011-06-01

    New dipole moment functions (DMF) for the ground X 1Σ+ electronic states of the hydrogen halides (HF, HCl, HBr, HI) have been obtained using a direct fit approach that fits the best available and appropriately weighted experimental line intensity data for individual ro-vibrational transitions. Combining the newly developed (taking into account the most recent experiments) empirical potential energy functions and the DMFs, line positions and line intensities of the hydrogen halides and their isotopologues have been calculated numerically using program LEVEL. In addition, new semi-empirical algorithms for assigning line-shape parameters for these species have been developed. Using these improvements, new line lists for hydrogen halides were created to update the HITRAN spectroscopic database. These new lists are more accurate and significantly more extensive than those included in the current version of the database (HITRAN2008). R.J. Le Roy, ``LEVEL 8.0, 2007'', University of Waterloo Chemical Physics Research Report CP-663 (2007); see http://leroy.uwaterloo.ca/programs/. L.S. Rothman, I.E. Gordon, A. Barbe, D.C. Benner, P.F. Bernath, et al., ``The HITRAN 2008 Molecular Spectroscopic Database,'' JQSRT 110, 532-572 (2009).

  20. Informatics-based, highly accurate, noninvasive prenatal paternity testing

    PubMed Central

    Ryan, Allison; Baner, Johan; Demko, Zachary; Hill, Matthew; Sigurjonsson, Styrmir; Baird, Michael L.; Rabinowitz, Matthew

    2013-01-01

    Purpose: The aim of the study was to evaluate the diagnostic accuracy of an informatics-based, noninvasive, prenatal paternity test using array-based single-nucleotide polymorphism measurements of cell-free DNA isolated from maternal plasma. Methods: Blood samples were taken from 21 adult pregnant women (with gestational ages between 6 and 21 weeks), and a genetic sample was taken from the corresponding biological fathers. Paternity was confirmed by genetic testing of the infant, products of conception, control of fertilization, and/or preimplantation genetic diagnosis during in vitro fertilization. Parental DNA samples and maternal plasma cell-free DNA were amplified and analyzed using a HumanCytoSNP-12 array. An informatics-based method measured single-nucleotide polymorphism data, confirming or rejecting paternity. Each plasma sample with a sufficient fetal cell-free DNA fraction was independently tested against the confirmed father and 1,820 random, unrelated males. Results: One of the 21 samples had insufficient fetal cell-free DNA. The test correctly confirmed paternity for the remaining 20 samples (100%) when tested against the biological father, with P values of <10−4. For the 36,400 tests using an unrelated male as the alleged father, 99.95% (36,382) correctly excluded paternity and 0.05% (18) were indeterminate. There were no miscalls. Conclusion: A noninvasive paternity test using informatics-based analysis of single-nucleotide polymorphism array measurements accurately determined paternity early in pregnancy. PMID:23258349

  1. Preschoolers can make highly accurate judgments of learning.

    PubMed

    Lipowski, Stacy L; Merriman, William E; Dunlosky, John

    2013-08-01

    Preschoolers' ability to make judgments of learning (JOLs) was examined in 3 experiments in which they were taught proper names for animals. In Experiment 1, when judgments were made immediately after studying, nearly every child predicted subsequent recall of every name. When judgments were made after a delay, fewer showed this response tendency. The delayed JOLs of those who predicted at least 1 recall failure were still overconfident, however, and were not correlated with final recall. In Experiment 2, children received a second study trial with feedback, made JOLs after a delay, and completed an additional forced-choice judgment task. In this task, an animal whose name had been recalled was pitted against an animal whose name had not been recalled, and the children chose the one they were more likely to remember later. Compared with Experiment 1, more children predicted at least 1 recall failure and predictions were moderately accurate. In the forced-choice task, animal names that had just been successfully recalled were typically chosen over ones that had not. Experiment 3 examined the effect of providing an additional retrieval attempt on delayed JOLs. Half of the children received a single study session, and half received an additional study session with feedback. Children in the practice group showed less overconfidence than those in the no-practice group. Taken together, the results suggest that, with minimal task experience, most preschoolers understand that they will not remember everything and that if they cannot recall something at present, they are unlikely to recall it in the future. PMID:23148937

  2. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance, as applied to polymers, is generally associated with polymers that operate at high temperatures; here it describes polymers that perform at temperatures of 177 °C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylene-terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  3. Highly accurate boronimeter assay of concentrated boric acid solutions

    SciTech Connect

    Ball, R. M.

    1992-01-01

    The Random-Walk Boronimeter has successfully been used as an on-line indicator of boric acid concentration in an operating commercial pressurized water reactor. The principle has been adapted for measurement of discrete samples to high accuracy and to concentrations up to 6000 ppm natural boron in light water. Boric acid concentration in an aqueous solution is a necessary measurement in many nuclear power plants, particularly those that use boric acid dissolved in the reactor coolant as a reactivity control system. Other nuclear plants use a high-concentration boric acid solution as a backup shutdown system. Such a shutdown system depends on rapid injection of the solution and frequent surveillance of the fluid to ensure the presence of the neutron absorber. The two methods typically used to measure boric acid are the chemical and the physical methods. The chemical method uses titration to determine the ionic concentration of the BO₃ ions and infers the boron concentration. The physical method uses the attenuation of neutrons by the solution and infers the boron concentration from the neutron absorption properties. This paper describes the Random-Walk Boronimeter configured to measure discrete samples to high accuracy and high concentration.

  4. An accurate continuous calibration system for high voltage current transformer

    SciTech Connect

    Tong Yue; Li Binhong

    2011-02-15

    A continuous calibration system for high voltage current transformers is presented in this paper. The sensor of this system is based on a kind of electronic instrument current transformer, which is a clamp-shape air core coil. This system uses an optical fiber transmission system for its signal transmission and power supply. Finally the digital integrator and fourth-order convolution window algorithm as error calculation methods are realized by the virtual instrument with a personal computer. It is found that this system can calibrate a high voltage current transformer while energized, which means avoiding a long calibrating period in the power system and the loss of power metering expense. At the same time, it has a wide dynamic range and frequency band, and it can achieve a high accuracy measurement in a complex electromagnetic field environment. The experimental results and the on-site operation results presented in the last part of the paper, prove that it can reach the 0.05 accuracy class and is easy to operate on site.
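    The digital-integrator step described here follows from the sensor physics: a clamp-shape air-core (Rogowski-type) coil outputs a voltage proportional to di/dt, so the primary current is recovered by numerical integration. The sketch below uses a trapezoidal-rule integrator with illustrative values for the mutual inductance and sample rate (assumptions, not the paper's hardware parameters):

```python
import numpy as np

# Digital integrator sketch: coil voltage v(t) = M di/dt, so
# i[n] = i[n-1] + (v[n] + v[n-1]) / (2 * fs * M)   (trapezoidal rule).

M, fs = 1e-6, 100_000            # mutual inductance (H), sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
i_true = 1000 * np.sin(2 * np.pi * 50 * t)               # 50 Hz primary
v = M * 1000 * 2 * np.pi * 50 * np.cos(2 * np.pi * 50 * t)  # coil output

i_rec = np.concatenate(([0.0],
                        np.cumsum((v[1:] + v[:-1]) / (2 * fs * M))))
i_rec += i_true[0] - i_rec[0]    # anchor to the known initial condition

err = np.max(np.abs(i_rec - i_true)) / 1000   # relative error, tiny
```

    At these sample rates the trapezoidal integrator's error at power frequency is far below the 0.05 accuracy class, which is why the remaining error budget is dominated by the sensor and analog front end.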

  5. An accurate continuous calibration system for high voltage current transformer.

    PubMed

    Tong, Yue; Li, Bin Hong

    2011-02-01

    A continuous calibration system for high voltage current transformers is presented in this paper. The sensor of this system is based on a kind of electronic instrument current transformer, which is a clamp-shape air core coil. This system uses an optical fiber transmission system for its signal transmission and power supply. Finally the digital integrator and fourth-order convolution window algorithm as error calculation methods are realized by the virtual instrument with a personal computer. It is found that this system can calibrate a high voltage current transformer while energized, which means avoiding a long calibrating period in the power system and the loss of power metering expense. At the same time, it has a wide dynamic range and frequency band, and it can achieve a high accuracy measurement in a complex electromagnetic field environment. The experimental results and the on-site operation results presented in the last part of the paper, prove that it can reach the 0.05 accuracy class and is easy to operate on site. PMID:21361633

  6. Radiologists’ ability to accurately estimate and compare their own interpretative mammography performance to their peers

    PubMed Central

    Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.

    2013-01-01

    Objective To determine if U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers’. Materials and Methods 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists’ estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists’ ratings of their performance as lower, similar, or higher than their peers were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologists’ characteristics except radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion Radiologists perceive their performance to be better than it actually is and at least as good as their peers. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414

  7. Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems

    NASA Astrophysics Data System (ADS)

    Oden, J. T.

    1992-08-01

    This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.

  8. Automated generation of highly accurate, efficient and transferable pseudopotentials

    NASA Astrophysics Data System (ADS)

    Hansel, R. A.; Brock, C. N.; Paikoff, B. C.; Tackett, A. R.; Walker, D. G.

    2015-11-01

    A multi-objective genetic algorithm (MOGA) was used to automate a search for optimized pseudopotential parameters. Pseudopotentials were generated using the atomPAW program and density functional theory (DFT) simulations were conducted using the pwPAW program. The optimized parameters were the cutoff radius and projector energies for the s and p orbitals. The two objectives were low pseudopotential error and low computational work requirements. The error was determined from (1) the root mean square difference between the all-electron and pseudized-electron log derivative, (2) the calculated lattice constant versus reference data of Holzwarth et al., and (3) the calculated bulk modulus versus reference potentials. The computational work was defined as the number of flops required to perform the DFT simulation. Pseudopotential transferability was encouraged by optimizing each element in different lattices: (1) nitrogen in GaN, AlN, and YN, (2) oxygen in NO, ZnO, and SiO4, and (3) fluorine in LiF, NaF, and KF. The optimal solutions were equivalent in error and required significantly less computational work than the reference data. This proof-of-concept study demonstrates that the combination of MOGA and ab-initio simulations is a powerful tool that can generate a set of transferable potentials with a trade-off between accuracy (error) and computational efficiency (work).
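The two-objective search described in the abstract (low pseudopotential error, low computational work) amounts to keeping the Pareto-optimal set of candidate parameterizations. As a minimal illustration of that selection step only, using hypothetical (error, work) pairs rather than the paper's atomPAW/pwPAW pipeline, a non-dominated filter might look like:

```python
def pareto_front(candidates):
    """Return the non-dominated subset when minimizing both objectives.

    Each candidate is a tuple (error, work); a candidate is dominated if
    another candidate is no worse in both objectives and strictly better
    in at least one.
    """
    front = []
    for (e, w) in candidates:
        dominated = any(
            (e2 <= e and w2 <= w) and (e2 < e or w2 < w)
            for (e2, w2) in candidates
        )
        if not dominated:
            front.append((e, w))
    return front

# Hypothetical (error, work) pairs for candidate pseudopotentials:
pots = [(0.10, 500), (0.05, 900), (0.20, 300), (0.08, 950), (0.05, 1200)]
front = pareto_front(pots)  # (0.08, 950) and (0.05, 1200) are dominated
```

A real MOGA would evolve the cutoff radius and projector energies and re-evaluate objectives each generation; the Pareto filter above is the piece that expresses the accuracy-versus-work trade-off.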

  9. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  10. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae at an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  11. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  12. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  13. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    SciTech Connect

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs-Cowperthwaite-Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  14. Direct Simulations of Transition and Turbulence Using High-Order Accurate Finite-Difference Schemes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    1997-01-01

    In recent years the techniques of computational fluid dynamics (CFD) have been used to compute flows associated with geometrically complex configurations. However, success in terms of accuracy and reliability has been limited to cases where the effects of turbulence and transition could be modeled in a straightforward manner. Even in simple flows, the accurate computation of skin friction and heat transfer using existing turbulence models has proved to be a difficult task, one that has required extensive fine-tuning of the turbulence models used. In more complex flows (for example, in turbomachinery flows in which vortices and wakes impinge on airfoil surfaces causing periodic transitions from laminar to turbulent flow) the development of a model that accounts for all scales of turbulence and predicts the onset of transition may prove to be impractical. Fortunately, current trends in computing suggest that it may be possible to perform direct simulations of turbulence and transition at moderate Reynolds numbers in some complex cases in the near future. This seminar will focus on direct simulations of transition and turbulence using high-order accurate finite-difference methods. The advantage of the finite-difference approach over spectral methods is that complex geometries can be treated in a straightforward manner. Additionally, finite-difference techniques are the prevailing methods in existing application codes. In this seminar high-order-accurate finite-difference methods for the compressible and incompressible formulations of the unsteady Navier-Stokes equations and their applications to direct simulations of turbulence and transition will be presented.

  15. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  16. Indirect Terahertz Spectroscopy of Molecular Ions Using Highly Accurate and Precise Mid-Ir Spectroscopy

    NASA Astrophysics Data System (ADS)

    Mills, Andrew A.; Ford, Kyle B.; Kreckel, Holger; Perera, Manori; Crabtree, Kyle N.; McCall, Benjamin J.

    2009-06-01

With the advent of Herschel and SOFIA, laboratory methods capable of providing molecular rest frequencies in the terahertz and sub-millimeter regime are increasingly important. To date, it has been difficult to perform spectroscopy in this wavelength region due to the limited availability of radiation sources, optics, and detectors. Our goal is to provide accurate THz rest frequencies for molecular ions by combining previously recorded microwave transitions with combination differences obtained from high-precision mid-IR spectroscopy. We are constructing a Sensitive Resolved Ion Beam Spectroscopy setup which will harness the benefits of kinematic compression in a molecular ion beam to enable very high resolution spectroscopy. This ion beam is interrogated by continuous-wave cavity ringdown spectroscopy using a home-made widely tunable difference frequency laser that utilizes two near-IR lasers and a periodically poled lithium niobate crystal. Here, we report our efforts to optimize our ion beam spectrometer and to perform high-precision and high-accuracy frequency measurements using an optical frequency comb.

  17. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  18. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  19. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

With the ever-increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  20. High-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion

    NASA Astrophysics Data System (ADS)

    Moiseev, N. Ya.

    2011-04-01

    An approach to the construction of high-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion is proposed. Godunov's theorem on monotone schemes is used to construct a new antidiffusion flux limiter in high-order accurate difference schemes as applied to linear advection equations with constant coefficients. The efficiency of the approach is demonstrated by solving linear advection equations with constant coefficients and one-dimensional gasdynamic equations.
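The antidiffusion idea summarized in this abstract, a first-order monotone Godunov-type flux plus a limited higher-order correction, can be illustrated on the linear advection equation u_t + a u_x = 0. The sketch below uses the standard minmod limiter as a stand-in; it is not Moiseev's new limiter, only the general flux-limited construction the abstract refers to:

```python
import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at extrema, otherwise the smaller slope.
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, c):
    """One step of limited upwind advection for u_t + a u_x = 0, a > 0.

    c = a*dt/dx is the Courant number (0 < c <= 1). First-order upwind
    plus a limited antidiffusive correction keeps the scheme monotone
    while pushing the accuracy toward second order in smooth regions.
    """
    um1 = np.roll(u, 1)            # u_{i-1} (periodic grid)
    up1 = np.roll(u, -1)           # u_{i+1}
    slope = minmod(u - um1, up1 - u)
    slope_m1 = np.roll(slope, 1)
    # Limited fluxes at the right and left faces of each cell:
    flux_right = u + 0.5 * (1 - c) * slope
    flux_left = um1 + 0.5 * (1 - c) * slope_m1
    return u - c * (flux_right - flux_left)

# Advect a step profile; monotonicity means no new over/undershoots appear.
u = np.where(np.arange(100) < 50, 1.0, 0.0)
for _ in range(40):
    u = advect_step(u, 0.5)
```

Without the limiter the antidiffusive correction would produce the oscillations Godunov's theorem predicts for any linear higher-order scheme; the limiter switches the correction off near discontinuities.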

  1. Procedure for accurate fabrication of tissue compensators with high-density material

    NASA Astrophysics Data System (ADS)

    Mejaddem, Younes; Lax, Ingmar; Adakkai K, Shamsuddin

    1997-02-01

    An accurate method for producing compensating filters using high-density material (Cerrobend) is described. The procedure consists of two cutting steps in a Styrofoam block: (i) levelling a surface of the block to a reference level; (ii) depth-modulated milling of the levelled block in accordance with pre-calculated thickness profiles of the compensator. The calculated thickness (generated by a dose planning system) can be reproduced within acceptable accuracy. The desired compensator thickness manufactured according to this procedure is reproduced to within 0.1 mm, corresponding to a 0.5% change in dose at a beam quality of 6 MV. The results of our quality control checks performed with the technique of stylus profiling measurements show an accuracy of 0.04 mm in the milling process over an arbitrary profile along the milled-out Styrofoam block.

  2. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance

    PubMed Central

    Talamas, Sean N.; Mavor, Kenneth I.; Perrett, David I.

    2016-01-01

    Despite the old adage not to ‘judge a book by its cover’, facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone’s health or intelligence, but such cues are overshadowed by an ‘attractiveness halo’ whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has shown to influence students’ future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  3. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance.

    PubMed

    Talamas, Sean N; Mavor, Kenneth I; Perrett, David I

    2016-01-01

    Despite the old adage not to 'judge a book by its cover', facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone's health or intelligence, but such cues are overshadowed by an 'attractiveness halo' whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has shown to influence students' future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  4. Highly accurate nitrogen dioxide (NO2) in nitrogen standards based on permeation.

    PubMed

    Flores, Edgar; Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert Ian

    2012-12-01

The development and operation of a highly accurate primary gas facility for the dynamic production of mixtures of nitrogen dioxide (NO2) in nitrogen (N2) based on continuous weighing of a permeation tube and accurate impurity quantification and correction of the gas mixtures using Fourier transform infrared spectroscopy (FT-IR) is described. NO2 gas mixtures in the range of 5 μmol mol⁻¹ to 15 μmol mol⁻¹ with a standard relative uncertainty of 0.4% can be produced with this facility. To achieve an uncertainty at this level, significant efforts were made to reduce, identify and quantify potential impurities present in the gas mixtures, such as nitric acid (HNO3). A complete uncertainty budget, based on the analysis of the performance of the facility, including the use of an FT-IR spectrometer and a nondispersive UV analyzer as analytical techniques, is presented in this work. The mixtures produced by this facility were validated and then selected to provide reference values for an international comparison of the Consultative Committee for Amount of Substance (CCQM), number CCQM-K74, which was designed to evaluate the consistency of primary NO2 gas standards from 17 National Metrology Institutes. PMID:23148702

  5. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

DOE has launched a program to make a step change in power plant performance by advancing to 1500 F steam, since the highest possible performance gains can be achieved in a 1500 F steam system using a topping turbine ahead of a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  6. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

High Performance FORTRAN is a set of extensions to FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared- and distributed-memory machines.

  7. Highly accurate isotope measurements of surface material on planetary objects in situ

    NASA Astrophysics Data System (ADS)

    Riedo, Andreas; Neuland, Maike; Meyer, Stefan; Tulej, Marek; Wurz, Peter

    2013-04-01

Studies of isotope variations in solar system objects are of particular interest and importance. Highly accurate isotope measurements provide insight into geochemical processes, constrain the time of formation of planetary material (crystallization ages), and can be robust tracers of pre-solar events and processes. A detailed understanding of the chronology of the early solar system and dating of planetary materials require precise and accurate measurements of isotope ratios, e.g. lead, and of trace element abundances. However, such measurements are extremely challenging and until now have never been attempted in space research. Our group designed a highly miniaturized and self-optimizing laser ablation time-of-flight mass spectrometer for space flight for sensitive and accurate measurements of the elemental and isotopic composition of extraterrestrial materials in situ. Current studies were performed by using UV radiation for ablation and ionization of sample material. High spatial resolution is achieved by focusing the laser beam to about Ø 20 μm on the sample surface. The instrument supports a dynamic range of at least 8 orders of magnitude and a mass resolution m/Δm of up to 800-900, measured at the iron peak. We developed a measurement procedure, which will be discussed in detail, that allows for the first time to measure with the instrument the isotope distribution of elements, e.g. Ti, Pb, etc., with an accuracy and precision at the per mill and sub-per mill level, comparable to well-known and accepted measurement techniques such as TIMS, SIMS and LA-ICP-MS. The present instrument performance, together with the measurement procedure, offers in situ measurements of 207Pb/206Pb ages with an accuracy in the range of tens of millions of years. Furthermore, and in contrast to other space instrumentation, our instrument can measure all elements present in the sample above 10 ppb concentration, which offers versatile applications.

  8. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  9. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  10. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  11. Highly accurate recognition of human postures and activities through classification with rejection.

    PubMed

    Tang, Wenlong; Sazonov, Edward S

    2014-01-01

Monitoring of postures and activities is used in many clinical and research applications, some of which may require highly reliable posture and activity recognition with desired accuracy well above the 99% mark. This paper suggests a method for performing highly accurate recognition of postures and activities from data collected by a wearable shoe monitor (SmartShoe) through classification with rejection. Signals from pressure and acceleration sensors embedded in SmartShoe are used either as raw sensor data or after feature extraction. Support vector machines (SVM) and multilayer perceptrons (MLP) are used to implement classification with rejection. Unreliable observations are rejected by measuring the distance from the decision boundary and eliminating those observations that fall below the rejection threshold. The results show a significant improvement (from 97.3% ± 2.3% to 99.8% ± 0.1%) in classification accuracy after rejection, using MLP with raw sensor data and rejecting 31.6% of observations. The results also demonstrate that the MLP outperformed the SVM, and that classification accuracy based on raw sensor data was higher than accuracy based on extracted features. The proposed approach will be especially beneficial in applications where high recognition accuracy is desired while not all observations need to be assigned a class label. PMID:24403429
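The rejection mechanism described in this abstract, discarding observations that fall too close to the decision boundary, can be sketched independently of the SmartShoe data. The example below uses a hypothetical linear classifier with hand-picked weights purely for illustration; the paper itself applies the idea to trained SVM and MLP models:

```python
import numpy as np

def classify_with_rejection(X, w, b, threshold):
    """Label samples with a linear decision rule, rejecting unreliable ones.

    The signed distance of x to the hyperplane w.x + b = 0 is
    (w.x + b)/||w||; observations closer to the boundary than
    `threshold` are rejected (labeled -1) rather than guessed.
    """
    margin = (X @ w + b) / np.linalg.norm(w)
    labels = (margin > 0).astype(int)        # class 0 or 1
    labels[np.abs(margin) < threshold] = -1  # -1 marks a rejection
    return labels

w = np.array([1.0, 1.0])
b = -1.0
X = np.array([[2.0, 2.0],    # far on the positive side -> class 1
              [0.0, 0.0],    # far on the negative side -> class 0
              [0.5, 0.6]])   # near the boundary -> rejected
labels = classify_with_rejection(X, w, b, threshold=0.5)
```

Raising the threshold trades coverage for accuracy, which is exactly the trade-off reported in the abstract (99.8% accuracy at the cost of rejecting 31.6% of observations).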

  12. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  13. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  14. Multiple apolipoprotein kinetics measured in human HDL by high-resolution/accurate mass parallel reaction monitoring.

    PubMed

    Singh, Sasha A; Andraski, Allison B; Pieper, Brett; Goh, Wilson; Mendivil, Carlos O; Sacks, Frank M; Aikawa, Masanori

    2016-04-01

Endogenous labeling with stable isotopes is used to study the metabolism of proteins in vivo. However, traditional detection methods such as GC/MS cannot measure tracer enrichment in multiple proteins simultaneously, and multiple reaction monitoring MS cannot precisely measure the low tracer enrichment in slowly turning-over proteins such as those in HDL. We exploited the versatility of the high-resolution/accurate mass (HR/AM) quadrupole Orbitrap for proteomic analysis of five HDL sizes. We identified 58 proteins in HDL that were shared among three humans and that were organized into five subproteomes according to HDL size. For seven of these proteins, apoA-I, apoA-II, apoA-IV, apoC-III, apoD, apoE, and apoM, we performed parallel reaction monitoring (PRM) to measure trideuterated leucine tracer enrichment between 0.03% and 1.0% in vivo, as required to study their metabolism. The results were suitable for multicompartmental modeling in all except apoD. These apolipoproteins in each HDL size mainly originated directly from the source compartment, presumably the liver and intestine. Flux of apolipoproteins from smaller to larger HDL, or the reverse, contributed only slightly to apolipoprotein metabolism. These novel findings on HDL apolipoprotein metabolism demonstrate the analytical breadth and scope of the HR/AM-PRM technology for performing metabolic research. PMID:26862155

  15. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
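The cost argument behind the Taylor expansion scheme, replacing many expensive evaluations along a coordinate with a few derivatives computed once at the reference geometry, can be illustrated with a cheap stand-in function. In the paper the expensive quantity is an EOM-CC transfer integral; here it is an arbitrary smooth function chosen only for illustration:

```python
import numpy as np

def taylor2(f, x0, h=1e-3):
    """Second-order Taylor model of f about x0, built from three
    evaluations via central finite differences."""
    f0 = f(x0)
    d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)
    d2 = (f(x0 + h) - 2 * f0 + f(x0 - h)) / h**2
    return lambda x: f0 + d1 * (x - x0) + 0.5 * d2 * (x - x0) ** 2

# Stand-in for an expensive transfer-integral calculation along one
# geometric coordinate q (hypothetical functional form):
J = lambda q: 0.3 * np.exp(-0.5 * q) * np.cos(q)
J_approx = taylor2(J, 0.0)

# Three evaluations at the reference geometry now cover a whole window
# of thermally accessible displacements:
q = np.linspace(-0.3, 0.3, 7)
max_err = np.max(np.abs(J_approx(q) - J(q)))
```

The approximation is exact at the reference point and degrades cubically with displacement, which matches the abstract's claim that the expansion stays accurate over the range of geometry fluctuations sampled at room temperature, provided that range is modest.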

  16. Development of a highly accurate gear measuring machine based on laser interferometry

    NASA Astrophysics Data System (ADS)

    Lin, Hu; Xue, Zi; Yang, Guoliang; Huang, Yao; Wang, Heyan

    2015-02-01

    A gear measuring machine is a specialized device for gear profile, helix or pitch measurement. The classic method for gear measurement and the conventional gear measuring machine are introduced. In such a machine, the Abbe errors arising from the angular errors of the guideways contribute heavily to the profile measurement error. To minimize the Abbe error, a laser measuring system was applied to develop a highly accurate gear measuring machine. In this laser measuring system, two cube-corner reflectors are placed close to the tip of the probe, and a laser beam from the laser head is split along two paths: one is arranged tangent to the base circle of the gear for the measurement of profile and pitch, and the other is arranged parallel to the gear axis for the measurement of helix; both laser measurements are performed with a resolution of 0.3 nm. This approach not only improves the accuracy of length measurement but also directly minimizes the Abbe offset. The configuration of this improved measuring machine is illustrated in detail. The measurements are performed automatically, and all the measurement signals from the guide rails, rotary table, probe and laser measuring system are acquired synchronously. Software collects all the data for further calculation and evaluation. The first measurements of a gear involute artifact and a helix artifact were carried out, and the results are shown and analyzed as well.
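
    The benefit of placing the laser paths at the probe tip follows from the Abbe principle: a length measurement made at an offset from the measurement line picks up an error of roughly offset x tan(angular error of the guideway). A small numeric sketch with illustrative numbers, not values from the paper:

```python
import math

def abbe_error_mm(offset_mm, tilt_arcsec):
    """First-order Abbe error: measurement offset times the guideway tilt."""
    tilt_rad = tilt_arcsec * math.pi / (180 * 3600)  # arcseconds -> radians
    return offset_mm * math.tan(tilt_rad)

# A 2 arcsecond guideway angular error:
classic_nm = abbe_error_mm(50.0, 2.0) * 1e6   # scale mounted 50 mm from the probe
improved_nm = abbe_error_mm(0.5, 2.0) * 1e6   # laser path ~0.5 mm from probe tip
```

    Shrinking the offset from 50 mm to 0.5 mm cuts this error contribution by two orders of magnitude, which is why the reflectors are mounted as close to the probe tip as possible.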

  17. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and the fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  18. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  19. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. To date, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT)-structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  20. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner

    PubMed Central

    Lu, David V.; Brown, Randall H.; Arumugam, Manimozhiyan; Brent, Michael R.

    2009-01-01

    Motivation: The most accurate way to determine the intron–exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. Results: We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created ‘perfect’ simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Availability: Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/ Contact: davidlu@wustl.edu; brent@cse.wustl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19414532
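
    The dynamic-programming core that any aligner's scoring system drives can be seen in a minimal Needleman-Wunsch global aligner. This toy, with invented match/mismatch/gap scores, is far simpler than Pairagon's pair HMM, which replaces ad hoc scores with log probabilities and adds splice states, but the recurrence has the same shape:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming (toy scoring values)."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # (mis)match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

score = needleman_wunsch("GATTACA", "GATCA")
```

    A pair HMM turns each of the three recurrence branches into a state with transition and emission probabilities, which is what lets a scoring system be trained rather than hand-tuned.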

  1. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    NASA Astrophysics Data System (ADS)

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-10-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127-day continuous test performed in the presence of volatile organic compounds and high humidity levels showed that the sensor with an electrodeposited sensitive layer had a 260% higher response magnitude, a 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs. 35%) over a Au control based QCM (unmodified) when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospikes sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor.

  2. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    PubMed Central

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-01-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127-day continuous test performed in the presence of volatile organic compounds and high humidity levels showed that the sensor with an electrodeposited sensitive layer had a 260% higher response magnitude, a 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs. 35%) over a Au control based QCM (unmodified) when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospikes sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor. PMID:25338965

  3. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) as it is well adapted to deal with deviations from periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculation of the AMS method and the CMS method is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors and glancing angles.

  4. An improved method for accurate and rapid measurement of flight performance in Drosophila.

    PubMed

    Babcock, Daniel T; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies will fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  5. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  6. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    SciTech Connect

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. The performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
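
    For context, the Prony baseline that GSLMS is compared against can be sketched for a single noise-free damped mode: fit a two-term linear recurrence to the samples, then read the frequency and damping from a root of its characteristic polynomial. This single-mode sketch is not the GSLMS algorithm, whose subspace machinery exists precisely to keep such fits robust under noise.

```python
import cmath
import math

def estimate_mode(x, dt):
    """Estimate (frequency_hz, damping) of one damped sinusoid, Prony style."""
    # Least-squares fit of the recurrence x[n] = a1*x[n-1] + a2*x[n-2].
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n - 1] * x[n - 1]
        s12 += x[n - 1] * x[n - 2]
        s22 += x[n - 2] * x[n - 2]
        b1 += x[n] * x[n - 1]
        b2 += x[n] * x[n - 2]
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    # A root of z^2 - a1*z - a2 = 0 is z = exp((-sigma + i*omega)*dt).
    z = (a1 + cmath.sqrt(a1 * a1 + 4 * a2)) / 2
    omega = abs(cmath.phase(z)) / dt
    sigma = -math.log(abs(z)) / dt
    return omega / (2 * math.pi), sigma

# Synthetic 1.0 Hz mode with damping sigma = 0.3 (noise-free).
dt = 0.01
x = [math.exp(-0.3 * n * dt) * math.cos(2 * math.pi * n * dt) for n in range(500)]
freq, damp = estimate_mode(x, dt)
```

    With noise added, the plain least-squares fit above degrades quickly, which is the failure mode the subspace-based method is designed to avoid.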

  7. Accurate and reliable high-throughput detection of copy number variation in the human genome

    PubMed Central

    Fiegler, Heike; Redon, Richard; Andrews, Dan; Scott, Carol; Andrews, Robert; Carder, Carol; Clark, Richard; Dovey, Oliver; Ellis, Peter; Feuk, Lars; French, Lisa; Hunt, Paul; Kalaitzopoulos, Dimitrios; Larkin, James; Montgomery, Lyndal; Perry, George H.; Plumb, Bob W.; Porter, Keith; Rigby, Rachel E.; Rigler, Diane; Valsesia, Armand; Langford, Cordelia; Humphray, Sean J.; Scherer, Stephen W.; Lee, Charles; Hurles, Matthew E.; Carter, Nigel P.

    2006-01-01

    This study describes a new tool for accurate and reliable high-throughput detection of copy number variation in the human genome. We have constructed a large-insert clone DNA microarray covering the entire human genome in tiling path resolution that we have used to identify copy number variation in human populations. Crucial to this study has been the development of a robust array platform and analytic process for the automated identification of copy number variants (CNVs). The array consists of 26,574 clones covering 93.7% of euchromatic regions. Clones were selected primarily from the published “Golden Path,” and mapping was confirmed by fingerprinting and BAC-end sequencing. Array performance was extensively tested by a series of validation assays. These included determining the hybridization characteristics of each individual clone on the array by chromosome-specific add-in experiments. Estimation of data reproducibility and false-positive/negative rates was carried out using self–self hybridizations, replicate experiments, and independent validations of CNVs. Based on these studies, we developed a variance-based automatic copy number detection analysis process (CNVfinder) and have demonstrated its robustness by comparison with the SW-ARRAY method. PMID:17122085
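
    The variance-based idea behind an automated caller such as CNVfinder can be caricatured in a few lines: estimate the noise spread of the log2 ratios from a self-self hybridization, then flag clones whose ratio exceeds a multiple of that spread. The data and threshold below are invented for illustration; the published analysis is considerably more sophisticated.

```python
import statistics

def call_cnvs(log2_ratios, self_self_ratios, k=3.0):
    """Flag probes whose |log2 ratio| exceeds k standard deviations of noise."""
    noise_sd = statistics.stdev(self_self_ratios)  # spread of a self-self run
    return [i for i, r in enumerate(log2_ratios) if abs(r) > k * noise_sd]

self_self = [0.02, -0.01, 0.03, -0.02, 0.00, 0.01, -0.03, 0.02]
sample = [0.01, -0.02, 0.45, 0.02, -0.50, 0.03]  # probes 2 and 4 look like CNVs
cnv_indices = call_cnvs(sample, self_self, k=3.0)
```

    The self-self hybridizations mentioned in the abstract are what make the noise estimate, and hence the false-positive rate of such a threshold, measurable.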

  8. Accurate calculation of the dissociation energy of the highly anharmonic system ClHCl(-).

    PubMed

    Stein, Christopher; Oswald, Rainer; Botschwina, Peter; Peterson, Kirk A

    2015-05-28

    Accurate bond dissociation energies (D0) are reported for different isotopologues of the highly anharmonic system ClHCl(-). The mass-independent equilibrium dissociation energy De was obtained by a composite method with frozen-core (fc) CCSD(T) as the basic contribution. Basis sets as large as aug-cc-pV8(+d)Z were employed, and extrapolation to the complete basis set (CBS) limit was carried out. Explicitly correlated calculations with the CCSD(T)-F12b method were also performed to support the conventionally calculated values. Core-core and core-valence correlation, scalar relativity, and higher-order correlation were considered as well. Two mass-dependent contributions, namely, the diagonal Born-Oppenheimer correction and the difference in zero-point energies between the complex and the HCl fragment, were then added in order to arrive at precise D0 values. Results for (35)ClH(35)Cl(-) and (35)ClD(35)Cl(-) are 23.81 and 23.63 kcal/mol, respectively, with estimated uncertainties of 0.05 kcal/mol. In contrast to FHF(-) (Stein, C.; Oswald, R.; Sebald, P.; Botschwina, P.; Stoll, H.; Peterson, K. A. Mol. Phys. 2013, 111, 2647-2652), the D0 values of the bichloride species are larger than their De counterparts, which is an unusual situation in hydrogen-bonded systems. PMID:25405989
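
    The CBS extrapolation step in such composite schemes is commonly performed, for correlation energies, with the two-point formula E(n) = E_CBS + A/n^3, where n is the basis-set cardinal number; whether this exact form was used here is not stated in the abstract, and the energies below are synthetic placeholders rather than values from the paper.

```python
def cbs_two_point(e_n1, e_n2, n1, n2):
    """Two-point 1/n^3 extrapolation: solve E(n) = E_cbs + A/n**3 for E_cbs."""
    c1, c2 = n1 ** 3, n2 ** 3
    return (c2 * e_n2 - c1 * e_n1) / (c2 - c1)

# Synthetic correlation energies (hartree) that follow the model exactly,
# with E_cbs = -1.0 and A = 0.5:
e5 = -1.0 + 0.5 / 5 ** 3   # quintuple-zeta-like level, n = 5
e6 = -1.0 + 0.5 / 6 ** 3   # sextuple-zeta-like level, n = 6
e_cbs = cbs_two_point(e5, e6, 5, 6)
```

    With two energies the unknowns E_cbs and A are determined in closed form; data that fit the model exactly recover E_cbs to machine precision.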

  9. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How Well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  10. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; 10 filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
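
    The report says the Python filters watch Infiniband port counters and alert once counts cross a threshold. A minimal sketch of that pattern follows; the counter names, thresholds, and alert hook are hypothetical, not ibmon2's real interface.

```python
# Hypothetical thresholds per port-counter type (names invented for illustration).
THRESHOLDS = {
    "symbol_error_counter": 100,
    "link_downed_counter": 5,
    "port_rcv_errors": 50,
}

def filter_counters(host, counters, alert):
    """Call alert(host, counter, value) for every counter over its threshold."""
    for name, value in counters.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            alert(host, name, value)

alerts = []
filter_counters(
    "node042",
    {"symbol_error_counter": 250, "link_downed_counter": 1, "port_rcv_errors": 75},
    lambda host, name, value: alerts.append((host, name, value)),
)
```

    In production the alert hook would hand the event to Zenoss or Splunk rather than append to a list.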

  11. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  12. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  13. A High-Order Accurate Parallel Solver for Maxwell's Equations on Overlapping Grids

    SciTech Connect

    Henshaw, W D

    2005-09-23

    A scheme for the solution of the time dependent Maxwell's equations on composite overlapping grids is described. The method uses high-order accurate approximations in space and time for Maxwell's equations written as a second-order vector wave equation. High-order accurate symmetric difference approximations to the generalized Laplace operator are constructed for curvilinear component grids. The modified equation approach is used to develop high-order accurate approximations that only use three time levels and have the same time-stepping restriction as the second-order scheme. Discrete boundary conditions for perfect electrical conductors and for material interfaces are developed and analyzed. The implementation is optimized for component grids that are Cartesian, resulting in a fast and efficient method. The solver runs on parallel machines with each component grid distributed across one or more processors. Numerical results in two- and three-dimensions are presented for the fourth-order accurate version of the method. These results demonstrate the accuracy and efficiency of the approach.
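
    The flavor of the high-order symmetric difference approximations used there can be checked with the standard fourth-order five-point stencil for a second derivative: halving the grid spacing should cut the error by about 2^4 = 16. This one-dimensional check is an illustration, not the solver's actual curvilinear operator.

```python
import math

def d2_fourth_order(f, x, h):
    """Fourth-order central approximation of f''(x) on a five-point stencil."""
    return (-f(x - 2 * h) + 16 * f(x - h) - 30 * f(x)
            + 16 * f(x + h) - f(x + 2 * h)) / (12 * h * h)

x0 = 1.0
exact = -math.sin(x0)                      # (sin x)'' = -sin x
err_h = abs(d2_fourth_order(math.sin, x0, 0.10) - exact)
err_h2 = abs(d2_fourth_order(math.sin, x0, 0.05) - exact)
ratio = err_h / err_h2                     # ~16 for a fourth-order scheme
```

    Observed error ratios near 16 under grid refinement are the kind of evidence used to verify a fourth-order implementation.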

  14. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  15. Accurate route demonstration by experienced homing pigeons does not improve subsequent homing performance in naive conspecifics.

    PubMed Central

    Banks, A N; Guilford, T

    2000-01-01

    We describe an experiment that uses the grouping tendencies and navigational abilities of the homing pigeon (Columba livia) to investigate the possibility of socially mediated information transfer in a field setting. By varying the composition of paired-release types, we allowed some naive birds to receive an accurate demonstration of the home route whilst others were paired with similarly naive conspecifics. After this 'paired phase', we predicted that if any learning of spatial information occurred then naive members of the former pairs would outperform their untutored conspecifics when re-released individually during the subsequent 'single phase' of the experiment. This prediction was not confirmed. Neither homing speed nor initial orientation was superior in individually released tutored versus untutored birds, despite the fact that both performance measures were better in the earlier 'paired phase' with experienced demonstrators. Our results suggest that although naive homing pigeons clearly interact with their experienced partners, they are unable to transfer any individually useful spatial information to subsequent homing flights. PMID:11413647

  16. Accurate route demonstration by experienced homing pigeons does not improve subsequent homing performance in naive conspecifics.

    PubMed

    Banks, A N; Guilford, T

    2000-11-22

    We describe an experiment that uses the grouping tendencies and navigational abilities of the homing pigeon (Columba livia) to investigate the possibility of socially mediated information transfer in a field setting. By varying the composition of paired-release types, we allowed some naive birds to receive an accurate demonstration of the home route whilst others were paired with similarly naive conspecifics. After this 'paired phase', we predicted that if any learning of spatial information occurred then naive members of the former pairs would outperform their untutored conspecifics when re-released individually during the subsequent 'single phase' of the experiment. This prediction was not confirmed. Neither homing speed nor initial orientation was superior in individually released tutored versus untutored birds, despite the fact that both performance measures were better in the earlier 'paired phase' with experienced demonstrators. Our results suggest that although naive homing pigeons clearly interact with their experienced partners, they are unable to transfer any individually useful spatial information to subsequent homing flights. PMID:11413647

  17. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by a select few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  18. High Performance Computing Today

    SciTech Connect

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last 5 decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.

  19. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. The high order schemes have now been extended to generalized coordinates, including spherical grids, and also to non-uniform AMR grids, including dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.

  20. Highly accurate spectral retardance characterization of a liquid crystal retarder including Fabry-Perot interference effects

    SciTech Connect

    Vargas, Asticio; Mar Sánchez-López, María del; García-Martínez, Pascuala; Arias, Julia; Moreno, Ignacio

    2014-01-21

    Multiple-beam Fabry-Perot (FP) interferences occur in liquid crystal retarders (LCR) devoid of an antireflective coating. In this work, a highly accurate method to obtain the spectral retardance of such devices is presented. On the basis of a simple model of the LCR that includes FP effects and by using a voltage transfer function, we show how the FP features in the transmission spectrum can be used to accurately retrieve the ordinary and extraordinary spectral phase delays, and the voltage dependence of the latter. As a consequence, the modulation characteristics of the device are fully determined with high accuracy by means of a few off-state physical parameters which are wavelength-dependent, and a single voltage transfer function that is valid within the spectral range of characterization.

  1. Detailed and Highly Accurate 3D Models of High Mountain Areas by the MACS-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can in part no longer be imaged by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches for resolution of these problems, the camera system and the state of evaluation are presented with examples.

  2. High Voltage TAL Performance

    NASA Technical Reports Server (NTRS)

    Jacobson, David T.; Jankovsky, Robert S.; Rawlin, Vincent K.; Manzella, David H.

    2001-01-01

The performance of a two-stage, anode layer Hall thruster was evaluated. Experiments were conducted in single and two-stage configurations. In single-stage configuration, the thruster was operated with discharge voltages ranging from 300 to 1700 V. Discharge specific impulses ranged from 1630 to 4140 sec. Thruster investigations were conducted with input power ranging from 1 to 8.7 kW, corresponding to power throttling of nearly 9:1. An extensive two-stage performance map was generated. Data taken with total voltage (sum of discharge and accelerating voltage) constant revealed a decrease in thruster efficiency as the discharge voltage was increased. Anode specific impulse values were comparable in the single and two-stage configurations, showing no strong advantage for two-stage operation.
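The figures of merit quoted here follow from standard relations: specific impulse Isp = T/(ṁ·g0) and discharge efficiency η = T²/(2ṁP). A minimal sketch with a purely hypothetical operating point, not data from this test series:

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_N, mdot_kg_s):
    """Isp = T / (mdot * g0), in seconds."""
    return thrust_N / (mdot_kg_s * G0)

def discharge_efficiency(thrust_N, mdot_kg_s, power_W):
    """eta = T^2 / (2 * mdot * P): jet kinetic power over discharge power."""
    return thrust_N ** 2 / (2.0 * mdot_kg_s * power_W)

# Hypothetical operating point (illustrative only):
T, mdot, P = 0.097, 5.0e-6, 2300.0   # 97 mN, 5 mg/s, 2.3 kW
```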

  3. High Performance Arcjet Engines

    NASA Technical Reports Server (NTRS)

    Kennel, Elliot B.; Ivanov, Alexey Nikolayevich; Nikolayev, Yuri Vyacheslavovich

    1994-01-01

This effort sought to exploit advanced single crystal tungsten-tantalum alloy material for fabrication of a high strength, high temperature arcjet anode. The use of this material is expected to result in improved strength, temperature resistance, and lifetime compared to state of the art polycrystalline alloys. In addition, the use of high electrical and thermal conductivity carbon-carbon composites was considered, and is believed to be a feasible approach. Highly conductive carbon-carbon composite anode capability represents enabling technology for rotating-arc designs derived from the Russian Scientific Research Institute of Thermal Processes (NIITP) because of high heat fluxes at the anode surface. However, for US designs the anode heat flux is much smaller, and thus the benefits are not as great as in the case of NIITP-derived designs. Still, it does appear that the tensile properties of carbon-carbon can be even better than those of single crystal tungsten alloys, especially when nearly-single-crystal fibers such as vapor grown carbon fiber (VGCF) are used. Composites fabricated from such materials must be coated with a refractory carbide coating in order to ensure compatibility with high temperature hydrogen. Fabrication of tungsten alloy single crystals in the sizes required for fabrication of an arcjet anode has been shown to be feasible. Test data indicate that the material can be expected to be at least the equal of W-Re-HfC polycrystalline alloy in terms of its tensile properties, and possibly superior. We are also informed by our colleagues at Scientific Production Association Luch (NPO Luch) that it is possible to use Russian technology to fabricate polycrystalline W-Re-HfC or other high strength alloys if desired. This is important because existing engines must rely on previously accumulated stocks of these materials, and a fabrication capability for future requirements is not assured.

  4. In Pursuit of Highly Accurate Atomic Lifetime Measurements of Multiply Charged Ions

    SciTech Connect

    Trabert, E

    2009-06-01

Accurate atomic lifetime data are useful for terrestrial and astrophysical plasma diagnostics. At accuracies higher than those required for these applications, lifetime measurements test atomic structure theory in ways complementary to spectroscopic energy determinations. At the highest level of accuracy, the question arises whether such tests reach the limits of modern theory, a combination of quantum mechanics and QED, and possibly point to physics beyond the Standard Model. If high-precision atomic lifetime measurements, especially on multiply charged ions, have not quite reached this high accuracy yet, then what is necessary to attain this goal?

  5. The Basingstoke Orthopaedic Database: a high quality accurate information system for audit.

    PubMed

    Barlow, I W; Flynn, N A; Britton, J M

    1994-11-01

The accuracy of a computerised audit system custom produced for the Orthopaedic Department has been validated by comparison with operating theatre records and patients' case notes. The study revealed only 2.5 per cent missed entries; of the recorded entries, information regarding the nature of the operation was found to be 92.5 per cent complete and 98 per cent accurate. The high percentage accuracy reflects the high degree of medical input in operation of the system. The Basingstoke Orthopaedic Database is flexible, cheap and easy to maintain. Data is stored in a form that is readily applicable to standard software packages. PMID:7598401

  6. High performance cyclone development

    SciTech Connect

    Giles, W.B.

    1981-01-01

    The results of cold flow experiments at atmospheric conditions of an air-shielded 18 in-dia electrocyclone with a central cusped electrode are reported using fine test dusts of both flyash and nickel powder. These results are found to confirm expectations of enhanced performance, similar to earlier work on a 12 in-dia model. An analysis of the combined inertial-electrostatic force field is also presented which identifies general design goals and scaling laws. From this, it is found that electrostatic enhancement will be particularly beneficial for fine dusts in large cyclones. Recommendations for further improvement in cyclone collection efficiency are proposed.

  7. START High Performance Discharges

    NASA Astrophysics Data System (ADS)

    Gates, D. A.

    1997-11-01

Improvements to START (Small Tight Aspect Ratio Tokamak), the first spherical tokamak in the world to achieve high plasma temperature with both a significant pulse length and confinement time, have been ongoing since 1991. Recent modifications include: expansion of the existing capacitor banks allowing plasma currents as high as 300 kA, an increase in the available neutral beam heating power (~500 kW), and improvements to the vacuum system. These improvements have led to the achievement of the world record plasma β (≡ 2μ_0⟨p⟩/B^2) of ~30% in a tokamak. The normalised β (β_N ≡ β aB/I_p) reached 4.5 with q_95 = 2.3. Properties of the reconstructed equilibrium will be discussed in detail. The theoretical limit to β is higher in a spherical tokamak than in a conventional machine, due to the higher values of normalised current (I_N ≡ I_p/aB) achievable at low aspect ratio. The record β was achieved with I_N ~ 8, while conventional tokamaks are limited to I_N ~ 3 or less. Calculations of the ideal MHD stability of the record discharge indicate that high-β low-n kink modes are stable, but that the entire profile is at or near marginal stability for high-n ballooning modes. The phenomenology of the events leading up to the plasma termination is discussed. An important aspect of the START program is to explore the physics of neutral beam absorption at low aspect ratio. A passive neutral particle analyser has been used to study the temporal and spatial dependence of the fast hydrogen beam ions. These measurements have been used in conjunction with a single particle orbit code to estimate the fast ion losses due to collisions with slow neutrals from the plasma edge. Numerical analysis of neutral beam power deposition profiles is compared with the data from an instrumented beam stop. The global energy confinement time τ_E in beam heated discharges on START is similar to that obtained in Ohmic discharges, even though the input power has roughly doubled over the Ohmic case.
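With β in percent, a in metres, B in tesla and I_p in MA, the two normalizations above satisfy the identity β_N = β/I_N. A small sketch with hypothetical (non-START) numbers:

```python
def normalized_current(Ip_MA, a_m, B_T):
    """I_N = Ip / (a * B), in MA/(m*T)."""
    return Ip_MA / (a_m * B_T)

def normalized_beta(beta_pct, a_m, B_T, Ip_MA):
    """beta_N = beta(%) * a * B / Ip (the Troyon-style normalization)."""
    return beta_pct * a_m * B_T / Ip_MA

# Hypothetical spherical-tokamak parameters (illustrative only):
beta_pct, a, B, Ip = 30.0, 0.25, 0.45, 0.9   # %, m, T, MA
```

At I_N = 8 a β of 30% corresponds to β_N = 3.75, which shows why the high normalized currents reachable at low aspect ratio permit such large β.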

  8. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistances.

  9. Techniques for determining propulsion system forces for accurate high speed vehicle drag measurements in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.

    1975-01-01

    As part of a NASA program to evaluate current methods of predicting the performance of large, supersonic airplanes, the drag of the XB-70 airplane was measured accurately in flight at Mach numbers from 0.75 to 2.5. This paper describes the techniques used to determine engine net thrust and the drag forces charged to the propulsion system that were required for the in-flight drag measurements. The accuracy of the measurements and the application of the measurement techniques to aircraft with different propulsion systems are discussed. Examples of results obtained for the XB-70 airplane are presented.

  10. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-10-01

Over 30 years ago U.S. industry introduced the world's highest temperature (1200°F at 5000 psig) and most efficient power plant, the Eddystone coal-burning steam plant. The highest alloy material used in the plant was 316 stainless steel. Problems during the first few years of operation caused a reduction in operating temperature to 1100°F, which has generally become the highest temperature used in plants around the world. Leadership in high temperature steam has moved to Japan and Europe over the last 30 years.

  11. High Voltage SPT Performance

    NASA Technical Reports Server (NTRS)

    Manzella, David; Jacobson, David; Jankovsky, Robert

    2001-01-01

    A 2.3 kW stationary plasma thruster designed to operate at high voltage was tested at discharge voltages between 300 and 1250 V. Discharge specific impulses between 1600 and 3700 sec were demonstrated with thrust between 40 and 145 mN. Test data indicated that discharge voltage can be optimized for maximum discharge efficiency. The optimum discharge voltage was between 500 and 700 V for the various anode mass flow rates considered. The effect of operating voltage on optimal magnet field strength was investigated. The effect of cathode flow rate on thruster efficiency was considered for an 800 V discharge.

  12. Defining allowable physical property variations for high accurate measurements on polymer parts

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Sonne, M. R.; Madruga, D. G.; De Chiffre, L.; Hattel, J. H.

    2016-06-01

Measurement conditions and material properties have a significant impact on the measured dimensions of a part, especially for polymer parts. Temperature variation causes part deformations that increase the uncertainty of the measurement process. Current industrial tolerances of a few micrometres demand highly accurate measurements in non-controlled ambient conditions. Most polymer parts are manufactured by injection moulding and their inspection is carried out after stabilization, around 200 hours. The overall goal of this work is to reach ±5 μm measurement uncertainty on polymer products, which is a challenge in today's production and metrology environments. The residual deformations in polymer products at room temperature after injection moulding are important when micrometre accuracy needs to be achieved. Numerical modelling can give valuable insight into what happens in the polymer during cooling after injection moulding. In order to obtain accurate simulations, accurate inputs to the model are crucial. In reality, however, the material and physical properties will have some variations. Although these variations may be small, they can act as a source of uncertainty for the measurement. In this paper, we investigate how large the variations in material and physical properties are allowed to be in order to reach the ±5 μm target on the uncertainty.
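The ±5 µm target makes the thermal sensitivity easy to quantify: linear expansion ΔL = α·L·ΔT alone can consume the whole budget. A sketch with an assumed polypropylene-like expansion coefficient (the numbers are illustrative, not from the paper):

```python
def thermal_expansion_um(length_mm, alpha_per_K, delta_T_K):
    """Dimensional change alpha * L * dT, returned in micrometres."""
    return alpha_per_K * (length_mm * 1e3) * delta_T_K  # mm -> um

# Hypothetical case: a 50 mm feature, alpha ~ 1e-4 /K (typical order for
# polypropylene), measured just 1 K away from the reference temperature:
dl = thermal_expansion_um(50.0, 1.0e-4, 1.0)  # -> 5.0 um
```

A single kelvin of uncontrolled temperature on a modest 50 mm part already equals the entire ±5 µm uncertainty budget, which is why the property variations studied here matter.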

  13. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
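Richardson extrapolation, mentioned above as the device for isolating truncation error, combines two approximations at spacings h and h/2 to cancel the leading error term. A minimal sketch on a finite-difference derivative (not the paper's finite element solver): for a second-order formula, (4A(h/2) - A(h))/3 is fourth-order accurate.

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Cancel the O(h^2) error term: (4*A(h/2) - A(h)) / 3 is O(h^4)."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

# Example: derivative of sin at x = 1 (exact value cos(1)).
x, h = 1.0, 0.1
exact = math.cos(x)
```

Comparing the plain and extrapolated errors against the exact derivative exposes the truncation error of the base formula, which is exactly how the paper isolates error in its norms.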

  14. The Use of Accurate Mass Tags for High-Throughput Microbial Proteomics

    SciTech Connect

Smith, Richard D.; Anderson, Gordon A.; Lipton, Mary S.; Masselon, Christophe D.; Pasa Tolic, Ljiljana; Shen, Yufeng; Udseth, Harold R.

    2002-08-01

We describe and demonstrate a global strategy that extends the sensitivity, dynamic range, comprehensiveness, and throughput of proteomic measurements based upon the use of peptide accurate mass tags (AMTs) produced by global protein enzymatic digestion. The two-stage strategy exploits Fourier transform-ion cyclotron resonance (FT-ICR) mass spectrometry to validate peptide AMTs for a specific organism, tissue or cell type from potential mass tags identified using conventional tandem mass spectrometry (MS/MS) methods, providing greater confidence in identifications as well as the basis for subsequent measurements without the need for MS/MS, and thus with greater sensitivity and increased throughput. A single high resolution capillary liquid chromatography separation combined with high sensitivity, high resolution and accurate FT-ICR measurements has been shown capable of characterizing peptide mixtures of significantly more than 10^5 components with mass accuracies of ~1 ppm, sufficient for broad protein identification using AMTs. Other attractions of the approach include the broad and relatively unbiased proteome coverage, the capability for exploiting stable isotope labeling methods to realize high precision for relative protein abundance measurements, and the projected potential for study of mammalian proteomes when combined with additional sample fractionation. Using this strategy, in our first application we have been able to identify AMTs for 60% of the potentially expressed proteins in the organism Deinococcus radiodurans.
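Mass accuracy in parts per million, and AMT lookup within a ppm tolerance, can be sketched directly; the masses and tolerance below are hypothetical illustrations of the idea, not values from the study.

```python
def ppm_error(m_observed, m_theoretical):
    """Relative mass measurement error in parts per million."""
    return (m_observed - m_theoretical) / m_theoretical * 1e6

def match_amt(m_observed, amt_masses, tol_ppm=1.0):
    """Return AMT database masses that lie within +/- tol_ppm of the
    observed mass; at ~1 ppm accuracy this window is usually unambiguous."""
    return [m for m in amt_masses if abs(ppm_error(m_observed, m)) <= tol_ppm]
```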

  15. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  16. High performance alloy electroforming

    NASA Technical Reports Server (NTRS)

    Malone, G. A.; Winkelman, D. M.

    1989-01-01

Electroformed copper and nickel are used in structural applications for advanced propellant combustion chambers. An improved process has been developed by Bell Aerospace Textron, Inc. wherein electroformed nickel-manganese alloy has demonstrated superior mechanical and thermal stability when compared to previously reported deposits from known nickel plating processes. Solution chemistry and parametric operating procedures are now established, and material property data have been generated for deposition of thick, large complex shapes such as the Space Shuttle Main Engine. The critical operating variables are those governing the ratio of codeposited nickel and manganese. The deposition uniformity, which in turn affects the manganese concentration distribution, is governed by solution resistance and geometric effects as well as solution agitation. The manganese concentration in the deposit must be between 2000 and 3000 ppm for optimum physical properties to be realized. The study also includes data regarding deposition procedures for achieving excellent bond strength at an interface with copper, nickel-manganese or INCONEL 718. Applications for this electroformed material include fabrication of complex or re-entry shapes which would be difficult or impossible to form from high strength alloys such as INCONEL 718.

  17. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  18. High Poverty, High Performing Schools. IDRA Focus.

    ERIC Educational Resources Information Center

    IDRA Newsletter, 1997

    1997-01-01

    This theme issue includes four articles on high performance by poor Texas schools. In "Principal of National Blue Ribbon School Says High Poverty Schools Can Excel" (interview with Robert Zarate by Christie L. Goodman), the principal of Mary Hull Elementary School (San Antonio, Texas) describes how the high-poverty, high-minority school…

  19. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    NASA Astrophysics Data System (ADS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-08-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate with the precision of up to three decimal places for the whole range of the magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all formulae to obtain similar expressions of any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for the excited states with the principal quantum number of up to n=100.

  20. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions to Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  1. Highly accurate coating composition control during co-sputtering, based on controlling plasma chromaticity

    SciTech Connect

Anguita, J.V.; Thwaites, M.; Holton, B.; Hockley, P.; Rand, S.

    2005-03-01

Highly accurate control of sputtering processes is of paramount importance to industry. Plasma diagnostic equipment based on spectroscopic methods such as optical emission spectroscopy (OES) has been commercially available for many years and has the ability to deliver a high level of accuracy. Despite this, their complexity, demand for operator time, and disregard for the vast majority of the optical emission spectrum have rendered them unpopular, and they are rarely used in manufacturing lines. This article introduces the measurement of the chromaticity of the plasma as a new method of analysis, as an alternative to OES. This method is simple, while maintaining a high level of sensitivity. Chromaticity monitors a wide range of the optical emission spectrum, obtaining a large amount of process information. It also averages and simplifies the data, making them easier to analyze.

  2. Accurate Sample Assignment in a Multiplexed, Ultrasensitive, High-Throughput Sequencing Assay for Minimal Residual Disease.

    PubMed

    Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike

    2016-07-01

High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine the Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based, demultiplexing pipeline Error-Aware Demultiplexer that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10^6 copies of specific leukemic MRD. PMID:27183494
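The core of an error-aware assignment step can be sketched as nearest-barcode matching that rejects ambiguous hits. This is an illustration of the idea only, not the published probability-based Error-Aware Demultiplexer pipeline; the distance and margin thresholds are assumptions.

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(read_index, known_indices, max_dist=1, min_margin=2):
    """Assign a read to the closest sample index, but only when the best hit
    is unambiguous: within max_dist errors and at least min_margin closer
    than the runner-up. Otherwise return None rather than risk assigning the
    read to the wrong sample (cross-sample bleed would corrupt MRD counts)."""
    ranked = sorted(known_indices, key=lambda idx: hamming(read_index, idx))
    d_best = hamming(read_index, ranked[0])
    d_next = hamming(read_index, ranked[1]) if len(ranked) > 1 else len(read_index)
    if d_best <= max_dist and d_next - d_best >= min_margin:
        return ranked[0]
    return None
```

Discarding ambiguous reads trades a little data loss for the sample-assignment accuracy that an ultrasensitive MRD assay requires.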

  3. Highly accurate measurements of the spontaneous fission half-life of 240,242Pu

    NASA Astrophysics Data System (ADS)

    Salvador-Castiñeira, P.; Bryś, T.; Eykens, R.; Hambsch, F.-J.; Moens, A.; Oberstedt, S.; Sibbens, G.; Vanleeuw, D.; Vidali, M.; Pretel, C.

    2013-12-01

Fast spectrum neutron-induced fission cross-section data for transuranic isotopes are of special demand from the nuclear data community. In particular, highly accurate data are needed for the new Generation IV nuclear applications. The aim is to obtain precise neutron-induced fission cross sections for 240Pu and 242Pu. To do so, accurate data on spontaneous fission half-lives must be available. Also, minimizing uncertainties in the detector efficiency is a key point. We studied both isotopes by means of a twin Frisch-grid ionization chamber with the goal of improving the present data on the neutron-induced fission cross section. For the two plutonium isotopes the high α-particle decay rates pose a particular problem to experiments due to piling-up events in the counting gas. Argon methane and methane were employed as counting gases; the latter showed considerable improvement in signal generation due to its higher drift velocity. The detection efficiency for both samples was determined, and improved spontaneous fission half-lives were obtained with very low statistical uncertainty (0.13% for 240Pu and 0.04% for 242Pu): for 240Pu, T_1/2,SF = 1.165×10^11 yr (1.1%), and for 242Pu, T_1/2,SF = 6.74×10^10 yr (1.3%). Systematic uncertainties are due to sample mass (0.4% for 240Pu and 0.9% for 242Pu) and efficiency (1%).
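The half-life follows from the atom count and the efficiency-corrected fission rate, T_1/2 = ln 2 · N / R, which is why the sample mass and detection efficiency dominate the systematic uncertainty. A sketch with hypothetical sample parameters (not the measured data):

```python
import math

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def sf_half_life_yr(sample_mass_g, molar_mass_g, observed_rate_per_s, efficiency):
    """T_1/2 = ln(2) * N_atoms / true_rate, with the observed count rate
    corrected for detection efficiency; returned in years."""
    n_atoms = sample_mass_g / molar_mass_g * N_A
    true_rate = observed_rate_per_s / efficiency   # fissions per second
    seconds_per_year = 365.25 * 24 * 3600
    return math.log(2) * n_atoms / (true_rate * seconds_per_year)
```

Because T_1/2 scales linearly with both N and 1/ε, a 0.4% mass error or 1% efficiency error propagates directly into the half-life, matching the error budget quoted above.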

  4. Highly Accurate Structure-Based Prediction of HIV-1 Coreceptor Usage Suggests Intermolecular Interactions Driving Tropism

    PubMed Central

    Kieslich, Chris A.; Tamamis, Phanourios; Guzman, Yannis A.; Onel, Melis; Floudas, Christodoulos A.

    2016-01-01

    HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/. PMID:26859389
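The area under the ROC curve reported above can be computed without any machine-learning library via the Mann-Whitney statistic: AUC is the probability that a randomly chosen positive scores above a randomly chosen negative. A minimal sketch:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg), with ties counted half
    (the Mann-Whitney U formulation of the ROC area)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```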

  5. All-reflective, highly accurate polarization rotator for high-power short-pulse laser systems.

    PubMed

    Keppler, S; Hornung, M; Bödefeld, R; Kahle, M; Hein, J; Kaluza, M C

    2012-08-27

    We present the setup of a polarization rotating device and its adaption for high-power short-pulse laser systems. Compared to conventional halfwave plates, the all-reflective principle using three zero-phase shift mirrors provides a higher accuracy and a higher damage threshold. Since plan-parallel plates, e.g. these halfwave plates, generate postpulses, which could lead to the generation of prepulses during the subsequent laser chain, the presented device avoids parasitic pulses and is therefore the preferable alternative for high-contrast applications. Moreover the device is easily scalable for large beam diameters and its spectral reflectivity can be adjusted by an appropriate mirror coating to be well suited for ultra-short laser pulses. PMID:23037123
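In Jones calculus an ideal rotator, unlike a wave plate, applies a pure geometric rotation with no relative phase between components. A sketch of that matrix (the actual device realizes it with three zero-phase-shift reflections, which this simple model does not capture):

```python
import numpy as np

def rotator(theta_rad):
    """Jones matrix of an ideal polarization rotator (no phase retardance)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s],
                     [s, c]])

horizontal = np.array([1.0, 0.0])          # horizontally polarized input
vertical = rotator(np.pi / 2) @ horizontal  # rotate the plane by 90 degrees
```

Because the matrix is a rotation rather than a retardance, its action is wavelength-independent in the ideal case, which is why the mirror-based approach suits broadband ultra-short pulses.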

  6. C-Sibelia: an easy-to-use and highly accurate tool for bacterial genome comparison

    PubMed Central

    Minkin, Ilya; Pham, Hoa; Starostina, Ekaterina; Vyahhi, Nikolay; Pham, Son

    2013-01-01

    We present C-Sibelia, a highly accurate and easy-to-use software tool for comparing two closely related bacterial genomes, which can be presented as either finished sequences or fragmented assemblies. C-Sibelia takes as input two FASTA files and produces: (1) a VCF file containing all identified single nucleotide variations and indels; (2) an XMFA file containing alignment information. The software also produces Circos diagrams visualizing high level genomic architecture for rearrangement analyses. C-Sibelia is a part of the Sibelia comparative genomics suite, which is freely available under the GNU GPL v.2 license at http://sourceforge.net/projects/sibelia-bio. C-Sibelia is compatible with Unix-like operating systems. A web-based version of the software is available at http://etool.me/software/csibelia. PMID:25110578

  7. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  8. Improved highly accurate localized motion imaging for monitoring high-intensity focused ultrasound therapy

    NASA Astrophysics Data System (ADS)

    Qu, Xiaolei; Azuma, Takashi; Sugiyama, Ryusuke; Kanazawa, Kengo; Seki, Mika; Sasaki, Akira; Takeuchi, Hideki; Fujiwara, Keisuke; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro; Matsumoto, Yoichiro

    2016-07-01

Visualizing an area subjected to high-intensity focused ultrasound (HIFU) therapy is necessary for controlling the amount of HIFU exposure. One of the promising monitoring methods is localized motion imaging (LMI), which estimates coagulation length by detecting the change in stiffness. In this study, we improved the accuracy of our previous LMI by the dynamic cross-correlation window (DCCW) and maximum vibration amount (MVA) methods. The DCCW method was used to increase the accuracy of estimating vibration amplitude, and the MVA method was employed to increase the signal-to-noise ratio of the decrease ratio at the coagulated area. The qualitative comparison of results indicated that the two proposed methods could suppress the effect of noise. Regarding the results of the quantitative comparison, coagulation length was estimated with higher accuracy by the improved LMI method, and the root-mean-square error (RMSE) was reduced from 2.51 to 1.69 mm.
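Cross-correlation displacement estimation, the core of LMI-style motion tracking, can be sketched in 1-D with integer-sample lags; real implementations work on windowed RF echo data with subsample interpolation, which this illustration omits.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Integer sample lag that maximizes the cross-correlation of two
    equal-length 1-D signals (full correlation, zero-padded ends)."""
    corr = np.correlate(moved, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Synthetic speckle-like signal displaced by 5 samples:
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
shifted = np.roll(sig, 5)
```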

  9. High- and low-pressure pneumotachometers measure respiration rates accurately in adverse environments

    NASA Technical Reports Server (NTRS)

    Fagot, R. J.; Mc Donald, R. T.; Roman, J. A.

    1968-01-01

Respiration-rate transducers in the form of pneumotachometers measure respiration rates of pilots operating high-performance research aircraft. In each low-pressure or high-pressure oxygen system, a sensor is placed in series with the pilot's oxygen supply line to detect gas flow accompanying respiration.

  10. Use of Monocrystalline Silicon as Tool Material for Highly Accurate Blanking of Thin Metal Foils

    SciTech Connect

    Hildering, Sven; Engel, Ulf; Merklein, Marion

    2011-05-04

The trend towards miniaturisation of metallic mass-production components combined with increased component functionality is still unbroken. Manufacturing these components by forming and blanking offers economical and ecological advantages combined with the needed accuracy. The complexity of producing tools with geometries below 50 µm by conventional manufacturing methods becomes disproportionately higher. Expensive serial finishing operations are required to achieve an adequate surface roughness combined with accurate geometry details. A novel approach for producing such tools is the use of advanced etching technologies for monocrystalline silicon that are well established in microsystems technology. High-precision vertical geometries with a width down to 5 µm are possible. The present study shows a novel concept using this potential for the blanking of thin copper foils with monocrystalline silicon as a tool material. A self-contained machine tool with compact outer dimensions was designed to avoid tensile stresses in the brittle silicon punch by an accurate, careful alignment of the punch, die and metal foil. A microscopic analysis of the monocrystalline silicon punch shows appropriate properties regarding flank angle, edge geometry and surface quality for the blanking process. Using a monocrystalline silicon punch with a width of 70 µm, blanking experiments on as-rolled copper foils with a thickness of 20 µm demonstrate the general applicability of this material for micro production processes.

  11. Highly accurate potential energy surface for the He-H2 dimer.

    PubMed

    Bakr, Brandon W; Smith, Daniel G A; Patkowski, Konrad

    2013-10-14

    A new highly accurate interaction potential is constructed for the He-H2 van der Waals complex. This potential is fitted to 1900 ab initio energies computed at the very large-basis coupled-cluster level and augmented by corrections for higher-order excitations (up to full configuration interaction level) and the diagonal Born-Oppenheimer correction. At the vibrationally averaged H-H bond length of 1.448736 bohrs, the well depth of our potential, 15.870 ± 0.065 K, is nearly 1 K larger than the most accurate previous studies have indicated. In addition to constructing our own three-dimensional potential in the van der Waals region, we present a reparameterization of the Boothroyd-Martin-Peterson potential surface [A. I. Boothroyd, P. G. Martin, and M. R. Peterson, J. Chem. Phys. 119, 3187 (2003)] that is suitable for all configurations of the triatomic system. Finally, we use the newly developed potentials to compute the properties of the lone bound states of (4)He-H2 and (3)He-H2 and the interaction second virial coefficient of the hydrogen-helium mixture. PMID:24116617

  12. High-resolution accurate mass spectrometry as a technique for characterization of complex lysimeter leachate samples.

    PubMed

    Hand, Laurence H; Marshall, Samantha J; Saeed, Mansoor; Earll, Mark; Hadfield, Stephen T; Richardson, Kevan; Rawlinson, Paul

    2016-06-01

    Lysimeter studies can be used to identify and quantify soil degradates of agrochemicals (metabolites) that have the potential to leach to groundwater. However, the apparent metabolic profile of such lysimeter leachate samples will often be significantly more complex than would be expected in true groundwater samples. This is particularly true for S-metolachlor, which has an extremely complex metabolic pathway. Consequently, it was not practically possible to apply a conventional analytical approach to identify all metabolites in an S-metolachlor lysimeter study, because there was insufficient mass to enable the use of techniques such as nuclear magnetic resonance. Recent advances in high-resolution accurate mass spectrometry, however, allow innovative screening approaches to characterize leachate samples to a greater extent than previously possible. Leachate from the S-metolachlor study was screened for accurate masses (±5 ppm of the nominal mass) corresponding to more than 400 hypothetical metabolite structures. A refined list of plausible metabolites was constructed from these data to provide a comprehensive description of the most likely metabolites present. The properties of these metabolites were then evaluated using a principal component analysis model, based on molecular descriptors, to visualize the entire chemical space and to cluster the metabolites into a number of subclasses. This characterization and principal component analysis evaluation enabled the selection of suitable representative metabolites that were subsequently used as exemplars to assess the toxicological relevance of the leachate as a whole. Environ Toxicol Chem 2016;35:1401-1412. © 2015 SETAC. PMID:26627902
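The screening step described above amounts to matching each observed accurate mass against a list of hypothetical metabolite masses within a ±5 ppm window. A minimal sketch of that matching (the metabolite names and masses below are invented, not from the study):

```python
def ppm_matches(observed_mz, candidates, tol_ppm=5.0):
    """Screen observed accurate masses against candidate metabolite
    masses within a +/- tol_ppm window (names/values illustrative)."""
    hits = []
    for name, mass in candidates.items():
        for mz in observed_mz:
            # relative mass error in parts per million
            if abs(mz - mass) / mass * 1e6 <= tol_ppm:
                hits.append((name, mz))
    return hits

candidates = {"metabolite_A": 279.1081, "metabolite_B": 295.1030}  # hypothetical
observed = [279.1084, 310.2000]
print(ppm_matches(observed, candidates))  # → [('metabolite_A', 279.1084)]
```

In practice the candidate list would be generated from hypothesized biotransformations of the parent compound, as the abstract describes for the >400 hypothetical S-metolachlor metabolites.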

  13. High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.

    2015-12-01

We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw9.0) and investigate methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) that were recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays agree well with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the necessity to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several
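The sensitivity of inundation results to Manning's roughness coefficient can be illustrated with Manning's formula for open-channel flow speed; the depth and slope values below are illustrative, not from the study:

```python
def manning_velocity(n, depth_m, slope):
    """Manning's formula v = R^(2/3) * S^(1/2) / n for a wide channel,
    where hydraulic radius R ~ depth. Used only to illustrate the
    sensitivity of flow speed to the roughness coefficient n."""
    return depth_m**(2.0/3.0) * slope**0.5 / n

# Illustrative 2 m flow depth on a 0.001 slope
v_smooth = manning_velocity(0.025, depth_m=2.0, slope=0.001)
v_rough  = manning_velocity(0.033, depth_m=2.0, slope=0.001)
print(f"{v_smooth:.2f} m/s -> {v_rough:.2f} m/s")  # → 2.01 m/s -> 1.52 m/s
```

Raising n from 0.025 to 0.033 slows the overland flow by roughly a quarter, which is consistent with the reduced inundation extent the modified coefficient is meant to produce.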

  14. A Highly Accurate Stress Measurement System for Producing Precise X-Ray Masks

    NASA Astrophysics Data System (ADS)

    Oda, Masatoshi; Une, Atsunobu; Okada, Ikuo; Shinohara, Shinji; Nakayama, Yasuo; Yoshihara, Hideo

    1995-12-01

    A new system that measures stress in film deposited on Si wafers has been developed to produce highly accurate X-ray masks. The system consists of very rigid air sliders, an electrostatic sensor, and a soft-handling wafer chuck. With the system, wafer warp is precisely measured before and after film deposition, and the stress distribution is calculated from those measurements. Wafer warps can be measured with a repeatability of a few nanometers by this system. The stress distribution of absorber film on 2-mm-thick Si wafers can be determined with an accuracy of ±5 MPa. The stress distribution agrees well with the pattern position shifts in the membrane.
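The stress computation from before/after warp measurements is conventionally done with Stoney's equation; the sketch below shows that relation under stated assumptions (the silicon biaxial modulus and all warp/thickness numbers are illustrative, and the paper's exact procedure may differ):

```python
def stoney_stress(delta_warp_m, scan_radius_m, t_sub_m, t_film_m,
                  E_sub=169e9, nu_sub=0.22):
    """Film stress from the change in wafer bow via Stoney's equation.
    Curvature is taken from the sagitta: kappa = 2*delta / r^2.
    Substrate properties are illustrative values for silicon."""
    kappa = 2.0 * delta_warp_m / scan_radius_m**2
    return E_sub * t_sub_m**2 / (6.0 * (1.0 - nu_sub) * t_film_m) * kappa

# Illustrative numbers: 100 nm bow change over a 40 mm scan radius,
# 2-mm-thick substrate, 0.5 um absorber film
sigma = stoney_stress(100e-9, 0.04, 2e-3, 0.5e-6)
print(f"{sigma/1e6:.1f} MPa")  # → 36.1 MPa
```

The numbers make the nanometre-repeatability requirement concrete: on a thick 2 mm substrate, a bow change of only ~100 nm already corresponds to tens of MPa of film stress.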

  15. Highly accurate thickness measurement of multi-layered automotive paints using terahertz technology

    NASA Astrophysics Data System (ADS)

    Krimi, Soufiene; Klier, Jens; Jonuscheit, Joachim; von Freymann, Georg; Urbansky, Ralph; Beigang, René

    2016-07-01

    In this contribution, we present a highly accurate approach for thickness measurements of multi-layered automotive paints using terahertz time domain spectroscopy in reflection geometry. The proposed method combines the benefits of a model-based material parameters extraction method to calibrate the paint coatings, a generalized Rouard's method to simulate the terahertz radiation behavior within arbitrary thin films, and the robustness of a powerful evolutionary optimization algorithm to increase the sensitivity of the minimum thickness measurement limit. Within the framework of this work, a self-calibration model is introduced, which takes into consideration the real industrial challenges such as the effect of wet-on-wet spray in the painting process.

  16. Geometrically invariant and high capacity image watermarking scheme using accurate radial transform

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Ranade, Sukhjeet K.

    2013-12-01

Angular radial transform (ART) is a region-based descriptor that possesses many attractive features, such as rotation invariance, low computational complexity and resilience to noise, which make it more suitable for invariant image watermarking than many transform-domain image watermarking techniques. In this paper, we introduce ART for a fast and geometrically invariant image watermarking scheme with high embedding capacity. We also develop an accurate and fast framework for the computation of ART coefficients based on Gaussian-quadrature numerical integration, 8-way symmetry/anti-symmetry properties and recursive relations for the calculation of sinusoidal kernel functions. The ART coefficients so computed are then used to embed a binary watermark using dither modulation. Experimental studies reveal that the proposed watermarking scheme not only provides better robustness against geometric transformations and other signal processing distortions, but also has superior advantages over existing schemes in terms of embedding capacity, speed and visual imperceptibility.

  17. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  18. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  19. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
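Datasheet-based panel models of the kind this abstract describes are typically variants of a single-diode equivalent circuit. As a hedged sketch only (all parameter values below are invented, not taken from the paper or any datasheet), the I-V curve and the maximum power point can be computed like this:

```python
import numpy as np

def pv_current(v, i_ph=5.0, i_0=1e-7, n=1.3, t=298.15, n_s=36, r_s=0.2):
    """Panel current from a simplified single-diode model (series
    resistance only; all parameter values are illustrative), solved
    for I by Newton iteration on the implicit diode equation."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = n * n_s * k * t / q  # modified thermal voltage of the panel
    i = i_ph
    for _ in range(50):
        f = i_ph - i_0 * (np.exp((v + i * r_s) / vt) - 1.0) - i
        df = -i_0 * np.exp((v + i * r_s) / vt) * r_s / vt - 1.0
        i -= f / df
    return i

# Sweep the I-V curve and locate the maximum power point (MPP)
volts = np.linspace(0.0, 22.0, 441)
amps = np.array([pv_current(v) for v in volts])
powers = volts * amps
k = int(np.argmax(powers))
print(f"MPP ~ {volts[k]:.1f} V, {powers[k]:.1f} W")
```

An MPPT simulation of the sort the paper benchmarks would then perturb the operating voltage (e.g., perturb-and-observe) against this model while irradiance and temperature inputs vary.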

  20. Conservative high-order-accurate finite-difference methods for curvilinear grids

    NASA Technical Reports Server (NTRS)

Rai, Man M.; Chakravarthy, Sukumar

    1993-01-01

Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form of the conservation equations. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher-order accuracy leads to increased computational efficiency.
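The building block of such schemes is a fourth-order-accurate difference stencil. A quick numerical check (generic stencil, not the paper's specific curvilinear scheme) that halving the grid spacing shrinks the error by about 2^4 = 16:

```python
import numpy as np

def d4(f, x, h):
    """Fourth-order-accurate central difference for f'(x) on a uniform
    grid: the classic 5-point stencil with truncation error O(h^4)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Halving h should shrink the error by roughly 2^4 = 16
x = 0.7
e1 = abs(d4(np.sin, x, 0.1) - np.cos(x))
e2 = abs(d4(np.sin, x, 0.05) - np.cos(x))
print(e1 / e2)  # ratio close to 16 for a fourth-order scheme
```

This observed convergence rate is exactly the kind of evidence the paper uses to demonstrate that its boundary modifications preserve overall high-order accuracy.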

  1. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
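Knowledge Tracing, one of the two approaches compared above, is usually implemented as the standard Bayesian Knowledge Tracing update: a posterior over "knows the skill" given the response, followed by a learning transition. A minimal sketch (the slip/guess/learn values are illustrative, not fitted parameters from the paper):

```python
def kt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayes update on the
    response, then the learning transition (standard BKT equations;
    parameter values are illustrative)."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn  # chance of learning after practice

p = 0.3  # prior probability the student knows the skill
for resp in [1, 1, 0, 1]:  # observed correctness sequence
    p = kt_update(p, resp)
print(round(p, 3))  # → 0.919
```

PFA, by contrast, is a logistic regression over counts of prior successes and failures; the paper's comparison hinges on how well each formulation predicts the next response.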

  2. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
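The classification step above uses linear discriminant analysis. A minimal two-class Fisher-discriminant sketch on synthetic two-feature "spectra" (the data, class means, and feature count are invented; the study used full Raman spectra and eight age classes):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for Raman features of "fresh" vs "aged" samples
fresh = rng.normal([1.0, 0.5], 0.2, size=(30, 2))
aged  = rng.normal([1.6, 1.1], 0.2, size=(30, 2))

# Two-class Fisher discriminant: project on w = S_w^-1 (mu1 - mu0)
mu0, mu1 = fresh.mean(0), aged.mean(0)
s_w = np.cov(fresh.T) + np.cov(aged.T)        # within-class scatter
w = np.linalg.solve(s_w, mu1 - mu0)
thresh = w @ (mu0 + mu1) / 2.0                # midpoint decision threshold

pred_fresh = fresh @ w > thresh               # True here is a misclassification
pred_aged  = aged @ w > thresh
acc = (np.sum(~pred_fresh) + np.sum(pred_aged)) / 60.0
print(acc)
```

A multi-class extension (one discriminant per class pair, or a shared subspace) plus leave-one-out cross-validation would mirror the protocol the abstract describes.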

  3. Accurate Quantification of High Density Lipoprotein Particle Concentration by Calibrated Ion Mobility Analysis

    PubMed Central

    Hutchins, Patrick M.; Ronsein, Graziella E.; Monette, Jeffrey S.; Pamir, Nathalie; Wimberger, Jake; He, Yi; Anantharamaiah, G.M.; Kim, Daniel Seung; Ranchalis, Jane E.; Jarvik, Gail P.; Vaisar, Tomas; Heinecke, Jay W.

    2015-01-01

Background It is critical to develop new metrics to determine whether high density lipoprotein (HDL) is cardioprotective in humans. One promising approach is HDL particle concentration (HDL-P) – the size and concentration of HDL in plasma or serum. However, the two methods currently used to determine HDL-P yield concentrations that differ more than 5-fold. We therefore developed and validated an improved approach to quantify HDL-P, termed calibrated ion mobility analysis (calibrated IMA). Methods HDL was isolated from plasma by ultracentrifugation, introduced into the gas phase with electrospray ionization, separated by size, and quantified by particle counting. A calibration curve constructed with purified proteins was used to correct for the ionization efficiency of HDL particles. Results The concentrations of gold nanoparticles and reconstituted HDLs measured by calibrated IMA were indistinguishable from concentrations determined by orthogonal methods. In plasma of control (n=40) and cerebrovascular disease (n=40) subjects, three subspecies of HDL were reproducibly measured, with an estimated total HDL-P of 13.4±2.4 µM (mean±SD). HDL-C accounted for 48% of the variance in HDL-P. HDL-P was significantly lower in subjects with cerebrovascular disease, and this difference remained significant after adjustment for HDL cholesterol levels. Conclusions Calibrated IMA accurately and reproducibly determined the concentration of gold nanoparticles and synthetic HDL, strongly suggesting the method could accurately quantify HDL particle concentration. Importantly, the estimated stoichiometry of apoA-I determined by calibrated IMA was 3–4 per HDL particle, in excellent agreement with current structural models. Furthermore, HDL-P associated with cardiovascular disease status in a clinical population independently of HDL cholesterol. PMID:25225166

  4. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  5. High-order accurate physical-constraints-preserving finite difference WENO schemes for special relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Wu, Kailiang; Tang, Huazhong

    2015-10-01

The paper develops high-order accurate physical-constraints-preserving finite difference WENO schemes for special relativistic hydrodynamical (RHD) equations, built on the local Lax-Friedrichs splitting, the WENO reconstruction, the physical-constraints-preserving flux limiter, and the high-order strong stability preserving time discretization. They are extensions of the positivity-preserving finite difference WENO schemes for the non-relativistic Euler equations [20]. However, developing physical-constraints-preserving methods for the RHD system becomes much more difficult than in the non-relativistic case because of the strong coupling between the RHD equations, the absence of explicit formulas for the primitive variables and the flux vectors in terms of the conservative vector, and one more physical constraint for the fluid velocity in addition to the positivity of the rest-mass density and the pressure. The key is to prove the convexity and other properties of the admissible state set and to discover a concave function with respect to the conservative vector instead of the pressure, which is an important ingredient to enforce the positivity-preserving property in the non-relativistic case. Several one- and two-dimensional numerical examples are used to demonstrate the accuracy, robustness, and effectiveness of the proposed physical-constraints-preserving schemes in solving RHD problems with large Lorentz factors, strong discontinuities, or low rest-mass density or pressure, etc.

  6. Revisiting binary sequence length requirements to accurately emulate optical transmission systems in highly dispersive regime

    NASA Astrophysics Data System (ADS)

    Grellier, Edouard; Antona, Jean-Christophe; Bononi, Alberto; Bigo, Sébastien

    2008-11-01

When increasing the channel bit rate beyond 10 Gb/s, or when operating over fiber lines with sparse or no in-line dispersion compensation, Kerr-like non-linear effects can be considered second order with respect to dispersive effects, because pulse broadening can expand over numerous neighboring pulses before optical non-linear effects imprint their signature noticeably. To accurately emulate the interactions between pulses in this case, a few studies emphasized that Pseudo-Random Binary Sequences (PRBS) should be used, with an exponential dependence of the required PRBS length on bit rate and accumulated dispersion. In this paper, we explain our strategy to numerically estimate the required number of random, noisy bits for Monte-Carlo simulations, and show that it increases only weakly in the presence of pulse-to-pulse correlations and commonly tolerated levels of non-linearities (i.e., leading to transmission penalties as high as 1.5 dB, for reference BERs of 10^-2, 10^-3 or 10^-5). Then we determine the actual required PRBS length that yields the same (sufficient) BER accuracy as the MC method. We demonstrate its actual dependence on BER, and show that MC theory provides a reliable upper bound in FEC-assisted, highly dispersive systems.
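The number of simulated bits needed to resolve a target BER can be bounded with a standard binomial-proportion argument: the relative standard error of a Monte-Carlo BER estimate scales as sqrt((1-p)/(N*p)). The sketch below is this common rule of thumb, not the paper's exact criterion:

```python
from math import ceil

def mc_bits_required(ber, rel_err=0.1, z=1.96):
    """Rough number of simulated bits so that a Monte-Carlo BER
    estimate has relative error `rel_err` at ~95% confidence
    (binomial proportion; a textbook rule of thumb)."""
    return ceil(z**2 * (1.0 - ber) / (rel_err**2 * ber))

for target in (1e-2, 1e-3, 1e-5):
    print(f"BER {target:g}: ~{mc_bits_required(target):.2e} bits")
```

The inverse dependence on the BER itself is why the low reference BERs considered in the paper (down to 10^-5) drive sequence-length requirements into the tens of millions of bits.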

  7. Highly ordered protein nanorings designed by accurate control of glutathione S-transferase self-assembly.

    PubMed

    Bai, Yushi; Luo, Quan; Zhang, Wei; Miao, Lu; Xu, Jiayun; Li, Hongbin; Liu, Junqiu

    2013-07-31

    Protein self-assembly into exquisite, complex, yet highly ordered architectures represents the supreme wisdom of nature. However, precise manipulation of protein self-assembly behavior in vitro is a great challenge. Here we report that by taking advantage of the cooperation of metal-ion-chelating interactions and nonspecific protein-protein interactions, we achieved accurate control of the orientation of proteins and their self-assembly into protein nanorings. As a building block, we utilized the C2-symmetric protein sjGST-2His, a variant of glutathione S-transferase from Schistosoma japonicum having two properly oriented His metal-chelating sites on the surface. Through synergic metal-coordination and non-covalent interactions, sjGST-2His self-assembled in a fixed bending manner to form highly ordered protein nanorings. The diameters of the nanorings can be regulated by tuning the strength of the non-covalent interaction network between sjGST-2His interfaces through variation of the ionic strength of the solution. This work provides a de novo design strategy that can be applied in the construction of novel protein superstructures. PMID:23865524

  8. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-06-01

Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method can efficiently reduce the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be efficiently solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd-degree NURBS curves and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
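The core of any parametric interpolator is choosing the parameter increment so the realized chord speed matches the commanded feedrate. The paper solves a quartic for this increment; the sketch below instead uses only the common first-order Taylor increment on a quadratic Bezier stand-in (all control points and feed values are invented) to show where the fluctuation comes from and how it is measured:

```python
import numpy as np

# Quadratic Bezier as a simple parametric stand-in for a NURBS segment
P = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
def c(u):  return (1-u)**2*P[0] + 2*u*(1-u)*P[1] + u**2*P[2]
def dc(u): return 2*(1-u)*(P[1]-P[0]) + 2*u*(P[2]-P[1])

v, T = 50.0, 0.001            # commanded feedrate (mm/s), cycle time (s)
u, fluct = 0.0, []
while u < 1.0:
    u_next = u + v*T/np.linalg.norm(dc(u))        # 1st-order Taylor increment
    if u_next > 1.0:
        break
    actual = np.linalg.norm(c(u_next) - c(u)) / T  # realized chord feedrate
    fluct.append(abs(actual - v) / v)
    u = u_next
print(f"max feedrate fluctuation: {max(fluct)*100:.3f}%")
```

Even this crude increment keeps the fluctuation below a percent on a gentle curve; the quartic formulation in the paper drives it to zero (2nd degree) or near zero (3rd degree) where curvature is strong.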

  9. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  10. Assessing temporal flux of plant hormones in stored processing potatoes using high definition accurate mass spectrometry

    PubMed Central

    Ordaz-Ortiz, José Juan; Foukaraki, Sofia; Terry, Leon Alexander

    2015-01-01

Plant hormones are important molecules which at low concentrations can regulate various physiological processes. Mass spectrometry has become a powerful technique for the quantification of multiple classes of plant hormones because of its high sensitivity and selectivity. We developed a new ultrahigh-pressure liquid chromatography–full-scan high-definition accurate mass spectrometry method for the simultaneous determination of abscisic acid and four of its metabolites (phaseic acid, dihydrophaseic acid, 7′-hydroxy-abscisic acid and abscisic acid glucose ester), the cytokinins zeatin and zeatin riboside, the gibberellins GA1, GA3, GA4 and GA7, and indole-3-acetyl-L-aspartic acid. We measured the amount of plant hormones in the flesh and skin of two processing potato cvs. Sylvana and Russet Burbank stored for up to 30 weeks at 6 °C under ambient air conditions. Herein, we report for the first time that abscisic acid glucose ester appears to accumulate in the skin of potato tubers throughout storage. The method achieved a lowest limit of detection of 0.22 ng g−1 of dry weight and a limit of quantification of 0.74 ng g−1 dry weight (zeatin riboside), and was able to recover, detect and quantify a total of 12 plant hormones spiked on the flesh and skin of potato tubers. In addition, the mass accuracy for all compounds (<5 ppm) was evaluated. PMID:26504563

  11. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  12. High-power CMOS current driver with accurate transconductance for electrical impedance tomography.

    PubMed

    Constantinou, Loucas; Triantis, Iasonas F; Bayford, Richard; Demosthenous, Andreas

    2014-08-01

Current drivers are fundamental circuits in bioimpedance measurements, including electrical impedance tomography (EIT). In the case of EIT, the current driver is required to have a large output impedance to guarantee high current accuracy over a wide range of load impedance values. This paper presents an integrated current driver which meets these requirements and is capable of delivering large sinusoidal currents to the load. The current driver employs a differential architecture and negative feedback, the latter allowing the output current to be accurately set by the ratio of the input voltage to a resistor value. The circuit was fabricated in a 0.6-μm high-voltage CMOS process technology and its core occupies a silicon area of 0.64 mm². It operates from a ±9 V power supply and can deliver output currents up to 5 mA p-p. The accuracy of the maximum output current is within 0.41% up to 500 kHz, reducing to 0.47% at 1 MHz with a total harmonic distortion of 0.69%. The output impedance is 665 kΩ at 100 kHz and 372 kΩ at 500 kHz. PMID:25073130

  13. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  14. High resolution DEM from Tandem-X interferometry: an accurate tool to characterize volcanic activity

    NASA Astrophysics Data System (ADS)

    Albino, Fabien; Kervyn, Francois

    2013-04-01

    The Tandem-X mission was launched by the German space agency (DLR) in June 2010. It is a new-generation high-resolution SAR sensor mainly dedicated to topographic applications. For our research focused on the volcano-tectonic activity of the Kivu Rift area, a set of Tandem-X bistatic radar images was used to produce a high-resolution InSAR DEM of the Virunga Volcanic Province (VVP). The VVP is part of the western branch of the African rift, situated at the boundary between D.R. Congo, Rwanda and Uganda. It has two highly active volcanoes, Nyiragongo and Nyamulagira. A first task concerns the quantitative assessment of the vertical accuracy that can be achieved with these new data. The new DEMs are compared to other spaceborne datasets (SRTM, ASTER) and also to field measurements given by differential GPS. Multi-temporal radar acquisitions allow us to produce several DEMs of the same area, which proved very useful for an active volcanic area where new geomorphological features (faults, fissures, volcanic cones and lava flows) appear continuously through time. For example, since the year 2000, the time of the SRTM acquisition, there has been one eruption at Nyiragongo (2002) and six eruptions at Nyamulagira (2001, 2002, 2004, 2006, 2010 and 2011), all of which induced large changes in the landscape through the emplacement of new lava fields and scoria cones. Our repetitive Tandem-X DEM production gives us a tool to identify and also quantify, in terms of size and volume, all the topographic changes related to this past volcanic activity. These parameters are high-value information for improving the understanding of the Virunga volcanoes; the accurate estimation of erupted volumes and knowledge of the structural features associated with past eruptions are key to understanding the volcanic system, improving hazard assessment, and finally contributing to risk mitigation in a densely populated area.

  15. Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis

    NASA Astrophysics Data System (ADS)

    Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.

    2013-08-01

    It has already been shown that the shape of the current pulse produced by a SiPM in response to an incident photon is significantly affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading edge discriminator (LED) technique.
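When the time pick-off is extracted with a leading-edge discriminator, the estimate is simply the time of the first threshold crossing of the sampled pulse. A minimal sketch of that operation (illustrative samples and threshold, not the authors' simulation setup):

```python
def leading_edge_time(samples, dt, threshold):
    """Time pick-off by leading-edge discrimination (LED): return the time
    at which the sampled pulse first crosses `threshold`, using linear
    interpolation between the two bracketing samples. Returns None if the
    pulse never reaches the threshold."""
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)  # linear interpolation
            return (i - 1 + frac) * dt
    return None

# A toy pulse sampled every 1 ns, discriminated at half amplitude:
pulse = [0.0, 0.1, 0.4, 0.9, 1.0, 0.8]
print(leading_edge_time(pulse, dt=1e-9, threshold=0.5))  # 2.2e-09 s
```

Because the crossing time depends on the pulse's leading-edge slope, anything that reshapes the pulse (input resistance, bandwidth, wiring inductance) shifts this estimate, which is the effect the study quantifies.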

  16. Towards first-principles based prediction of highly accurate electrochemical Pourbaix diagrams

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenhua; Chan, Maria; Greeley, Jeff

    2015-03-01

    Electrochemical Pourbaix diagrams lie at the heart of aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such Pourbaix diagrams, inherent errors in the description of strongly-correlated transition metal (hydr)oxides, together with the neglect of weak van der Waals (vdW) interactions, have limited the reliability of the predictions for even the simplest bulk systems; corresponding predictions for more complex alloy or surface structures are even more challenging. Through the introduction of a Hubbard U correction, employment of a state-of-the-art van der Waals functional, and use of pure water as a reference state for the calculations, these errors are systematically corrected. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, and binary and ternary oxides, for which the thermodynamics of oxidation and reduction can be accurately described with standard errors of less than 0.04 eV in comparison with experiment.

  17. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  18. Procedures for accurate U and Th isotope measurements by high precision MC-ICPMS

    NASA Astrophysics Data System (ADS)

    Hoffmann, Dirk L.; Prytulak, Julie; Richards, David A.; Elliott, Tim; Coath, Christopher D.; Smart, Peter L.; Scholz, Denis

    2007-07-01

    We present multi-collector (MC) inductively coupled plasma mass spectrometry (ICPMS) protocols developed to obtain high precision, accurate determinations of U and Th isotope ratios that are applicable to a wide range of geological materials. MC-ICPMS provides a means to make high precision measurements, but a recent laboratory inter-comparison, Regular European Inter-laboratory Measurement Evaluation Programme (REIMEP)-18, indicates that accurate results for U isotope ratios are not currently achieved by all facilities using MC-ICPMS. We detail a suite of protocols that can be used for a wide variety of U and Th isotope ratios and total loads. Particular attention is devoted to instrument optimisation, instrumental backgrounds, stability and memory effects, multiplier nonlinearity and yield determinations. Our results indicate that the extents of mass fractionation of U and Th analyses run under similar instrumental conditions are 0.48% per amu and 0.45% per amu, respectively, which cannot be distinguished at per mil precision levels. However, we note that the multiplier-Faraday cup gain can differ significantly, by 1%, between U and Th, and thus a U standard should not be used to correct Th measurements. For this reason, a combination of thermal ionisation mass spectrometry (TIMS) and MC-ICPMS methods has been used to determine the isotopic ratio of an in-house Th standard (TEDDi). As part of our methods, TEDDi and the U standard NBL-112a are used as bracketing standards for Th and U samples, respectively. While the in-house Th standard has a 229Th-230Th-232Th composition specifically suited to bracketing low-232Th analyses, the methods have also been successful for silicates with 230Th/232Th <10(-5). Using NBL-112a, TEDDi and a gravimetrically calibrated mixed 229Th-236U spike, we demonstrate secular equilibrium in natural materials such as Table Mountain Latite and a Long Valley Glass Mountain sample with a reproducibility of ±3.8 per mil for 230Th/238U and ±2.8 per mil for 234U
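The quoted per-amu fractionation translates into a multiplicative correction on a measured ratio. A schematic sketch, assuming a simple linear law (exponential laws are also common, and the sign convention is instrument- and laboratory-dependent; the values below are illustrative):

```python
def mass_bias_correct(r_measured, delta_m_amu, f_per_amu):
    """Apply a linear mass-fractionation correction to a measured isotope
    ratio: R_true ~ R_measured * (1 + f * delta_m), where f is the
    fractionation per amu (e.g. the ~0.0048/amu reported for U here) and
    delta_m the mass difference between the numerator and denominator
    isotopes. Schematic only: the functional form (linear vs exponential)
    and the sign convention vary between instruments."""
    return r_measured * (1.0 + f_per_amu * delta_m_amu)

# A ratio of isotopes one amu apart shifts by 0.48% under 0.48%/amu bias:
print(mass_bias_correct(1.0, 1, 0.0048))  # 1.0048
```

The abstract's point that the U and Th fractionations (0.48% vs 0.45% per amu) are indistinguishable at per mil precision follows directly: the difference is only 0.03% per amu.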

  19. A High-Accurate and High-Efficient Monte Carlo Code by Improved Molière Functions with Ionization

    NASA Astrophysics Data System (ADS)

    Nakatsuka, Takao; Okei, Kazuhide

    2003-07-01

    Although the Molière theory of multiple Coulomb scattering is less accurate in tracing solid angles than the Goudsmit and Saunderson theory due to the small-angle approximation, it still plays a very important role in the development of highly efficient simulation codes for relativistic charged particles such as cosmic-ray particles. The Molière expansion is well explained by a physical model: the normal distribution attributed to the high-frequency moderate scatterings, and the subsequent correction terms attributed to the additive large-angle scatterings. Based on these physical concepts, we have built an improved, highly accurate and highly efficient Monte Carlo code taking account of ionization loss.

  20. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, is comprised of many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made, which increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho

  1. Ion chromatography as highly suitable method for rapid and accurate determination of antibiotic fosfomycin in pharmaceutical wastewater.

    PubMed

    Zeng, Ping; Xie, Xiaolin; Song, Yonghui; Liu, Ruixia; Zhu, Chaowei; Galarneau, Anne; Pic, Jean-Stéphane

    2014-01-01

    A rapid and accurate ion chromatography (IC) method (limit of detection as low as 0.06 mg L(-1)) for fosfomycin concentration determination in pharmaceutical industrial wastewater was developed. This method was compared with the performance of high performance liquid chromatography determination (with a high detection limit of 96.0 mg L(-1)) and ultraviolet spectrometry after reaction with alizarin (difficult to perform in colored solutions). The accuracy of the IC method was established in the linear range of 1.0-15.0 mg L(-1), and a linear correlation was found with a correlation coefficient of 0.9998. The recoveries of fosfomycin from industrial pharmaceutical wastewater at spiking concentrations of 2.0, 5.0 and 8.0 mg L(-1) ranged from 81.91 to 94.74%, with a relative standard deviation (RSD) from 1 to 4%. The recoveries from the effluent of a sequencing batch reactor treating fosfomycin with activated sludge, at spiking concentrations of 5.0, 8.0 and 10.0 mg L(-1), ranged from 98.25 to 99.91%, with an RSD from 1 to 2%. The developed IC procedure provided a rapid, reliable and sensitive method for the determination of fosfomycin concentration in industrial pharmaceutical wastewater and samples containing complex components. PMID:24845315
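The reported recovery percentages follow the standard spiked-sample calculation; a minimal sketch with hypothetical concentrations (not values from the study):

```python
def spike_recovery(measured, background, spiked_amount):
    """Percent recovery of a spiked analyte: the fraction of the added
    (spiked) concentration that is actually measured above the sample's
    background concentration."""
    return (measured - background) / spiked_amount * 100.0

# Hypothetical example: a wastewater sample with 0.5 mg/L background,
# spiked with 5.0 mg/L fosfomycin, measuring 5.2 mg/L in total:
print(round(spike_recovery(5.2, 0.5, 5.0), 1))  # 94.0 (%)
```

Recoveries near 100% with a small RSD, as reported for the IC method, indicate that the matrix does not bias the measurement of the added analyte.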

   2. High-order accurate multi-phase simulations: building blocks and what's tricky about them

    NASA Astrophysics Data System (ADS)

    Kummer, Florian

    2015-11-01

    We are going to present a high-order numerical method for multi-phase flow problems, which employs a sharp interface representation by a level-set and an extended discontinuous Galerkin (XDG) discretization for the flow properties. The shape of the XDG basis functions is dynamically adapted to the position of the fluid interface, so that the spatial approximation space can represent jumps in pressure and kinks in velocity accurately. By this approach, the `hp-convergence' property of the classical discontinuous Galerkin (DG) method can be preserved for the low-regularity, discontinuous solutions, such as those appearing in multi-phase flows. Within the past years, several building blocks of such a method were presented: this includes numerical integration on cut-cells, the spatial discretization by the XDG method, precise evaluation of curvature and level-set algorithms tailored to the special requirements of XDG methods. The presentation covers a short review of these building blocks and their integration into a full multi-phase solver. A special emphasis is put on the discussion of the several pitfalls one may experience in the formulation of such a solver. Funded by the German Research Foundation.

  3. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds.

    PubMed

    Dean, David; Jonathan, Wallace; Siblani, Ali; Wang, Martha O; Kim, Kyobum; Mikos, Antonios G; Fisher, John P

    2012-03-01

    Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds effects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or less accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light processing (cDLP) which utilizes a DLP(®) (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory(®). To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer, poly(propylene fumarate) (PPF), titanium dioxide (TiO(2)) as a dye, Irgacure(®) 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity. PMID:23066427

  4. Highly accurate analytic formulae for projectile motion subjected to quadratic drag

    NASA Astrophysics Data System (ADS)

    Turkyilmazoglu, Mustafa

    2016-05-01

    The classical problem of the motion of a projectile fired (thrown) into the horizon through resistive air exerting a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. Finding elegant analytical approximations for the most interesting engineering features of the dynamical behavior of the projectile is the principal target. To this end, some explicit analytical expressions are derived that accurately predict the maximum height, its arrival time, as well as the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted to the initial speed and firing angle of the object, nor to the drag coefficient of the medium. In combination with the available approximations in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
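Since the paper's closed-form approximations are not reproduced in this abstract, a plain numerical integration (a sketch, not the authors' formulas) is the simplest way to obtain the quantities they approximate, e.g. the maximum height under quadratic drag:

```python
import math

def max_height_quadratic_drag(v0, angle_deg, k, g=9.81, dt=1e-4):
    """Numerically integrate 2D projectile motion with quadratic drag
    (acceleration = gravity - k * |v| * v) using semi-implicit Euler
    steps, and return the maximum height. A plain reference integration,
    not the paper's analytic approximations."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    y = h_max = 0.0
    while vy > 0.0:  # ascend until the apex
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        y += vy * dt
        h_max = max(h_max, y)
    return h_max

# Sanity check: with no drag (k = 0) the apex approaches the textbook
# value v0^2 * sin^2(theta) / (2 g), about 22.94 m for 30 m/s at 45 deg.
print(round(max_height_quadratic_drag(30.0, 45.0, 0.0), 2))
```

Any nonzero drag coefficient lowers the apex relative to the drag-free case, which is the regime the paper's formulas cover without restriction on speed, angle, or drag coefficient.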

  5. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds

    PubMed Central

    Dean, David; Wallace, Jonathan; Siblani, Ali; Wang, Martha O.; Kim, Kyobum; Mikos, Antonios G.; Fisher, John P.

    2012-01-01

    Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds effects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or less accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light processing (cDLP) which utilizes a DLP® (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory®. To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer, poly(propylene fumarate) (PPF), titanium dioxide (TiO2) as a dye, Irgacure® 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity. PMID:23066427

  6. Highly accurate retrieval method of Japanese document images through a combination of morphological analysis and OCR

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Takebe, Hiroaki; Kurokawa, Koji; Saitoh, Takahiro; Naoi, Satoshi

    2001-12-01

    We have developed a method that allows Japanese document images to be retrieved more accurately by using OCR character-candidate information and a conventional plain-text search engine. In this method, the document image is first recognized by normal OCR to produce text. Keyword areas are then estimated from the normal OCR-produced text through morphological analysis. A lattice of candidate-character codes is extracted from these areas, and then character strings are extracted from the lattice using a word-matching method in noun areas and a K-th DP-matching method in undefined word areas. Finally, these extracted character strings are added to the normal OCR-produced text to improve document retrieval accuracy when using a conventional plain-text search engine. Experimental results from searches of 49 OHP sheet images revealed that our method has a high recall rate of 98.2%, compared to 90.3% with a conventional method using only normal OCR-produced text, while requiring about the same processing time as normal OCR.

  7. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize the impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  8. High-Performance Thermoelectric Semiconductors

    NASA Technical Reports Server (NTRS)

    Fleurial, Jean-Pierre; Caillat, Thierry; Borshchevsky, Alexander

    1994-01-01

    Figures of merit almost double those of current state-of-the-art thermoelectric materials. IrSb3 is a semiconductor found to exhibit exceptional thermoelectric properties. CoSb3 and RhSb3 have the same skutterudite crystallographic structure as IrSb3, and exhibit exceptional transport properties expected to contribute to high thermoelectric performance. These three compounds form solid solutions. This combination of properties offers potential for the development of new high-performance thermoelectric materials for more efficient thermoelectric power generators, coolers, and detectors.

  9. High-performance membrane chromatography.

    PubMed

    Belenkii, B G; Malt'sev, V G

    1995-02-01

    In gradient chromatography for proteins migrating along the chromatographic column, the critical distance X0 has been shown to exist at which the separation of zones is at a maximum and band spreading is at a minimum. With steep gradients and small elution velocity, the column length may be reduced to the level of membrane thickness--about one millimeter. The peculiarities of this novel separation method for proteins, high-performance membrane chromatography (HPMC), are discussed and stepwise elution is shown to be especially effective. HPMC combines the advantages of membrane technology and high-performance liquid chromatography, and avoids their drawbacks. PMID:7727132

  10. High Performance Photovoltaic Project Overview

    SciTech Connect

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  11. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs found by experimental approaches cover only a small fraction of the whole PPI network, and further, those approaches have inherent disadvantages, such as being time-consuming, expensive, and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction methods for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, combined with a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied in the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research. PMID:27571061

  12. Achieving accurate simulations of urban impacts on ozone at high resolution

    NASA Astrophysics Data System (ADS)

    Li, J.; Georgescu, M.; Hyde, P.; Mahalov, A.; Moustaoui, M.

    2014-11-01

    The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O3] are highly heterogeneous across the metropolitan area. Increased near surface [O3] due to urbanization of 10-20 ppb is predominantly a nighttime phenomenon while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas.
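In one dimension, the shape-preserving interpolation idea can be sketched with Fritsch-Carlson slope limiting. This is a generic illustration of monotonic cubic interpolation, not the authors' 2D emission-downscaling code; the data values are made up:

```python
def monotone_cubic(x, y, xq):
    """Shape-preserving (monotone) cubic Hermite interpolation in 1D,
    after Fritsch & Carlson: fit a piecewise cubic whose node slopes are
    limited so each interval stays monotone, then evaluate at xq.
    Assumes x is strictly increasing and x[0] <= xq <= x[-1]."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]  # secants
    m = [0.0] * n  # node derivatives
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2.0
    for i in range(n - 1):  # Fritsch-Carlson limiting
        if d[i] == 0.0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            s = a * a + b * b
            if s > 9.0:
                t = 3.0 / s ** 0.5
                m[i], m[i + 1] = t * a * d[i], t * b * d[i]
    # evaluate the cubic Hermite on the interval containing xq
    i = next(j for j in range(n - 1) if xq <= x[j + 1])
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    return ((1 + 2 * t) * (1 - t) ** 2 * y[i]
            + t * (1 - t) ** 2 * h * m[i]
            + t * t * (3 - 2 * t) * y[i + 1]
            + t * t * (t - 1) * h * m[i + 1])

# Monotone step-like data: the interpolant never overshoots [0, 1],
# which is the property that matters when downscaling emission fields.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
print(monotone_cubic(xs, ys, 1.5))
```

An unconstrained cubic spline through the same data would overshoot below 0 and above 1, producing spurious negative or inflated emission values; the monotone variant avoids exactly that artifact.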

  13. Uniqueness of a high-order accurate bicompact scheme for quasilinear hyperbolic equations

    NASA Astrophysics Data System (ADS)

    Bragin, M. D.; Rogov, B. V.

    2014-05-01

    The possibility of constructing new third- and fourth-order accurate differential-difference bicompact schemes is explored. The schemes are constructed for the one-dimensional quasilinear advection equation on a symmetric three-point spatial stencil. It is proved that this family of schemes consists of a single fourth-order accurate bicompact scheme. The result is extended to the case of an asymmetric three-point stencil.

  14. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
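The model form described, a Weibull cumulative curve in time, can be sketched as follows (parameter values hypothetical). It also makes clear why λ is called the characteristic time: at t = λ, conversion reaches 1 - 1/e, about 63.2% of its maximum, for any shape parameter n.

```python
import math

def weibull_conversion(t, lam, n, y_max=1.0):
    """Weibull-type saccharification curve: fraction of substrate
    hydrolyzed at time t, y(t) = y_max * (1 - exp(-(t / lam)^n)).
    `lam` is the characteristic time and `n` the shape parameter.
    A sketch of the model form described in the abstract; the specific
    parameter values used below are hypothetical."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

# At t = lam the curve sits at 1 - 1/e of y_max regardless of n:
print(round(weibull_conversion(48.0, 48.0, 1.3), 3))  # 0.632
```

A smaller fitted λ therefore means the system reaches a given fraction of its final conversion sooner, which is why λ serves as a single-number performance index for a saccharification setup.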

  15. Panelized high performance multilayer insulation

    NASA Technical Reports Server (NTRS)

    Burkley, R. A.; Shriver, C. B.; Stuckey, J. M.

    1968-01-01

    Multilayer insulation coverings with low-conductivity foam spacers are interleaved with quarter-mil aluminized polymer film radiation shields to cover flight-type liquid hydrogen tankage of space vehicles with a removable, structurally compatible, lightweight, high performance cryogenic insulation capable of surviving extended space mission environments.

  16. High performance rolling element bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Jr., Roger W. (Inventor); Olinger, Jr., John B. (Inventor); Owen, Samuel S. (Inventor); Poole, William E. (Inventor); Haluck, David A. (Inventor)

    1993-01-01

    A high performance rolling element bearing (5) which is particularly suitable for use in a cryogenically cooled environment, comprises a composite cage (45) formed from glass fibers disposed in a solid lubricant matrix of a fluorocarbon polymer. The cage includes inserts (50) formed from a mixture of a soft metal and a solid lubricant such as a fluorocarbon polymer.

  17. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors, the growth and field emission properties of carbon nanotubes and semiconducting nanowires, high performance thermoelectric materials, and other interesting materials. As a result of the research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  18. High performance storable propellant resistojet

    NASA Technical Reports Server (NTRS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985 resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to an 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985 fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base, NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state-of-the-art. Higher performance thrusters promised to increase the market demand for resistojets and enable space missions requiring higher performance. During the program three resistojets were fabricated and tested. High temperature wire and coupon materials tests were completed. A life test was conducted on an advanced gas generator.

  19. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system. PMID:15458092

  20. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. 
The RTbox can also receive external triggers and be used to measure RT with respect
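    The channel-combining idea behind the VideoSwitcher can be sketched numerically: summing a coarse 8-bit channel with a second 8-bit channel attenuated by a fixed ratio multiplies the number of distinct output levels. A minimal sketch, assuming an illustrative attenuation ratio of 128 (the actual ratio is set by the resistor network):

```python
import math


def combined_levels(bits=8, ratio=128):
    """Approximate number of distinct luminance levels when one channel
    provides coarse steps and a second, attenuated channel fills in
    `ratio` fine steps between them (ratio is a hypothetical value)."""
    return (2 ** bits) * ratio


def effective_bits(bits=8, ratio=128):
    """Equivalent bit depth of the combined signal."""
    return math.log2(combined_levels(bits, ratio))
```

    With these assumed numbers, two 8-bit channels combine to roughly 15 bits of monochrome luminance resolution, comfortably above the 12-bit requirement cited above.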

  1. Highly accurate incremental CCSD(T) calculations on aqua- and amine-complexes

    NASA Astrophysics Data System (ADS)

    Anacker, Tony; Friedrich, Joachim

    2013-07-01

    In this work, the accuracy of the second-order incremental expansion using the domain-specific basis set approach is tested for 20 cationic metal-aqua and 25 cationic metal-amine complexes. The accuracy of the approach is analysed by the statistical measures range, arithmetic mean, mean absolute deviation, root mean square deviation and standard deviation. Using these measures we find that the error due to the local approximations decreases with increasing basis set. Next we construct a local virtual space using projected atomic orbitals (PAOs). The accuracy of the incremental series in combination with a distance-based truncation of the PAO space is analysed and compared to the convergence of the incremental series within the domain-specific basis set approach. Furthermore, we establish the recently proposed incremental CCSD(T)|MP2 method as a benchmark method to obtain highly accurate CCSD(T) energies. In combination with a basis set of quintuple-ζ quality we establish benchmarks for the binding energies of the investigated complexes. Finally, we use the inc-CCSD(T)|MP2/aV5Z' binding energies of 45 complexes and 34 dissociation reactions to assess the accuracy of several state-of-the-art density functional theory (DFT) functionals like BP86, B3LYP, CAM-B3LYP, M06, PBE0 and TPSSh. With our implementation of the incremental scheme it was possible to compute the inc-CCSD(T)|MP2/aV5Z' energy for [Al(H2O)25]3+ (6106 AOs).
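    The statistical measures used above to quantify the local-approximation error are straightforward to compute; a minimal sketch (the population standard deviation is shown; the paper may use the sample form):

```python
import math


def error_stats(errors):
    """Range, arithmetic mean, mean absolute deviation, root mean square
    deviation and (population) standard deviation of a list of errors,
    e.g. incremental minus canonical energies."""
    n = len(errors)
    mean = sum(errors) / n
    return {
        "range": max(errors) - min(errors),
        "mean": mean,
        "mad": sum(abs(e) for e in errors) / n,
        "rmsd": math.sqrt(sum(e * e for e in errors) / n),
        "std": math.sqrt(sum((e - mean) ** 2 for e in errors) / n),
    }
```

    Note that RMSD and standard deviation coincide exactly when the mean error is zero, which is why both are reported separately.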

  2. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues are needed to provide the self-rotation signals necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
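    Deriving bias (the 50% point) and precision (the semi-interquartile range) from a psychometric curve can be sketched with simple linear interpolation; the heading values and response proportions below are illustrative, not the study's data:

```python
def _crossing(xs, ps, target):
    """Linearly interpolate the stimulus level at which the psychometric
    function crosses `target` proportion of 'rightward' responses."""
    for (x0, p0), (x1, p1) in zip(zip(xs, ps), zip(xs[1:], ps[1:])):
        if p0 <= target <= p1:
            return x0 + (target - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("target not bracketed by the data")


def bias_and_precision(headings_deg, p_right):
    """Bias = 50% point; precision = semi-interquartile range,
    i.e. half the distance between the 25% and 75% points."""
    bias = _crossing(headings_deg, p_right, 0.50)
    q25 = _crossing(headings_deg, p_right, 0.25)
    q75 = _crossing(headings_deg, p_right, 0.75)
    return bias, (q75 - q25) / 2
```

    A cumulative-Gaussian fit, as typically used in practice, would give smoother estimates; linear interpolation keeps the definitions explicit.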

  3. In-Depth Glycoproteomic Characterization of γ-Conglutin by High-Resolution Accurate Mass Spectrometry

    PubMed Central

    Schiarea, Silvia; Arnoldi, Lolita; Fanelli, Roberto; De Combarieu, Eric; Chiabrando, Chiara

    2013-01-01

    The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography–multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl) (Fuc) GlcNAc2, Man3(Xyl) (Fuc) GlcNAc2, GlcNAcMan3(Xyl) (Fuc) GlcNAc2 and GlcNAc 2Man3(Xyl) (Fuc) GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, that were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit. PMID:24069245

  4. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
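    The RGLM recipe (bootstrap aggregation, random feature subspaces, forward variable selection) can be caricatured in a few lines. The toy version below truncates forward selection to choosing the single best feature per bag and is in no way the randomGLM package, only an illustration of the ensemble structure:

```python
import random
import statistics


def _fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return my - b * mx, b


def rglm_toy(X, y, n_bags=25, seed=0):
    """Toy RGLM-style ensemble: bootstrap + random feature subspace +
    'forward selection' truncated to picking the single best feature."""
    rng = random.Random(seed)
    p = len(X[0])
    models = []
    for _ in range(n_bags):
        rows = [rng.randrange(len(y)) for _ in range(len(y))]   # bootstrap sample
        feats = rng.sample(range(p), max(1, p // 2))            # random subspace
        best = None
        for j in feats:
            xs = [X[i][j] for i in rows]
            ys = [y[i] for i in rows]
            a, b = _fit_line(xs, ys)
            sse = sum((a + b * x - t) ** 2 for x, t in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, j, a, b)
        models.append(best[1:])

    def predict(row):
        # bagged prediction: average over the per-bag selected models
        return statistics.fmean(a + b * row[j] for j, a, b in models)

    return predict
```

    Counting how often each feature is selected across bags would give a crude analogue of RGLM's variable importance measure.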

  5. High definition i-SCAN endoscopy with water immersion technique accurately reflects histological severity of celiac disease

    PubMed Central

    Iacucci, Marietta; Poon, Tiffany; Gui, X. Sean; Ghosh, Subrata

    2016-01-01

    Background and aims: Severe villous atrophy can be revealed with conventional white light endoscopy (WLE), however, milder grades or patchy villous atrophy are more difficult to detect. Novel endoscopic techniques such as high definition i-SCAN endoscopy with the water immersion technique (i-SCAN-HDWI) may provide the ability to visualize duodenal villi more accurately. We aimed to determine the performance of i-SCAN-HDWI in evaluating the severity of histological damage in the duodenum of patients with celiac disease. Patients and methods: A retrospective cohort study was performed in a single tertiary academic endoscopic center. We studied 58 patients (46 women; median age 36.5 years, range 18 – 72 years) with positive anti-TTG IgA antibody. The villous pattern of the second part of the duodenum was assessed by WLE and i-SCAN-HDWI. The endoscopic grades in both techniques were correlated using Marsh histologic grades by Spearman correlation coefficient. The diagnostic accuracy of i-SCAN-HDWI for detection of patchy or complete atrophy of the villi was evaluated. Results: A significant correlation was demonstrated between endoscopic grade using i-SCAN-HDWI and Marsh histologic grade (r = 0.732; P < 0.00001). The correlation between WLE grade and Marsh histologic grade was inferior to i-SCAN-HDWI (r = 0.31; P = 0.01). The sensitivity of i-SCAN-HDWI was 96 % (95 %CI: 85 – 99 %) and the specificity was 63 % (95 %CI: 26 – 90 %) in diagnosing abnormal biopsy consistent with celiac disease. Conclusion: i-SCAN-HDWI endoscopy can reflect the histological severity of celiac disease more accurately than conventional WLE alone. This novel endoscopic imaging can improve the diagnostic yield of duodenal biopsies in celiac patients, especially for those with a patchy distribution of villous damage. PMID:27227112
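    The reported statistics are simple to reproduce from first principles; a minimal sketch of Spearman correlation (Pearson correlation on average ranks) and of sensitivity/specificity, with illustrative confusion counts chosen to match the reported 96%/63% (the actual counts are not given in the abstract):

```python
def _ranks(v):
    """Average ranks (ties share a rank), 1-based."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(a, b):
    """Spearman rho = Pearson correlation of the rank vectors."""
    ra, rb = _ranks(a), _ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den


def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```

    For example, 48 true positives with 2 false negatives gives 96% sensitivity, and 5 true negatives with 3 false positives gives 63% specificity (hypothetical counts).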

  6. Highly-accurate metabolomic detection of early-stage ovarian cancer

    PubMed Central

    Gaul, David A.; Mezencev, Roman; Long, Tran Q.; Jones, Christina M.; Benigno, Benedict B.; Gray, Alexander; Fernández, Facundo M.; McDonald, John F.

    2015-01-01

    High performance mass spectrometry was employed to interrogate the serum metabolome of early-stage ovarian cancer (OC) patients and age-matched control women. The resulting spectral features were used to establish a linear support vector machine (SVM) model of sixteen diagnostic metabolites that are able to identify early-stage OC with 100% accuracy in our patient cohort. The results provide evidence for the importance of lipid and fatty acid metabolism in OC and serve as the foundation of a clinically significant diagnostic test. PMID:26573008
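    A linear SVM like the diagnostic model described above can be sketched with the Pegasos subgradient method; the toy version below, trained on 1-D data, is illustrative only and is not the authors' pipeline (which operated on sixteen metabolite features):

```python
import random


def train_linear_svm(X, y, lam=0.01, epochs=500, seed=1):
    """Pegasos-style subgradient descent on the hinge loss.
    y values must be -1 or +1; a constant feature is appended for the bias."""
    rng = random.Random(seed)
    X = [list(x) + [1.0] for x in X]
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(y)), len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            if margin < 1:
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
            else:
                w = [(1 - eta * lam) * wj for wj in w]
    return w


def svm_predict(w, x):
    s = sum(wj * xj for wj, xj in zip(w, list(x) + [1.0]))
    return 1 if s >= 0 else -1
```

    In practice a library implementation with proper cross-validation would be used; this sketch only shows the max-margin objective being optimized.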

  7. Accurate human microsatellite genotypes from high-throughput resequencing data using informed error profiles.

    PubMed

    Highnam, Gareth; Franck, Christopher; Martin, Andy; Stephens, Calvin; Puthige, Ashwin; Mittelman, David

    2013-01-01

    Repetitive sequences are biologically and clinically important because they can influence traits and disease, but repeats are challenging to analyse using short-read sequencing technology. We present a tool for genotyping microsatellite repeats called RepeatSeq, which uses Bayesian model selection guided by an empirically derived error model that incorporates sequence and read properties. Next, we apply RepeatSeq to high-coverage genomes from the 1000 Genomes Project to evaluate performance and accuracy. The software uses common formats, such as VCF, for compatibility with existing genome analysis pipelines. Source code and binaries are available at http://github.com/adaptivegenome/repeatseq. PMID:23090981
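    The core idea of Bayesian genotyping with an error model can be caricatured as a posterior over candidate repeat lengths, with an assumed per-unit slippage probability; this is a toy model, not RepeatSeq's empirically derived error profile:

```python
def genotype_posterior(read_lengths, candidate_alleles, p_slip=0.1):
    """Posterior over candidate repeat lengths given observed read lengths.
    Toy error model: each repeat unit of slippage costs a factor p_slip
    (the value 0.1 is an assumption, not an empirical estimate)."""
    scores = {}
    for a in candidate_alleles:
        lik = 1.0
        for r in read_lengths:
            lik *= (1 - p_slip) if r == a else p_slip ** abs(r - a)
        scores[a] = lik  # flat prior over alleles
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}
```

    With three reads at 10 units and one stutter read at 9, the posterior concentrates on allele 10; the real tool additionally conditions the error model on sequence and read properties.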

  8. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. This first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms on which these tools and technologies are currently or potentially available to aid software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environment.

  9. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is the two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe3O4 nanoparticles were carefully screened for homogeneous doping. In this work, oleic acid stabilized Fe3O4 nanoparticles synthesized via high-temperature induced organic phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO3-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped in acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  10. SINA: Accurate high-throughput multiple sequence alignment of ribosomal RNA genes

    PubMed Central

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-01-01

    Motivation: In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. Results: In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Availability: Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license. Contact: epruesse@mpi-bremen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22556368
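    The k-mer search stage that SINA combines with partial order alignment can be sketched as picking the reference sequence sharing the most k-mers with the query; a toy version with hypothetical sequence names and data:

```python
def kmer_set(seq, k=8):
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def best_reference(query, references, k=8):
    """Pick the reference (name -> sequence dict) sharing the most k-mers
    with the query; SINA uses such candidates to seed the alignment."""
    q = kmer_set(query, k)
    return max(references, key=lambda name: len(q & kmer_set(references[name], k)))
```

    Real rRNA k-mer indexes are precomputed for millions of reference sequences; the set-intersection count here only illustrates the similarity measure.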

  11. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It is comprised of: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  12. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.
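    With the measured drag coefficient of 1.15 referenced to the 2-ft rotor diameter, the drag force at a given airspeed follows from the standard drag equation; a short sketch assuming sea-level air density:

```python
import math


def parachute_drag(v_mps, cd=1.15, diameter_m=0.6096, rho=1.225):
    """Drag force in newtons: F = 0.5 * rho * v^2 * Cd * A, with A the
    rotor's projected disc area (2 ft = 0.6096 m; rho = sea-level air)."""
    area = math.pi * (diameter_m / 2) ** 2
    return 0.5 * rho * v_mps ** 2 * cd * area
```

    At 10 m/s this yields roughly 20.6 N of drag for the 2-ft rotor; reentry-vehicle recovery conditions would of course involve very different speeds and densities.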

  13. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically, when manufacturing semiconductor devices at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2] and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners and no relief is expected from the equipment side to prevent / eliminate these systematic hotspots. Hence we have seen many design houses coming up with innovative design products to check hotspots based on model-based lithography checks to validate design manufacturability, which will also account for complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a., weak points) are especially seen on Back End of the Line (BEOL) process levels like Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging, as there is a great deal of wafer noise and there are nuisance signals that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noise. 
This new approach is a paradigm shift in wafer inspection

  14. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  15. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    NASA Astrophysics Data System (ADS)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and costly natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, moreover offering a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flooding, requires modelling the behavior of different objects in the scene in order to associate them to flood or no flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps
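    The Bayesian data-fusion idea can be illustrated with a naive-Bayes simplification (a full Bayesian Network, as used by the tool, would also model dependencies between the factors); all probabilities below are hypothetical:

```python
def flood_posterior(evidence, cpt, prior=0.1):
    """Naive-Bayes style fusion: P(flood | evidence) from per-feature
    likelihoods cpt[feature] = (P(feature | flood), P(feature | no flood)).
    `evidence` maps feature names to observed True/False."""
    p_f, p_n = prior, 1.0 - prior
    for feature, present in evidence.items():
        p1, p0 = cpt[feature]
        p_f *= p1 if present else (1.0 - p1)
        p_n *= p0 if present else (1.0 - p0)
    return p_f / (p_f + p_n)


# Hypothetical conditional probabilities for SAR and ancillary evidence:
cpt = {
    "low_backscatter": (0.9, 0.2),  # smooth water darkens SAR imagery
    "low_elevation": (0.8, 0.4),    # from a digital elevation model
    "near_river": (0.7, 0.3),       # ancillary hydrologic layer
}
```

    Observing all three cues at once lifts a 10% prior to a posterior around 70% in this toy setting, showing how ancillary layers sharpen a SAR-only classification.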

  16. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  17. Performance Patterns of High, Medium, and Low Performers during and following a Reward versus Non-Reward Contingency Phase

    ERIC Educational Resources Information Center

    Oliver, Renee; Williams, Robert L.

    2006-01-01

    Three contingency conditions were applied to the math performance of 4th and 5th graders: bonus credit for accurately solving math problems, bonus credit for completing math problems, and no bonus credit for accurately answering or completing math problems. Mixed ANOVAs were used in tracking the performance of high, medium, and low performers…

  18. Aptamer-Conjugated Graphene Oxide Membranes for Highly Efficient Capture and Accurate Identification of Multiple Types of Circulating Tumor Cells

    PubMed Central

    2016-01-01

    Tumor metastasis is responsible for 1 in 4 deaths in the United States. Though it has been well-documented over the past two decades that circulating tumor cells (CTCs) in blood can be used as a biomarker for metastatic cancer, there are enormous challenges in capturing and identifying CTCs with sufficient sensitivity and specificity. Because of the heterogeneous expression of CTC markers, it is now well understood that a single CTC marker is insufficient to capture all CTCs from the blood. Driven by the clear need, this study reports for the first time highly efficient capture and accurate identification of multiple types of CTCs from infected blood using aptamer-modified porous graphene oxide membranes. The results demonstrate that dye-modified S6, A9, and YJ-1 aptamers attached to 20–40 μm porous graphene oxide membranes are capable of capturing multiple types of tumor cells (SKBR3 breast cancer cells, LNCaP prostate cancer cells, and SW-948 colon cancer cells) selectively and simultaneously from infected blood. Our result shows that the capture efficiency of graphene oxide membranes is ∼95% for multiple types of tumor cells; for each tumor concentration, 10 cells are present per milliliter of blood sample. The selectivity of our assay for capturing targeted tumor cells has been demonstrated using membranes without an antibody. Blood infected with different cells also has been used to demonstrate the targeted tumor cell capturing ability of aptamer-conjugated membranes. Our data also demonstrate that accurate analysis of multiple types of captured CTCs can be performed using multicolor fluorescence imaging. Aptamer-conjugated membranes reported here have good potential for the early diagnosis of diseases that are currently being detected by means of cell capture technologies. PMID:25565372

  19. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  20. High Performance Pulse Tube Cryocoolers

    NASA Astrophysics Data System (ADS)

    Olson, J. R.; Roth, E.; Champagne, P.; Evtimov, B.; Nast, T. C.

    2008-03-01

    Lockheed Martin's Advanced Technology Center has been developing pulse tube cryocoolers for more than ten years. Recent innovations include successful testing of four-stage coldheads, no-load temperature below 4 K, and the recent development of a high-efficiency compressor. This paper discusses the predicted performance of single and multiple stage pulse tube coldheads driven by our new 6 kg "M5Midi" compressor, which is capable of 90% efficiency with 200 W input power, and a maximum input power of 1000 W. This compressor retains the simplicity of earlier LM-ATC compressors: it has a moving magnet and an external electrical coil, minimizing organics in the working gas and requiring no electrical penetrations through the pressure wall. Motor losses were minimized during design, resulting in a simple, easily-manufactured compressor with state-of-the-art motor efficiency. The predicted cryocooler performance is presented as simple formulae, allowing an engineer to include the impact of a highly-optimized cryocooler into a full system analysis. Performance is given as a function of the heat rejection temperature and the cold tip temperatures and cooling loads.

  1. Accurate determination of specific heat at high temperatures using the flash diffusivity method

    NASA Technical Reports Server (NTRS)

    Vandersande, J. W.; Zoltan, A.; Wood, C.

    1989-01-01

    The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
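
    The flash method yields thermal diffusivity, and with the specific heat measured simultaneously, the thermal conductivity follows directly from k = α·ρ·cp. A minimal sketch of that closing step (the material values below are illustrative, not the paper's data):

```python
def thermal_conductivity(alpha, rho, cp):
    """k = alpha * rho * cp, with alpha in m^2/s, rho in kg/m^3, cp in J/(kg*K)."""
    return alpha * rho * cp

# Illustrative values loosely typical of a SiGe thermoelectric alloy
k = thermal_conductivity(alpha=3e-6, rho=3000.0, cp=700.0)  # ~ 6.3 W/(m*K)
```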

  2. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html. PMID:26510657
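
    MiCId's exact significance derivation is given in the paper; the general idea of an E-value can be sketched with the common conversion that scales a hit's p-value by the number of candidates scored, giving the expected count of equally good random matches (a hedged illustration, not MiCId's formula):

```python
def e_value(p_value, num_candidates):
    """Expected number of equally good random hits: E = p * N.
    An E-value much less than 1 indicates a match unlikely to arise by chance."""
    return p_value * num_candidates

# A peptide with p = 1e-6 searched against 10,000 candidate peptides
e = e_value(1e-6, 10_000)  # E = 0.01: still significant after multiple testing
```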

  3. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  4. Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging.

    PubMed

    Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2016-06-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. PMID:27093598
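
    The estimator itself relies on beamforming details not reproduced in the abstract; the basic geometry of reporting a 2-D flow angle over the full 360° range from estimated velocity components can be sketched as follows (function names are illustrative):

```python
import math

def flow_angle_deg(vx, vz):
    """Angle of the 2-D velocity vector, mapped to [0, 360) degrees."""
    return math.degrees(math.atan2(vz, vx)) % 360.0

def flow_speed(vx, vz):
    """Velocity magnitude |v|."""
    return math.hypot(vx, vz)
```

    Angle bias and SD are then statistics of repeated estimates against the known phantom angle.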

  5. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  6. High-Frequency CTD Measurements for Accurate GPS/acoustic Sea-floor Crustal Deformation Measurement System

    NASA Astrophysics Data System (ADS)

    Tadokoro, K.; Yasuda, K.; Taniguchi, S.; Uemura, Y.; Matsuhiro, K.

    2015-12-01

    The GPS/acoustic sea-floor crustal deformation measurement system has been developed as a useful tool to observe tectonic deformation, especially at subduction zones. One of the factors preventing accurate GPS/acoustic sea-floor crustal deformation measurement is horizontal heterogeneity of sound speed in the ocean. It is therefore necessary to measure the gradient directly from the sound speed structure. We report results of high-frequency CTD measurements using Underway CTD (UCTD) in the Kuroshio region. We performed the UCTD measurements on May 2nd, 2015 at two stations (TCA and TOA) above the sea-floor benchmarks installed across the Nankai Trough, off the south-east of the Kii Peninsula, middle Japan. There are six measurement points at each station, along circles with a diameter of 1.8 nautical miles around the sea-floor benchmark. The stations TCA and TOA are located on the edge and in the interior of the Kuroshio current, respectively, judging from the difference in sea water density measured at the two stations as well as a satellite image of sea-surface temperature distribution. We detect a sound speed gradient of high speeds in the southern part and low speeds in the northern part at the two stations. At the TCA station, the gradient is noticeable down to 300 m in depth; the maximum difference in sound speed is +/- 5 m/s. The sound speed difference is as small as +/- 1.3 m/s at depths below 300 m, which causes a seafloor benchmark positioning error as large as 1 m. At the TOA station, the gradient is extremely small down to 100 m in depth, with a maximum difference in sound speed of less than +/- 0.3 m/s, which is negligibly small for seafloor benchmark positioning error. A clear gradient of high speed is observed at greater depths; the maximum difference in sound speed is +/- 0.8-0.9 m/s, causing a seafloor benchmark positioning error of several tens of centimeters. The UCTD measurement is an effective tool to detect sound speed gradients. We establish a method for accurate sea
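
    The quoted benchmark positioning errors can be sanity-checked with the first-order relation between a sound-speed bias and acoustic range error, δr ≈ r·δc/c (a back-of-the-envelope sketch, not the authors' positioning model; the nominal 1500 m/s sound speed is an assumption):

```python
def range_error_m(slant_range_m, delta_c_m_s, c_m_s=1500.0):
    """First-order acoustic range error from a sound-speed bias: dr ~ r * dc/c."""
    return slant_range_m * delta_c_m_s / c_m_s
```

    For a 1.5 km slant range, a 1.3 m/s bias gives roughly 1.3 m, the metre-level order reported at TCA.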

  7. HIGH PERFORMANCE EBIS FOR RHIC.

    SciTech Connect

    ALESSI,J.; BEEBE, E.; GOULD, O.; KPONOU, A.; LOCKEY, R.; PIKIN, A.; RAPARIA, D.; RITTER, J.; SNYDSTRUP, L.

    2007-06-25

    An Electron Beam Ion Source (EBIS), capable of producing high charge states and high beam currents of any heavy ion species in short pulses, is ideally suited for injection into a synchrotron. An EBIS-based, high current, heavy ion preinjector is now being built at Brookhaven to provide increased capabilities for the Relativistic Heavy Ion Collider (RHIC), and the NASA Space Radiation Laboratory (NSRL). Benefits of the new preinjector include the ability to produce ions of any species, fast switching between species to serve the simultaneous needs of multiple programs, and lower operating and maintenance costs. A state-of-the-art EBIS, operating with an electron beam current of up to 10 A, and producing multi-milliamperes of high charge state heavy ions, has been developed at Brookhaven, and has been operating very successfully on a test bench for several years. The present performance of this high-current EBIS is presented, along with details of the design of the scaled-up EBIS for RHIC, and the status of its construction. Other aspects of the project, including design and construction of the heavy ion RFQ, Linac, and matching beamlines, are also mentioned.

  8. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know there is almost no research on HPC-DF except for a few papers. As such, in this work we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
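
    The paper's "iterative z algorithm" and its parallel extension are not specified in the abstract; as a hedged illustration of the general family, here is a serial sketch of iterative z-score outlier flagging, which repeatedly removes the most extreme point until none exceeds the threshold:

```python
from statistics import mean, stdev

def iterative_z_outliers(values, threshold=3.0):
    """Repeatedly flag the point with the largest |z| until all remaining
    points fall within the threshold. Returns flagged points in removal order."""
    data = list(values)
    outliers = []
    while len(data) > 2:
        m, s = mean(data), stdev(data)
        if s == 0:
            break  # all remaining points identical
        z, x = max((abs(v - m) / s, v) for v in data)
        if z <= threshold:
            break
        data.remove(x)
        outliers.append(x)
    return outliers
```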

  9. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  10. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  11. Sampling strategies for accurate computational inferences of gametic phase across highly polymorphic major histocompatibility complex loci

    PubMed Central

    2011-01-01

    Background Genes of the Major Histocompatibility Complex (MHC) are very popular genetic markers among evolutionary biologists because of their potential role in pathogen confrontation and sexual selection. However, MHC genotyping still remains challenging and time-consuming in spite of substantial methodological advances. Although computational haplotype inference has brought into focus interesting alternatives, high heterozygosity, extensive genetic variation and population admixture are known to cause inaccuracies. We have investigated the role of sample size, genetic polymorphism and genetic structuring on the performance of the popular Bayesian PHASE algorithm. To cover this aim, we took advantage of a large database of known genotypes (using traditional laboratory-based techniques) at single MHC class I (N = 56 individuals and 50 alleles) and MHC class II B (N = 103 individuals and 62 alleles) loci in the lesser kestrel Falco naumanni. Findings Analyses carried out over real MHC genotypes showed that the accuracy of gametic phase reconstruction improved with sample size as a result of the reduction in the allele to individual ratio. We then simulated different data sets introducing variations in this parameter to define an optimal ratio. Conclusions Our results demonstrate a critical influence of the allele to individual ratio on PHASE performance. We found that a minimum allele to individual ratio (1:2) yielded 100% accuracy for both MHC loci. Sampling effort is therefore a crucial step to obtain reliable MHC haplotype reconstructions and must be planned according to the degree of MHC polymorphism. We expect our findings to provide a foothold for the design of straightforward and cost-effective genotyping strategies for those MHC loci for which locus-specific primers are available. PMID:21615903
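
    The study's 1:2 guideline (at most one allele per two sampled individuals) is easy to encode when planning sampling effort; a small sketch using the abstract's own numbers for illustration:

```python
def meets_phase_guideline(num_alleles, num_individuals):
    """True if the allele-to-individual ratio is at most 1:2, the ratio the
    study found to give 100% reconstruction accuracy."""
    return num_alleles / num_individuals <= 0.5

def min_individuals(num_alleles):
    """Smallest sample size satisfying the 1:2 guideline."""
    return 2 * num_alleles

# The MHC class I data set (50 alleles, 56 individuals) falls short of the
# guideline; at least 100 individuals would be needed for those 50 alleles.
ok = meets_phase_guideline(50, 56)
```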

  12. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
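
    All five reported characteristics follow from a standard 2x2 confusion matrix; a minimal sketch (the counts below are illustrative, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy (in percent) from a
    2x2 table of true/false positives and negatives."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "ppv": 100.0 * tp / (tp + fp),
        "npv": 100.0 * tn / (tn + fn),
        "accuracy": 100.0 * (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts: 8 true positives, 2 false positives,
# 38 true negatives, 2 false negatives
m = diagnostic_metrics(tp=8, fp=2, tn=38, fn=2)
```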

  13. Prognostic factors in pediatric high-grade astrocytoma: the importance of accurate pathologic diagnosis.

    PubMed

    Hales, Russell K; Shokek, Ori; Burger, Peter C; Paynter, Nina P; Chaichana, Kaisorn L; Quiñones-Hinojosa, Alfredo; Jallo, George I; Cohen, Kenneth J; Song, Danny Y; Carson, Benjamin S; Wharam, Moody D

    2010-08-01

    To characterize a population of pediatric high-grade astrocytoma (HGA) patients by confirming the proportion with a correct diagnosis, and determine prognostic factors for survival in a subset diagnosed with uniform pathologic criteria. Sixty-three children diagnosed with HGA were treated at the Johns Hopkins Hospital between 1977 and 2004. A single neuropathologist (P.C.B.) reviewed all available histologic samples (n = 48). Log-rank analysis was used to compare survival by patient, tumor, and treatment factors. Median follow-up was 16 months for all patients and 155 months (minimum 54 months) for surviving patients. Median survival for all patients (n = 63) was 14 months with 10 long-term survivors (survival >48 months). At initial diagnosis, 27 patients were grade III (43%) and 36 grade IV (57%). Forty-eight patients had pathology slides available for review, including seven of ten long-term surviving patients. Four patients had non-HGA pathology, all of whom were long-term survivors. The remaining 44 patients with confirmed HGG had a median survival of 14 months and prognostic analysis was confined to these patients. On multivariate analysis, five factors were associated with inferior survival: performance status (Lansky) <80% (13 vs. 15 months), bilaterality (13 vs. 19 months), parietal lobe location (13 vs. 16 months), resection less than gross total (13 vs. 22 months), and radiotherapy dose <50 Gy (9 vs. 16 months). Among patients with more than one of the five adverse factors (n = 27), median survival and proportion of long-term survivors were 12.9 months and 0%, compared with 41.4 months and 18% for patients with 0-1 adverse factors (n = 17). In an historical cohort of children with HGA, the potential for long term survival was confined to the subset with less than two of the following adverse prognostic factors: low performance status, bilaterality, parietal lobe site, less than gross total resection, and radiotherapy dose <50 Gy. Pathologic misdiagnosis

  14. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one, as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion-free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with much better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: - The Gallo-Roman Theatre of Mandeure (France); - The

  15. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  16. High-Performance Bipropellant Engine

    NASA Technical Reports Server (NTRS)

    Biaglow, James A.; Schneider, Steven J.

    1999-01-01

    TRW, under contract to the NASA Lewis Research Center, has successfully completed over 10 000 sec of testing of a rhenium thrust chamber manufactured via a new-generation powder metallurgy. High performance was achieved for two different propellants, N2O4- N2H4 and N2O4 -MMH. TRW conducted 44 tests with N2O4-N2H4, accumulating 5230 sec of operating time with maximum burn times of 600 sec and a specific impulse Isp of 333 sec. Seventeen tests were conducted with N2O4-MMH for an additional 4789 sec and a maximum Isp of 324 sec, with a maximum firing duration of 700 sec. Together, the 61 tests totalled 10 019 sec of operating time, with the chamber remaining in excellent condition. Of these tests, 11 lasted 600 to 700 sec. The performance of radiation-cooled rocket engines is limited by their operating temperature. For the past two to three decades, the majority of radiation-cooled rockets were composed of a high-temperature niobium alloy (C103) with a disilicide oxide coating (R512) for oxidation resistance. The R512 coating practically limits the operating temperature to 1370 C. For the Earth-storable bipropellants commonly used in satellite and spacecraft propulsion systems, a significant amount of fuel film cooling is needed. The large film-cooling requirement extracts a large penalty in performance from incomplete mixing and combustion. A material system with a higher temperature capability has been matured to the point where engines are being readied for flight, particularly the 100-lb-thrust class engine. This system has powder rhenium (Re) as a substrate material with an iridium (Ir) oxidation-resistant coating. Again, the operating temperature is limited by the coating; however, Ir is capable of long-life operation at 2200 C. For Earth-storable bipropellants, this allows for the virtual elimination of fuel film cooling (some film cooling is used for thermal control of the head end). This has resulted in significant increases in specific impulse performance

  17. Accurate High-Temperature Reaction Networks for Alternative Fuels: Butanol Isomers

    SciTech Connect

    Van Geem, K. M.; Pyl, S. P.; Marin, G. B.; Harper, M. R.; Green, W. H.

    2010-11-03

    Oxygenated hydrocarbons, particularly alcohol compounds, are being studied extensively as alternatives and additives to conventional fuels due to their propensity to decrease soot formation and improve the octane number of gasoline. However, oxygenated fuels also increase the production of toxic byproducts, such as formaldehyde. To gain a better understanding of the oxygenated functional group’s influence on combustion properties—e.g., ignition delay at temperatures above the negative temperature coefficient regime, and the rate of benzene production, which is the common precursor to soot formation—a detailed pressure-dependent reaction network for n-butanol, sec-butanol, and tert-butanol consisting of 281 species and 3608 reactions is presented. The reaction network is validated against shock tube ignition delays and doped methane flame concentration profiles reported previously in the literature, in addition to newly acquired pyrolysis data. Good agreement between simulated and experimental data is achieved in all cases. Flux and sensitivity analyses for each set of experiments have been performed, and high-pressure-limit reaction rate coefficients for important pathways, e.g., the dehydration reactions of the butanol isomers, have been computed using statistical mechanics and quantum chemistry. The different alcohol decomposition pathways, i.e., the pathways from primary, secondary, and tertiary alcohols, are discussed. Furthermore, comparisons between ethanol and n-butanol, two primary alcohols, are presented, as they relate to ignition delay.
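
    High-pressure-limit rate coefficients such as those computed for the dehydration pathways are conventionally expressed in modified Arrhenius form, k(T) = A·T^n·exp(−Ea/RT); a sketch for evaluating one (the parameter values below are placeholders, not the paper's fitted coefficients):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_coefficient(A, n, Ea, T):
    """Modified Arrhenius rate: k(T) = A * T**n * exp(-Ea / (R*T)),
    with Ea in J/mol and T in K."""
    return A * (T ** n) * math.exp(-Ea / (R * T))

# Placeholder parameters: the rate rises steeply with temperature for Ea > 0
k_1000 = rate_coefficient(A=1e13, n=0.0, Ea=2.0e5, T=1000.0)
k_1500 = rate_coefficient(A=1e13, n=0.0, Ea=2.0e5, T=1500.0)
```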

  18. Development of an unmanned aerial vehicle-based spray system for highly accurate site-specific application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Application of crop production and protection materials is a crucial component in the high productivity of American agriculture. Agricultural chemical application is frequently needed at a specific time and location for accurate site-specific management of crop pests. Piloted aircraft that carry ...

  19. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for the freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors owing to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement. PMID:7699200
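
    The correction can be sketched as follows: the solute mass Ms scaled by the interaction coefficient I is removed from the solvent mass before computing molality, here plugged into the familiar linear cryoscopic formula ΔT = Kf·m. This is a hedged illustration only; the paper's full thermodynamic expressions are more exact, and any nonzero I value used here is a placeholder, not one of the paper's fitted coefficients.

```python
KF_WATER = 1.86  # cryoscopic constant of water, K*kg/mol

def freezing_point_depression(ms_g, molar_mass, mw_g, I):
    """Depression using the corrected free water mass Mwc = Mw - I*Ms."""
    mwc_kg = (mw_g - I * ms_g) / 1000.0
    molality = (ms_g / molar_mass) / mwc_kg  # mol solute per kg free water
    return KF_WATER * molality

# With I = 0 the ideally dilute result is recovered:
# 1 mol glucose (180.16 g) in 1 kg water depresses the freezing point by 1.86 K
dt = freezing_point_depression(ms_g=180.16, molar_mass=180.16, mw_g=1000.0, I=0.0)
```

    A positive I shrinks the free water mass, raising the effective molality and hence the predicted depression.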

  20. Accurate taxonomy assignments from 16S rRNA sequences produced by highly parallel pyrosequencers

    PubMed Central

    Liu, Zongzhi; DeSantis, Todd Z.; Andersen, Gary L.; Knight, Rob

    2008-01-01

    The recent introduction of massively parallel pyrosequencers allows rapid, inexpensive analysis of microbial community composition using 16S ribosomal RNA (rRNA) sequences. However, a major challenge is to design a workflow in which taxonomic information can be accurately and rapidly assigned to each read, so that the composition of each community can be linked back to the likely ecological roles played by members of each species, genus, family or phylum. Here, we use three large 16S rRNA datasets to test whether taxonomic information based on the full-length sequences can be recaptured by short reads that simulate the pyrosequencer outputs. We find that different taxonomic assignment methods vary radically in their ability to recapture the taxonomic information in full-length 16S rRNA sequences: most methods are sensitive to the region of the 16S rRNA gene that is targeted for sequencing, but many combinations of methods and rRNA regions produce consistent and accurate results. To process large datasets of partial 16S rRNA sequences obtained from surveys of various microbial communities, including those from human body habitats, we recommend the use of Greengenes or the RDP classifier with fragments of at least 250 bases, starting from one of the primers R357, R534, R798, F343 or F517. PMID:18723574

  1. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    NASA Astrophysics Data System (ADS)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations in geodetic and geophysical issues. For the first time, absolute gravity measurements were performed in these countries. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA, Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG, Autoriteti Shtetëror i Informacionit Gjeohapësinor) and Montenegro (REA, Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving their geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015. All 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some absolute gravity stations makes them suitable for future gravity monitoring activities. The measurement uncertainties of the absolute gravity measurements are around 2.5 microGal at all stations (1 microGal = 10^-8 m/s^2). In Montenegro, the large gravity difference of 200 milliGal between the stations Zabljak and Podgorica can even be used for calibration of relative gravimeters.

  2. High Performance Field Reversed Configurations

    NASA Astrophysics Data System (ADS)

    Binderbauer, Michl

    2014-10-01

    The field-reversed configuration (FRC) is a prolate compact toroid with poloidal magnetic fields. FRCs could lead to economic fusion reactors with high power density, simple geometry, a natural divertor, ease of translation, and possibly the capability of burning aneutronic fuels. However, as in other high-beta plasmas, there are stability and confinement concerns. These concerns can be addressed by introducing and maintaining a significant fast ion population in the system. This is the approach adopted by TAE and implemented for the first time in the C-2 device. Studying the physics of FRCs driven by Neutral Beam (NB) injection, significant improvements were made in confinement and stability. Early C-2 discharges had relatively good confinement, but global power losses exceeded the available NB input power. The addition of axially streaming plasma guns and magnetic end plugs, as well as advanced surface conditioning, led to dramatic reductions in turbulence-driven losses and greatly improved stability. As a result, fast ion confinement significantly improved and allowed for build-up of a dominant fast particle population. Under these conditions we achieved highly reproducible, long-lived, macroscopically stable FRCs with record lifetimes. This demonstrated many beneficial effects of large orbit particles and their performance impact on FRCs. Together these achievements point to the prospect of beam-driven FRCs as a path toward fusion reactors. This presentation will review and expand on key results and present context for their interpretation.

  3. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  4. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
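
    The linking hypothesis in the two records above — a learned weighted sum of neuronal firing rates predicting a behavioral score — reduces to linear regression. Below is a minimal sketch with synthetic "firing rates" and a plain stochastic-gradient least-squares fit; it is not the authors' procedure or data, only an illustration of the model class.

```python
# Minimal sketch of a "learned weighted sum of firing rates" predictor.
# Rates and targets are synthetic; the fit is plain SGD least squares.

def fit_weighted_sum(rates, targets, lr=0.01, epochs=5000):
    """Learn weights w and bias b so that sum_i w[i]*rate[i] + b ~ target."""
    n_units = len(rates[0])
    w = [0.0] * n_units
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(rates, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - t
            for i in range(n_units):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted performance score for one stimulus's firing-rate vector x."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```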

  5. Learning the Structure of High-Dimensional Manifolds with Self-Organizing Maps for Accurate Information Extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Lili

    This work aims to improve the capability of accurate information extraction from high-dimensional data, with a specific neural learning paradigm, the Self-Organizing Map (SOM). The SOM is an unsupervised learning algorithm that can faithfully sense the manifold structure and support supervised learning of relevant information from the data. Yet open problems regarding SOM learning exist. We focus on the following two issues. (1) Evaluation of topology preservation. Topology preservation is essential for SOMs in faithful representation of manifold structure. However, in reality, topology violations are not unusual, especially when the data have complicated structure. Measures capable of accurately quantifying and informatively expressing topology violations are lacking. One contribution of this work is a new measure, the Weighted Differential Topographic Function (WDTF), which differentiates an existing measure, the Topographic Function (TF), and incorporates detailed data distribution as an importance weighting of violations to distinguish severe violations from insignificant ones. Another contribution is an interactive visual tool, TopoView, which facilitates the visual inspection of violations on the SOM lattice. We show the effectiveness of the combined use of the WDTF and TopoView through a simple two-dimensional data set and two hyperspectral images. (2) Learning multiple latent variables from high-dimensional data. We use an existing two-layer SOM-hybrid supervised architecture, which captures the manifold structure in its SOM hidden layer and then uses its output layer to perform the supervised learning of latent variables. In the customary way, the output layer only uses the strongest output of the SOM neurons. This severely limits the learning capability. We instead allow the k strongest responses of the SOM neurons to contribute to the supervised learning. Moreover, the fact that different latent variables can be best learned with different values of k motivates a
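
    To make the "faithful manifold sensing" idea concrete, here is a minimal SOM training loop on a 1-D lattice. The lattice size, learning-rate and neighborhood schedules, and data are invented for the example; the WDTF measure itself is not implemented here.

```python
# Illustrative basic SOM: 1-D lattice of nodes learning a 2-D data manifold.
# Schedules and sizes are arbitrary choices for the sketch.
import math
import random

def train_som(data, n_nodes=10, epochs=200, lr0=0.5, sigma0=2.0):
    random.seed(0)
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                 # decaying learning rate
        sigma = max(sigma0 * (1.0 - frac), 0.5)  # shrinking neighborhood
        for x in data:
            # best matching unit: node nearest to the sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((wi - xi) ** 2
                                        for wi, xi in zip(weights[i], x)))
            # pull the BMU and its lattice neighbors toward the sample
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    weights[i][d] += lr * h * (x[d] - weights[i][d])
    return weights
```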

  6. TROP-2 immunohistochemistry: a highly accurate method in the differential diagnosis of papillary thyroid carcinoma.

    PubMed

    Bychkov, Andrey; Sampatanukul, Pichet; Shuangshoti, Shanop; Keelawat, Somboon

    2016-08-01

    We aimed to evaluate the diagnostic utility of the novel immunohistochemical marker TROP-2 on thyroid specimens (226 tumours and 207 controls). Whole slide immunohistochemistry was performed and scored by automated digital image analysis. Non-neoplastic thyroid, follicular adenomas, follicular carcinomas, and medullary carcinomas were negative for TROP-2 immunostaining. The majority of papillary thyroid carcinoma (PTC) specimens (94/114, 82.5%) were positive for TROP-2; however, the pattern of staining differed significantly between the histopathological variants. All papillary microcarcinomas (mPTC), PTC classic variant (PTC cv), and tall cell variant (PTC tcv) were TROP-2 positive, with mainly diffuse staining. In contrast, less than half of the PTC follicular variant specimens were positive for TROP-2, with only focal immunoreactivity. TROP-2 could identify PTC cv with 98.1% sensitivity and 97.5% specificity. ROC curve analysis found that the presence of >10% of TROP-2 positive cells in a tumour supported a diagnosis of PTC. The study of intratumoural heterogeneity showed that low-volume cytological samples of PTC cv could be adequately assessed by TROP-2 immunostaining. The TROP-2 H-score (intensity multiplied by proportion) was significantly associated with PTC variant and capsular invasion in encapsulated PTC follicular variant (p<0.001). None of the baseline (age, gender) and clinical (tumour size, nodal disease, stage) parameters were correlated with TROP-2 expression. In conclusion, TROP-2 membranous staining is a very sensitive and specific marker for PTC cv, PTC tcv, and mPTC, with high overall specificity for PTC. PMID:27311870

  7. High performance solar Stirling system

    NASA Technical Reports Server (NTRS)

    Stearns, J. W.; Haglund, R.

    1981-01-01

    A full-scale Dish-Stirling system experiment, at a power level of 25 kWe, was tested during 1981 on Test Bed Concentrator No. 2 at the Parabolic Dish Test Site, Edwards, CA. Test components, designed and developed primarily by industrial contractors for the Department of Energy, include an advanced Stirling engine driving an induction alternator, a directly-coupled solar receiver with a natural gas combustor for hybrid operation, and a breadboard control system based on a programmable controller and standard utility substation components. The experiment demonstrated the practicality of the solar Stirling application and high system performance feeding into a utility grid. This paper describes the design and its functions, and the test results obtained.

  8. FPGA Based High Performance Computing

    SciTech Connect

    Bennett, Dave; Mason, Jeff; Sundararajan, Prasanna; Dellinger, Erik; Putnam, Andrew; Storaasli, Olaf O

    2008-01-01

    Current high performance computing (HPC) applications are found in many consumer, industrial and research fields. From web searches to auto crash simulations to weather predictions, these applications require large amounts of power for the compute farms and supercomputers that run them. The demand for more and faster computation continues to increase, along with an even sharper increase in the cost of the power required to operate and cool these installations. The ability of standard processor-based systems to address these needs has declined over the past few years, in terms of both speed of computation and power consumption. This paper presents a new method of computation based upon programmable logic, as represented by Field Programmable Gate Arrays (FPGAs), that addresses these needs in a manner requiring only minimal changes to the current software design environment.

  9. Highly accurate P-SV complete synthetic seismograms using modified DSM operators

    NASA Astrophysics Data System (ADS)

    Takeuchi, Nozomu; Geller, Robert J.; Cummins, Phil R.

    In previous papers [Cummins et al., 1994ab] (hereafter referred to as DSMI and DSMII respectively), we presented accurate methods for computing complete synthetic seismograms for SH and P-SV respectively in a spherical earth model. The SH calculations used computationally efficient modified matrix operators, but the P-SV synthetics were computationally intensive. Geller and Takeuchi [1995] (hereafter referred to as GT95) presented a general theory for deriving modified operators and gave the explicit form of the modified operators for the P-SV case in cylindrical or cartesian coordinates. In this paper we extend GT95's results to derive modified operators for the P-SV case in spherical coordinates. The use of the modified operators reduces the CPU time by a factor of about 5 without a loss of accuracy. 10 CPU min on a SPARC-20 workstation with one CPU are required to compute a profile of synthetic seismograms from DC to 20 sec period.

  10. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  11. MULTEM: A new multislice program to perform accurate and fast electron diffraction and imaging simulations using Graphics Processing Units with CUDA.

    PubMed

    Lobato, I; Van Dyck, D

    2015-09-01

    The main features and the GPU implementation of the MULTEM program are presented and described. This new program performs accurate and fast multislice simulations by including higher order expansion of the multislice solution of the high energy Schrödinger equation, the correct subslicing of the three-dimensional potential and top-bottom surfaces. The program implements different kinds of simulation for CTEM, STEM, ED, PED, CBED, ADF-TEM and ABF-HC with proper treatment of the spatial and temporal incoherences. The multislice approach described here treats the specimen as amorphous material, which allows a straightforward implementation of the frozen phonon approximation. The generalized transmission function for each slice is calculated when it is needed and then discarded. This allows us to perform large simulations that can include millions of atoms while keeping the computer memory requirements at a reasonable level. PMID:25965576

  12. High power ion thruster performance

    NASA Technical Reports Server (NTRS)

    Rawlin, Vincent K.; Patterson, Michael J.

    1987-01-01

    The ion thruster is one of several forms of space electric propulsion being considered for use on future SP-100-based missions. One possible major mission ground rule is the use of a single Space Shuttle launch. Thus, the mass in orbit at the reactor activation altitude would be limited by the Shuttle mass constraints. When the spacecraft subsystem masses are subtracted from this available mass limit, a maximum propellant mass may be calculated. Knowing the characteristics of each type of electric thruster allows maximum values of total impulse, mission velocity increment, and thrusting time to be calculated. Because ion thrusters easily operate at high values of efficiency (60 to 70%) and specific impulse (3000 to 5000 sec), they can impart large values of total impulse to a spacecraft. They also can be operated with separate control of the propellant flow rate and exhaust velocity. This paper presents values of demonstrated and projected performance of high power ion thrusters used in an analysis of electric propulsion for an SP-100 based mission.
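
    The mission arithmetic sketched in this abstract — propellant mass budget plus thruster characteristics giving total impulse, velocity increment, and thrusting time — can be written down directly. The numbers in the check below are illustrative placeholders, not SP-100 design values.

```python
# Back-of-envelope electric-propulsion mission arithmetic.
import math

G0 = 9.80665  # standard gravity, m/s^2

def total_impulse(m_prop, isp):
    """Total impulse (N*s) from m_prop kg of propellant at specific impulse isp (s)."""
    return m_prop * isp * G0

def delta_v(m0, m_prop, isp):
    """Tsiolkovsky velocity increment (m/s) for initial mass m0 kg expelling m_prop kg."""
    return isp * G0 * math.log(m0 / (m0 - m_prop))

def thrusting_time(m_prop, isp, power, efficiency):
    """Burn time (s): thrust = 2*eta*P/ve and mass flow = thrust/ve, with ve = isp*g0."""
    ve = isp * G0
    thrust = 2.0 * efficiency * power / ve
    mdot = thrust / ve
    return m_prop / mdot
```

    At an illustrative 3000 s specific impulse and 65% efficiency, the low thrust of a 100 kW-class system implies burn times on the order of a year, which is why thrusting time appears alongside total impulse in the analysis.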

  13. A highly accurate method for the determination of mass and center of mass of a spacecraft

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Trubert, M. R.; Egwuatu, A.

    1978-01-01

    An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
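
    The basic static estimate behind the three-load-cell arrangement is a moment balance: the lateral center of mass is the load-weighted centroid of the three support points. The sketch below shows only that estimate; the cyclic-rotation averaging that cancels alignment, leveling, and load cell calibration errors (the actual contribution of the method) is not reproduced, and the geometry values are illustrative.

```python
# Three vertical load cells at 120 deg intervals on a circle of radius R:
# mass from the force sum, lateral CM from the moment balance.
import math

def mass_and_lateral_cm(forces_n, radius_m, g=9.80665):
    """Return (mass_kg, x_cm_m, y_cm_m) from three vertical load-cell readings in N."""
    angles = [math.radians(a) for a in (90.0, 210.0, 330.0)]
    total = sum(forces_n)
    mass = total / g
    # moment balance: sum_i F_i * r_i = (total force) * r_cm
    x = radius_m * sum(f * math.cos(a) for f, a in zip(forces_n, angles)) / total
    y = radius_m * sum(f * math.sin(a) for f, a in zip(forces_n, angles)) / total
    return mass, x, y
```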

  14. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860
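
    The paper's point that gap treatment changes computed sequence divergence can be illustrated with a toy pairwise-divergence function. The three modes below (count each gap position as a difference, ignore gap positions, or count a run of gaps once) are our simplified stand-ins for the methods compared, not the exact procedures used in the study.

```python
# Toy divergence between two equal-length aligned sequences under three
# gap-treatment policies; '-' marks an alignment gap.

def divergence(a, b, gaps="difference"):
    """Fraction divergence between aligned sequences a and b."""
    assert len(a) == len(b)
    diffs = 0
    length = 0
    prev_gap = False
    for x, y in zip(a, b):
        gap = (x == "-" or y == "-")
        if gap:
            if gaps == "ignore":        # drop gap columns entirely
                prev_gap = False
                continue
            length += 1
            if gaps == "difference":    # every gap column is a mismatch
                diffs += 1
            elif gaps == "collapse":    # a run of gaps counts once
                if not prev_gap:
                    diffs += 1
            prev_gap = True
        else:
            length += 1
            if x != y:
                diffs += 1
            prev_gap = False
    return diffs / length
```

    On the same pair of sequences the three policies can return divergences that straddle a clustering threshold, which is one way the OTU counts in the study could vary so widely.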

  15. High Performance Solution Processable TFTs

    NASA Astrophysics Data System (ADS)

    Gundlach, David

    2008-03-01

    Organic-based electronic devices offer the potential to significantly impact the functionality and pervasiveness of large-area electronics. We report on soluble acene-based organic thin film transistors (OTFTs) where the microstructure of as-cast films can be precisely controlled via interfacial chemistry. Chemically tailoring the source/drain contact interface is a novel route to self-patterning of soluble small molecule organic semiconductors and enables the growth of highly ordered regions along opposing contact edges which extend into the transistor channel. The unique film forming properties of soluble fluorinated anthradithiophenes allow us to fabricate high performance OTFTs and OTFT circuits, and to deterministically study the influence of the film microstructure on the electrical characteristics of devices. Most recently we have grown single crystals of soluble fluorinated anthradithiophenes by the vapor transport method, allowing us to probe deeper into their intrinsic properties and determine the potential and limitations of this promising family of oligomers for use in organic-based electronic devices. Co-Authors: O. D. Jurchescu^1,4, B. H. Hamadani^1, S. K. Park^4, D. A. Mourey^4, S. Subramanian^5, A. J. Moad^2, R. J. Kline^3, L. C. Teague^2, J. G. Kushmerick^2, L. J. Richter^2, T. N. Jackson^4, and J. E. Anthony^5 ^1Semiconductor Electronics Division, ^2Surface and Microanalysis Science Division, ^3Polymers Division, National Institute of Standards and Technology, Gaithersburg, MD 20899 ^4Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802 ^5Department of Chemistry, University of Kentucky, Lexington, KY 40506-0055

  16. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  17. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single axis mechanism can switch the antenna's optical axis by +/-1.5' within 10 ms or +/-5' within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
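
    The servo described is a digital PID loop. Below is a generic discrete PID sketch for context; the gains, timestep, and the first-order plant used to exercise it are invented for the example and have nothing to do with the ALMA controller's actual parameters.

```python
# Generic discrete PID controller sketch (illustrative gains only).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        """One control step: return the actuator command."""
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```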

  18. Workplace Learning of High Performance Sports Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Clifford J.; Tinning, Richard

    2010-01-01

    The Australian coaching workplace (to be referred to as the State Institute of Sport; SIS) under consideration in this study employs significant numbers of full-time performance sport coaches and can be accurately characterized as a genuine workplace. Through a consideration of the interaction between what the workplace (SIS) affords the…

  19. A Polymer Visualization System with Accurate Heating and Cooling Control and High-Speed Imaging

    PubMed Central

    Wong, Anson; Guo, Yanting; Park, Chul B.; Zhou, Nan Q.

    2015-01-01

    A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system’s capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals’ boundaries due to CO2 exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals. PMID:25915031

  20. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including simple sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles. PMID:26517180

  1. Accurate calculations of the high-pressure elastic constants based on the first-principles

    NASA Astrophysics Data System (ADS)

    Wang, Chen-Ju; Gu, Jian-Bing; Kuang, Xiao-Yu; Yang, Xiang-Dong

    2015-08-01

    The energy term corresponding to the first order of the strain in the Taylor series expansion of the energy with respect to strain is usually ignored when high-pressure elastic constants are calculated. Whether this practice affects the results has remained an open question. To clarify it, we calculate the high-pressure elastic constants of tantalum and rhenium with the energy term mentioned above considered and neglected, respectively. Results show that neglecting the energy term corresponding to the first order of the strain does affect the accuracy of the high-pressure elastic constants, and this influence becomes larger with increasing pressure. Therefore, the energy term corresponding to the first order of the strain should be considered when high-pressure elastic constants are calculated. Project supported by the National Natural Science Foundation of China (Grant No. 11274235), the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 11104190), and the Doctoral Education Fund of the Education Ministry of China (Grant Nos. 20100181110086 and 20110181120112).
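
    Schematically, the expansion at issue is the standard Taylor series of the strain energy in Voigt notation (this is the textbook form, not an equation reproduced from the paper):

```latex
% Taylor expansion of the energy of a strained crystal (Voigt notation).
E(V,\{\epsilon_i\}) = E(V_0)
  + V_0 \sum_{i=1}^{6} \tau_i\,\epsilon_i
  + \frac{V_0}{2} \sum_{i,j=1}^{6} C_{ij}\,\epsilon_i\,\epsilon_j
  + \mathcal{O}(\epsilon^3)
```

    Here the \(\tau_i\) are stress components. At zero pressure \(\tau_i = 0\) and the linear term drops out, but at finite pressure it does not, which is the term the abstract argues must be retained when fitting the \(C_{ij}\).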

  2. High-throughput Accurate-wavelength Lens-based Visible Spectrometer

    SciTech Connect

    Ronald E. Bell and Filippo Scotti

    2010-06-04

    A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm^-1 grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arcsec, corresponding to a wavelength error ≤ 0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
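
    The quoted numbers can be sanity-checked with the grating equation. Assuming a near-Littrow mount (an assumption on our part; the actual mount geometry is not stated in the abstract), a 0.075 arcsec angle error at a typical visible wavelength maps to a wavelength error of a few thousandths of an Ångström, consistent with the ≤ 0.005 Å figure:

```python
# Grating-equation sanity check: m*lambda = 2*d*sin(theta) in Littrow.
import math

GROOVES_PER_MM = 2160.0
D_MM = 1.0 / GROOVES_PER_MM  # groove spacing, mm

def littrow_wavelength_angstrom(theta_deg, order=1):
    """Wavelength (Angstrom) diffracted at grating angle theta in Littrow."""
    d_angstrom = D_MM * 1.0e7  # mm -> Angstrom
    return 2.0 * d_angstrom * math.sin(math.radians(theta_deg)) / order

def wavelength_error(theta_deg, dtheta_arcsec, order=1):
    """First-order propagation: d(lambda) = 2*d*cos(theta)*d(theta)/m."""
    d_angstrom = D_MM * 1.0e7
    dtheta_rad = math.radians(dtheta_arcsec / 3600.0)
    return 2.0 * d_angstrom * math.cos(math.radians(theta_deg)) * dtheta_rad / order
```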

  3. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The United States Department of Agriculture’s high-speed wind tunnel facility in College Station, Texas, USA was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds comparable to those used in aerial dispersant application. A laser particle size anal...

  4. Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    NASA Technical Reports Server (NTRS)

    Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.

    1991-01-01

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.

  5. Implementing an Inexpensive and Accurate Introductory Gas Density Activity with High School Students

    ERIC Educational Resources Information Center

    Cunningham, W. Patrick; Joseph, Christopher; Morey, Samantha; Santos Romo, Ana; Shope, Cullen; Strang, Jonathan; Yang, Kevin

    2015-01-01

    A simplified activity examined gas density while employing cost-efficient syringes in place of traditional glass bulbs. The exercise measured the density of methane, with very good accuracy and precision, in both first-year high school and AP chemistry settings. The participating students were tasked with finding the density of a gas. The…

  6. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far, numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the spatial discretization methods employed, which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities; even more important is the limited efficiency of the prevailing solver techniques, which are not sufficiently scalable to deal with the resulting increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  7. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far, numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the spatial discretization methods employed, which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities; even more important is the limited efficiency of the prevailing solver techniques, which are not sufficiently scalable to deal with the resulting increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  8. Accurate blackbodies

    NASA Astrophysics Data System (ADS)

    Latvakoski, Harri M.; Watson, Mike; Topham, Shane; Scott, Deron; Wojcik, Mike; Bingham, Gail

    2010-07-01

    Infrared radiometers and spectrometers generally use blackbodies for calibration, and with the high accuracy needs of upcoming missions, blackbodies capable of meeting strict accuracy requirements are needed. One such mission, the NASA climate science mission Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure Earth's emitted spectral radiance from orbit, has an absolute accuracy requirement of 0.1 K (3σ) at 220 K over most of the thermal infrared. Space Dynamics Laboratory (SDL) has a blackbody design capable of meeting strict modern accuracy requirements. This design is relatively simple to build, was developed for use on the ground or on orbit, and is readily scalable for aperture size and required performance. These high-accuracy blackbodies are currently in use as a ground calibration unit and with a high-altitude balloon instrument. SDL is currently building a prototype blackbody to demonstrate the ability to achieve very high accuracy, and we expect it to have emissivity of ~0.9999 from 1.5 to 50 μm, temperature uncertainties of ~25 mK, and radiance uncertainties of ~10 mK due to temperature gradients. The high emissivity and low thermal gradient uncertainties are achieved through cavity design, while the low temperature uncertainty is attained by including phase change materials such as mercury, gallium, and water in the blackbody. Blackbody temperature sensors are calibrated at the melt points of these materials, which are determined by heating through their melt point. This allows absolute temperature calibration traceable to the SI temperature scale.
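    The mK-level error budget can be put in perspective with Planck's law: at 220 K in the thermal infrared, a small temperature offset maps to a small fractional radiance change. A minimal sketch, not from the paper; the constants are standard CODATA values:

    ```python
    import math

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
        x = H * C / (wavelength_m * KB * temp_k)
        return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

    def relative_radiance_change(wavelength_m, temp_k, dt_k):
        """Fractional radiance change caused by a small temperature offset dt_k."""
        b0 = planck_radiance(wavelength_m, temp_k)
        return (planck_radiance(wavelength_m, temp_k + dt_k) - b0) / b0

    # A 25 mK uncertainty at 220 K and 10 um stays well under 0.1% in radiance.
    rel = relative_radiance_change(10e-6, 220.0, 0.025)
    ```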

  9. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    SciTech Connect

    Bell, Ronald E.

    2014-11-15

    A high-throughput spectrometer for the 400–820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm{sup −1} grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  10. Development and operation of a high-throughput accurate-wavelength lens-based spectrometera)

    DOE PAGESBeta

    Bell, Ronald E.

    2014-07-11

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤ 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  11. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    NASA Astrophysics Data System (ADS)

    Bell, Ronald E.

    2014-11-01

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  12. Development and Operation of High-throughput Accurate-wavelength Lens-based Spectrometer

    SciTech Connect

    Bell, Ronald E

    2014-07-01

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy < 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  13. Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction

    NASA Astrophysics Data System (ADS)

    Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.

    2015-12-01

    The paper describes measuring operations for determining the geometric characteristics of objects in space and for geodetic surveying of objects on the ground. In the course of the work, data were obtained on the relative positioning of the pylons in space, showing deviations from verticality. Compared with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a high-quality CAD model of the object for subsequent analysis, which is more economically advantageous.

  14. An experimental device for accurate ultrasounds measurements in liquid foods at high pressure

    NASA Astrophysics Data System (ADS)

    Hidalgo-Baltasar, E.; Taravillo, M.; Baonza, V. G.; Sanz, P. D.; Guignon, B.

    2012-12-01

    The use of high hydrostatic pressure to ensure safe and high-quality products has markedly increased in the food industry during the last decade. Ultrasonic sensors can be employed to control such processes in the same way as they are currently used in processes carried out at ambient pressure. However, their installation, calibration and use are particularly challenging in the context of a high pressure environment. Moreover, data on the acoustic properties of foods under pressure, and even of water, are quite scarce in the pressure range of interest for food treatment (namely, above 200 MPa). The objective of this work was to establish a methodology to determine the speed of sound in foods under pressure. An ultrasonic sensor using the multiple reflections method was adapted to lab-scale HHP equipment to determine the speed of sound in water between 253.15 and 348.15 K, and at pressures up to 700 MPa. The experimental speed-of-sound data were compared to the data calculated from the equation of state of water (IAPWS-95 formulation). From this analysis, the cell-path calibration method was validated. After this calibration procedure, the speed of sound could be determined in liquid foods by using this sensor with a relative uncertainty between (0.22 and 0.32) % at a confidence level of 95 % over the whole pressure domain.
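    The calibration logic described above, using water's known speed of sound to fix the acoustic path length and then inverting the relation for unknown liquids, reduces to the multiple-reflections relation c = 2L/Δt. A hypothetical sketch (function names are illustrative, not from the paper):

    ```python
    def sound_speed(path_length_m, echo_interval_s):
        """Speed of sound from the time between successive echoes: c = 2L / dt."""
        return 2.0 * path_length_m / echo_interval_s

    def calibrate_path_length(reference_speed_m_s, echo_interval_s):
        """Fix the cell path length L from a reference fluid of known sound speed
        (e.g. water via the IAPWS-95 equation of state)."""
        return reference_speed_m_s * echo_interval_s / 2.0
    ```

    In use, the cell is first calibrated against water at a known state point, after which the same path length converts measured echo intervals in a food sample into sound speeds.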

  15. High-throughput baggage scanning employing x-ray diffraction for accurate explosives detection

    NASA Astrophysics Data System (ADS)

    Green, Michael C.; Partain, Larry D.

    2003-07-01

    X-ray systems dominate the installed base of airport baggage scanning systems for explosives detection. The majority are conveyer systems with projection line scanners. These systems can achieve a high throughput but exhibit a high false positive rate and require significant operator involvement. Systems employing computed tomography (CT) are currently being installed at a rapid rate. These can provide good discrimination of levels of x-ray absorption coefficient and can largely circumvent superimposition effects. Nonetheless, CT measures only the x-ray absorption coefficient per voxel, which does not provide a means of specific material identification, resulting in many false positives; moreover, it is relatively straightforward to configure explosive materials so that they are undetectable by CT systems. Diffraction-based x-ray systems present a solution to this problem. They detect and measure atomic layer spacings in crystalline and microcrystalline materials with high sensitivity. This provides a means of specific material identification. The majority of explosive compounds are well crystallized solids at room temperature. X-ray diffraction systems using both conventional wavelength-dispersive diffraction and fixed-angle, multi-wavelength diffraction for improved throughput are described. Large-area, flat-panel x-ray detector technology coupled with an extended x-ray source will permit a full 3D volumetric x-ray diffraction scan of a bag in a single pass (patent pending).

  16. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, so the maintenance demands on LRFs are also rising. According to the guiding principle of simulating spatial distance by time delay in testing pulsed range finders, the distance simulation precision depends on the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests of high-precision pulsed range finders.
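    The "time stands for distance" principle is just the round-trip light travel time. A minimal sketch (names illustrative) showing why ±0.15 m of simulated-distance accuracy corresponds to delay control at the ~1 ns level:

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def delay_for_distance(distance_m):
        """Delay that makes the range finder read a target at distance_m."""
        return 2.0 * distance_m / C

    def distance_error_from_delay(delay_error_s):
        """Simulated-distance error produced by a given delay error."""
        return C * delay_error_s / 2.0
    ```

    A 1 ns delay error maps to about 0.15 m of simulated distance, so sub-nanosecond delay compensation is what pushes the accuracy from ±0.75 m to ±0.15 m.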

  17. High-throughput accurate-wavelength lens-based visible spectrometer.

    PubMed

    Bell, Ronald E; Scotti, Filippo

    2010-10-01

    A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm(-1) grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arcsec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration. PMID:21033924

  18. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor to control the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol and we went another step further using a blind test with a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the residue of the wild-type peptide was to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high resolution of X-ray crystallography, which verified predicted conformational changes. Our observation strongly supports a positive association of the surface morphology of a peptide–MHC complex to its immunogenicity. Our study offers the prospect of enhancing immunogenicity of vaccines by identifying MHC binding immunogens.

  19. Kashima RAy-Tracing Service (KARATS) for high accurate GNSS positioning

    NASA Astrophysics Data System (ADS)

    Ichikawa, R.; Hobiger, T.; Hasegawa, S.; Tsutsumi, M.; Koyama, Y.; Kondo, T.

    2010-12-01

    Radio signal delays associated with the neutral atmosphere are one of the major error sources of space geodesy techniques such as GPS, GLONASS, GALILEO, VLBI, and In-SAR measurements. We have developed a state-of-the-art tool to estimate the atmospheric path delays by ray-tracing through the JMA meso-scale analysis (MANAL) data. The tools, which we have named 'KAshima RAytracing Tools (KARAT)', are capable of calculating total slant delays and ray-bending angles considering real atmospheric phenomena. Numerical weather models such as the MANAL data have undergone a significant improvement in accuracy and spatial resolution, which makes it feasible to utilize them for the correction of atmospheric excess path delays. In previous studies evaluating KARAT performance, the KARAT solutions were slightly better than the solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Based on these results, we started the web-based online service 'KAshima RAytracing Service (KARATS)', which provides atmospheric delay correction of RINEX files, on January 27th, 2010. KARATS receives users' RINEX data via a web site (http://vps.nict.go.jp/karats/index.html) and processes the data files using KARAT to reduce atmospheric slant delays. The reduced RINEX files are archived in a specific directory for each user on the KARATS server. Once processing is finished, the archive information is sent privately via email to each user. Users who want to process a large number of data files can instead prepare their own server to archive them; KARATS then retrieves these files from the user's server using GNU wget and performs the ray-traced corrections. We will present a brief status of KARATS and summarize first experiences gained after this service went operational in December 2009. In addition, we will also demonstrate the newest KARAT performance based on the 5 km MANAL data, which has been operational since April 7th, 2009, and an outlook on

  20. Towards high accurate neutron-induced fission cross sections of 240,242Pu: Spontaneous fission half-lives

    NASA Astrophysics Data System (ADS)

    Salvador-Castiñeira, P.; Bryś, T.; Eykens, R.; Hambsch, F.-J.; Moens, A.; Oberstedt, S.; Pretel, C.; Sibbens, G.; Vanleeuw, D.; Vidali, M.

    2013-12-01

    Fast-spectrum neutron-induced fission cross sections of transuranic isotopes are in particular demand in order to provide accurate data for the new GEN-IV nuclear power plants. To minimize the uncertainties of these measurements, accurate data on spontaneous fission half-lives and detector efficiencies are key. Highly α-active actinides need special attention, since the misinterpretation of detector signals can lead to low efficiency values or underestimation in fission fragment detection. In that context, the 240,242Pu isotopes have been studied by means of a Twin Frisch-Grid Ionization Chamber (TFGIC) for measurements of their neutron-induced fission cross sections. Gases with different drift velocities have been used, namely P10 and CH4. The detector efficiencies for both samples have been determined, and improved spontaneous fission half-life values were obtained.

  1. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation that is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
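    The BASH-table idea, binning where only occupied bins are stored in a hash table, can be sketched in a few lines. This is an illustrative reimplementation in Python, not the authors' C++ code:

    ```python
    from collections import defaultdict

    def bash_table(points, bin_width):
        """Sparse multidimensional histogram: only occupied bins are stored,
        so memory scales with the number of data points, not exponentially
        with the number of dimensions."""
        table = defaultdict(int)
        for p in points:
            key = tuple(int(x // bin_width) for x in p)  # bin index per dimension
            table[key] += 1
        return table

    def density(table, point, bin_width, n_total):
        """Histogram density estimate at a query point (0 for empty bins)."""
        key = tuple(int(x // bin_width) for x in point)
        volume = bin_width ** len(point)
        return table.get(key, 0) / (n_total * volume)

    pts = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.9)]
    tab = bash_table(pts, 0.5)
    ```

    Lookups and inserts are O(1) on average, and the table holds one entry per occupied bin regardless of dimensionality, which is the memory saving the abstract describes.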

  2. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Britt, Darrell Steven, Jr.

    Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is

  3. High-order accurate difference schemes for solving gasdynamic equations by the Godunov method with antidiffusion

    NASA Astrophysics Data System (ADS)

    Moiseev, N. Ya.; Silant'eva, I. Yu.

    2009-05-01

    A technique is proposed for improving the accuracy of the Godunov method as applied to gasdynamic simulations in one dimension. The underlying idea is the reconstruction of fluxes across cell boundaries (“large” values) by using antidiffusion corrections, which are obtained by analyzing the differential approximation of the schemes. In contrast to other approaches, the reconstructed values are not the initial data but rather large values calculated by solving the Riemann problem. The approach is efficient and yields higher-accuracy difference schemes with high resolution.

  4. Rapid, high-order accurate calculation of flows due to free source or vortex distributions

    NASA Technical Reports Server (NTRS)

    Halsey, D.

    1981-01-01

    Fast Fourier transform (FFT) techniques are applied to the problem of finding the flow due to source or vortex distributions in the field outside an airfoil or other two-dimensional body. Either the complex potential or the complex velocity may be obtained to a high order of accuracy, with computational effort similar to that required by second-order fast Poisson solvers. These techniques are applicable to general flow problems with compressibility and rotation. An example is given of their use for inviscid compressible flow.

  5. Accurate Mass Searching of Individual Lipid Species Candidate from High-resolution Mass Spectra for Shotgun Lipidomics

    PubMed Central

    Wang, Miao; Huang, Yingying; Han, Xianlin

    2014-01-01

    RATIONALE With the increased mass accuracy and resolution in commercialized mass spectrometers, new development on shotgun lipidomics could be expected with increased speed, dynamic range, and coverage over lipid species and classes. However, we found that the major issue in using high mass accuracy/resolution instruments to search lipid species is the partial overlap between the isotopologue of a species M containing two 13C atoms (i.e., the M+2 isotopologue) and the ion of a species with one fewer double bond than M (assigned here as L). This partial overlap alone could shift the mass of the species L toward the lower mass end by up to 12 ppm around m/z 750, as well as significantly broaden its peak. METHODS We developed an approach for accurate mass searching by exploring one of the major features of shotgun lipidomics data: that lipid species of a class are present in ion clusters where neighboring masses from different species differ by one or a few double bonds. In the approach, a mass-searching window of 18 ppm (from −15 to 3 ppm) was first searched for an entire group of species of a lipid class. Then accurate mass searching of the +1 13C isotopologue of individual species was used to eliminate potential false positives. RESULTS The approach was extensively validated through comparison with the species determined by the multi-dimensional MS-based shotgun lipidomics platform. The newly developed strategy of accurate mass search enables identifying the overlapped L species and acquiring the corresponding peak intensities. CONCLUSIONS We believe that this novel approach could substantially broaden the applications of high-mass-accuracy/resolution mass spectrometry for shotgun lipidomics. PMID:25178724
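    The two-pass search described above (an asymmetric ppm window followed by a +1 13C isotopologue check) can be sketched as follows. This is an illustrative reconstruction of the strategy, not the authors' software; peak values and tolerances are hypothetical:

    ```python
    C13_C12_DELTA = 1.003355  # mass difference between 13C and 12C, Da

    def ppm_error(observed_mz, theoretical_mz):
        """Signed mass error in parts per million."""
        return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

    def match_species(peaks, theoretical_mz, low_ppm=-15.0, high_ppm=3.0):
        """First pass: asymmetric 18 ppm window that tolerates the low-mass
        shift caused by the partially overlapping M+2 isotopologue."""
        return [mz for mz in peaks
                if low_ppm <= ppm_error(mz, theoretical_mz) <= high_ppm]

    def confirm_with_m1(peaks, candidate_mz, tol_ppm=5.0):
        """Second pass: require the +1 13C isotopologue to be present."""
        m1 = candidate_mz + C13_C12_DELTA
        return any(abs(ppm_error(mz, m1)) <= tol_ppm for mz in peaks)

    peaks = [750.544, 751.547]          # hypothetical peak list
    hits = match_species(peaks, 750.55)  # candidate L species near m/z 750
    ```

    A candidate is accepted only if both passes succeed, which is what removes false positives introduced by the widened first-pass window.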

  6. The S-model: A highly accurate MOST model for CAD

    NASA Astrophysics Data System (ADS)

    Satter, J. H.

    1986-09-01

    A new MOST model which combines simplicity and a logical structure with a high accuracy of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is in the first place intended for digital purposes. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited for automatic measurement. A simple linear regression procedure programmed in the computer, which controls the measurements, produces the parameter values.

  7. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations, and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas including obtaining MHD spectra from time-domain simulations.

  8. High-Performance Phylogeny Reconstruction

    SciTech Connect

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
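    The combinatorial explosion mentioned above is easy to reproduce: the number of distinct unrooted binary trees on n labeled taxa is the double factorial (2n - 5)!!.

```python
def num_unrooted_trees(n: int) -> int:
    """Number of unrooted binary trees on n labeled taxa: (2n - 5)!!."""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # product 3 * 5 * ... * (2n - 5)
        count *= k
    return count

print(num_unrooted_trees(13))  # 13749310575, i.e. over 13 billion
```

Each added taxon multiplies the count by the next odd number, which is why exhaustive search becomes hopeless beyond a couple dozen taxa and heuristic search dominates in practice.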

  9. Highly accurate servo control of reference beam angle in holographic memory with polarized servo beam

    NASA Astrophysics Data System (ADS)

    Hosaka, Makoto; Ogata, Takeshi; Yamada, Kenichiro; Yamazaki, Kazuyoshi; Shimada, Kenichi

    2016-09-01

    We propose a new servo technique for controlling the reference beam angle in angular-multiplexing holographic memory to attain higher-capacity and higher-speed data archiving. An orthogonally polarized beam with an incident angle slightly different from that of the reference beam is newly introduced into the optics. The control signal for the servo is generated as the difference between the intensities of the light diffracted from a hologram by these two beams. The incident-angle difference between the two beams at the medium was optimized so that sufficient control-signal properties were obtained. The high accuracy of the control signal, with an angle error lower than 1.5 mdeg, was confirmed in both simulations and experiments.

  10. High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells

    PubMed Central

    Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C

    2015-01-01

    Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1–2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1–2+ MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared effector functions of cytolytic effector CD8+ T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8+ subsets, we demonstrated that high expression of CD26 on CD8+ TRAV1–2+ cells identified bacteria-reactive MR1-restricted T cells from human blood with high specificity and sensitivity. CD161hi was also specific for, but lacked the sensitivity to identify, all bacteria-reactive MR1-restricted T cells, some of which were CD161dim. Using cell surface expression of CD8, TRAV1–2, and CD26hi in the absence of stimulation, we confirmed that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis. PMID:25752900

  11. A new direct absorption measurement for high precision and accurate measurement of water vapor in the UT/LS

    NASA Astrophysics Data System (ADS)

    Sargent, M. R.; Sayres, D. S.; Smith, J. B.; Anderson, J.

    2011-12-01

    Highly accurate and precise water vapor measurements in the upper troposphere and lower stratosphere (UT/LS) are critical to understanding the climate feedbacks of water vapor and clouds in that region. However, the continued disagreements among water vapor measurements (~1-2 ppmv) are too large to constrain the roles of the different hydration and dehydration mechanisms operating in the UT/LS, with model validation dependent upon which dataset is chosen. In response to these issues, we present a new instrument for the measurement of water vapor in the UT/LS that was flown during the April 2011 MACPEX mission out of Houston, TX. The dual-axis instrument combines the heritage and validated accuracy of the Harvard Lyman-alpha instrument with a newly designed direct IR absorption instrument, the Harvard Herriott Hygrometer (HHH). The Lyman-alpha detection axis has flown aboard NASA's WB-57 and ER-2 aircraft since 1994, and provides a requisite link between the new HHH instrument and the long history of Harvard water vapor measurements. The instrument utilizes the highly sensitive Lyman-alpha photo-fragment fluorescence detection method; its accuracy has been demonstrated through rigorous laboratory calibrations and in situ diagnostic procedures. The Harvard Herriott Hygrometer employs a fiber-coupled near-IR laser with state-of-the-art electronics to measure water vapor via direct absorption in a spherical Herriott cell of 10 cm length. The instrument demonstrated in-flight precision of 0.1 ppmv (1-sec, 1-sigma) at mixing ratios as low as 5 ppmv, with accuracies of 10% based on careful laboratory calibrations and in-flight performance. We present a description of the measurement technique, along with our methodology for calibration and details of the measurement uncertainties. The simultaneous utilization of radically different measurement techniques in a single duct in the new Harvard Water Vapor (HWV) instrument allows for the constraint of systematic errors inherent in each technique.
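    As background on the direct-absorption technique, the retrieval follows the Beer-Lambert law. All numbers below (detector signals, cross section, effective path length) are invented for illustration and are not the HHH instrument's values.

```python
import math

# Beer-Lambert retrieval sketch: infer water number density from the
# fractional absorption measured on and off the absorption line.
I0 = 1.000        # detector signal off the line (arbitrary units)
I = 0.998         # detector signal on the line
sigma = 2.0e-20   # assumed line-center absorption cross section, cm^2
L = 1000.0        # assumed effective multipass path length, cm

# I/I0 = exp(-sigma * n * L)  =>  n = ln(I0/I) / (sigma * L)
n_h2o = math.log(I0 / I) / (sigma * L)   # molecules per cm^3
print(f"n(H2O) ≈ {n_h2o:.3e} cm^-3")
```

The multipass Herriott geometry matters because it is the long effective path L, not the 10 cm cell itself, that makes the tiny fractional absorption at ppmv mixing ratios measurable.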

  12. High Performance Torso Cooling Garment

    NASA Technical Reports Server (NTRS)

    Conger, Bruce; Makinen, Janice

    2016-01-01

    The concept proposed in this paper is to improve thermal efficiencies of the liquid cooling and ventilation garment (LCVG) in the torso area, which could facilitate removal of LCVG tubing from the arms and legs, thereby increasing suited crew member mobility. EVA space suit mobility in micro-gravity is challenging, and it becomes even more challenging in the gravity of Mars. By using shaped water tubes that greatly increase the contact area with the skin in the torso region of the body, the heat transfer efficiency can be increased. This increase in efficiency could provide the required liquid cooling via torso tubing only; no arm or leg LCVG tubing would be required. Benefits of this approach include increased crewmember mobility, enhanced evaporation cooling, increased comfort during Mars EVA tasks, and easing of the overly dry condition in the helmet associated with the Advanced Extravehicular Mobility Unit (EMU) ventilation loop currently under development. This report describes analysis and test activities performed to evaluate the potential improvements to the thermal performance of the LCVG. Analyses evaluated potential tube shapes for improving the thermal performance of the LCVG. The analysis results fed into the selection of flat flow strips to improve thermal contact with the skin of the suited test subject. Testing of small segments was performed to compare the thermal performance of the tubing approach of the current LCVG to the flat flow strips proposed as the new concept. Results of the testing are presented along with recommendations for future development of this new concept.

  13. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data

    PubMed Central

    White, Amelia G.; Cipriani, Patricia G.; Kao, Huey-Ling; Lees, Brandon; Geiger, Davi; Sontag, Eduardo; Gunsalus, Kristin C.; Piano, Fabio

    2011-01-01

    We present a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The object recognition machine consists of four hierarchical layers, each composed of units upon which evaluation functions output a label score, followed by a grouping mechanism that resolves ambiguities in the score by imposing local consistency constraints. Each layer then outputs groups of units, from which the units of the next layer are derived. Using this hierarchical principle, the machine builds up successively more sophisticated representations of the objects to be classified. The algorithm segments large and small objects, decomposes objects into parts, extracts features from these parts, and classifies them by SVM. We are using this system to analyze phenotypic data from C. elegans high-throughput genetic screens, and our system overcomes a previous bottleneck in image analysis by achieving near real-time scoring of image data. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users. PMID:22053146

  14. Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1991-01-01

    A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.

  15. Material response mechanisms are needed to obtain highly accurate experimental shock wave data

    NASA Astrophysics Data System (ADS)

    Forbes, Jerry

    2015-06-01

    The field of shock wave compression of matter has provided a simple set of Rankine-Hugoniot (R-H) equations relating thermodynamic and kinematic parameters that describe the conservation of mass, momentum, and energy across a steady shock wave with one-dimensional flow. Well-known condensed matter shock wave experimental results will be reviewed to see whether the assumptions required for deriving these simple R-H equations are met. Note that a material compression model is not required for deriving the 1-D conservation flow equations across a steady shock front. However, this statement is misleading from a practical experimental viewpoint, since obtaining small systematic errors in shock-wave-measured parameters requires the material compression and release mechanisms to be known. A brief review will be presented of systematic errors in shock wave data from common experimental techniques for fluids, elastic-plastic solids, materials with negative-volume phase transitions, glass and ceramic materials, and high explosives. Issues related to the time scales of experiments and quasi-steady flow will also be presented.
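    For reference, the steady one-dimensional jump conditions usually abbreviated as the R-H equations can be written, in standard shock-physics notation, for a shock of speed $U_s$ running into material at rest (state 0) with particle velocity $u_p$ behind the front (state 1):

```latex
\begin{align}
  \rho_0 U_s &= \rho_1 (U_s - u_p) && \text{(mass)}\\
  P_1 - P_0 &= \rho_0 U_s u_p && \text{(momentum)}\\
  E_1 - E_0 &= \tfrac{1}{2}(P_1 + P_0)(V_0 - V_1) && \text{(energy)}, \qquad V = 1/\rho
\end{align}
```

These three relations contain five unknowns ($\rho_1$, $P_1$, $E_1$, $U_s$, $u_p$), which is why two measured quantities plus the stated steady, one-dimensional assumptions close the system.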

  16. Obtaining Accurate Change Detection Results from High-Resolution Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Bryant, N.; Bunch, W.; Fretz, R.; Kim, P.; Logan, T.; Smyth, M.; Zobrist, A.

    2012-01-01

    Multi-date acquisitions from high-resolution imaging satellites (e.g. GeoEye and WorldView) can display local changes of current economic interest. However, their large data volume precludes effective manual analysis, requiring image co-registration followed by image-to-image change detection, preferably with minimal analyst attention. We have recently developed an automatic change detection procedure that minimizes false positives. The processing steps include: (a) Conversion of both the pre- and post-images to reflectance values (this step is of critical importance when different sensors are involved); reflectance values can be either top-of-atmosphere units or have full aerosol optical depth calibration applied using bi-directional reflectance knowledge. (b) Panchromatic band image-to-image co-registration, using an orthorectified base reference image (e.g. Digital Orthophoto Quadrangle) and a digital elevation model; this step can be improved if a stereo pair of images has been acquired on one of the image dates. (c) Pan-sharpening of the multispectral data to assure recognition of change objects at the highest resolution. (d) Characterization of the multispectral data in the post-image (i.e., the background) using unsupervised cluster analysis. (e) Band ratio selection in the post-image to separate surface materials of interest from the background. (f) Preparing a pre-to-post change image. (g) Identifying locations where change has occurred involving materials of interest.
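    A minimal numerical sketch of steps (e)-(g), with assumed band indices, synthetic reflectances, and an arbitrary threshold (none of which come from the procedure described above):

```python
import numpy as np

# Synthetic 4-band reflectance images; band 0 = blue, band 3 = NIR (assumed).
rng = np.random.default_rng(1)
pre = rng.uniform(0.05, 0.40, size=(64, 64, 4))
post = pre.copy()
post[20:30, 20:30, 3] += 0.3          # simulate a new bright-NIR material

# (e) band ratio to separate the material of interest from background
ratio_pre = pre[:, :, 3] / pre[:, :, 0]
ratio_post = post[:, :, 3] / post[:, :, 0]

# (f) pre-to-post change image; (g) threshold to flag changed locations
change = np.abs(ratio_post - ratio_pre) > 0.5   # arbitrary threshold
print(change.sum(), "changed pixels")           # flags the 10x10 patch
```

Because the unchanged background produces an identically zero ratio difference here, every flagged pixel lies in the simulated patch; real imagery needs the co-registration and reflectance-calibration steps (a)-(b) precisely to keep that background difference small.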

  17. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as working medium and have no moving parts which makes the thermoacoustic technology a serious alternative to produce mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine is designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine is presented. The engine has no moving parts and is made up of few simple components.

  18. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  19. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  20. Accurate glass forming for high-temperature solar applications. Final report

    SciTech Connect

    1980-10-01

    Development work was undertaken to thermally form glass for solar concentrators. Sagging and pressing glass to parabolic shapes was investigated, with the goal of achieving slope errors less than 2.0 mr RMS and costs of $1.25/ft². In addition, a laminating process was investigated to overcome the problem of silvering a curved surface and to reduce corrosion of the silver. Thermal sagging is a process in which glass is shaped by heating it until it is sufficiently soft to deform under its own weight and conform to a mold. For cylindrical parabolic shapes, a method for producing low-cost, high-accuracy molds was developed using castable ceramics and a grinder. Thermal conditions were established for a commercial glass bending furnace to obtain good replication of the mold. The accuracy and cost goals were met for glass sizes up to 30 x 30 x 0.125 inches and for low-iron and regular-iron float and sheet glasses. Lamination of two curved pieces of glass using automotive technology was investigated. A silver film was placed between two layers of polyvinyl butyral (PVB), and this was used to bond two sheets of glass. Economically and technically, the process appears feasible. However, the non-uniform thickness of the PVB causes distortion in the reflected image. More work is needed to assess the accuracy of curved laminated composites. Thermal pressing of glass is accomplished by heating the glass until it is soft and mechanically stamping the shape. Equipment was built and operated to determine the important parameters in pressing. Control of thermal stresses in the glass is critical to preventing cracks. No glass pieces were produced without cracks.

  1. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  2. High-Performance Composite Chocolate

    ERIC Educational Resources Information Center

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  3. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  4. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  5. High-Performance Schools Make Cents.

    ERIC Educational Resources Information Center

    Nielsen-Palacios, Christian

    2003-01-01

    Describes the educational benefits of high-performance schools, buildings that are efficient, healthy, safe, and easy to operate and maintain. Also briefly describes how to create a high-performance school drawn from volume I (Planning) of the three-volume Collaborative for High Performance Schools (CHPS) "Best Practices Manual." (For more…

  6. High performance, high density hydrocarbon fuels

    NASA Technical Reports Server (NTRS)

    Frankenfeld, J. W.; Hastings, T. W.; Lieberman, M.; Taylor, W. F.

    1978-01-01

    The fuels were selected from 77 original candidates on the basis of estimated merit index and cost effectiveness. The ten candidates consisted of 3 pure compounds, 4 chemical plant streams, and 3 refinery streams. Critical physical and chemical properties of the candidate fuels were measured, including heat of combustion, density and viscosity as a function of temperature, freezing point, vapor pressure, boiling point, and thermal stability. The best all-around candidate was found to be a chemical plant olefin stream rich in dicyclopentadiene. This material has a high merit index and is available at low cost. Possible problem areas were identified as low-temperature flow properties and thermal stability. An economic analysis was carried out to determine the production costs of the top candidates. The chemical plant and refinery streams were all less than 44 cents/kg, while the pure compounds were greater than 44 cents/kg. A literature survey was conducted on the state of the art of advanced hydrocarbon fuel technology as applied to high-energy propellants. Several areas for additional research were identified.

  7. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require much space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, and is limited in the number of channels, often with channel crosstalk, and intermittent as the shaft rotates. Non-real-time digital communication is very popular, but complex, with long development time, and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, times the bit resolution, times the update rate, gives a data rate higher than available by older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provides instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to

  8. High-Performance Miniature Hygrometer

    NASA Technical Reports Server (NTRS)

    Van Zandt, Thomas R.; Kaiser, William J.; Kenny, Thomas W.; Crisp, David

    1994-01-01

    Relatively inexpensive hygrometer that occupies volume less than 4 in.³ measures dewpoints as much as 100 degrees C below ambient temperatures, with accuracy of 0.1 degrees C. Field tests indicate accuracy and repeatability identical to those of state-of-the-art larger dewpoint hygrometers. Operates up to 100 times as fast as older hygrometers, and offers simplicity and small size needed to meet cost and performance requirements of many applications.

  9. High performance fibers. Final report

    SciTech Connect

    Economy, J.

    1994-01-01

    A two-and-a-half-year ONR/ARPA funded program to develop a low-cost process for the manufacture of a high-strength/high-modulus (high σ/E) boron nitride (BN) fiber was initiated on 7/1/90 and ended on 12/31/92. The preparation of high σ/E BN fibers had been demonstrated in the late 1960s by the PI using batch nitriding of B2O3 fiber with NH3, followed by stress graphitization at approximately 2000 deg C. Such fibers displayed σ/E values comparable to PAN-based carbon fibers, but the mechanicals were variable, most likely because of redeposition of volatiles at 2000 deg C. In addition, the cost of the fibers was very high due to the many hours of nitriding necessary to convert the B2O3 fibers. The use of batch nitriding negated two possible cost advantages of this concept, namely the ease of drawing very fine multi-filament yarn of B2O3 and, more importantly, the very low cost of the starting materials.

  10. Accurate and robust registration of high-speed railway viaduct point clouds using closing conditions and external geometric constraints

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Song, Mengxiao; Guan, Haiyan; Yu, Yongtao

    2015-08-01

    This paper proposes an automatic method for registering multiple laser scans without a control network. The proposed registration method first uses artificial targets to pair-wise register adjacent scans for initial transformation estimates; the proposed registration method then employs combined adjustments with closing conditions and external triangle constraints to globally register all scans along a long-range, high-speed railway corridor. The proposed registration method uses (1) closing conditions to eliminate registration errors that gradually accumulate as the length of a corridor (the number of scan stations) increases, and (2) external geometric constraints to ensure the shape correctness of an elongated high-speed railway. A 640-m high-speed railway viaduct with twenty-one piers is used to conduct experiments using our proposed registration method. A group of comparative experiments is undertaken to evaluate the robustness and efficiency of the proposed registration method to accurately register long-range corridors.

  11. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
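    The distinction between the averaging techniques can be illustrated with a toy rake of total-pressure probes; the numbers are invented, and work averaging (which weights by enthalpy flux) is omitted for brevity.

```python
import numpy as np

# Area- vs mass-averaging a rake of total-pressure probes.
area = np.array([1.0, 1.0, 1.0, 1.0])             # probe capture areas, cm^2
rho_u = np.array([0.8, 1.0, 1.1, 0.9])            # local mass flux, kg/(m^2 s)
p_total = np.array([101.0, 104.0, 106.0, 103.0])  # total pressure, kPa

# Area averaging weights each probe by its capture area alone;
# mass averaging weights by the mass flux passing each probe.
p_area = np.sum(p_total * area) / np.sum(area)
p_mass = np.sum(p_total * rho_u * area) / np.sum(rho_u * area)
print(f"area-averaged {p_area:.2f} kPa, mass-averaged {p_mass:.2f} kPa")
```

With a nearly uniform flux profile the two averages differ by a fraction of a percent, consistent with the paper's finding that the choice matters less for the averaged value than for its uncertainty.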

  12. Accurate calculation and assignment of highly excited vibrational levels of floppy triatomic molecules in a basis of adiabatic vibrational eigenstates

    NASA Astrophysics Data System (ADS)

    Bačić, Z.

    1991-09-01

    We show that the triatomic adiabatic vibrational eigenstates (AVES) provide a convenient basis for accurate discrete variable representation (DVR) calculation and automatic assignment of highly excited, large amplitude motion vibrational states of floppy triatomic molecules. The DVR-AVES states are eigenvectors of the diagonal (in the stretch states) blocks of the adiabatically rearranged triatomic DVR-ray eigenvector (DVR-REV) Hamiltonian [J. C. Light and Z. Bačić, J. Chem. Phys. 87, 4008 (1987)]. The transformation of the full triatomic vibrational Hamiltonian from the DVR-REV basis to the new DVR-AVES basis is simple, and does not involve calculation of any new matrix elements. No dynamical approximation is made in the energy level calculation by the DVR-AVES approach; its accuracy and efficiency are identical to those of the DVR-REV method. The DVR-AVES states, as the adiabatic approximation to the vibrational states of a triatomic molecule, are labeled by three vibrational quantum numbers. Consequently, accurate large amplitude motion vibrational levels obtained by diagonalizing the full vibrational Hamiltonian transformed to the DVR-AVES basis, can be assigned automatically by the code, with the three quantum numbers of the dominant DVR-AVES state associated with the largest (by modulus) eigenvector element in the DVR-AVES basis. The DVR-AVES approach is used to calculate accurate highly excited localized and delocalized vibrational levels of HCN/HNC and LiCN/LiNC. A significant fraction of localized states of both systems, below and above the isomerization barrier, is assigned automatically, without inspection of wave function plots or separate approximate calculations.

  13. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
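
    The four metrics reported above all derive from a 2×2 confusion matrix. A minimal, self-contained sketch with made-up counts (not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
        return {
            "sensitivity": tp / (tp + fn),  # fraction of true cases detected
            "specificity": tn / (tn + fp),  # fraction of non-cases correctly negative
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
        }

    # Illustrative counts only, chosen to echo the reported 92%/95% figures.
    m = diagnostic_metrics(tp=92, fp=5, fn=8, tn=95)
    for name, value in m.items():
        print(f"{name}: {value:.2%}")
    ```

    Note that while sensitivity and specificity are properties of the assay, PPV and NPV also depend on disease prevalence in the cohort, so these invented counts need not reproduce the study's 94% predictive values.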

  14. High performance rotational vibration isolator

    NASA Astrophysics Data System (ADS)

    Sunderland, Andrew; Blair, David G.; Ju, Li; Golden, Howard; Torres, Francis; Chen, Xu; Lockwood, Ray; Wolfgram, Peter

    2013-10-01

    We present a new rotational vibration isolator with an extremely low resonant frequency of 0.055 ± 0.002 Hz. The isolator consists of two concentric spheres separated by a layer of water and joined by very soft silicone springs. The isolator reduces rotation noise at all frequencies above its resonance, which is very important for airborne mineral detection. We show that more than 40 dB of isolation is achieved in a helicopter survey for rotations at frequencies between 2 Hz and 20 Hz. Issues affecting performance, such as translation-to-rotation coupling and temperature, are discussed. The isolator contains almost no metal, making it particularly suitable for electromagnetic sensors.
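
    As a rough sanity check on why such a low resonance matters: for an ideal undamped second-order isolator, transmissibility above resonance falls off as roughly (f0/f)². The sketch below uses that textbook model only; it ignores the damping and coupling effects the paper discusses, so it predicts more isolation than the measured >40 dB:

    ```python
    import math

    f0 = 0.055  # isolator resonant frequency in Hz (from the paper)

    def isolation_db(f, f0=f0):
        """Isolation in dB (positive = attenuation) from the ideal undamped
        transmissibility |1 / (1 - (f/f0)^2)|. Meaningful above resonance."""
        t = abs(1.0 / (1.0 - (f / f0) ** 2))
        return -20.0 * math.log10(t)

    for f in (2.0, 20.0):
        print(f"{f:5.1f} Hz: {isolation_db(f):.0f} dB of ideal isolation")
    ```

    At 2 Hz the ideal model predicts roughly 60 dB, comfortably above the 40 dB demonstrated in flight, which is consistent with real-world losses from damping and translation-to-rotation coupling.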

  15. A high performance magnetoplasmadynamic thruster

    NASA Technical Reports Server (NTRS)

    Wolff, M.; Kelly, A. J.; Jahn, R. G.

    1984-01-01

    A flared-anode MPD thruster has been modified to permit injection of propellant through the backplate near the anode wall. At 6 g/sec argon, this thruster displays an onset current of 41.4 kA, almost double the value observed for propellant injection at the cathode and intermediate radial positions. A magnetic field survey of the interelectrode region shows current density is highest at the upstream and downstream ends of the chamber. The operating efficiency at onset current inferred from magnetic field data exceeds 50 percent, but swinging-gate thrust stand measurements reveal a progressive divergence between inferred and actual thrust with increasing power. Near onset, the measured thrust is approximately 20 percent lower than that inferred from magnetic probing. Explanations for this behavior have been explored with viscous drag emerging as the most probable cause of performance degradation.

  16. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
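
    The reported AUC has a useful probabilistic reading: it equals the probability that a randomly chosen case scores higher on the biomarker than a randomly chosen non-case (the Mann-Whitney formulation). A small sketch with invented biomarker values, not trial data:

    ```python
    def auc(case_scores, control_scores):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the probability that a random case outranks a random control,
        counting ties as one half."""
        wins = 0.0
        for c in case_scores:
            for k in control_scores:
                if c > k:
                    wins += 1.0
                elif c == k:
                    wins += 0.5
        return wins / (len(case_scores) * len(control_scores))

    # Illustrative [TIMP-2]*[IGFBP7]-style scores (made up for the example).
    cases = [2.1, 1.8, 0.9, 2.5]
    controls = [0.3, 0.7, 1.0, 0.2, 0.5]
    print(f"AUC = {auc(cases, controls):.2f}")
    ```

    An AUC of 0.5 corresponds to a useless marker and 1.0 to perfect separation, which is why the study's 0.84 with a tight confidence interval indicates strong discrimination.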

  17. Highly accurate prediction of emotions surrounding the attacks of September 11, 2001 over 1-, 2-, and 7-year prediction intervals.

    PubMed

    Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N

    2016-06-01

    In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. PMID:27100309

  18. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda

    PubMed Central

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I.; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne MQ; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy operative diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip with 20,944 microchambers (105 μm width and 50 μm depth) was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2-nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039–2.3438% by linear regression analysis (R2 = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125
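
    The R² used above to compare the chip against optical microscopy is the coefficient of determination from a simple least-squares fit. A self-contained sketch with invented paired parasitemia readings (not the trial data):

    ```python
    def r_squared(x, y):
        """Coefficient of determination for a least-squares line y ~ a + b*x."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sxy / sxx          # fitted slope
        a = my - b * mx        # fitted intercept
        ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return 1.0 - ss_res / ss_tot

    # Hypothetical paired parasitemia readings (%), microscopy vs. chip.
    microscopy = [0.004, 0.05, 0.2, 0.9, 2.3]
    chip = [0.005, 0.048, 0.21, 0.88, 2.35]
    print(f"R^2 = {r_squared(microscopy, chip):.4f}")
    ```

    An R² near 1, as the study reports over its 0.0039-2.3438% parasitemia range, means the chip readings are almost entirely explained by a linear relationship with the microscopy reference.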

  19. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda.

    PubMed

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne Mq; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy operative diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip with 20,944 microchambers (105 μm width and 50 μm depth) was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2-nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039-2.3438% by linear regression analysis (R(2) = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125

  20. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  1. Designing high-performance jobs.

    PubMed

    Simons, Robert

    2005-01-01

    Tales of great strategies derailed by poor execution are all too common. That's because some organizations are designed to fail. For a company to achieve its potential, each employee's supply of organizational resources should equal the demand, and the same balance must apply to every business unit and to the company as a whole. To carry out his or her job, each employee has to know the answers to four basic questions: What resources do I control to accomplish my tasks? What measures will be used to evaluate my performance? Who do I need to interact with and influence to achieve my goals? And how much support can I expect when I reach out to others for help? The questions correspond to what the author calls the four basic spans of a job-control, accountability, influence, and support. Each span can be adjusted so that it is narrow or wide or somewhere in between. If you get the settings right, you can design a job in which a talented individual can successfully execute on your company's strategy. If you get the settings wrong, it will be difficult for an employee to be effective. The first step is to set the span of control to reflect the resources allocated to each position and unit that plays an important role in delivering customer value. This setting, like the others, is determined by how the business creates value for customers and differentiates its products and services. Next, you can dial in different levels of entrepreneurial behavior and creative tension by widening or narrowing spans of accountability and influence. Finally, you must adjust the span of support to ensure that the job or unit will get the informal help it needs. PMID:16028816

  2. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical problems in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time-consuming. Currently, components made of titanium alloy and stainless steel are commonly employed to deal with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale and impairing the function and efficacy of the plant components; furthermore, a substantial amount of time is required to remove them. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective, hydrothermally stable, anti-corrosion, anti-oxidation, and anti-fouling materials, the power plant's economics would improve through a considerable reduction in capital investment and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  3. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly involving multiple compilers and special instructions specific to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  4. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases.

    PubMed

    Sowers, Sun B; Rota, Jennifer S; Hickman, Carole J; Mercader, Sara; Redd, Susan; McNall, Rebecca J; Williams, Nobia; McGrew, Marcia; Walls, M Laura; Rota, Paul A; Bellini, William J

    2016-08-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386

  5. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases

    PubMed Central

    Rota, Jennifer S.; Hickman, Carole J.; Mercader, Sara; Redd, Susan; McNall, Rebecca J.; Williams, Nobia; McGrew, Marcia; Walls, M. Laura; Rota, Paul A.; Bellini, William J.

    2016-01-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386

  6. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  7. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory were analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.

  8. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
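
    The deformation-error metric described above is just the per-voxel magnitude of the vector difference between the known and predicted displacement fields. A minimal numpy sketch on small synthetic fields (shapes and values are illustrative only):

    ```python
    import numpy as np

    def deformation_error(known, predicted):
        """Per-voxel magnitude of the vector difference between a known and
        a predicted 3-D displacement field, each shaped (..., 3)."""
        return np.linalg.norm(known - predicted, axis=-1)

    # Synthetic 4x4x4 displacement fields with a small prediction error.
    rng = np.random.default_rng(0)
    known = rng.normal(size=(4, 4, 4, 3))
    predicted = known + rng.normal(scale=0.1, size=known.shape)

    err = deformation_error(known, predicted)
    print(f"mean deformation error: {err.mean():.3f}")
    ```

    Summarizing this per-voxel map by its mean, as the study does, gives a single number whose stabilization across phantom detail levels can be tracked.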

  9. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  10. Method of making a high performance ultracapacitor

    SciTech Connect

    Farahmandi, C.J.; Dispennette, J.M.

    2000-05-09

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  11. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  12. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  13. Combining Theory and Experiment to Compute Highly Accurate Line Lists for Stable Molecules, and Purely AB Initio Theory to Compute Accurate Rotational and Rovibrational Line Lists for Transient Molecules

    NASA Astrophysics Data System (ADS)

    Lee, Timothy J.; Huang, Xinchuan; Fortenberry, Ryan C.; Schwenke, David W.

    2013-06-01

    Theoretical chemists have been computing vibrational and rovibrational spectra of small molecules for more than 40 years, but over the last decade the interest in this application has grown significantly. The increased interest in computing accurate rotational and rovibrational spectra for small molecules could not come at a better time, as NASA and ESA have begun to acquire a mountain of high-resolution spectra from the Herschel mission, and soon will from the SOFIA and JWST missions. In addition, the ground-based telescope, ALMA, has begun to acquire high-resolution spectra in the same time frame. Hence the need for highly accurate line lists for many small molecules, including their minor isotopologues, will only continue to increase. I will present the latest developments from our group on using the "Best Theory + High-Resolution Experimental Data" strategy to compute highly accurate rotational and rovibrational spectra for small molecules, including NH3, CO2, and SO2. I will also present the latest work from our group in producing purely ab initio line lists and spectroscopic constants for small molecules thought to exist in various astrophysical environments, but for which there is either limited or no high-resolution experimental data available. These more limited line lists include purely rotational transitions as well as rovibrational transitions for bands up through a few combination/overtones.

  14. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest. PMID:25953490

  15. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W.; Jin, Ke; Du, Yingge; Neeway, James J.; Ryan, Joseph V.; Hu, Dehong; Zhang, Kelvin H. L.; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  16. Accurate high-resolution measurements of 3-D tissue dynamics with registration-enhanced displacement encoded MRI.

    PubMed

    Gomez, Arnold D; Merchant, Samer S; Hsu, Edward W

    2014-06-01

    Displacement fields are important for analyzing deformation, which is associated with functional and material tissue properties often used as indicators of health. Magnetic resonance imaging (MRI) techniques like DENSE and image registration methods like Hyperelastic Warping have been used to produce pixel-level deformation fields that are desirable in high-resolution analysis. However, DENSE can be complicated by challenges associated with image phase unwrapping, in particular offset determination. On the other hand, Hyperelastic Warping can be hampered by low local image contrast. The current work proposes a novel approach for measuring tissue displacement with both DENSE and Hyperelastic Warping, incorporating physically accurate displacements obtained by the latter to improve phase characterization in DENSE. The validity of the proposed technique is demonstrated using numerical and physical phantoms, and in vivo small animal cardiac MRI. PMID:24771572

  17. Accurate High-Resolution Measurements of 3-D Tissue Dynamics With Registration-Enhanced Displacement Encoded MRI

    PubMed Central

    Gomez, Arnold D.; Merchant, Samer S.; Hsu, Edward W.

    2014-01-01

    Displacement fields are important for analyzing deformation, which is associated with functional and material tissue properties often used as indicators of health. Magnetic resonance imaging (MRI) techniques like DENSE and image registration methods like Hyperelastic Warping have been used to produce pixel-level deformation fields that are desirable in high-resolution analysis. However, DENSE can be complicated by challenges associated with image phase unwrapping, in particular offset determination. On the other hand, Hyperelastic Warping can be hampered by low local image contrast. The current work proposes a novel approach for measuring tissue displacement with both DENSE and Hyperelastic Warping, incorporating physically accurate displacements obtained by the latter to improve phase characterization in DENSE. The validity of the proposed technique is demonstrated using numerical and physical phantoms, and in vivo small animal cardiac MRI. PMID:24771572

  18. Distribution of high-stability 10 GHz local oscillator over 100 km optical fiber with accurate phase-correction system.

    PubMed

    Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng

    2014-02-15

    We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70  dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π. PMID:24562233
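
    The compensation scheme described above follows the standard round-trip principle for fiber phase stabilization. The toy sketch below illustrates only that principle, under the usual assumption that the fiber perturbation is reciprocal and slow compared with the correction loop; the function names and values are illustrative, not the authors' SSB-modulator implementation.

```python
# Toy model of round-trip fiber phase stabilization. Assumption: the fiber
# phase perturbation is reciprocal (identical in both directions) and varies
# slowly compared with the correction loop.

def remote_phase(correction, phi_fiber):
    """Phase at the remote end: applied pre-correction plus one-way fiber phase."""
    return correction + phi_fiber

def update_correction(round_trip_phase):
    """The reflected signal accumulates the fiber phase twice; pre-subtracting
    half of the measured round-trip phase cancels the one-way perturbation."""
    return -round_trip_phase / 2

phi_fiber = 0.37                            # one-way fiber phase fluctuation (rad)
corr = update_correction(2 * phi_fiber)     # measure round trip, set correction
residual = remote_phase(corr, phi_fiber)    # ideally zero after compensation
```

    In a real link the loop repeats continuously, so the residual tracks only the fiber fluctuation that occurs within one round-trip time.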

  19. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  20. Accurate Transmittance Measurements of Thick, High-Index, High- Dispersion, IR Windows, Using a Fourier Transform IR Spectrometer

    NASA Astrophysics Data System (ADS)

    Kupferberg, Lenn C.

    1996-03-01

    Fourier transform IR [FT-IR] spectrometers have virtually replaced scanned grating IR spectrometers in the commercial market. While FT-IR spectrometers have been a boon for the chemist, they present problems for the measurement of transmittance of thick, high-index, high-dispersion IR windows. Reflection and refraction of light by the windows introduce measurement errors. The principles of the FT-IR spectrometer will be briefly reviewed. The origins of the measurement errors will be discussed. Simple modifications to the operation of commercially available instruments will be presented. These include using strategically placed apertures and the use of collimated vs. focused beams at the sample position. They are essential for removing the effects of reflected light entering the interferometer and limiting the divergence angle of light in the interferometer. The latter minimizes refractive effects and ensures consistent underfilling of the detector. Data will be shown from FT-IR spectrometers made by four manufacturers and compared to measurements from a dispersive spectrometer.

  1. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  2. Highly accurate quartic force fields, vibrational frequencies, and spectroscopic constants for cyclic and linear C3H3(+).

    PubMed

    Huang, Xinchuan; Taylor, Peter R; Lee, Timothy J

    2011-05-19

    High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C(3)H(3)(+) molecular cation, referred to as c-C(3)H(3)(+) and l-C(3)H(3)(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants by use of both vibrational second-order perturbation theory and variational methods to solve the nuclear Schrödinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C(3)H(3)(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C(3)H(3)(+) and l-C(3)H(3)(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations. PMID:21510653
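
    For context, vibrational second-order perturbation theory (VPT2), one of the two nuclear-motion methods used above, obtains the fundamentals from the harmonic frequencies and anharmonic constants of the quartic force field; in the standard non-degenerate case the expression is:

```latex
\nu_i = \omega_i + 2 x_{ii} + \tfrac{1}{2} \sum_{j \neq i} x_{ij}
```

    where \(\omega_i\) are harmonic frequencies and \(x_{ij}\) the anharmonicity constants derived from the QFF.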

  3. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

    High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  4. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  5. LiF TLD-100 as a Dosimeter in High Energy Proton Beam Therapy-Can It Yield Accurate Results?

    SciTech Connect

    Zullo, John R.; Kudchadker, Rajat J.; Zhu, X. Ronald; Sahoo, Narayan; Gillin, Michael T.

    2010-04-01

    In the region of high-dose gradients at the end of the proton range, the stopping power ratio of the protons undergoes significant changes, allowing for a broad spectrum of proton energies to be deposited within a relatively small volume. Because of the potential linear energy transfer dependence of LiF TLD-100 (thermoluminescent dosimeter), dose measurements made in the distal fall-off region of a proton beam may be less accurate than those made in regions of low-dose gradients. The purpose of this study is to determine the accuracy and precision of dose measured using TLD-100 for a pristine Bragg peak, particularly in the distal fall-off region. All measurements were made along the central axis of an unmodulated 200-MeV proton beam from a Probeat passive beam-scattering proton accelerator (Hitachi, Ltd., Tokyo, Japan) at varying depths along the Bragg peak. Measurements were made using TLD-100 powder flat packs placed in a virtual water slab phantom. The measurements were repeated using a parallel plate ionization chamber. The dose measurements using TLD-100 in a proton beam were accurate to within ±5.0% of the expected dose, consistent with our past photon and electron measurements. The ionization chamber and the TLD relative dose measurements agreed well with each other. Absolute dose measurements using TLD agreed with ionization chamber measurements to within ±3.0 cGy for an exposure of 100 cGy. In our study, the differences between the doses measured by the ionization chamber and those measured by TLD-100 were minimal, indicating that the accuracy and precision of measurements made in the distal fall-off region of a pristine Bragg peak are within the expected range. Thus, the rapid change in stopping power ratios at the end of the range should not affect such measurements, and TLD-100 may be used with confidence as an in vivo dosimeter for proton beam therapy.

  6. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  7. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    NASA Astrophysics Data System (ADS)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
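
    The closed-loop idea described above, using the transistor gate voltage to meter SET increments until a target conductance is reached, can be sketched as follows. This toy model is illustrative only: the linear device response and all parameter values are assumptions, not the authors' algorithm or TaOx device physics.

```python
def program_cell(read_g, set_pulse, reset_pulse, target_g,
                 tol=0.005, v_gate=0.8, dv=0.05, max_pulses=50):
    """Iteratively drive a 1T1R cell's conductance toward target_g (siemens).
    The transistor gate voltage limits the SET current, so each SET pulse
    produces a bounded conductance increment; the gate voltage is ramped to
    speed convergence, and RESET pulses trim any overshoot."""
    for n in range(max_pulses):
        g = read_g()
        err = (g - target_g) / target_g
        if abs(err) <= tol:
            return n, g            # pulses applied so far, final conductance
        if err < 0:
            set_pulse(v_gate)      # conductance step scales with gate voltage
            v_gate += dv
        else:
            reset_pulse()          # small decrement to correct overshoot
    return max_pulses, read_g()

# Minimal simulated device for demonstration (purely illustrative physics).
class ToyCell:
    def __init__(self):
        self.g = 1e-5                   # initial conductance (S)
    def read(self):
        return self.g
    def set_pulse(self, v_gate):
        self.g += 1e-5 * v_gate         # increment scales with gate voltage
    def reset_pulse(self):
        self.g -= 2e-7
```

    On this toy device the loop converges to within 0.5% of a 1e-4 S target in a handful of pulses; a real cell would add noise and nonlinearity, which is what the adaptive gate-voltage ramp is meant to absorb.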

  8. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    NASA Astrophysics Data System (ADS)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10⁶ cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.

  9. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  10. Highly Accurate Semi-Empirical IR Line Lists of Asymmetric SO2 Isotopologues: SO18O and SO17O

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2015-12-01

    Atmosphere models and simulations of Venus, Mars, and exoplanets will greatly benefit from complete and accurate infrared spectral data for important molecules such as SO2 and CO2. Currently, high-resolution spectral data for SO2 are very limited at 296K and exist mainly for the primary isotopologue 626, which cannot effectively support observed data analysis and simulations. Recently we published a semi-empirically refined potential energy surface, denoted Ames-1, and Ames-296K IR line lists for SO2 626 and a few symmetric isotopologues including 646, 636, 666 and 828. The accuracy of line positions is around 0.01 - 0.03 cm-1 for most transitions. For intensities, most deviations are less than 5-15%. Now we have carried out new potential energy surface refinements by including the latest experimental data, including those of isotopologues. On the newly fitted surface, we have computed, for the first time, 296K line lists for the two most abundant asymmetric isotopologues, SO2 628 and SO2 627. We will present the spectral simulations of SO2 628 and SO2 627 and compare them with the latest high-resolution experimental spectroscopy of SO2 628. A composite "natural" line list at 296K with terrestrial abundances is also available. These line lists will be available to download at http://huang.seti.org.

  11. Accurate and High-Coverage Immune Repertoire Sequencing Reveals Characteristics of Antibody Repertoire Diversification in Young Children with Malaria

    NASA Astrophysics Data System (ADS)

    Jiang, Ning

    Accurately measuring immune repertoire sequence composition, diversity, and abundance is important in studying repertoire responses in infections, vaccinations, and cancer immunology. Using molecular identifiers (MIDs) to tag mRNA molecules is an effective method for improving the accuracy of immune repertoire sequencing (IR-seq). However, it is still difficult to use IR-seq on small amounts of clinical sample while achieving high coverage of repertoire diversity. This is especially challenging in studying infections and vaccinations, where B cell subpopulations with fewer cells, such as memory B cells or plasmablasts, are often of great interest for studying somatic mutation patterns and diversity changes. Here, we describe an approach to IR-seq based on the use of MIDs in combination with a clustering method that can reveal more than 80% of the antibody diversity in a sample and can be applied to as few as 1,000 B cells. We applied this to study the antibody repertoires of young children before and during an acute malaria infection. We discovered unexpectedly high levels of somatic hypermutation (SHM) in infants and revealed characteristics of antibody repertoire development in young children that would have a profound impact on immunization in children.
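
    The MID-based error suppression described above can be sketched generically: reads carrying the same molecular identifier came from the same mRNA molecule, so they are grouped and collapsed to a per-molecule consensus, preventing PCR and sequencing errors from masquerading as repertoire diversity. This is a minimal illustration, not the authors' clustering method:

```python
from collections import Counter, defaultdict

def consensus_by_mid(reads):
    """reads: iterable of (mid, sequence) pairs, one per sequenced read.
    Returns {mid: consensus} using a majority vote at each position, so
    random errors in individual reads of the same molecule cancel out."""
    groups = defaultdict(list)
    for mid, seq in reads:
        groups[mid].append(seq)
    consensus = {}
    for mid, seqs in groups.items():
        length = min(len(s) for s in seqs)     # ignore ragged read tails
        consensus[mid] = "".join(
            Counter(s[i] for s in seqs).most_common(1)[0][0]
            for i in range(length))
    return consensus
```

    A real pipeline would additionally cluster near-identical MIDs to absorb errors in the identifier itself, which is the role the clustering step plays in recovering diversity from few cells.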

  12. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  13. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task

    PubMed Central

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H.; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; and (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly through improved consistency and repeatability. However, the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly

  14. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task.

    PubMed

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; and (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly through improved consistency and repeatability. However, the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly
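
    The force-stabilizing multi-finger synergy discussed above is commonly quantified by comparing the across-trials variance of the total (virtual-finger) force with the sum of the individual finger variances. The sketch below shows that generic index, not the hierarchical variability decomposition model itself:

```python
import numpy as np

def synergy_index(finger_forces):
    """finger_forces: (trials x fingers) array of forces at one time point.
    Positive values mean the finger forces co-vary negatively across trials
    (errors compensate), stabilizing the total force; zero means the
    fingers vary independently of one another."""
    var_sum = finger_forces.var(axis=0, ddof=1).sum()     # sum of finger variances
    var_total = finger_forces.sum(axis=1).var(ddof=1)     # variance of VF force
    return (var_sum - var_total) / var_sum
```

    For example, trials whose two finger forces always sum to the same total give an index of 1 (perfect compensation), while independently varying fingers give an index near 0.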

  15. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

    Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumors is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator-dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated an elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas. PMID:27122050

  16. Scalable implementations of accurate excited-state coupled cluster theories: application of high-level methods to porphyrin based systems

    SciTech Connect

    Kowalski, Karol; Krishnamoorthy, Sriram; Olson, Ryan M.; Tipparaju, Vinod; Apra, Edoardo

    2011-11-30

    The development of reliable tools for excited-state simulations is emerging as an extremely powerful computational chemistry capability for understanding complex processes in the broad class of light harvesting systems and optoelectronic devices. Over the last several years we have been developing equation-of-motion coupled cluster (EOMCC) methods capable of tackling these problems. In this paper we discuss the parallel performance of EOMCC codes which provide an accurate description of excited-state correlation effects. Two aspects are discussed in detail: (1) a new algorithm for the iterative EOMCC methods based on novel task scheduling algorithms, and (2) parallel algorithms for the non-iterative methods describing the effect of triply excited configurations. We demonstrate that the most computationally intensive non-iterative part can take advantage of 210,000 cores of the Cray XT5 system at OLCF. In particular, we demonstrate the importance of non-iterative many-body methods for achieving experimental levels of accuracy for several porphyrin-based systems.

  17. Common Elements of High Performing, High Poverty Middle Schools.

    ERIC Educational Resources Information Center

    Trimble, Susan

    2002-01-01

    Examined over 3 years high-achieving high-poverty middle schools to determine school practices and policies associated with higher student achievement. Found that high-poverty middle schools that are high performing acquire grants and manage money well, use a variety of teaming configurations, and use data-based goals to improve student…

  18. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advancements in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  19. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring the performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  20. FTS Studies of the 17O Enriched Isotopologues of CO_2 Toward Creating a Complete and Highly Accurate Reference Standard

    NASA Astrophysics Data System (ADS)

    Elliott, Ben; Sung, Keeyoon; Brown, Linda; Miller, Charles

    2014-06-01

    The proliferation and increased capabilities of remote sensing missions for the monitoring of planetary atmospheric gas species have spurred the need for complete and accurate spectroscopic reference standards. As a part of our ongoing effort toward creating a global carbon dioxide (CO2) frequency reference standard, we report new FTS measurements of the 17O enriched isotopologues of CO2. The first measurements were taken in the ν3 region (2200 - 2450 cm-1, 65 - 75 THz) and have absolute calibration accuracies of 100 kHz (3E-6 cm-1), comparable to the uncertainties of typical sub-millimeter/THz spectroscopy. Such high absolute calibration accuracy has become regular procedure for linear molecules such as CO2 and CO in FTS measurements at JPL, and enables us to produce measured transition frequencies for entire bands with accuracies that rival those of early heterodyne measurements of individual beat notes. Additionally, by acquiring spectra of multiple carbon dioxide isotopologues simultaneously, we have begun to construct a self-consistent frequency grid based on CO2 that extends from 20 - 200 THz. These new spectroscopic reference standards are a significant step towards minimizing CO2 retrieval errors in remote sensing applications, especially those involving targets with predominantly CO2 atmospheres such as Mars, Venus and candidate terrestrial exoplanets, where minor isotopologues make significant contributions to the radiance signals.
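
    The quoted accuracy of 100 kHz ≈ 3E-6 cm-1 is just the frequency-to-wavenumber conversion ν̃ = f/c; a minimal check (the function name is illustrative):

```python
# Convert a frequency uncertainty to wavenumbers: nu-tilde = f / c.
C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def hz_to_wavenumber(freq_hz):
    """Frequency in Hz -> wavenumber in cm^-1."""
    return freq_hz / C_CM_PER_S

print(f"{hz_to_wavenumber(100e3):.1e} cm^-1")  # 100 kHz -> 3.3e-06 cm^-1
```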

  1. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter, six microns, was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Its performance as a reinforcement of composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of applications in space structures, the industrial field, sporting goods and civil infrastructure.

  2. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background Expression of programmed-death ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may be a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The number of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples.
Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria.

  3. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  4. High-resolution accurate mass measurements of biomolecules using a new electrospray ionization ion cyclotron resonance mass spectrometer.

    PubMed

    Winger, B E; Hofstadler, S A; Bruce, J E; Udseth, H R; Smith, R D

    1993-07-01

    A novel electrospray ionization/Fourier transform ion cyclotron resonance mass spectrometer based on a 7-T superconducting magnet was developed for high-resolution accurate mass measurements of large biomolecules. Ions formed at atmospheric pressure using electrospray ionization (ESI) were transmitted (through six differential pumping stages) to the trapped ion cell maintained below 10(-9) torr. The increased pumping speed attainable with cryopumping (> 10(5) L/s) allowed brief pressure excursions to above 10(-4) torr, with greatly enhanced trapping efficiencies and subsequent short pumpdown times, facilitating high-resolution mass measurements. A set of electromechanical shutters was also used to minimize the effect of the directed molecular beam produced by the ESI source; the shutters were open only during ion injection. Coupled with the use of the pulsed-valve gas inlet, the trapped ion cell was generally filled to the space charge limit within 100 ms. The use of 10-25 ms ion injection times allowed mass spectra to be obtained from 4 fmol of bovine insulin (Mr 5734) and ubiquitin (Mr 8565), with resolution sufficient to easily resolve the isotopic envelopes and determine the charge states. The microheterogeneity of the glycoprotein ribonuclease B was examined, giving a measured mass of 14,898.74 Da for the most abundant peak in the isotopic envelope of the normally glycosylated protein (i.e., with five mannose and two N-acetylglucosamine residues), an error of approximately 2 ppm, and an average error of approximately 1 ppm for the more highly glycosylated and various H3PO4-adducted forms of the protein. Time-domain signals lasting in excess of 80 s were obtained for smaller proteins, producing, for example, a mass resolution of more than 700,000 for the 4(+) charge state (m/z 1434) of insulin. PMID:24227643
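
    Resolving the isotopic envelope is what allows the charge-state determination mentioned above: adjacent isotopic peaks of one species differ by roughly one neutron mass, so their m/z spacing is ~1.00335/z. A sketch of that inversion (the spacing value is illustrative, not a measured figure from this work):

```python
def charge_from_isotope_spacing(delta_mz, isotope_spacing_da=1.00335):
    """Adjacent isotopic peaks differ by ~1.00335 Da in mass, hence by
    ~1.00335/z in m/z; invert the observed spacing to recover the charge z."""
    return round(isotope_spacing_da / delta_mz)

# e.g. a spacing of ~0.251 m/z units near m/z 1434 implies the 4+ charge state
print(charge_from_isotope_spacing(0.251))  # -> 4
```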

  5. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  6. Highly Accurate Antibody Assays for Early and Rapid Detection of Tuberculosis in African and Asian Elephants

    PubMed Central

    Greenwald, Rena; Lyashchenko, Olena; Esfandiari, Javan; Miller, Michele; Mikota, Susan; Olsen, John H.; Ball, Ray; Dumonceaux, Genevieve; Schmitt, Dennis; Moller, Torsten; Payeur, Janet B.; Harris, Beth; Sofranko, Denise; Waters, W. Ray; Lyashchenko, Konstantin P.

    2009-01-01

    Tuberculosis (TB) in elephants is a reemerging zoonotic disease caused primarily by Mycobacterium tuberculosis. Current methods for screening and diagnosis rely on trunk wash culture, which has serious limitations due to low test sensitivity, slow turnaround time, and variable sample quality. Innovative and more efficient diagnostic tools are urgently needed. We describe three novel serologic techniques, the ElephantTB Stat-Pak kit, multiantigen print immunoassay, and dual-path platform VetTB test, for rapid antibody detection in elephants. The study was performed with serum samples from 236 captive African and Asian elephants from 53 different locations in the United States and Europe. The elephants were divided into three groups based on disease status and history of exposure: (i) 26 animals with culture-confirmed TB due to M. tuberculosis or Mycobacterium bovis, (ii) 63 exposed elephants from known-infected herds that had never produced a culture-positive result from trunk wash samples, and (iii) 147 elephants without clinical symptoms suggestive of TB, with consistently negative trunk wash culture results, and with no history of potential exposure to TB in the past 5 years. Elephants with culture-confirmed TB and a proportion of exposed but trunk wash culture-negative elephants produced robust antibody responses to multiple antigens of M. tuberculosis, with seroconversions detectable years before TB-positive cultures were obtained from trunk wash specimens. ESAT-6 and CFP10 proteins were immunodominant antigens recognized by elephant antibodies during disease. The serologic assays demonstrated 100% sensitivity and 95 to 100% specificity. Rapid and accurate antibody tests to identify infected elephants will likely allow earlier and more efficient treatment, thus limiting transmission of infection to other susceptible animals and to humans. PMID:19261770
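
    The reported figures (100% sensitivity, 95 to 100% specificity) follow from the usual contingency-table definitions; a sketch with hypothetical counts chosen only to illustrate the formulas, not the study's actual tabulation:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly infected animals that test positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly uninfected animals that test negative."""
    return true_neg / (true_neg + false_pos)

# hypothetical split: all 26 culture-confirmed animals test positive,
# and 140 of the 147 unexposed animals test negative
print(sensitivity(26, 0), round(specificity(140, 7), 3))  # 1.0 0.952
```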

  7. Highly accurate isotope composition measurements by a miniature laser ablation mass spectrometer designed for in situ investigations on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Riedo, A.; Meyer, S.; Heredia, B.; Neuland, M. B.; Bieler, A.; Tulej, M.; Leya, I.; Iakovleva, M.; Mezger, K.; Wurz, P.

    2013-10-01

    An experimental procedure for precise and accurate measurements of isotope abundances by a miniature laser ablation mass spectrometer for space research is described. The measurements were conducted on different untreated NIST standards and galena samples by applying pulsed UV laser radiation (266 nm, 3 ns and 20 Hz) for ablation, atomisation, and ionisation of the sample material. Mass spectra of released ions are measured by a reflectron-type time-of-flight mass analyser. A computer controlled performance optimiser was used to operate the system at maximum ion transmission and mass resolution. At optimal experimental conditions, the best relative accuracy and precision achieved for Pb isotope compositions are at the per mill level and were obtained in a range of applied laser irradiances and a defined number of accumulated spectra. A similar relative accuracy and precision was achieved in the study of Pb isotope compositions in terrestrial galena samples. The results for the galena samples are similar to those obtained with a thermal ionisation mass spectrometer (TIMS). The studies of the isotope composition of other elements also yielded relative accuracy and precision at the per mill level, with characteristic instrument parameters for each element. The relative accuracy and precision of the measurements degrade with decreasing element/isotope concentration in the sample; for elements with abundances below 100 ppm these values drop to the percent level. Depending on the isotopic abundances of Pb in minerals, 207Pb/206Pb ages with accuracy in the range of tens of millions of years can be achieved.
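
    The 207Pb/206Pb age mentioned at the end comes from inverting the radiogenic-lead growth equation. A minimal numerical sketch using the standard decay constants and bisection (not the instrument team's actual data reduction):

```python
import math

L238 = 1.55125e-10  # 238U decay constant, 1/yr
L235 = 9.8485e-10   # 235U decay constant, 1/yr
U_RATIO = 137.88    # present-day 238U/235U

def pb_ratio(t_yr):
    """Radiogenic 207Pb*/206Pb* produced over an age t (years)."""
    return (math.expm1(L235 * t_yr) / math.expm1(L238 * t_yr)) / U_RATIO

def pb_pb_age(ratio, lo=1e3, hi=4.6e9, tol=1e4):
    """Invert pb_ratio by bisection; the ratio increases monotonically with t."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pb_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"{pb_pb_age(0.0725) / 1e9:.2f} Gyr")  # ~1.00 Gyr for 207Pb*/206Pb* = 0.0725
```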

  8. Evaluation of high-definition television for remote task performance

    SciTech Connect

    Draper, J.V.; Fujita, Y.; Herndon, J.N.

    1987-04-01

    High-definition television (HDTV) transmits a video image with more than twice the number of horizontal scan lines (1125 for HDTV vs. 525 for standard-resolution TV) that standard-resolution TV provides. The improvement in picture quality (compared to standard-resolution TV) that the extra scan lines provide is impressive. Objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate reproduction of shading and color patterns than do those in the standard-resolution TV picture. Because the TV viewing system is a key component for teleoperator performance, an improvement in TV picture quality could mean an improvement in the speed and accuracy with which teleoperators perform tasks. This report describes three experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. The performance of HDTV was compared to that of standard-resolution, monochromatic TV and standard-resolution, stereoscopic, monochromatic TV in the context of judgment of depth in a televised scene, visual inspection of an object, and performance of a typical remote handling task. The results of the three experiments show that in some areas HDTV can lead to improvement in teleoperator performance. Observers inspecting a small object for a flaw were more accurate with HDTV than with either of the standard-resolution systems. High resolution is critical for detection of small-scale flaws of the type in the experiment (a scratch on a glass bottle). These experiments provided an evaluation of HDTV television for use in tasks that must be routinely performed to remotely maintain a nuclear fuel reprocessing facility. 5 refs., 7 figs., 9 tabs.

  9. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  10. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  11. Development of a high performance peristaltic micropump

    NASA Astrophysics Data System (ADS)

    Pham, My; Goo, Nam Seo

    2008-03-01

    In this study, a high performance peristaltic micropump has been developed and investigated. The micropump has three cylindrical chambers which are connected through micro-channels for high pumping pressure performance. A circular-shaped mini LIPCA has been designed and manufactured as the actuating diaphragm. In this LIPCA, a 0.1 mm thick PZT ceramic is used as the active layer. As a result, the actuator was shown to produce a large out-of-plane deflection while consuming little power. During the design process, a coupled field analysis was conducted to predict the actuating behavior of the diaphragm and the pumping performance. A MEMS technique was used to fabricate the peristaltic micropump. The pumping performance of the present micropump was investigated both numerically and experimentally. The present peristaltic micropump was shown to have higher performance than micropumps of the same kind developed elsewhere.

  12. Appraisal of Artificial Screening Techniques of Tomato to Accurately Reflect Field Performance of the Late Blight Resistance

    PubMed Central

    Nowakowska, Marzena; Nowicki, Marcin; Kłosińska, Urszula; Maciorowski, Robert; Kozik, Elżbieta U.

    2014-01-01

    Late blight (LB) caused by the oomycete Phytophthora infestans continues to thwart global tomato production, while only a few resistant cultivars have been introduced locally. In order to gain from the released tomato germplasm with LB resistance, we compared the 5-year field performance of LB resistance in several tomato cultigens with the results of controlled-conditions testing (i.e., detached leaflet/leaf, whole plant). For these artificial screening techniques, the effects of plant age and inoculum concentration were additionally considered. In the field trials, LA 1033, L 3707, and L 3708 displayed the highest LB resistance and could be used for cultivar development under Polish conditions. Of the three methods using controlled conditions, the detached leaf and whole plant tests had the highest correlation with the field experiments. The plant age effect on LB resistance in tomato reported here, irrespective of the cultigen tested or inoculum concentration used, makes it important to standardize the test parameters when screening for resistance. Our results help show why other reports disagree on LB resistance in tomato. PMID:25279467

  13. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
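
    For the direct SRP term discussed above, a commonly used first-order ("cannonball") model gives a = C_R (Φ/c)(A/m); the detailed macro model refines this per surface panel. A sketch with illustrative area, mass and reflectivity values (assumptions, not TerraSAR-X's actual macro-model parameters):

```python
SOLAR_FLUX = 1361.0  # W/m^2, approximate solar irradiance at 1 AU
C = 299792458.0      # speed of light, m/s

def srp_accel(area_m2, mass_kg, cr=1.3):
    """Cannonball-model direct SRP acceleration: a = C_R * (Phi/c) * (A/m)."""
    return cr * (SOLAR_FLUX / C) * (area_m2 / mass_kg)

# hypothetical cross-section and mass for a mid-size LEO satellite
print(f"{srp_accel(5.0, 1200.0):.1e} m/s^2")  # ~2.5e-08 m/s^2
```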

  14. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10^10 erg cm-2 s-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  15. High Performance Computing in Solid Earth Sciences

    NASA Astrophysics Data System (ADS)

    Manea, V. C.; Manea, M.; Pomeran, M.; Besutiu, L.; Zlagnean, L.

    2012-04-01

    Presently, the solid earth sciences have started to move towards implementing high performance computing (HPC) research facilities. One of the key tenets of HPC is performance, and designing an HPC solution tailored to a specific research field such as the solid earth sciences, at an optimum price/performance ratio, is often a challenge. HPC system performance strongly depends on the software-hardware interaction, and therefore prior knowledge of how well specific parallelized software performs on different HPC architectures can weigh significantly on choosing the final configuration. In this paper we present benchmark results from two different HPC systems: one low-end HPCC (Horus) with 300 cores and 1.6 TFlops theoretical peak performance, and one high-end HPCC (CyberDyn) with 1344 cores and 11.2 TFlops theoretical peak performance. The software benchmark used in this paper is the open source package CitcomS, which is widely used in the solid earth community (www.geodynamics.org). Testing a CFD code specific to the earth sciences, the HPC system Horus, based on Gigabit Ethernet, performed remarkably well compared with its counterpart CyberDyn, which is based on Infiniband QDR fabric, but only for a relatively small number of computing cores (96). However, with increasing mesh size and number of computing cores, the HPCC CyberDyn starts outperforming the HPCC Horus because of the low-latency high-speed QDR network dedicated to MPI traffic. Since we are presently moving towards high-resolution simulations for geodynamic predictions that require the same scale as observations, HPC facilities used in the earth sciences should benefit from larger up-front investment in future systems that are based on high-speed interconnects.
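
    Theoretical peak figures like those quoted (1.6 and 11.2 TFlops) follow from cores × FLOPs/cycle × clock. A quick sketch; the per-core FLOPs/cycle and clock below are illustrative assumptions, not published specs for these clusters:

```python
def peak_tflops(cores, flops_per_cycle, clock_ghz):
    """Theoretical peak performance in TFlops: cores x FLOPs/cycle x clock."""
    return cores * flops_per_cycle * clock_ghz / 1e3

# 1344 cores x 4 FLOPs/cycle x ~2.08 GHz lands near the quoted 11.2 TFlops
print(round(peak_tflops(1344, 4, 2.083), 1))  # -> 11.2
```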

  16. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    PubMed Central

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT, though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC), and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods We directly compared SUV estimation for MR-based AC methods to reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT: standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence and SUV estimations were compared to CT AC for the whole image, the whole brain and 91 FreeSurfer-based regions of interest. Results Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced the whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05), resulting in a residual −0.3% whole-brain mean SUV bias. Further, regional brain analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences or scan time.

  17. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10(6) cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing. PMID:27479054
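
    The closed-loop "program and verify" idea behind such adaptive schemes can be sketched as follows: read the conductance, stop when within tolerance, otherwise apply a SET pulse whose current is capped by the transistor gate voltage, then ramp the gate. This is a hedged sketch in the spirit of the algorithm described, not the authors' code; read_g/set_pulse are hypothetical device-interface callables, and the toy cell model stands in for real 1T1R hardware:

```python
def program_cell(target_g, read_g, set_pulse, v_gate=0.76, v_step=0.005,
                 tol=0.005, max_pulses=200):
    """Pulse until the read conductance is within relative tolerance of target;
    the gate voltage capping the SET current is ramped after each pulse."""
    for n in range(max_pulses):
        if abs(read_g() - target_g) / target_g <= tol:
            return n  # converged within relative tolerance
        set_pulse(v_gate)
        v_gate += v_step  # adapt: allow a slightly higher SET current next pulse
    raise RuntimeError("cell did not reach target conductance")

# toy model: each pulse pulls the conductance to a gate-controlled level
state = {"g": 0.0}
pulses = program_cell(target_g=1.0,
                      read_g=lambda: state["g"],
                      set_pulse=lambda vg: state.update(g=1.25 * vg))
print(pulses, round(state["g"], 3))
```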

  18. High performance bio-integrated devices

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because the coupling of such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, it is expected that high performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  19. Engineering high-performance vertical cavity lasers

    SciTech Connect

    Lear, K.L.; Hou, H.Q.; Hietala, V.M.; Choquette, K.D.; Schneider, R.P. Jr.

    1996-12-31

    The cw and high-speed performance of vertical cavity surface emitting laser diodes (VCSELs) are affected by both electrical and optical issues arising from the geometry and fabrication of these devices. Structures with low resistance semiconductor mirrors and Al-oxide confinement layers address these issues and have produced record performance including 50% power conversion efficiency and modulation bandwidths up to 20 GHz at small bias currents.

  20. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers.

    PubMed

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power to the transmission line be cut off, which results in complicated operation and outage losses. This paper proposes an online calibration system which can calibrate electronic current transformers without a power cut. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and the clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to the 0.05 class. PMID:23902112
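
    The 0.05 accuracy class quoted above bounds the transformer's ratio error at rated current; the definition is easy to check numerically (the current values below are illustrative):

```python
def ratio_error_percent(measured_a, reference_a):
    """Ratio error of a current transformer: (Im - Ir) / Ir x 100%."""
    return (measured_a - reference_a) / reference_a * 100.0

# a 0.05-class device must keep |ratio error| <= 0.05% at rated current
err = ratio_error_percent(100.04, 100.00)
print(f"{err:.3f}% -> within 0.05 class: {abs(err) <= 0.05}")
```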

  1. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    SciTech Connect

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-15

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, their failure rate is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that power to the transmission line be cut off, which complicates operation and causes power-off losses. This paper proposes an online calibration system that can calibrate electronic current transformers without cutting off the power. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the accuracy of the combined clamp-shape coil can be verified, which guarantees the accuracy of the online calibration system. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves a high accuracy of up to the 0.05 class.

  2. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problem of high hardware costs, one may create less expensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) to HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  3. Performance variability of highly parallel architectures

    SciTech Connect

    Kramer, William T.C.; Ryan, Clint

    2003-05-01

    The design and evaluation of high performance computers has concentrated on increasing computational speed for applications. This performance is often measured on a well configured dedicated system to show the best case. In the real environment, resources are not always dedicated to a single task, and systems run tasks that may influence each other, so run times vary, sometimes to an unreasonably large extent. This paper explores the amount of variation seen across four large distributed memory systems in a systematic manner. It then analyzes the causes for the variations seen and discusses what can be done to decrease the variation without impacting performance.
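
The run-to-run variation described above can be summarized with elementary statistics; a minimal sketch using the coefficient of variation (the timing numbers are hypothetical, not from the paper):

```python
import statistics

def run_time_variability(times):
    """Summarize run-to-run variation of a benchmark as the mean and the
    coefficient of variation (sample stdev / mean), in percent."""
    mean = statistics.mean(times)
    cv = statistics.stdev(times) / mean * 100.0
    return mean, cv

# Hypothetical wall-clock times (seconds) for the same job on a shared system
times = [102.0, 98.0, 110.0, 97.0, 105.0]
mean, cv = run_time_variability(times)
```

A high coefficient of variation on a nominally dedicated system is the signal the paper investigates: interference from other tasks sharing the machine.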

  4. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite-based solar cells, with their low raw-materials cost, great potential for simple processing and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin-film photovoltaics (PV). At UCLA, we have realized an efficient pathway to high-performance perovskite solar cells, and the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport-materials design, and interface engineering for high-performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor-assisted solution process (VASP) and a moisture-assisted solution process, which produce perovskite films with improved conformity, high crystallinity, a reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on the design of the carrier transport materials and the electrodes, in combination with high-quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  5. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  6. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  7. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
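
The core fusion idea, keeping fine detail from the high-resolution monochrome image while taking color from the calibrated low-resolution one, can be illustrated per pixel. This is a simplified luminance-chrominance (BT.601-style YCbCr) sketch, not the wavelet transform-based method the paper describes:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse BT.601 conversion."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b

def fuse_pixel(hires_luma, lowres_rgb):
    """Keep chrominance from the color-calibrated low-resolution image,
    replace luminance with the high-resolution (holographic) intensity."""
    _, cb, cr = rgb_to_ycbcr(*lowres_rgb)
    return ycbcr_to_rgb(hires_luma, cb, cr)
```

In the actual DCFM method the two sources are additionally registered and merged in the wavelet domain, which preserves resolution better than a per-pixel swap.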

  8. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, A.

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
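
The optimization goal described above, increasing the number of references to each item while it resides in the cache, is classically achieved by loop blocking (tiling). A minimal sketch, not taken from the thesis:

```python
def blocked_matmul(a, b, n, block=32):
    """Tiled n x n matrix multiply: each block of A and B is reused ~block
    times while it is cache-resident, reducing memory traffic compared to
    the naive triple loop."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + block, n)):
                            c[i][j] += aik * b[k][j]
    return c

# 4x4 identity times an arbitrary matrix returns the matrix unchanged
eye = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
m = [[float(4 * i + j) for j in range(4)] for i in range(4)]
result = blocked_matmul(eye, m, 4, block=2)
```

The behavior is identical to an untiled multiply; only the access order changes, which is exactly the kind of compiler transformation Chapter 5 explores.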

  9. A high-throughput screening strategy for accurate quantification of menaquinone based on fluorescence-activated cell sorting.

    PubMed

    Liu, Yan; Xue, Zheng-Lian; Chen, Shao-Peng; Wang, Zhou; Zhang, Yong; Gong, Wei-Liang; Zheng, Zhi-Ming

    2016-06-01

    To enhance the screening efficiency and accuracy of a high-yield menaquinone (vitamin K2, MK) bacterial strain, a novel, quantitative method by fluorescence-activated cell sorting (FACS) was developed. The staining technique was optimized to maximize the differences in fluorescence signals between spontaneous and MK-accumulating cells. The fluorescence carrier rhodamine 123 (Rh123), with its ability to reflect membrane potential, proved to be an appropriate fluorescent dye to connect the MK content with the fluorescence signal quantitatively. To promote adequate access of the fluorescent molecule to the target and maintain higher cell survival rates, staining and incubation conditions were optimized. The results showed that 10 % sucrose facilitated uptake of Rh123 while maintaining a certain level of cell viability. Pre-treatment of cells with MgCl2 before staining with Rh123 also improved cell viability. Using FACS, 50,000 cells can easily be assayed in less than 1 h. The optimized staining protocol yielded a linear response of the mean fluorescence against the high-performance liquid chromatography (HPLC)-measured MK content. We have developed a novel and useful staining protocol for the high-throughput evaluation of Flavobacterium sp. mutant libraries, using FACS to identify mutants with increased MK-accumulating properties. This study also provides a reference for the screening of other industrial microbial strains. PMID:27001261
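
The reported linear response of mean fluorescence against HPLC-measured MK content implies a simple least-squares calibration that can be inverted to estimate MK content from a FACS readout. A sketch with hypothetical numbers (not the study's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration points: HPLC-measured MK content (mg/L)
# vs. mean Rh123 fluorescence (arbitrary units)
hplc = [0.5, 1.0, 2.0, 4.0]
fluor = [120.0, 210.0, 400.0, 790.0]
slope, intercept = fit_line(hplc, fluor)

def predict_mk(fluorescence):
    """Invert the calibration to estimate MK content from fluorescence."""
    return (fluorescence - intercept) / slope
```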

  10. Accurate electro-optical characterization of high power density GaAs-based laser diodes for screening strategies improvement

    NASA Astrophysics Data System (ADS)

    Del Vecchio, Pamela; Deshayes, Y.; Joly, Simon; Bettiati, M.; Laruelle, F.; Béchou, L.

    2014-05-01

    In this study, we report on a methodology based on reverse and forward current-voltage (I-V) curves and on Degree of Polarization (DoP) of electroluminescence measurements on 980 nm chip-on-submount (CoS) laser diodes, aimed at improving screening tests. Current-voltage curves are measured at reverse bias up to the breakdown voltage (VBR) with both high current accuracy (< 1 pA) and high voltage resolution (< 10 mV) at different submount temperatures (20-50°C). The DoP of luminescence of such devices, related to strains in the materials and the effect of shear strain on birefringence, is calculated from the simultaneous measurement of TE (LTE) and TM (LTM) polarized light emissions. We observe that application of high reverse voltages occasionally produces significant micro-plasma (MP) pre-breakdown on the reverse I-V characteristics, as recently observed in InGaN/GaN LEDs and assumed to be a response of electrically active defects. Comparisons between breakdown voltages and the number of MPs, and changes in leakage current at low forward voltage (< 0.1 V), are considered. DoP measurements are also analyzed versus temperature. Finally, the usefulness of these measurements for effective screening of devices is discussed.
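
The DoP from simultaneously measured TE and TM emission intensities follows the textbook normalized-difference definition; a minimal sketch with hypothetical intensities:

```python
def degree_of_polarization(l_te, l_tm):
    """Degree of polarization from TE and TM polarized light intensities
    measured simultaneously: (LTE - LTM) / (LTE + LTM)."""
    return (l_te - l_tm) / (l_te + l_tm)

# Hypothetical (LTE, LTM) pairs from a polarization-resolved scan
scan = [(3.0, 1.0), (2.5, 1.5), (2.0, 2.0)]
dops = [degree_of_polarization(te, tm) for te, tm in scan]
```

In the study, shifts of this quantity are tracked versus temperature as an indicator of strain in the device materials.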

  11. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each exam to estimate their individual score and the class mean score on that exam. Students received extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  12. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial rather than cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  13. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  14. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  15. Co-design for high performance computing.

    SciTech Connect

    Dosanjh, Sudip Singh; Hemmert, Karl Scott; Rodrigues, Arun F.

    2010-07-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  16. Performing Arts High Schools: A Burgeoning Movement.

    ERIC Educational Resources Information Center

    Curtis, Thomas E.

    1987-01-01

    Discusses performing arts high schools that train students in general education and music, visual arts, theater, and dance. Enumerates purposes, advantages (mainly, a challenging and motivating atmosphere with opportunities to concentrate in one area), and problems (funding, understaffing, academic standards, and admission criteria). Advises…

  17. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  18. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
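
The grouping step in the patented method, assigning threads to groups according to the addresses of their calling instructions, can be sketched as follows (thread ids and instruction addresses are illustrative, not from the patent):

```python
from collections import defaultdict

def group_threads_by_call_site(thread_addrs):
    """Group thread ids by the address of their current calling
    instruction. In a bulk-synchronous program, a small outlier group
    often marks hung or defective threads."""
    groups = defaultdict(list)
    for tid, addr in thread_addrs.items():
        groups[addr].append(tid)
    # Report the smallest groups first: the threads most likely stuck
    return sorted(groups.items(), key=lambda kv: len(kv[1]))

# Hypothetical snapshot: threads 0, 1, 3 wait at one call site,
# thread 2 is somewhere else entirely
current_pc = {0: 0x4a2c10, 1: 0x4a2c10, 2: 0x7ff304, 3: 0x4a2c10}
groups = group_threads_by_call_site(current_pc)
```

Displaying `groups` immediately draws the eye to thread 2, the lone thread whose call site differs from the rest.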

  19. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  20. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  1. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure the results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems. PMID:26872796

  2. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  3. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon® refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  4. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  5. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through-thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through-thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that the geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers. TEM analysis of the fiber failure mechanisms was also attempted, though without

  6. Highly precise and accurate terahertz polarization measurements based on electro-optic sampling with polarization modulation of probe pulses.

    PubMed

    Nemoto, Natsuki; Higuchi, Takuya; Kanda, Natsuki; Konishi, Kuniaki; Kuwata-Gonokami, Makoto

    2014-07-28

    We have developed an electro-optic (EO) sampling method with polarization modulation of probe pulses; this method allows us to measure the direction of a terahertz (THz) electric-field vector with a precision of 0.1 mrad in a data acquisition time of 660 ms using a 14.0-kHz repetition rate pulsed light source. Through combination with a THz time-domain spectroscopy technique, a time-dependent two-dimensional THz electric field was obtained. We used a photoelastic modulator for probe-polarization modulation and a (111)-oriented zincblende crystal as the EO crystal. Using the tilted pulse front excitation method with stable regeneratively amplified pulses, we prepared stable and intense THz pulses and performed pulse-by-pulse analog-to-digital conversion of the signals. These techniques significantly reduced statistical errors and enabled sub-mrad THz polarization measurements. We examined the performance of this method by measuring a wire-grid polarizer as a sample. The present method will open a new frontier of high-precision THz polarization sensitive measurements. PMID:25089412

  7. Liquid Hybridization and Solid Phase Detection: A Highly Sensitive and Accurate Strategy for MicroRNA Detection in Plants and Animals.

    PubMed

    Li, Fosheng; Mei, Lanju; Zhan, Cheng; Mao, Qiang; Yao, Min; Wang, Shenghua; Tang, Lin; Chen, Fang

    2016-01-01

    MicroRNAs (miRNAs) play important roles in nearly every aspect of biology, including physiological, biochemical, developmental and pathological processes. Therefore, a highly sensitive and accurate method of detection of miRNAs has great potential in research on theory and application, such as the clinical approach to medicine, animal and plant production, as well as stress response. Here, we report a strategic method to detect miRNAs from multicellular organisms, which mainly includes liquid hybridization and solid phase detection (LHSPD); it has been verified in various species and is much more sensitive than traditional biotin-labeled Northern blots. By using this strategy and chemiluminescent detection with digoxigenin (DIG)-labeled or biotin-labeled oligonucleotide probes, as low as 0.01-0.25 fmol [for the DIG-CDP Star (disodium 2-chloro-5-(4-methoxyspiro{1,2-dioxetane-3,2'-(5'-chloro)tricyclo[3.3.1.13,7]decan}-4-yl)phenyl phosphate) system], 0.005-0.1 fmol (for the biotin-CDP Star system), or 0.05-0.5 fmol (for the biotin-luminol system) of miRNA can be detected, and one-base differences can be distinguished between miRNA sequences. Moreover, LHSPD performed very well in the quantitative analysis of miRNAs, and the whole process can be completed within about 9 h. The strategy of LHSPD provides an effective solution for rapid, accurate, and sensitive detection and quantitative analysis of miRNAs in plants and animals. PMID:27598139

  8. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy making purposes. There is therefore a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We have experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period in our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis performed posed certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets towards the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 nevertheless yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r2=89.80%), dissolved oxygen (r2=88.53%), conductivity (r2=88.18%), ammonium (r2=87.2%) and pH (r2=86.35%), while total phosphorus (r2=70.55%) and nitrates (r2=55.50%) showed lower correlation rates.

  9. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
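    To illustrate the idea of single-step explicit propagation with a fixed number of grid points per wavelength, here is a deliberately simple scheme (second-order Lax-Wendroff, far below the eleventh-order accuracy of the paper's algorithms) advecting a periodic sine wave for exactly one period:

    ```python
    import math

    # Single-step explicit finite difference scheme for u_t + c*u_x = 0
    # (Lax-Wendroff), on a periodic domain, as a stand-in for the much
    # higher-order algorithms described in the abstract.

    def lax_wendroff_step(u, cfl):
        n = len(u)
        new = [0.0] * n
        for i in range(n):
            um, up = u[(i - 1) % n], u[(i + 1) % n]
            new[i] = (u[i] - 0.5 * cfl * (up - um)
                      + 0.5 * cfl ** 2 * (up - 2 * u[i] + um))
        return new

    n = 64                       # 64 grid points per wavelength
    u = [math.sin(2 * math.pi * i / n) for i in range(n)]
    cfl = 0.5
    steps = int(round(n / cfl))  # wave travels one full domain length
    for _ in range(steps):
        u = lax_wendroff_step(u, cfl)

    exact = [math.sin(2 * math.pi * i / n) for i in range(n)]
    err = max(abs(a - b) for a, b in zip(u, exact))
    print(f"max error after one period: {err:.2e}")
    ```

    After a single period the second-order scheme already shows visible phase error; the point of the paper's high-order, high-resolution families is that comparable accuracy survives O(10^6) periods at only eight points per wavelength.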

  10. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high-performance anode material. Under this effort, ASI made strides toward establishing a quality control protocol for the large-volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  11. High performance RGB LED backlight in high temperature environment

    NASA Astrophysics Data System (ADS)

    Grabski, Grzegorz; Gurr, Walter; Green, John

    2008-04-01

    The Aerospace and Defense display industry is in the midst of converting the light sources used in AMLCD backlighting technology from fluorescent lamps to LEDs. Although challenging, fluorescent backlighting technology delivered good product in high-end applications. LEDs, however, hold the promise of even greater efficiency and lower cost. The history of LED backlighting is short and very dynamic; expectations are high and promises are many. It appears that for engineers developing backlights for high-performance displays, life has not become easier with the change of technology. This paper will discuss just one of the many challenges engineers face: operation of LED backlights in high-temperature environments. It will present experimental data showing several advantages of RGB LED technology over other lamp technologies for high-performance commercial and military applications.

  12. A Low-Cost and High-Performance Conductivity Meter.

    ERIC Educational Resources Information Center

    da Rocha, Rogerio T.; And Others

    1997-01-01

    Describes an apparatus that is stable and accurate enough for quantitative conductivity experiments but maintains the simplicity of construction and use as well as low cost. Discusses principles and implementation and the performance of the assembled apparatus. (JRH)

  13. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. It also offers a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  14. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero-energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems, and in turn reduce the energy consumption of commercial buildings and achieve zero-energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  15. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  16. High thermoelectric performance of the distorted bismuth(110) layer.

    PubMed

    Cheng, L; Liu, H J; Zhang, J; Wei, J; Liang, J H; Jiang, P H; Fan, D D; Sun, L; Shi, J

    2016-07-14

    The thermoelectric properties of the distorted bismuth(110) layer are investigated using first-principles calculations combined with the Boltzmann transport equation for both electrons and phonons. To accurately predict the electronic and transport properties, quasiparticle corrections with the GW approximation of many-body effects have been explicitly included. It is found that a maximum ZT value of 6.4 can be achieved for n-type systems, which essentially stems from the weak scattering of electrons. Moreover, we demonstrate that the distorted Bi layer retains high ZT values over relatively broad ranges of both temperature and carrier concentration. Our theoretical work emphasizes that the deformation potential constant characterizing the electron-phonon scattering strength is an important criterion in the search for high-performance thermoelectric materials. PMID:27302907

  17. High speed, high performance /Hg,Cd/Te photodiode detectors.

    NASA Technical Reports Server (NTRS)

    Soderman, D. A.; Pinkston, W. H.

    1972-01-01

    The current performance of high speed photodiode detectors for the 1 to 10 micron spectral region is discussed. The (Hg,Cd)Te photodiode configuration, detector properties, integration in laser receiver modules, and frequency response are considered for near infrared and far infrared wavelengths. The recent advances in (Hg,Cd)Te material and device development are indicated by the realization not only of exceptionally high speed detectors but of detectors that exhibit excellent detectivities. The performance improves substantially when the detector is cooled. This detector junction technology has been extended to other compositions of (Hg,Cd)Te for peak spectral responses at 5 and 10 micron.

  18. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  19. Assessment of a sponge layer as a non-reflective boundary treatment with highly accurate gust–airfoil interaction results

    NASA Astrophysics Data System (ADS)

    Crivellini, A.

    2016-02-01

    This paper deals with the numerical performance of a sponge layer as a non-reflective boundary condition. This technique is well known and widely adopted, but only recently have the reasons for a sponge failure been recognised, in an analysis by Mani. For multidimensional problems, the ineffectiveness of the method is due to the self-reflections of the sponge occurring when it interacts with an oblique acoustic wave. Based on his theoretical investigations, Mani gives some useful guidelines for implementing effective sponge layers. However, in our opinion, some practical indications are still missing from the current literature. Here, an extensive numerical study of the performance of this technique is presented. Moreover, we analyse a reduced sponge implementation characterised by undamped partial differential equations for the velocity components. The main aim of this paper lies in the determination of the minimal width of the layer, as well as of the corresponding strength, required to obtain a reflection error of no more than a few per cent of that observed when solving the same problem on the same grid, but without employing the sponge layer term. For this purpose, a test case of computational aeroacoustics, the single-airfoil gust response problem, has been addressed in several configurations. As a direct consequence of our investigation, we present a well documented and highly validated reference solution for the far-field acoustic intensity, a result that is not well established in the literature. Lastly, proof of the accuracy of an algorithm for coupling sub-domains solved by the linear and non-linear Euler governing equations is given. This result is exploited here to adopt a linear-based sponge layer even in a non-linear computation.
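    The basic sponge-layer construction the paper studies can be sketched in one dimension: a damping term -sigma(x)*u is added inside a layer at the outflow boundary, with sigma ramped smoothly from zero to limit the self-reflections the paper discusses. The layer width, ramp shape, and strength below are illustrative choices, not the paper's tuned values, and the spatial scheme is simple first-order upwind rather than the paper's discretisation.

    ```python
    import math

    # 1-D advection u_t + c*u_x = -sigma(x)*u with a quadratic-ramp sponge
    # over the last `width` cells of the domain.

    n, width, sigma_max = 400, 80, 0.5
    dx, c, cfl = 1.0, 1.0, 0.5
    dt = cfl * dx / c

    def sigma(i):
        """Damping coefficient: zero outside the layer, quadratic ramp inside."""
        start = n - width
        if i < start:
            return 0.0
        s = (i - start) / width
        return sigma_max * s * s

    # Gaussian pulse travelling right into the sponge (first-order upwind).
    u = [math.exp(-((i - 100) / 10.0) ** 2) for i in range(n)]
    for _ in range(600):
        un = u[:]
        for i in range(1, n):
            adv = -c * (un[i] - un[i - 1]) / dx
            u[i] = un[i] + dt * (adv - sigma(i) * un[i])

    amplitude_left = max(abs(v) for v in u[:n - width])
    print(f"residual amplitude outside sponge: {amplitude_left:.2e}")
    ```

    By the end of the run the pulse has been absorbed inside the layer, leaving a negligible residual in the interior; the paper's contribution is quantifying how narrow and how strong such a layer can be made before reflections from the sponge itself become significant.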

  20. Poisson's ratio of high-performance concrete

    SciTech Connect

    Persson, B.

    1999-10-01

    This article outlines an experimental and numerical study on Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight qualities of concrete (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two concretes in addition contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.

  1. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem with an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high-performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  2. High-performance parallel input device

    NASA Astrophysics Data System (ADS)

    Daniel, R. W.; Fischer, Patrick J.; Hunter, B.

    1993-12-01

    Research into force reflecting remote manipulation has recently started to move away from common error systems towards explicit force control. In order to maximize the benefit provided by explicit force reflection the designer has to take into account the asymmetry of the bandwidths of the forward and reflecting loops. This paper reports on a high performance system designed and built at Oxford University and Harwell Laboratories and on the preliminary results achieved when performing simple force reflecting tasks. The input device is based on a modified Stewart Platform, which offers the potential of very high bandwidth force reflection, well above the normal 2 - 10 Hz range achieved with common error systems. The slave is a nuclear hardened Puma industrial robot, offering a low cost, reliable solution to remote manipulation tasks.

  3. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
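    The shape of such script-driven relational storage for monitoring samples can be sketched as follows. The stdlib sqlite3 module stands in for MySQL so the example is self-contained, and the schema, host names, and metric names are hypothetical, not those of the SLAC deployment.

    ```python
    import sqlite3
    import time

    # Relational store for monitoring samples, in the spirit of replacing
    # Ganglia's fixed-size round-robin database with a SQL database.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE metrics (
        host TEXT, metric TEXT, value REAL, ts INTEGER)""")

    def record(host, metric, value, ts=None):
        conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                     (host, metric, value, ts if ts is not None else int(time.time())))

    record("node01", "load_one", 0.42)
    record("node01", "load_one", 0.57)
    record("node02", "load_one", 1.10)

    # Unlike an RRD, every raw sample is retained and remains queryable.
    rows = conn.execute("""SELECT host, AVG(value) FROM metrics
                           WHERE metric = 'load_one' GROUP BY host
                           ORDER BY host""").fetchall()
    print(rows)
    ```

    The data-integrity advantage described in the abstract follows from this design: samples are appended as durable rows rather than overwritten in a fixed-size ring.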

  4. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  5. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart-have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  6. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    The design of high-performance piezo composite actuators is developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface that exists in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties varied through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D-elasticity are compared with a model based on the Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input into the analytical models to predict FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios while the CLT solutions gave poor results compared to the 2D elasticity solutions for

  7. Tough, High-Performance, Thermoplastic Addition Polymers

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Proctor, K. Mason; Gleason, John; Morgan, Cassandra; Partos, Richard

    1991-01-01

    Series of addition-type thermoplastics (ATT's) exhibit useful properties. Because of their addition curing and linear structure, ATT polymers are tough, like thermoplastics, and easily processed, like thermosets. Work was undertaken to develop a chemical reaction forming stable aromatic rings in the backbone of the ATT polymer, combining high-temperature performance and thermo-oxidative stability with toughness and easy processability, and minimizing or eliminating the need for tradeoffs among properties often observed in conventional polymer syntheses.

  8. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal for this task is to develop an Extensible File System (ELFS). ELFS attacks three problems: (1) providing high-bandwidth performance architectures; (2) reducing the cognitive burden faced by application programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.

  9. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  10. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling-media temperatures for steady-state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  11. SOAR Telescope: 4-meter high-performance-mount performance results

    NASA Astrophysics Data System (ADS)

    Warner, Michael; Krabbendam, Victor; Schumacher, German; Delgadillo, Juan C.

    2004-09-01

    The 4.1-meter SOuthern Astrophysical Research (SOAR) Telescope mount and drive systems have been commissioned and are in routine operation. The telescope mount, the structure and its full drive systems, was fully erected and tested at the factory prior to reassembly and commissioning at the observatory. This successful approach enabled complete integration, from a concrete pier to a pointing and tracking telescope, on the mountain, in a rapid 3-month period. The telescope mount with its high instrument payload and demanding efficiency requirements is an important component for the success of the SOAR scientific mission. The SOAR mount utilizes rolling element bearings for both azimuth and elevation support, counter torqued sets of gear motors on azimuth and two frameless torque motors built into the elevation axles. Tracking jitter and its associated spectra, pointing errors and their sources, bearing friction and servo performances are critical criteria for this mount concept and are important factors in achieving the mission. This paper addresses the performance results obtained during the integration, commissioning, and first light periods of the telescope mount system.

  12. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies that will help translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  13. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to design scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures as the cost for data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  14. Dense and accurate motion and strain estimation in high resolution speckle images using an image-adaptive approach

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2011-09-01

    Digital image processing methods represent a viable and well acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both the spatial and gradient image information present. Subsequently, the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
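    The strain-recovery step the abstract describes, a local least-squares fit of a linear displacement model over a window of measured displacements, can be sketched as follows. The window is synthetic, with a known constant strain, rather than displacements correlated from real speckle images.

    ```python
    # Fit u(x, y) = a + b*x + c*y by least squares over a window of
    # displacement samples; the strain component e_xx is the slope b.

    def fit_plane(pts):
        """Solve the 3x3 normal equations for (a, b, c) by Gaussian elimination."""
        A = [[0.0] * 3 for _ in range(3)]
        rhs = [0.0] * 3
        for x, y, u in pts:
            row = (1.0, x, y)
            for i in range(3):
                for j in range(3):
                    A[i][j] += row[i] * row[j]
                rhs[i] += row[i] * u
        for col in range(3):                      # forward elimination
            p = max(range(col, 3), key=lambda r: abs(A[r][col]))
            A[col], A[p] = A[p], A[col]
            rhs[col], rhs[p] = rhs[p], rhs[col]
            for r in range(col + 1, 3):
                f = A[r][col] / A[col][col]
                for j in range(col, 3):
                    A[r][j] -= f * A[col][j]
                rhs[r] -= f * rhs[col]
        sol = [0.0] * 3                           # back substitution
        for r in (2, 1, 0):
            sol[r] = (rhs[r] - sum(A[r][j] * sol[j]
                                   for j in range(r + 1, 3))) / A[r][r]
        return sol  # (a, b, c)

    # Synthetic 7x7 window with constant strain e_xx = 0.002 and e_xy tilt
    pts = [(x, y, 0.002 * x + 0.0005 * y)
           for x in range(-3, 4) for y in range(-3, 4)]
    a, b, c = fit_plane(pts)
    print(f"e_xx = du/dx = {b:.4f}")
    ```

    Repeating this fit per window on the horizontal and vertical displacement fields yields the full strain components; the least-squares averaging is also what gives the method its robustness to per-pixel displacement noise.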

  15. A simple method for the accurate determination of the Henry's law constant for highly sorptive, semivolatile organic compounds.

    PubMed

    Kim, Yong-Hyun; Kim, Ki-Hyun

    2016-01-01

    A novel technique is developed to determine the Henry's law constants (HLCs) of seven volatile fatty acids (VFAs) with significantly high solubility using a combined application of thermal desorber/gas chromatography/mass spectrometry (TD/GC/MS). In light of the strong sorptive properties of these semi-volatile organic compounds (SVOCs), their HLCs were determined by properly evaluating the fraction lost on the surfaces of the materials used to induce equilibrium (vial, gas-tight syringe, and sorption tube). To this end, a total of nine repeated experiments were conducted in a closed (static) system at three different gas/liquid volume ratios. The best estimates for HLCs (M/atm) were thus 7,200 (propionic acid), 4,700 (i-butyric acid), 4,400 (n-butyric acid), 2,700 (i-valeric acid), 2,400 (n-valeric acid), 1,000 (hexanoic acid), and 1,500 (heptanoic acid). The differences in the HLC values between this study and previous studies, if assessed in terms of the percent difference, ranged from 9.2% (n-valeric acid) to 55.7% (i-valeric acid). We overcame the main cause of errors encountered in previous studies by properly correcting for the sorptive losses of the SVOCs that inevitably took place, particularly on the walls of the equilibration systems (mainly the headspace vial and/or the gas-tight syringe). PMID:26577086
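    The core mass balance behind such a static-headspace determination can be sketched as follows: the dissolved concentration is recovered from the dosed amount after subtracting both the headspace amount and the wall-sorbed fraction, and the HLC (M/atm) is the ratio of dissolved concentration to partial pressure. All numbers are illustrative, not the study's measurements.

    ```python
    # Sorption-corrected static-headspace mass balance for a Henry's law
    # constant, HLC = C_aq / p, in M/atm.

    R = 0.082057  # gas constant, L*atm/(mol*K)

    def hlc_from_headspace(n_total, n_sorbed, c_gas, v_gas, v_liq, temp_k):
        """n_total: mol dosed; n_sorbed: mol lost to walls;
        c_gas: headspace concentration (mol/L); volumes in L."""
        n_gas = c_gas * v_gas                        # mol in headspace
        c_aq = (n_total - n_sorbed - n_gas) / v_liq  # mol/L in solution
        p = c_gas * R * temp_k                       # partial pressure (atm), ideal gas
        return c_aq / p

    # Illustrative run: 1e-5 mol dosed, 5% lost to walls, 10 mL headspace
    # over 20 mL of solution at 25 C.
    h = hlc_from_headspace(n_total=1.0e-5, n_sorbed=5.0e-7,
                           c_gas=4.0e-9, v_gas=0.010, v_liq=0.020,
                           temp_k=298.15)
    print(f"HLC ~ {h:.0f} M/atm")
    ```

    Ignoring the sorbed term inflates the apparent dissolved amount and biases the HLC, which is the error source the abstract says previous studies failed to correct; repeating the balance at several gas/liquid volume ratios, as in the study, checks the consistency of the estimate.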

  16. Parameterization of an interfacial force field for accurate representation of peptide adsorption free energy on high-density polyethylene

    PubMed Central

    Abramyan, Tigran M.; Snyder, James A.; Yancey, Jeremy A.; Thyparambil, Aby A.; Wei, Yang; Stuart, Steven J.; Latour, Robert A.

    2015-01-01

    Interfacial force field (IFF) parameters for use with the CHARMM force field have been developed for interactions between peptides and high-density polyethylene (HDPE). Parameterization of the IFF was performed to bring the calculated adsorption free energies of small TGTG–X–GTGT host–guest peptides (T = threonine, G = glycine, and X = variable amino-acid residue) on HDPE to within ±0.5 kcal/mol of the experimental values. This IFF parameter set consists of tuned nonbonded parameters (i.e., partial charges and Lennard–Jones parameters) for use with an in-house-modified CHARMM molecular dynamics program that enables the use of an independent set of force field parameters to control molecular behavior at a solid–liquid interface. The R correlation coefficient between the simulated and experimental peptide adsorption free energies increased from 0.00 for the standard CHARMM force field parameters to 0.88 for the tuned IFF parameters. Subsequent studies are planned to apply the tuned IFF parameter set for the simulation of protein adsorption behavior on an HDPE surface for comparison with experimental values of adsorbed protein orientation and conformation. PMID:25818122
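    The R value quoted above is a Pearson correlation between simulated and experimental adsorption free energies; a minimal sketch (with made-up numbers, not the study's data) is:

```python
import numpy as np

# Illustrative adsorption free energies (kcal/mol); not values from the study.
dG_exp = np.array([-1.2, -0.4, 0.3, -2.1, -0.8])
dG_sim = np.array([-1.0, -0.6, 0.5, -1.9, -0.7])
R = np.corrcoef(dG_exp, dG_sim)[0, 1]   # Pearson correlation coefficient
```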

  17. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid-state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as promising candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials, including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering, and nanostructuring.
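    The figure of merit discussed above combines the Seebeck coefficient, electrical conductivity, and thermal conductivity; a minimal sketch with typical oxide-like numbers (illustrative, not taken from the review) is:

```python
# Dimensionless thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
def figure_of_merit(S, sigma, kappa, T):
    """S: Seebeck coefficient (V/K), sigma: electrical conductivity (S/m),
    kappa: thermal conductivity (W/(m*K)), T: absolute temperature (K)."""
    return S ** 2 * sigma * T / kappa

zt = figure_of_merit(S=200e-6, sigma=1e4, kappa=3.0, T=1000.0)
# Falls in the 0.1-0.4 range cited above for oxides.
```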

  18. High Performance High-Tc Superconducting Wires

    SciTech Connect

    Kang, Sukill; Goyal, Amit; Li, Jing; Gapud, Albert Agcaoili; Martin, Patrick M; Heatherly Jr, Lee; Thompson, James R; Christen, David K; List III, Frederick Alyious; Paranthaman, Mariappan Parans; Lee, Dominic F

    2006-01-01

    We demonstrated short segments of a superconducting wire that meets or exceeds performance requirements for many large-scale applications of high-temperature superconducting materials, especially those requiring a high supercurrent and/or a high engineering critical current density in applied magnetic fields. The performance requirements for these varied applications were met in 3-micrometer-thick YBa2Cu3O7-δ films epitaxially grown via pulsed laser ablation on rolling-assisted biaxially textured substrates. Enhancements of the critical current in self-field, as well as excellent retention of this current in high applied magnetic fields, were achieved in the thick films via incorporation of a periodic array of extended columnar defects, composed of self-aligned nanodots of nonsuperconducting material extending through the entire thickness of the film. These columnar defects are highly effective in pinning the superconducting vortices or flux lines, thereby resulting in the substantially enhanced performance of this wire.
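    The distinction between the film and engineering current densities mentioned above matters for coated conductors; a hedged sketch with illustrative dimensions (not the wire's actual geometry):

```python
# Jc uses only the superconducting film cross-section; the engineering value
# Je divides the same critical current by the full wire cross-section
# (film plus substrate). Dimensions below are illustrative.
def current_densities(Ic, film_t, substrate_t, width):
    """Ic in A; thicknesses and width in m. Returns (Jc, Je) in A/m^2."""
    Jc = Ic / (film_t * width)
    Je = Ic / ((film_t + substrate_t) * width)
    return Jc, Je

Jc, Je = current_densities(Ic=300.0, film_t=3e-6, substrate_t=50e-6,
                           width=1e-2)
```

    A thicker film at fixed Jc raises Je, which is why the 3-micrometer film thickness above is significant.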

  19. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at the event level, or with a much larger effort at the track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. 
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
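    The basket-scheduling idea sketched in this abstract can be illustrated as follows; the pool size, basket size, and the trivial `transport_basket` stand-in are assumptions for illustration, not GeantV code:

```python
from concurrent.futures import ThreadPoolExecutor

# Group particles into "baskets" (vectors) and schedule the baskets onto a
# pool of workers, instead of parallelizing whole events.
def transport_basket(basket):
    # Stand-in for one vectorized transport step: attenuate each energy.
    return [e * 0.9 for e in basket]

particles = [float(i) for i in range(32)]          # toy particle energies
baskets = [particles[i:i + 8] for i in range(0, len(particles), 8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transport_basket, baskets))
transported = [e for basket in results for e in basket]
```

    Because `pool.map` preserves input order, the baskets can be reassembled deterministically regardless of which worker processed them.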

  20. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, so-called Interior Computed Tomography (ICT), which may reduce the dose delivered to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, which decreases scatter and system cost as well as imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate its imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework, even with few-view projection data, while still preserving sharp edges in the images.

  1. The Hall effect in the organic conductor TTF-TCNQ: choice of geometry for accurate measurements of a highly anisotropic system.

    PubMed

    Tafra, E; Culo, M; Basletić, M; Korin-Hamzić, B; Hamzić, A; Jacobsen, C S

    2012-02-01

    We have measured the Hall effect on recently synthesized single crystals of the quasi-one-dimensional organic conductor TTF-TCNQ (tetrathiafulvalene-tetracyanoquinodimethane), a well known charge transfer complex that has two kinds of conductive stacks: the donor (TTF) and the acceptor (TCNQ) chains. The measurements were performed in the temperature interval 30 K < T < 300 K and for several different magnetic field and current directions through the crystal. By applying the equivalent isotropic sample approach, we have demonstrated the importance of the choice of optimal geometry for accurate Hall effect measurements. Our results show, contrary to past belief, that the Hall coefficient does not depend on the geometry of measurements and that the Hall coefficient value is approximately zero in the high temperature region (T > 150 K), implying that there is no dominance of either the TTF or the TCNQ chain. At lower temperatures our measurements clearly prove that all three phase transitions of TTF-TCNQ could be identified from Hall effect measurements. PMID:22214728
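    For reference (not from the paper), the Hall coefficient follows from the measured transverse voltage as R_H = V_H·t/(I·B); the numbers below are illustrative:

```python
def hall_coefficient(V_H, thickness, I, B):
    """V_H: Hall voltage (V); thickness: sample thickness (m); I: current (A);
    B: magnetic field (T). Returns R_H in m^3/C; a value near zero signals
    compensation between electron-like and hole-like carriers, as reported
    above for T > 150 K."""
    return V_H * thickness / (I * B)

R_H = hall_coefficient(V_H=2e-7, thickness=1e-4, I=1e-3, B=5.0)
```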

  3. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  4. High Performance Database Management for Earth Sciences

    NASA Technical Reports Server (NTRS)

    Rishe, Naphtali; Barton, David; Urban, Frank; Chekmasov, Maxim; Martinez, Maria; Alvarez, Elms; Gutierrez, Martha; Pardo, Philippe

    1998-01-01

    The High Performance Database Research Center at Florida International University is completing the development of a highly parallel database system based on the semantic/object-oriented approach. This system provides exceptional usability and flexibility. It allows shorter application design and programming cycles and gives the user control via an intuitive information structure. It empowers the end user to pose complex ad hoc decision-support queries. Superior efficiency is provided through a high level of optimization, which is transparent to the user. A manifold reduction in storage size is achieved for many applications. The system is also operable via Internet browsers. The system will be used for the NASA Applications Center program to store remote sensing data, as well as for Earth Science applications.

  5. Heavily Doped PbSe with High Thermoelectric Performance

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor); Wang, Heng (Inventor); Pei, Yanzhong (Inventor)

    2015-01-01

    The present invention discloses heavily doped PbSe with high thermoelectric performance. Thermoelectric property measurements disclosed herein indicated that PbSe is a high-zT material for mid-to-high-temperature thermoelectric applications. At 850 K, a peak zT greater than 1.3 was observed when n_H ≈ 1.0 x 10^20 cm^-3. The present invention also discloses that a number of strategies used to improve the zT of PbTe, such as alloying with other elements, nanostructuring, and band modification, may also be used to further improve the zT of PbSe.

  6. Performance of annular high frequency thermoacoustic engines

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ivan A.

    This thesis presents studies of the behavior of miniature annular thermoacoustic prime movers and PIV imaging of the complex sound fields inside the small acoustic waveguides when driven by a temperature gradient. Thermoacoustic engines operating in the standing-wave mode are limited in their acoustic efficiency by a high degree of irreversibility that is inherent in how they work. Better performance can be achieved by using traveling waves in the thermoacoustic devices. This has led to the development of an annular high-frequency thermoacoustic prime mover consisting of a regenerator, a random stack in-between a hot and a cold heat exchanger, inside an annular waveguide. Miniature devices were developed and studied with operating frequencies in the range of 2-4 kHz. This corresponds to an average ring circumference of 11 cm for the 3 kHz device, the resonator bore being 6 mm. A similar device with an 11 mm bore and a length of 18 cm was also investigated; its resonant frequency was 2 kHz. Sound intensities as high as 166.8 dB were generated with limited heat input. Sound power was extracted from the annular structure by an impedance-matching side arm. The nature of the acoustic wave generated by heat was investigated using a high-speed PIV instrument. Although the acoustic device appears symmetric, its performance is characterized by a broken symmetry and by perturbations that exist in its structure. Effects of these are observed in the PIV imaging; images show axial and radial components. Moreover, the PIV studies show effects of streaming and instabilities which affect the devices' acoustic efficiency. The acoustic efficiency is nevertheless high, reaching about 40% of the Carnot efficiency. This type of device shows much promise as a high-efficiency energy converter; it can be reduced in size for microcircuit applications.
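    To put the closing efficiency figure in context, "percent of Carnot" measures performance against the thermodynamic limit; a minimal sketch with illustrative temperatures (the thesis does not state these):

```python
# Carnot efficiency bounds any heat engine operating between T_hot and T_cold.
def carnot_efficiency(T_hot, T_cold):
    return 1.0 - T_cold / T_hot

eta_carnot = carnot_efficiency(T_hot=600.0, T_cold=300.0)   # 0.5
eta_device = 0.40 * eta_carnot   # "40% of Carnot" -> 20% absolute efficiency
```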

  7. Small-Scale High-Performance Optics

    SciTech Connect

    WILSON, CHRISTOPHER W.; LEGER, CHRIS L.; SPLETZER, BARRY L.

    2002-06-01

    Historically, high resolution, high slew rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror the steerable array could realize a micro-scale pan, tilt and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than MEMS elements). This LDRD provided the first steps towards the goal of a new class of small-scale high-performance optics based on MEMS technology. A large-scale, proof of concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (< 1cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror providing much higher pixel density in the region of interest than is possible with a full 360 deg. imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and axi-symmetric reflector optimization were performed to address depth of focus issues that occurred in the proof of concept system. The resulting equations will provide the tools for developing future system designs.

  8. Management issues for high performance storage systems

    SciTech Connect

    Louis, S.; Burris, R.

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although it shares common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  9. High capacity heat pipe performance demonstration

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A high-capacity heat pipe which will operate in one-g and in zero-g is investigated. An artery configuration which is self-priming in one-g was emphasized. Two artery modifications were evolved as candidates to achieve one-g priming while providing the required very high performance: the four-artery and the eight-artery configurations. These were each evaluated analytically for performance and priming capability. The eight-artery configuration was found to be inadequate from a performance standpoint. The four-artery configuration showed promise of working. A five-inch-long priming element test article was fabricated using the four-artery design. Plexiglas viewing windows were installed at each end of the heat pipe to permit viewing of the priming activity. The five-inch priming element would not successfully prime in one-g. The difficulties with priming in one-g raised questions about zero-g priming. Therefore, a small test-element heat pipe for verifying that the proposed configuration will self-prime in zero-g was fabricated and delivered.

  10. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the framework of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: the first section recapitulates the benefits and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; the last part of the paper describes the prototyping activities executed during the HiP CBC project.

  11. Accurate dipole moment curve and non-adiabatic effects on the high resolution spectroscopic properties of the LiH molecule

    NASA Astrophysics Data System (ADS)

    Diniz, Leonardo G.; Kirnosov, Nikita; Alijah, Alexander; Mohallem, José R.; Adamowicz, Ludwik

    2016-04-01

    A very accurate dipole moment curve (DMC) for the ground X1Σ+ electronic state of the 7LiH molecule is reported. It is calculated with the use of all-particle explicitly correlated Gaussian functions with shifted centers. The DMC - the most accurate to our knowledge - and the corresponding highly accurate potential energy curve are used to calculate the transition energies, the transition dipole moments, and the Einstein coefficients for the rovibrational transitions with ΔJ = -1 and Δv ≤ 5. The importance of the non-adiabatic effects in determining these properties is evaluated using the model of a vibrational R-dependent effective reduced mass in the rovibrational calculations introduced earlier (Diniz et al., 2015). The results of the present calculations are used to assess the quality of the two complete line lists of 7LiH available in the literature.
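    The Einstein coefficients mentioned above follow from the transition dipole moments and line frequencies; a hedged sketch (constants from CODATA; the line parameters are purely illustrative, not LiH values):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
H = 6.62607015e-34        # Planck constant, J*s
C = 2.99792458e8          # speed of light, m/s
DEBYE = 3.33564e-30       # 1 Debye in C*m

def einstein_A(nu_hz, mu_debye):
    """Spontaneous-emission rate A (1/s) for a transition of frequency nu_hz
    with transition dipole moment mu_debye (Debye):
    A = 16*pi^3 * nu^3 * |mu|^2 / (3 * eps0 * h * c^3)."""
    mu = mu_debye * DEBYE
    return 16 * math.pi ** 3 * nu_hz ** 3 * mu ** 2 / (3 * EPS0 * H * C ** 3)

# An infrared line near 1400 cm^-1 with a 0.1 D transition dipole:
A = einstein_A(nu_hz=1400.0 * 100.0 * C, mu_debye=0.1)
```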

  12. Performance of the CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Perrotta, Andrea

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved tracking and vertexing algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing energy reconstruction.

  13. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed with the widely used quantum chemistry package GAMESS. Close-to-maximum energy savings were obtained with a substantially low performance loss on the given platform.
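    The intuition behind lowering the frequency during communication phases can be sketched with a simple power model; the cubic frequency dependence and all numbers are modeling assumptions, not measurements from this work:

```python
# Energy of one phase at a fraction f_frac of the maximum frequency.
# Dynamic power scales roughly as f^3; compute time stretches as 1/f, while
# communication time is (to first order) frequency-insensitive.
def phase_energy(p_static, p_dyn_max, f_frac, duration, frequency_sensitive):
    t = duration / f_frac if frequency_sensitive else duration
    return (p_static + p_dyn_max * f_frac ** 3) * t

# A 2 s communication phase run at half frequency costs far less dynamic
# energy with no slowdown:
e_full = phase_energy(40.0, 60.0, 1.0, 2.0, frequency_sensitive=False)
e_half = phase_energy(40.0, 60.0, 0.5, 2.0, frequency_sensitive=False)
```

    For compute-bound phases the same model shows the trade-off: the lower power is multiplied by a longer runtime, so savings are no longer guaranteed.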

  14. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  15. Limited rotational and rovibrational line lists computed with highly accurate quartic force fields and ab initio dipole surfaces.

    PubMed

    Fortenberry, Ryan C; Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2014-02-01

    In this work, computational procedures are employed to compute the rotational and rovibrational spectra and line lists for H2O, CO2, and SO2. Building on the established use of quartic force fields, MP2 and CCSD(T) Dipole Moment Surfaces (DMSs) are computed for each system of study in order to produce line intensities as well as the transition energies. The computed results exhibit a clear correlation to reference data available in the HITRAN database. Additionally, even though CCSD(T) DMSs produce more accurate intensities as compared to experiment, the use of MP2 DMSs results in reliable line lists that are still comparable to experiment. The use of the less computationally costly MP2 method is beneficial in the study of larger systems where use of CCSD(T) would be more costly. PMID:23692860

  16. A new approach based on embedding Green's functions into fixed-point iterations for highly accurate solution to Troesch's problem

    NASA Astrophysics Data System (ADS)

    Kafri, H. Q.; Khuri, S. A.; Sayfy, A.

    2016-03-01

    In this paper, a novel approach is introduced for the solution of the non-linear Troesch's boundary value problem. The underlying strategy is based on Green's functions and fixed-point iterations, including Picard's and Krasnoselskii-Mann's schemes. The resulting numerical solutions are compared with both the analytical solutions and the numerical solutions that exist in the literature. Convergence of the iterative schemes is proved via the contraction principle. It is observed that the method handles the boundary layer very efficiently, reduces lengthy calculations, provides rapid convergence, and yields accurate results, particularly for large eigenvalues. Indeed, to our knowledge, this is the first time that this problem has been solved successfully for very large eigenvalues; in fact, the rate of convergence increases as the magnitude of the eigenvalues increases.
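    The embedding described above can be sketched for a small Troesch parameter; the grid, iteration count, and the choice m = 1 are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Picard iteration with the Green's function of u'' = f, u(0) = u(1) = 0:
# G(x, s) = s*(x - 1) for s <= x, and x*(s - 1) for s >= x, so that
#   y_{k+1}(x) = x + integral_0^1 G(x, s) * m * sinh(m * y_k(s)) ds
# solves Troesch's problem y'' = m*sinh(m*y), y(0) = 0, y(1) = 1.
m = 1.0
n = 201
x = np.linspace(0.0, 1.0, n)
S, X = np.meshgrid(x, x)
G = np.where(S <= X, S * (X - 1.0), X * (S - 1.0))

dx = x[1] - x[0]
w = np.full(n, dx)
w[0] = w[-1] = dx / 2.0                 # trapezoid quadrature weights

y = x.copy()                            # initial guess obeying the BCs
for _ in range(50):
    y = x + (G * (m * np.sinh(m * y))) @ w
```

    Because the Green's function vanishes at both endpoints, every iterate satisfies the boundary conditions exactly; for small m the map is a contraction and the iterates converge to a fixed point.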

  17. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  18. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  19. High Power MPD Thruster Performance Measurements

    NASA Technical Reports Server (NTRS)

    LaPointe, Michael R.; Strzempkowski, Eugene; Pencil, Eric

    2004-01-01

    High power magnetoplasmadynamic (MPD) thrusters are being developed as cost effective propulsion systems for cargo transport to lunar and Mars bases, crewed missions to Mars and the outer planets, and robotic deep space exploration missions. Electromagnetic MPD thrusters have demonstrated, at the laboratory level, the ability to process megawatts of electrical power while providing significantly higher thrust densities than electrostatic electric propulsion systems. The ability to generate higher thrust densities permits a reduction in the number of thrusters required to perform a given mission, and alleviates the system complexity associated with multiple thruster arrays. The specific impulse of an MPD thruster can be optimized to meet given mission requirements, from a few thousand seconds with heavier gas propellants up to 10,000 seconds with hydrogen propellant. In support of programs envisioned by the NASA Office of Exploration Systems, Glenn Research Center is developing and testing quasi-steady MW-class MPD thrusters as a prelude to steady state high power thruster tests. This paper provides an overview of the GRC high power pulsed thruster test facility, and presents preliminary performance data for a quasi-steady baseline MPD thruster geometry.
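
    The scaling of electromagnetic thrust with discharge current noted above can be illustrated with Maecker's classical formula for a coaxial MPD thruster, T = (mu0/4pi)(ln(ra/rc) + 3/4) I^2. The sketch below is a textbook approximation, not GRC's analysis; the operating point (20 kA discharge, 5 cm anode radius, 1 cm cathode radius, 1 g/s flow) is hypothetical.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
G0 = 9.80665              # standard gravity, m/s^2

def maecker_thrust(current_a, r_anode, r_cathode):
    """Electromagnetic thrust of a coaxial MPD thruster per Maecker's
    classical scaling: T = b * I^2, with b set by electrode geometry."""
    b = (MU0 / (4 * math.pi)) * (math.log(r_anode / r_cathode) + 0.75)
    return b * current_a ** 2

def specific_impulse(thrust_n, mdot_kg_s):
    """Specific impulse in seconds for a given propellant mass flow rate."""
    return thrust_n / (mdot_kg_s * G0)

# Hypothetical operating point: 20 kA, ra = 5 cm, rc = 1 cm, 1 g/s flow.
T = maecker_thrust(20e3, 0.05, 0.01)
print(f"thrust ~ {T:.1f} N, Isp ~ {specific_impulse(T, 1e-3):.0f} s")
```

    At this (assumed) operating point the estimate lands near the 10,000 s figure quoted for hydrogen propellant.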

  20. High performance Split-Stirling cooler program

    NASA Astrophysics Data System (ADS)

    Meeker, R. P.

    1982-09-01

    This report describes the physical characteristics of the final design configuration of the 1 Watt Split-Stirling Cryogenic Cooler. Qualification testing included evaluation of the 1.0 Watt Cryogenic Cooler under the following conditions: Performance Tests over the temperature range of -40 degrees C to +55 degrees C; High and Low Temperature Shock Tests; Mechanical Shock Tests; Sinusoidal Vibration; Self-Induced Vibration; Acoustical Testing; and 1000-Hour Mean Time Between Failure Life Tests. The cryogenic cooler design consists of a motor and compressor assembly and a remote expander assembly, interconnected by a stainless steel capillary line. The system is nominally an 18.5 VDC, 60 Watt cooler and has a maximum weight of 4 pounds. This report summarizes the results of the performance, environmental and life qualification tests conducted under the contract and provides supporting data for each of the test categories.

  1. Roof support performance in high stress conditions

    SciTech Connect

    Mucho, T.P.; Mark, C.; Zelanko, J.C.; Compton, C.S.

    1995-11-01

    To document the performance of mine roof and roof support systems the US Bureau of Mines (USBM) has been installing instrumentation at selected sites in US coal mines. Much of this support and roof instrumentation has been installed in high stress conditions to maximize differences in performance. Summarized in this paper are the results of four such site investigations. The roof geology at the four sites is quite different and is quantified using the USBM's Coal Mine Roof Rating (CMRR). The studies included detailed measurements of roof movement using multi-point extensometers, as well as measurements and monitoring of bolt loading. The investigations include developmental loading and abutment loading from longwall and room-and-pillar mining operations. Some of the issues examined are the effect of the horizontal stress field, the effect of installed tension (torque-tension versus resin bolts), the effect of reduced annulus bolt holes, and the differences between similar bolts supplied by different manufacturers.

  2. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  3. High performance railgun barrels for laboratory use

    NASA Astrophysics Data System (ADS)

    Bauer, David P.; Newman, Duane C.

    1993-01-01

    High-performance, low-cost laboratory railgun barrels are now available, comprising a stiff containment structure that surrounds bore components machined from off-the-shelf materials. The shape of the containment structure was selected to make the barrel inherently stiff. The structure consists of stainless steel laminations which do not compromise the electrical efficiency of the railgun. The modular design enhances the utility of the barrel, as it is easy to service between shots, and can be 're-cored' to produce different configurations and sizes using the same structure. We have produced barrels ranging from 15 mm to 90 mm square bore, plus a 30 mm round bore, in lengths varying from 0.25 meters to 10 meters. Successful tests with both plasma and solid metal armatures have demonstrated the versatility and performance of this design.

  4. Advanced solidification system using high performance cement

    SciTech Connect

    Kikuchi, Makoto; Matsuda, Masami; Nishi, Takashi; Tsuchiya, Hiroyuki; Izumida, Tatsuo

    1995-12-31

    Advanced cement solidification is proposed for the solidification of radioactive waste such as spent ion exchange resin, incineration ash and liquid waste. A new, high performance cement has been developed to raise volume reduction efficiency and lower radioactivity release into the environment. It consists of slag cement, reinforcing fiber, natural zeolite and lithium nitrate (LiNO3). The fiber allows waste loading to be increased from 20 to 55 kg-dry resin/200 L. The zeolite, whose main constituent is clinoptilolite, reduces cesium leachability from the waste form to about 1/10. Lithium nitrate prevents alkaline corrosion of the aluminum contained in ash and reduces hydrogen gas generation. Laboratory and full-scale pilot plant experiments were performed to evaluate properties of the waste form, using simulated wastes. Emphasis was placed on improving the solidification of spent resin and ash.

  5. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to preserve the high strength-to-weight ratio that is the chief advantage of high performance fibers. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach extrudes a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2. Protection here is judged by the strength loss of the PBO. This trend was noticed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  6. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-01

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive. PMID:24353390
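
    The linear cascade modeling mentioned above propagates the mean and variance of the quantum signal through serial stages. A minimal zero-frequency sketch (ignoring the blurring stages, so only DQE(0) is computed) follows; the stage gains and variances are illustrative assumptions, not the paper's values.

```python
def cascade(n_in_mean, n_in_var, stages):
    """Propagate signal mean and variance through serial gain stages
    using the Burgess variance theorem:
      mean_out = g * mean_in
      var_out  = g^2 * var_in + mean_in * var_g
    `stages` is a list of (gain_mean, gain_variance) tuples."""
    mean, var = n_in_mean, n_in_var
    for g, var_g in stages:
        mean, var = g * mean, g * g * var + mean * var_g
    return mean, var

def dqe0(n_in_mean, stages, readout_noise_var=0.0):
    # Incident x-ray quanta are Poisson: variance = mean, SNR_in^2 = mean.
    mean, var = cascade(n_in_mean, n_in_mean, stages)
    return (mean ** 2 / (var + readout_noise_var)) / n_in_mean

# Hypothetical stage chain: phosphor absorption (binomial selection),
# conversion gain (assumed Poisson), optical coupling, sensor QE.
stages = [
    (0.8, 0.8 * 0.2),   # absorption: binomial, var = g(1 - g)
    (1000.0, 1000.0),   # conversion gain: assumed Poisson (var = mean)
    (0.5, 0.5 * 0.5),   # optical coupling: binomial
    (0.6, 0.6 * 0.4),   # sensor quantum efficiency: binomial
]
print(f"DQE(0) = {dqe0(1000.0, stages):.3f}")
print(f"DQE(0) with read noise = {dqe0(1000.0, stages, readout_noise_var=1e6):.3f}")
```

    The second print line shows how additive readout noise depresses DQE, the effect that limits the MAF-CMOS at fluoroscopic exposure levels.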

  7. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

    It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, computing platforms constantly evolve as the technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. Rapidly increasing compute resources have given many application developers the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, changes in the political climate may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of
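
    As a toy illustration of the kind of job-scheduling simulation discussed, the sketch below runs an event-driven first-come-first-served (FCFS) schedule on a space-shared machine and reports makespan and utilization. It is a minimal sketch under stated assumptions (no backfilling, priorities, or preemption; every job is assumed to fit the machine), not the simulator described in the report.

```python
import heapq

def simulate_fcfs(total_nodes, jobs):
    """Event-driven FCFS simulation of a space-shared parallel machine.
    `jobs` is a list of (arrival_time, nodes, runtime) tuples.
    Returns (makespan, utilization). Assumes every job fits the machine."""
    free = total_nodes
    running = []            # min-heap of (finish_time, nodes)
    queue = []
    clock = 0.0
    busy_node_time = 0.0
    jobs = sorted(jobs)
    i = 0
    while i < len(jobs) or queue or running:
        # Admit arrivals up to the current clock.
        while i < len(jobs) and jobs[i][0] <= clock:
            queue.append(jobs[i]); i += 1
        # Start queued jobs strictly in FCFS order (no backfilling):
        # if the head job does not fit, everything behind it waits.
        while queue and queue[0][1] <= free:
            _, nodes, runtime = queue.pop(0)
            free -= nodes
            busy_node_time += nodes * runtime
            heapq.heappush(running, (clock + runtime, nodes))
        # Advance the clock to the next event (arrival or completion).
        next_events = []
        if running:
            next_events.append(running[0][0])
        if i < len(jobs):
            next_events.append(jobs[i][0])
        clock = min(next_events)
        while running and running[0][0] <= clock:
            _, nodes = heapq.heappop(running)
            free += nodes
    return clock, busy_node_time / (total_nodes * clock)

# Three jobs on a 4-node machine: two 2-node jobs run first, the
# 4-node job must wait for both to finish.
print(simulate_fcfs(4, [(0, 2, 10), (0, 2, 10), (0, 4, 10)]))
```

    Swapping the inner start loop for a backfilling variant is the natural next experiment such a simulator enables.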

  8. High performance computing for domestic petroleum reservoir simulation

    SciTech Connect

    Zyvoloski, G.; Auer, L.; Dendy, J.

    1996-06-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory. High-performance computing offers the prospect of greatly increasing the resolution at which petroleum reservoirs can be represented in simulation models. The increases in resolution can be achieved through large increases in computational speed and memory, if machine architecture and numerical methods for solution of the multiphase flow equations can be used to advantage. Perhaps more importantly, the increased speed and size of today's computers make it possible to add physical processes to simulation codes that heretofore were too expensive in terms of computer time and memory to be practical. These factors combine to allow the development of new, more accurate methods for optimizing petroleum reservoir production.

  9. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine that iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  10. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  11. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  12. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  13. Portability Support for High Performance Computing

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    While a large number of tools have been developed to support application portability, high performance application developers often prefer to use vendor-provided, non-portable programming interfaces. This phenomena indicates the mismatch between user priorities and tool capabilities. This paper summarizes the results of a user survey and a developer survey. The user survey has revealed the user priorities and resulted in three criteria for evaluating tool support for portability. The developer survey has resulted in the evaluation of portability support and indicated the possibilities and difficulties of improvements.

  14. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA patented cyano and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosion of aluminum or titanium was observed.

  15. Performance Evaluation of Emerging High Performance Computing Technologies using WRF

    NASA Astrophysics Data System (ADS)

    Newby, G. B.; Morton, D.

    2008-12-01

    The Arctic Region Supercomputing Center (ARSC) has evaluated multicore processors and other emerging processor technologies for a variety of high performance computing applications in the earth and space sciences, especially climate and weather applications. A flagship effort has been to assess dual core processor nodes on ARSC's Midnight supercomputer, in which two-socket systems were compared to eight-socket systems. Midnight is utilized for ARSC's twice-daily weather research and forecasting (WRF) model runs, available at weather.arsc.edu. Among other findings on Midnight, it was found that the Hypertransport system for interconnecting Opteron processors, memory, and other subsystems does not scale as well on eight-socket (sixteen-processor) systems as on two-socket (four-processor) systems. A fundamental limitation is the cache snooping operation performed whenever a computational thread accesses main memory. This increases memory latency as the number of processor sockets increases. This is particularly noticeable on applications such as WRF that are primarily CPU-bound, versus applications that are bound by input/output or communication. The new Cray XT5 supercomputer at ARSC features quad core processors, and will host a variety of scaling experiments for WRF, CCSM4, and other models. Early results will be presented, including a series of WRF runs for Alaska with grid resolutions under 2 km. ARSC will discuss a set of standardized test cases for the Alaska domain, similar to existing test cases for CONUS. These test cases will provide different configuration sizes and resolutions, suitable for single processors up to thousands. Beyond multi-core Opteron-based supercomputers, ARSC has examined WRF and other applications on additional emerging technologies. One such technology is the graphics processing unit, or GPU. The 9800-series nVidia GPU was evaluated with the cuBLAS software library.
While in-socket GPUs might be forthcoming in the future, current

  16. High-temperature testing of high performance fiber reinforced concrete

    NASA Astrophysics Data System (ADS)

    Fořt, Jan; Vejmelková, Eva; Pavlíková, Milena; Trník, Anton; Čítek, David; Kolísko, Jiří; Černý, Robert; Pavlík, Zbyšek

    2016-06-01

    The effect of high-temperature exposure on properties of High Performance Fiber Reinforced Concrete (HPFRC) is investigated in this paper. At first, reference measurements are done on HPFRC samples without high-temperature loading. Then, the HPFRC samples are exposed to temperatures of 200, 400, 600, 800, and 1000 °C. For the temperature loaded samples, measurement of residual mechanical and basic physical properties is done. The linear thermal expansion coefficient as a function of temperature is assessed on the basis of measured thermal strain data. Additionally, simultaneous differential scanning calorimetry (DSC) and thermogravimetry (TG) analysis is performed in order to observe and explain material changes at elevated temperature. It is found that the applied high temperature loading significantly increases material porosity due to physical, chemical and combined damage of the material inner structure, and also negatively affects the mechanical strength. The linear thermal expansion coefficient exhibits significant dependence on temperature and on changes of the material structure. The obtained data will find use as input material parameters for modelling the damage of HPFRC structures exposed to fire and high-temperature action.
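
    The linear thermal expansion coefficient obtained from thermal strain data is simply the local slope of the strain curve, alpha(T) = d(epsilon)/dT. A minimal sketch with hypothetical strain values (not the paper's measurements):

```python
import numpy as np

def thermal_expansion_coefficient(temps_c, strains):
    """Estimate the linear thermal expansion coefficient as the local
    slope of the thermal strain curve, alpha(T) = d(epsilon)/dT,
    using finite differences (central where possible)."""
    return np.gradient(np.asarray(strains, float), np.asarray(temps_c, float))

# Hypothetical thermal strain (dimensionless) vs. temperature (deg C)
temps = [20, 200, 400, 600, 800, 1000]
strain = [0.0, 1.8e-3, 4.0e-3, 6.9e-3, 1.05e-2, 1.5e-2]
alpha = thermal_expansion_coefficient(temps, strain)
for t, a in zip(temps, alpha):
    print(f"{t:5d} degC  alpha ~ {a:.2e} 1/K")
```

    An upward drift of alpha with temperature, as in these made-up numbers, is the kind of structure-change signature the DSC/TG analysis would help explain.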

  17. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  18. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
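
    The two-level principle behind AMG — pre-smooth, restrict the residual, solve a Galerkin coarse problem, correct, post-smooth — can be sketched as follows. For brevity this example uses a geometric coarse grid and linear interpolation on a 1D Poisson problem as a stand-in for AMG's algebraically selected coarse grid and interpolation; it illustrates the cycle structure, not the coarsening heuristics.

```python
import numpy as np

def jacobi(A, x, b, n_sweeps=2, omega=2/3):
    """Weighted-Jacobi smoother: damps high-frequency error components."""
    D = np.diag(A)
    for _ in range(n_sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, P):
    """One two-grid cycle: pre-smooth, solve the Galerkin coarse
    problem (P^T A P) exactly, correct, post-smooth."""
    x = jacobi(A, x, b)
    r = b - A @ x
    Ac = P.T @ A @ P                       # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ r)
    return jacobi(A, x, b)

# 1D Poisson matrix and linear-interpolation prolongation.
n = 63
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    i = 2*j + 1                            # coarse points at odd indices
    P[i-1, j], P[i, j], P[i+1, j] = 0.5, 1.0, 0.5

b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x, P)
print("residual:", np.linalg.norm(b - A @ x))
```

    Recursing on the coarse solve instead of calling a direct solver turns this two-grid cycle into a full multigrid V-cycle; parallel AMG replaces the geometric choices of coarse points and interpolation weights with algebraic ones.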

  19. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  20. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGESBeta

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  1. High Performance Data Distribution for Scientific Community

    NASA Astrophysics Data System (ADS)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA and JAXA need solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide these features. HIDDRA has been addressed by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009
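
    One ingredient of such a multi-source download engine is deciding which byte ranges each source serves. The sketch below assigns fixed-size chunks to mirrors in proportion to assumed relative bandwidths; the mirror names and weights are hypothetical, and this illustrates the idea rather than HIDDRA's actual code.

```python
def plan_chunks(file_size, mirrors, chunk_size=1 << 20):
    """Split a download into byte ranges and assign them to mirrors in
    proportion to their (assumed known) relative bandwidth, so several
    sources contribute to one file. `mirrors` maps mirror name ->
    relative bandwidth. Returns {mirror: [(first_byte, last_byte), ...]}
    with inclusive HTTP-style ranges."""
    total_bw = sum(mirrors.values())
    plan = {m: [] for m in mirrors}
    credit = {m: 0.0 for m in mirrors}   # accumulated share per mirror
    offset = 0
    while offset < file_size:
        end = min(offset + chunk_size, file_size)
        for m, bw in mirrors.items():
            credit[m] += bw / total_bw
        target = max(credit, key=lambda m: credit[m])
        credit[target] -= 1.0            # one chunk consumed
        plan[target].append((offset, end - 1))
        offset = end
    return plan

# Hypothetical mirrors with 3:2:1 relative bandwidth, 10 MiB file.
mirrors = {"esa-mirror": 3.0, "nasa-mirror": 2.0, "jaxa-mirror": 1.0}
plan = plan_chunks(10 * (1 << 20), mirrors, chunk_size=1 << 20)
for m, ranges in plan.items():
    print(m, len(ranges), "chunks")
```

    A real engine would issue these ranges concurrently (e.g. HTTP Range requests) and reassign a chunk when a source stalls, which is what provides the fault tolerance described above.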

  2. High-performance deployable structures for the support of high-concentration ratio solar array modules

    NASA Technical Reports Server (NTRS)

    Mobrem, M.

    1985-01-01

    A study conducted on high-performance deployable structures for the support of high-concentration ratio solar array modules is discussed. Serious consideration is being given to the use of high-concentration ratio solar array modules for applications such as space stations. These concentrator solar array designs offer the potential of reduced cost, reduced electrical complexity, higher power per unit area, and improved survivability. Arrays of concentrators, such as the miniaturized Cassegrainian concentrator modules, present a serious challenge to the structural design because their mass per unit area (5.7 kg/square meter) is higher than that of flexible solar array blankets, and the requirement for accurate orientation towards the Sun (plus or minus 0.5 degree) requires structures with improved accuracy potentials. In addition, use on a space station requires relatively high structural natural frequencies to avoid deleterious interactions with control systems and other large structural components. The objective here is to identify and evaluate conceptual designs of structures suitable for deploying and accurately supporting high-concentration ratio solar array modules.

  3. Accurate detection for a wide range of mutation and editing sites of microRNAs from small RNA high-throughput sequencing profiles.

    PubMed

    Zheng, Yun; Ji, Bo; Song, Renhua; Wang, Shengpeng; Li, Ting; Zhang, Xiaotuo; Chen, Kun; Li, Tianqing; Li, Jinyan

    2016-08-19

    Various types of mutation and editing (M/E) events in microRNAs (miRNAs) can change the stabilities of pre-miRNAs and/or the complementarities between miRNAs and their targets. Small RNA (sRNA) high-throughput sequencing (HTS) profiles can contain many mutated and edited miRNAs. Systematic detection of miRNA mutation and editing sites from the huge volume of sRNA HTS profiles is computationally difficult, as high sensitivity and a low false positive rate (FPR) are both required. We propose a novel method (named MiRME) for accurate and fast detection of miRNA M/E sites using a progressive sequence alignment approach which refines sensitivity and improves FPR step-by-step. From 70 sRNA HTS profiles with over 1.3 billion reads, MiRME has detected thousands of statistically significant M/E sites, including 3'-editing sites, 57 A-to-I editing sites (of which 32 are novel), as well as some putative non-canonical editing sites. By integrating the analysis of genome HTS profiles of two human cell lines, we demonstrated that a few non-canonical editing sites did not result from mutations in the genome, suggesting the existence of new editing types that further diversify the functions of miRNAs. Compared with six existing studies or methods, MiRME has shown far superior performance for the identification and visualization of the M/E sites of miRNAs from the ever-increasing sRNA HTS profiles. PMID:27229138
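
    At its core, calling an M/E site from aligned sRNA reads amounts to piling up base counts per reference position and flagging well-covered positions where an alternate base is sufficiently frequent. The toy sketch below shows only that pile-up step; MiRME's progressive alignment and statistical filtering are far more involved, and the thresholds here are arbitrary assumptions.

```python
from collections import Counter

def call_editing_sites(reference, reads, min_cov=10, min_frac=0.1):
    """Naive per-position mismatch caller. `reads` is a list of
    (start_offset, sequence) pairs already aligned to `reference`.
    Reports (position, ref_base, alt_base, alt_fraction) for positions
    covered by at least `min_cov` reads where an alternate base
    reaches `min_frac` of the pile-up."""
    piles = [Counter() for _ in reference]
    for start, seq in reads:
        for k, base in enumerate(seq):
            piles[start + k][base] += 1
    sites = []
    for pos, (ref_base, pile) in enumerate(zip(reference, piles)):
        cov = sum(pile.values())
        if cov < min_cov:
            continue
        for base, n in pile.items():
            if base != ref_base and n / cov >= min_frac:
                sites.append((pos, ref_base, base, n / cov))
    return sites

# Toy example: 12 reads over a 5-base reference; 3 reads carry an
# A-to-G mismatch at position 2 (how A-to-I editing appears in reads).
reads = [(0, "AAAAA")] * 9 + [(0, "AAGAA")] * 3
print(call_editing_sites("AAAAA", reads))
```

    A real caller would additionally distinguish editing from genomic SNPs, as the paper does by cross-checking genome HTS profiles.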

  4. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high-performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of this importance to the California economy, it is appropriate for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are major contributors to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. There are many industries operating cleanrooms in California, including semiconductor manufacturing, semiconductor suppliers, pharmaceuticals, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  5. High performance amorphous selenium lateral photodetector

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Shiva; Allec, Nicholas; Karim, Karim S.

    2012-03-01

    Lateral amorphous selenium (a-Se) detectors based on the metal-semiconductor-metal (MSM) device structure have been studied for indirect-detection medical imaging applications. These detectors have raised interest due to their simple structure, ease of fabrication, high speed, low dark current, low capacitance per unit area and better light utilization. The lateral device structure has the benefit that the electrode spacing may be easily controlled to reduce the bias required for a given desired electric field. In indirect-conversion x-ray imaging, the scintillator is coupled to the top of the a-Se MSM photodetector, which itself is integrated on top of the thin-film-transistor (TFT) array. The carriers generated at the top surface of the a-Se layer experience a field that is parallel to the surface and does not initially sweep them away from it. These carriers may therefore recombine or become trapped in surface states, changing the field at the surface and degrading the performance of the photodetector. In addition, due to the finite width of the electrodes, the fill factor of the device is less than unity. In this study we examine the effect of the lateral drift of carriers and of the fill factor on photodetector performance. The impact of field magnitude on performance is also investigated.

  6. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 10⁷) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881
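
    The quoted Reynolds number is Re = UL/ν, the ratio of inertial to viscous effects. A quick sketch with illustrative numbers (ours, not the paper's) shows how a laboratory-scale apparatus can still exceed Re = 10⁷:

```python
# Illustrative check that Re > 1e7 is reachable at laboratory scale.
# Example numbers are ours, not the paper's: water (nu ~ 1e-6 m^2/s),
# a 1 m characteristic length, and a 15 m/s flow speed.
def reynolds_number(velocity_m_s, length_m, kinematic_viscosity_m2_s):
    """Re = U * L / nu."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

print(f"Re = {reynolds_number(15.0, 1.0, 1.0e-6):.1e}")
```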

  7. Achieving high performance on the Intel Paragon

    SciTech Connect

    Greenberg, D.S.; Maccabe, B.; Riesen, R.; Wheat, S.; Womble, D.

    1993-11-01

    When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus the factorization of a 44,000 by 44,000 matrix, which had required over 10 hours on the previous machine, completed in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
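
    The reported figures are easy to sanity-check with the standard (2/3)n³ flop count for dense LU factorization (a back-of-the-envelope sketch of ours; only the matrix size, rate and node count come from the abstract):

```python
# Sanity check on the abstract's numbers, using the standard 2/3 * n^3
# flop count for dense LU factorization.
n = 44_000
flops = (2 / 3) * n ** 3          # ~5.7e13 floating-point operations
rate = 40e9                       # the reported 40 GFLOPS
minutes = flops / rate / 60
print(f"{minutes:.0f} min")       # comfortably under the half hour reported
print(f"scaled speed-up ~{0.92 * 1024:.0f}x on 1024 nodes")
```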

  8. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  9. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  10. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases
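
    The RDE metric quoted above is simply the per-landmark Euclidean distance between matched and ground-truth positions, summarized as mean ± standard deviation. A minimal sketch (toy coordinates of ours, not the paper's data; function names are illustrative):

```python
# RDE = Euclidean distance between each matched landmark and its known
# ground-truth position, reported as mean +/- standard deviation.
import math

def residual_distance_errors(matched, reference):
    return [math.dist(p, q) for p, q in zip(matched, reference)]

def mean_std(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)

# Toy landmark sets in mm, purely illustrative.
ref = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
est = [(0.5, 0.0, 0.0), (10.0, 0.8, 0.0), (0.0, 10.0, 0.3)]
m, s = mean_std(residual_distance_errors(est, ref))
print(f"RDE = {m:.2f} +/- {s:.2f} mm")  # → RDE = 0.53 +/- 0.21 mm
```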

  11. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    PubMed

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases

  12. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243
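
    The TDC bin-width calibration described above is, in essence, the standard code-density test: with uniformly distributed events, each bin's hit count is proportional to its true width, so DNL and INL fall out of a histogram of the raw data. A minimal sketch of that idea (our own simplification, not the authors' implementation):

```python
# Code-density test sketch: estimate TDC differential nonlinearity (DNL)
# and integral nonlinearity (INL) from a histogram of uniformly
# distributed hits. DNL_i is the per-bin width error in LSB; INL is its
# running sum.
def dnl_inl(hist):
    mean = sum(hist) / len(hist)
    dnl = [c / mean - 1.0 for c in hist]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

hist = [100, 100, 120, 80, 100]   # bin 2 is 20% wide, bin 3 is 20% narrow
dnl, inl = dnl_inl(hist)
print([round(d, 2) for d in dnl])  # → [0.0, 0.0, 0.2, -0.2, 0.0]
print([round(i, 2) for i in inl])  # → [0.0, 0.0, 0.2, 0.0, 0.0]
```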

  13. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). PMID:24274986
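
    A minimal kriging predictor with a squared-exponential kernel conveys the basic idea: a prediction is a kernel-weighted combination of training data that interpolates the training points exactly. This is a generic 1-D toy sketch of ours, not the authors' model, which maps nuclear coordinates to multipole moments in far higher-dimensional feature spaces:

```python
# Minimal kriging (Gaussian-process) regression with a squared-exponential
# kernel. Generic illustration only.
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two coordinate vectors."""
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * length ** 2)))

def kriging_predict(train_x, train_y, query, length=1.0, nugget=1e-10):
    """Predict at `query` as a kernel-weighted combination of training data."""
    K = np.array([[rbf(xi, xj, length) for xj in train_x] for xi in train_x])
    K += nugget * np.eye(len(train_x))            # numerical stabilizer
    weights = np.linalg.solve(K, np.asarray(train_y, dtype=float))
    k_star = np.array([rbf(query, xi, length) for xi in train_x])
    return float(k_star @ weights)

# Toy demo: learn y = x^2 from three samples; the model reproduces the
# training points and predicts smoothly in between.
X = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
y = [0.0, 1.0, 4.0]
print(kriging_predict(X, y, np.array([1.5])))
```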

  14. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  15. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  16. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high-precision optics (2 inch spherical mirrors with λ/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational, awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations, including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  17. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm(3) imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756
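
    The "more than 450 times lower" figure can be verified from the quoted field strengths (arithmetic only; 3 T is the upper end of the clinical range given in the abstract):

```python
# Field-strength ratio between a high-end clinical scanner and the
# ultra-low-field system described in the abstract.
clinical_tesla = 3.0          # upper end of the 1.5-3 T clinical range
ultra_low_tesla = 6.5e-3      # the 6.5 mT field used in the paper
ratio = clinical_tesla / ultra_low_tesla
print(f"~{ratio:.0f}x lower")  # consistent with "more than 450 times"
```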

  18. Low-Cost High-Performance MRI

    PubMed Central

    Sarracanie, Mathieu; LaPierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5–3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  19. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  20. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Coefficients of determination were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different from 100%, except in the case of acetaldehyde. In that case it was determined that the method cannot break some of the adducts that this compound forms with sulfites; this problem was avoided, however, by incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrices. PMID:22340891

  1. A robust MRI-compatible system to facilitate highly accurate stereotactic administration of therapeutic agents to targets within the brain of a large animal model

    PubMed Central

    White, E.; Woolley, M.; Bienemann, A.; Johnson, D.E.; Wyatt, M.; Murray, G.; Taylor, H.; Gill, S.S.

    2011-01-01

    Achieving accurate intracranial electrode or catheter placement is critical in clinical practice in order to maximise the efficacy of deep brain stimulation and drug delivery respectively as well as to minimise side-effects. We have developed a highly accurate and robust method for MRI-guided, stereotactic delivery of catheters and electrodes to deep target structures in the brain of pigs. This study outlines the development of this equipment and animal model. Specifically this system enables reliable head immobilisation, acquisition of high-resolution MR images, precise co-registration of MRI and stereotactic spaces and overall rigidity to facilitate accurate burr hole-generation and catheter implantation. To demonstrate the utility of this system, in this study a total of twelve catheters were implanted into the putamen of six Large White Landrace pigs. All implants were accurately placed into the putamen. Target accuracy had a mean Euclidean distance of 0.623 mm (standard deviation of 0.33 mm). This method has allowed us to accurately insert fine cannulae, suitable for the administration of therapeutic agents by convection-enhanced delivery (CED), into the brain of pigs. This study provides summary evidence of a robust system for catheter implantation into the brain of a large animal model. We are currently using this stereotactic system, implantation procedure and animal model to develop catheter-based drug delivery systems that will be translated into human clinical trials, as well as to model the distribution of therapeutic agents administered by CED over large volumes of brain. PMID:21074564

  2. Thermal interface pastes nanostructured for high performance

    NASA Astrophysics Data System (ADS)

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G" and high tan δ. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness.
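The bond-line-thickness versus thermal-conductivity trade-off described above can be sketched with the conductive term of the layer's specific thermal resistance, BLT / k; contact resistances at the two interfaces are ignored here, and the numbers are illustrative, not from the dissertation.

```python
# Conductive part of the specific thermal resistance of a paste layer,
# BLT / k. A thin, low-conductivity layer can beat a thicker, more
# conductive one, as the abstract reports for smooth surfaces.

def layer_resistance(blt_m, k_w_mk):
    """Specific conductive resistance (m^2*K/W) of a paste layer."""
    return blt_m / k_w_mk

# Hypothetical: a ~1 um nanoclay-like layer at 0.5 W/m-K vs a 25 um
# layer of a 5 W/m-K paste.
thin_low_k = layer_resistance(1e-6, 0.5)
thick_high_k = layer_resistance(25e-6, 5.0)
assert thin_low_k < thick_high_k  # the thin layer wins despite lower k
```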

  3. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  4. High performance bonded neo magnets using high density compaction

    NASA Astrophysics Data System (ADS)

    Herchenroeder, J.; Miller, D.; Sheth, N. K.; Foo, M. C.; Nagarathnam, K.

    2011-04-01

    This paper presents a manufacturing method called Combustion Driven Compaction (CDC) for the manufacture of isotropic bonded NdFeB magnets (bonded Neo). Magnets produced by the CDC method have densities up to 6.5 g/cm3, which is 7-10% higher than commercially available bonded Neo magnets of the same shape. The performance of an actual seat motor with a representative CDC ring magnet is presented and compared with the seat motor performance with both commercial isotropic bonded Neo and anisotropic NdFeB rings of the same geometry. The comparisons are made at both room and elevated temperatures. The airgap flux for the magnet produced by the proposed method is 6% higher than that of the commercial isotropic bonded Neo magnet. After exposure to high temperature, the motor performance with this material is comparable to that with an anisotropic NdFeB magnet, owing to the superior thermal aging stability of isotropic NdFeB powders.

  5. Accurate and Precise in Situ Zircon U-Pb age Dating With High Sample Throughput by Automated LA-SF-ICP-MS

    NASA Astrophysics Data System (ADS)

    Frei, D.; Gerdes, A.; Schersten, A.; Hollis, J. A.; Martina, F.; Knudsen, C.

    2006-12-01

    Zircon is a ubiquitous mineral in most crystalline rocks as well as clastic sediments. Its high resistance to thermal resetting and physical erosion makes zircon an exceptionally useful mineral for precise and accurate dating of thermal geological events. For example, the analysis of the U-Pb ages of detrital zircon grains in clastic sediments is a powerful tool in sedimentary provenance studies. Accurate and precise U-Pb ages of > 100 zircon grains in a sample usually allow all major sedimentary source age components to be detected with statistical confidence. U-Pb age dating of detrital zircons is generally the domain of high resolution ion microprobe techniques (high resolution SIMS), where relatively rapid in situ analysis can be achieved. The major limitations of these techniques are sample throughput (about 75 zircon age dates per 24 hours), the very high purchasing and operating costs of the equipment and the need for highly specialised personnel, resulting in high cost. These high costs usually impose uncomfortable restrictions on the number of samples that can be analysed in a provenance study. Here, we present a high sample throughput technique for highly accurate and precise U-Pb dating of zircons by laser ablation magnetic sectorfield inductively coupled plasma mass spectrometry (LA-SF-ICP-MS). This technique takes advantage of recent progress in laser technology and the introduction of magnetic sectorfield ICP-MS instruments. Based on a ThermoFinnigan Element2 magnetic sectorfield ICP-MS and a New Wave UP 213 laser ablation system, this technique allows U-Pb dating of zircon grains with precision, accuracy and spatial resolution comparable to high resolution SIMS. Because an individual analysis is carried out in less than two minutes and all data are acquired automatically in a pre-set mode with only minimal operator presence, the sample throughput is an order of magnitude higher compared to high resolution SIMS. Furthermore, the purchasing and operating costs of
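The throughput claim above can be checked with back-of-envelope arithmetic: at under two minutes per analysis, an automated 24-hour run yields roughly ten times the ~75 ages per day quoted for high resolution SIMS.

```python
# Throughput comparison implied by the abstract: analyses per 24-hour day
# at a given per-analysis time, versus the quoted SIMS figure.

def ages_per_day(minutes_per_analysis):
    return 24 * 60 // minutes_per_analysis

laicpms = ages_per_day(2)   # 2 min per analysis -> 720 ages/day
sims = 75                   # figure quoted for high resolution SIMS
speedup = laicpms / sims    # roughly an order of magnitude
```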

  6. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  7. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1M phosphate buffer at pH 2.5 in a 50 {mu}m {times} 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and, thus, brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.
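The mobility ordering above comes from each component's apparent electrophoretic mobility, conventionally mu = (L_d * L_t) / (V * t). The capillary dimensions (50 μm × 35 cm) are from the abstract; the applied voltage, effective length to the detector, and migration time below are hypothetical.

```python
# Apparent electrophoretic mobility, mu = (L_d * L_t) / (V * t), where
# L_d is the length to the detector, L_t the total capillary length,
# V the applied voltage and t the migration time.

def apparent_mobility(l_detect_cm, l_total_cm, volts, t_s):
    """Apparent mobility in cm^2 / (V*s)."""
    return (l_detect_cm * l_total_cm) / (volts * t_s)

# Hypothetical run: 35 cm capillary, 28 cm to the detector, 12 kV,
# ~9 min migration time.
mu = apparent_mobility(l_detect_cm=28.0, l_total_cm=35.0,
                       volts=12000.0, t_s=540.0)
```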

  8. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article discusses inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture. PMID:20127598

  9. Quantum image with high retrieval performance

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Liu, Zhihao; Tan, Jianing

    2016-02-01

    Quantum image retrieval is exhausting work because the number of measurements required grows exponentially. Setting aside the image-processing background, a quantum image is a many-body pure state, and the retrieval task is the physical process known as quantum state tomography. Tomography of a special class of states, permutationally symmetric states, requires only a number of measurements that scales quadratically with the number of qubits. In order to take advantage of this result, we propose a method to map the main energy of the image to these states. First, we show that at least n+1 permutationally symmetric states can be constructed as basis elements of the 2^n-dimensional Hilbert space of n qubits. Second, we execute Schmidt decomposition by repeated bipartite splitting of the quantum image (state). Finally, we select the n+1 maximum coefficients and perform a basis transformation to map these coefficients to the new bases (permutationally symmetric states). By these means, a quantum image with high retrieval performance can be obtained.
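The scaling contrast that motivates the abstract can be sketched numerically: full tomography of an arbitrary n-qubit state needs exponentially many measurement settings (e.g. 3^n Pauli bases), while permutationally symmetric states need only quadratically many. The constant factors below are illustrative; only the growth rates matter.

```python
# Measurement-settings scaling: exponential for arbitrary states versus
# quadratic for permutationally symmetric states (constants omitted).

def full_tomography_settings(n):
    # One common counting: 3^n local Pauli measurement bases.
    return 3 ** n

def symmetric_tomography_settings(n):
    # Quadratic scaling quoted for permutationally symmetric states.
    return n * n

# The quadratic cost is already far smaller at modest qubit counts:
for n in (4, 8, 12):
    assert symmetric_tomography_settings(n) < full_tomography_settings(n)
```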

  10. An isotopic-independent highly accurate potential energy surface for CO2 isotopologues and an initial (12)C(16)O2 infrared line list.

    PubMed

    Huang, Xinchuan; Schwenke, David W; Tashkun, Sergey A; Lee, Timothy J

    2012-03-28

    An isotopic-independent, highly accurate potential energy surface (PES) has been determined for CO(2) by refining a purely ab initio PES with selected, purely experimentally determined rovibrational energy levels. The purely ab initio PES is denoted Ames-0, while the refined PES is denoted Ames-1. Detailed tests are performed to demonstrate the spectroscopic accuracy of the Ames-1 PES. It is shown that Ames-1 yields σ(rms) (root-mean-squares error) = 0.0156 cm(-1) for 6873 J = 0-117 (12)C(16)O(2) experimental energy levels, even though less than 500 (12)C(16)O(2) energy levels were included in the refinement procedure. It is also demonstrated that, without any additional refinement, Ames-1 yields very good agreement for isotopologues. Specifically, for the (12)C(16)O(2) and (13)C(16)O(2) isotopologues, spectroscopic constants G(v) computed from Ames-1 are within ±0.01 and 0.02 cm(-1) of reliable experimentally derived values, while for the (16)O(12)C(18)O, (16)O(12)C(17)O, (16)O(13)C(18)O, (16)O(13)C(17)O, (12)C(18)O(2), (17)O(12)C(18)O, (12)C(17)O(2), (13)C(18)O(2), (13)C(17)O(2), (17)O(13)C(18)O, and (14)C(16)O(2) isotopologues, the differences are between ±0.10 and 0.15 cm(-1). To our knowledge, this is the first time a polyatomic PES has been refined using such high J values, and this has led to new challenges in the refinement procedure. An initial high quality, purely ab initio dipole moment surface (DMS) is constructed and used to generate a 296 K line list. For most bands, experimental IR intensities are well reproduced for (12)C(16)O(2) using Ames-1 and the DMS. For more than 80% of the bands, the experimental intensities are reproduced with σ(rms)(ΔI) < 20% or σ(rms)(ΔI∕δ(obs)) < 5. A few exceptions are analyzed and discussed. Directions for future improvements are discussed, though it is concluded that the current Ames-1 and the DMS should be useful in analyzing and assigning high-resolution laboratory or astronomical spectra. PMID:22462861
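The σ(rms) figure of merit used throughout the abstract above is the root-mean-square difference between computed and experimental rovibrational energy levels; a minimal sketch follows, with hypothetical level values in cm^-1.

```python
# Root-mean-square error between computed and observed energy levels,
# as used to quantify the spectroscopic accuracy of a refined PES.
import math

def sigma_rms(computed, observed):
    """RMS of the residuals between two equal-length level lists."""
    residuals = [c - o for c, o in zip(computed, observed)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical levels (cm^-1): residuals of order 0.01-0.02 cm^-1 give
# a sigma_rms comparable to the 0.0156 cm^-1 quoted above.
calc = [1388.18, 2349.14, 667.39]
obs = [1388.19, 2349.16, 667.38]
s = sigma_rms(calc, obs)
```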

  11. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters show better performance in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specification set forth by the sensitivity requirements of the coronagraph.
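A soft-edge occulter with a Gaussian absorption profile, as mentioned above, can be sketched as a radially symmetric transmission function that is near zero on axis and rises smoothly toward the edge; the width parameter below is hypothetical.

```python
# Intensity transmission of a Gaussian-absorption occulter mask:
# opaque at the centre, smoothly approaching full transmission in the
# wings. The width sigma is a free design parameter here.
import math

def gaussian_occulter_transmission(r, sigma):
    """Transmission at radius r for a Gaussian-absorption mask."""
    return 1.0 - math.exp(-(r * r) / (2.0 * sigma * sigma))

center = gaussian_occulter_transmission(0.0, 1.0)  # fully opaque on axis
wings = gaussian_occulter_transmission(5.0, 1.0)   # nearly transparent
```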

  12. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an I(sub sp) goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250 sec I(sub sp) goal.
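The closing claim above is simple arithmetic: 93 percent combustion efficiency applied to a theoretical Isp of 269 sec already clears the 250 sec goal.

```python
# Delivered specific impulse as theoretical Isp times combustion
# efficiency, checking the abstract's 250 sec goal.

def delivered_isp(theoretical_isp_s, efficiency):
    return theoretical_isp_s * efficiency

isp = delivered_isp(269.0, 0.93)  # just over 250 sec
assert isp >= 250.0
```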

  13. A high performance thin film thermoelectric cooler

    SciTech Connect

    Rowe, D.M.; Min, G.; Volklein, F.

    1998-07-01

    Thin film thermoelectric devices with small dimensions have been fabricated using microelectronics technology and operated successfully in the Seebeck mode as sensors or generators. However, they do not operate successfully in the Peltier mode as coolers, because of the thermal bypass provided by the relatively thick substrate upon which the thermoelectric device is fabricated. In this paper a processing sequence is described which dramatically reduces this thermal bypass and facilitates the fabrication of high performance integrated thin film thermoelectric coolers. In the processing sequence a very thin amorphous SiC (or SiO{sub 2}SiN{sub 4}) film is deposited on a silicon substrate using conventional thin film deposition and a membrane formed by removing the silicon substrate over a desired region using chemical etching or micro-machining. Thermoelements are deposited on the membrane using conventional thin film deposition and patterning techniques and configured so that the region which is to be cooled is abutted to the cold junctions of the Peltier thermoelements while the hot junctions are located at the outer peripheral area which rests on the silicon substrate rim. Heat is pumped laterally from the cooled region to the silicon substrate rim and then dissipated vertically through it to an external heat sink. Theoretical calculations of the performance of a cooler described above indicate that a maximum temperature difference of about 40--50K can be achieved with a maximum heat pumping capacity of around 10 milliwatts.
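The temperature-difference estimate above can be sketched with the standard Peltier-cooler relations: figure of merit Z = S²σ/κ and ideal maximum temperature drop ΔT_max = Z·T_c²/2. The material properties below are typical Bi2Te3-like values, not taken from the paper, and the result is an ideal-material upper bound; the 40-50 K quoted above is lower because of parasitic losses such as thermal bypass.

```python
# Standard thermoelectric-cooler estimates (assumed typical material
# values, not the paper's): Z = S^2 * sigma / kappa, and the ideal
# maximum Peltier temperature drop Delta_T_max = Z * T_c^2 / 2.

def figure_of_merit(seebeck_v_k, elec_cond_s_m, therm_cond_w_mk):
    """Thermoelectric figure of merit Z, in 1/K."""
    return seebeck_v_k ** 2 * elec_cond_s_m / therm_cond_w_mk

def max_temperature_drop(z_per_k, t_cold_k):
    """Ideal maximum Peltier temperature difference, Z*T_c^2/2."""
    return 0.5 * z_per_k * t_cold_k ** 2

z = figure_of_merit(200e-6, 1.0e5, 1.5)  # ~2.7e-3 1/K
dt = max_temperature_drop(z, 280.0)      # ~100 K ideal upper bound
```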

  14. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
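One of the simpler policies compared above, round-robin rail allocation, can be sketched as follows: successive messages are assigned to the next rail in cyclic order. The class and message representation are hypothetical, not the paper's implementation.

```python
# Minimal round-robin rail allocator: message i goes to rail i mod R.
import itertools

class RoundRobinAllocator:
    def __init__(self, num_rails):
        self._rails = itertools.cycle(range(num_rails))

    def allocate(self):
        """Return the rail index to use for the next message."""
        return next(self._rails)

alloc = RoundRobinAllocator(num_rails=4)
assignments = [alloc.allocate() for _ in range(6)]  # [0, 1, 2, 3, 0, 1]
```

The other schemes in the paper differ only in this allocation decision, e.g. consulting local rail-load state (dynamic local knowledge) or reserving both end-points before sending.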

  15. The design of high-performance gliders

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Heuermann, V.

    1985-01-01

    A high-performance glider is defined as a glider which has been designed to carry the pilot a given distance in a minimum of time, under conditions that are as favorable as possible. The present investigation aims to show approaches for enhancing the cross-country flight cruising speed, giving attention to the difficulties which the design engineer will have to overcome. The characteristics of the cross-country flight and their relation to the cruising speed are discussed, and a description is provided of mathematical expressions concerning the cruising speed, the sinking speed, and the optimum gliding speed. The effect of aspect ratio and wing loading on the cruising speed is illustrated with the aid of a graph. Trends in glider development are explored, taking into consideration the design of laminar profiles, the reduction of profile-related drag by plain flaps, and the variation of wing loading during the flight. A number of suggestions are made for obtaining gliders with improved performance.
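The cruising-speed expressions mentioned above follow the classic climb-and-glide cycle: time spent circling to regain the altitude lost while gliding reduces the average cross-country speed to V_avg = V · w_climb / (w_climb + w_sink). The numbers below are illustrative, not from the article.

```python
# Average cross-country speed for the classic climb-and-glide cycle:
# altitude lost at sink rate w_s during the glide is regained by
# climbing at w_c, so V_avg = V * w_c / (w_c + w_s).

def cross_country_speed(glide_speed, sink_rate, climb_rate):
    """Average speed over one glide-and-climb cycle (same units as glide_speed)."""
    return glide_speed * climb_rate / (climb_rate + sink_rate)

# Hypothetical: 140 km/h glide at 1.0 m/s sink, 2.0 m/s thermals.
v_avg = cross_country_speed(140.0, 1.0, 2.0)  # ~93 km/h
```

Lower sink at a given glide speed (higher aspect ratio, higher wing loading at speed) raises V_avg, which is why those parameters dominate the design discussion.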

  16. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to evaluate energy-efficient measures for K-5 schools easily and accurately, which would contribute to the…

  17. Ion-pair high-performance liquid chromatographic analysis of aspartame and related products.

    PubMed

    Verzella, G; Bagnasco, G; Mangia, A

    1985-12-01

    A simple and accurate quantitative determination of aspartame (L-alpha-aspartyl-L-phenylalanine methyl ester), a new artificial sweetener, is described. The method, which is based on ion-pair high-performance liquid chromatography, allows the determination of aspartame in finished bulk and dosage forms, and the detection of a few related products at levels down to 0.1%. PMID:4086646

  18. High Temperature Oxidation Performance of Aluminide Coatings

    SciTech Connect

    Pint, Bruce A; Zhang, Ying; Haynes, James A; Wright, Ian G

    2004-01-01

    Aluminide coatings are of interest for many high temperature applications because of the possibility of improving the oxidation resistance of structural alloys by forming a protective external alumina scale. Steam and exhaust gas environments are of particular interest because alumina is less susceptible to the accelerated attack due to hydroxide formation observed for chromia- and silica-forming alloys and ceramics. For water vapor testing, one ferritic (Fe-9Cr-1Mo) and one austenitic alloy (304L) have been selected as substrate materials and CVD coatings have been used in order to have a well-controlled, high purity coating. It is anticipated that similar aluminide coatings could be made by a higher-volume, commercial process such as pack cementation. Previous work on this program has examined as-deposited coatings made by high and low Al activity CVD processes and the short-term performance of these coatings. The current work is focusing on the long term behavior in both diffusion tests and oxidation tests of the thicker, high Al activity coatings. For long-term coating durability, one area of concern has been the coefficient of thermal expansion (CTE) mismatch between coating and substrate. This difference could cause cracking or deformation that could reduce coating life. Corrosion testing using thermal cycling is of particular interest because of this potential problem and results are presented where a short exposure cycle (1h) severely degraded aluminide coatings on both types of substrates. To further study the potential role of aluminide coatings in fossil energy applications, several high creep strength Ni-base alloys were coated by CVD for testing in a high pressure (20atm) steam-CO{sub 2} environment for the ZEST (zero-emission steam turbine) program. Such alloys would be needed as structural and turbine materials in this concept. For Ni-base alloys, CVD produces a {approx}50{mu}m {beta}-NiAl outer layer with an underlying interdiffusion zone
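The CTE-mismatch concern raised above is commonly estimated with the equal-biaxial thin-film stress relation σ = E·Δα·ΔT / (1 − ν), valid for a thin coating on a much thicker substrate. The property values below are illustrative, not from the paper.

```python
# Equal-biaxial thermal-mismatch stress in a thin coating on a thick
# substrate: sigma = E * delta_alpha * delta_T / (1 - nu). Illustrative
# values only; stresses of this magnitude can crack brittle coatings.

def thermal_mismatch_stress(e_pa, nu, delta_alpha_per_k, delta_t_k):
    """Equal-biaxial thermal stress (Pa) in a thin coating."""
    return e_pa * delta_alpha_per_k * delta_t_k / (1.0 - nu)

# Hypothetical: 180 GPa coating modulus, Poisson ratio 0.3, CTE mismatch
# 3e-6 1/K, cooled through 700 K.
stress = thermal_mismatch_stress(180e9, 0.3, 3e-6, 700.0)  # ~540 MPa
```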

  19. High-performance, high-volume fly ash concrete

    SciTech Connect

    2008-01-15

    This booklet offers the construction professional an in-depth description of the use of high-volume fly ash in concrete. Emphasis is placed on the need for increased utilization of coal-fired power plant byproducts in lieu of Portland cement materials to eliminate increased CO{sub 2} emissions during the production of cement. Also addressed is the dramatic increase in concrete performance with the use of 50+ percent fly ash volume. The booklet contains numerous color and black and white photos, charts of test results, mixtures and comparisons, and several HVFA case studies.

  20. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will