Science.gov

Sample records for accurate high performance

  1. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms that are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods.
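
    A minimal sketch of the kind of conjugate-gradient iteration described above (plain conjugate-gradient least squares on a toy system matrix; the accurate SPET system model and the parallel Cray T3D implementation are not reproduced here):

```python
import numpy as np

def cgls(A, y, n_iter=10):
    """Conjugate-gradient least-squares iterations for min ||A x - y||^2."""
    x = np.zeros(A.shape[1])
    r = y - A @ x            # residual in projection space
    s = A.T @ r              # gradient in image space
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy sizes only: 90 "projections" of a 64-pixel "image".
rng = np.random.default_rng(0)
A = rng.random((90, 64))
x_true = rng.random(64)
x_rec = cgls(A, A @ x_true, n_iter=10)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```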

  2. The determination of phenolic profiles of Serbian unifloral honeys using ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry.

    PubMed

    Kečkeš, Silvio; Gašić, Uroš; Veličković, Tanja Ćirković; Milojković-Opsenica, Dušanka; Natić, Maja; Tešić, Živoslav

    2013-05-01

    Polyphenolic profiles of 44 unifloral Serbian honeys were analyzed using ultra-high-performance liquid chromatography (UHPLC) coupled with a hybrid mass spectrometer which combines the Linear Trap Quadrupole (LTQ) and OrbiTrap mass analyzers. A rapid UHPLC method was developed in combination with a high sensitivity accurate mass scan and a simultaneous data dependent scan. The honey samples were of different botanical origin: acacia (Robinia pseudoacacia), sunflower (Helianthus annuus), linden (Tilia cordata), basil (Ocimum basilicum), buckwheat (Fagopyrum esculentum), oilseed rape (Brassica napus), and goldenrod (Solidago virgaurea). The presence of 43 compounds, mainly flavonoids, was proven in all honey samples by their characteristic mass spectra and fragmentation patterns. Relatively high amounts of chrysin, pinocembrin and galangin were identified in all honey extracts. p-Coumaric acid was not detected in basil, buckwheat and goldenrod honey extracts. A larger amount of gallic acid (max value 1.45 mg/kg) was found in the sunflower honey, while a larger amount of apigenin (0.97 mg/kg) was determined in the buckwheat honey in comparison with other honeys. The samples were classified according to the botanical origin using a pattern recognition technique, Principal Component Analysis (PCA). The LTQ OrbiTrap technique was proven to be reliable for the unambiguous detection of phenolic acids, their derivatives, and flavonoid aglycones based on their molecular masses and fragmentation patterns.

  3. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurements require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means of railway geometry measurement, we propose a stereoscopic system that measures the 3D positions of fiducial marks arranged along the track using image processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of +/-1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirms that the stereoscopic system delivers good measurement accuracy and can potentially be used as a fully automated means of railway track inspection.
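
    The core of such a stereoscopic measurement is triangulating a mark's 3D position from its pixel coordinates in two calibrated views. A minimal linear (DLT) triangulation sketch, with idealized camera matrices as illustrative assumptions rather than the system's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one fiducial mark from two calibrated views.

    P1, P2 : 3x4 camera projection matrices; uv1, uv2 : pixel coordinates of the
    same mark in each view. Returns the 3D point in the common world frame.
    """
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check with ideal pin-hole cameras 200 mm apart (illustrative values only).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-200.0], [0.0], [0.0]]])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_true = np.array([50.0, 30.0, 2000.0])
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~[50, 30, 2000]
```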

  4. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  5. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, Imma; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750+/-0.0049 amu and 270.0786+/-0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098+/-0.0061 amu and 314.1153+/-0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
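
    Accurate-mass work of this kind is usually judged by the deviation of the measured m/z from the value calculated for the proposed empirical formula, expressed in parts per million. A tiny helper; the numbers in the example are hypothetical, not taken from the study:

```python
def mass_error_ppm(measured_mz: float, theoretical_mz: float) -> float:
    """Parts-per-million deviation of a measured m/z from its theoretical value."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical example values for illustration only.
print(mass_error_ppm(256.0750, 256.0755))   # about -1.95 ppm
```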

  6. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750+/-0.0049 amu and 270.0786+/-0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098+/-0.0061 amu and 314.1153+/-0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  7. Performance of a Micro-Strip Gas Chamber for event wise, high rate thermal neutron detection with accurate 2D position determination

    NASA Astrophysics Data System (ADS)

    Mindur, B.; Alimov, S.; Fiutowski, T.; Schulz, C.; Wilpert, T.

    2014-12-01

    A two-dimensional (2D) position sensitive detector for neutron scattering applications based on low-pressure gas amplification and micro-strip technology was built and tested with innovative readout electronics and a data acquisition system. This detector contains a thin solid neutron converter and was developed for time- and thus wavelength-resolved neutron detection in single-event counting mode, which improves the image contrast in comparison with integrating detectors. The prototype detector of a Micro-Strip Gas Chamber (MSGC) was built with a solid natGd/CsI thermal neutron converter for spatial resolutions of about 100 μm and counting rates up to 10^7 neutrons/s. For attaining very high spatial resolutions and counting rates via micro-strip readout with centre-of-gravity evaluation of the signal amplitude distributions, a fast, channel-wise, self-triggering ASIC was developed. The front-end chips (MSGCROCs), which are the very first signal processing components, are read out into powerful ADC-FPGA boards for on-line data processing and thereafter via a Gigabit Ethernet link into the data receiving PC. The workstation PC is controlled by a modular, high-performance dedicated software suite. Such a fast and accurate system is crucial for efficient radiography/tomography, diffraction or imaging applications based on a high-flux thermal neutron beam. In this paper a brief description of the detector concept with its operation principles, readout electronics requirements and design, together with the signal processing stages performed in hardware and software, is presented. The neutron test beam conditions and measurement results are reported in more detail. The focus of this paper is on the system integration, the two-dimensional spatial resolution, the time resolution of the readout system and the imaging capabilities of the overall setup. The detection efficiency of the detector prototype is estimated as well.
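
    The centre-of-gravity evaluation mentioned above amounts to a charge-weighted mean of the strip positions within a cluster. A minimal sketch; the strip pitch and amplitudes below are illustrative assumptions, not detector parameters from the paper:

```python
import numpy as np

def strip_centroid(amplitudes, pitch_um=100.0, threshold=0.0):
    """Centre-of-gravity hit position from per-strip signal amplitudes.

    amplitudes : pulse heights on consecutive read-out strips (one cluster).
    pitch_um   : strip pitch in micrometres.
    Returns the estimated hit position in micrometres along the strip axis.
    """
    a = np.asarray(amplitudes, dtype=float)
    a = np.where(a > threshold, a, 0.0)          # suppress strips below the noise cut
    positions = np.arange(a.size) * pitch_um     # strip centre coordinates
    return float((a * positions).sum() / a.sum())

print(strip_centroid([0.0, 0.2, 1.0, 0.6, 0.1]))   # ~232 um, between strips 2 and 3
```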

  8. Ultra-high-performance liquid chromatography electrospray ionization tandem mass spectrometry for accurate analysis of glycerophospholipids and sphingolipids in drug resistance tumor cells.

    PubMed

    Li, Lin; Wang, Linlin; Shangguan, Dihua; Wei, Yanbo; Han, Juanjuan; Xiong, Shaoxiang; Zhao, Zhenwen

    2015-02-13

    Glycerophospholipids and sphingolipids are important signaling molecules which are involved in many physiological and pathological processes. Here we report an effective method for accurate analysis of these lipids by liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). The methanol method was adopted for extraction of lipids due to its simplicity and high efficiency. It was found that two subclasses of sphingolipids, sulfatide (ST) and cerebroside (CB), were heat labile, so a decreased temperature in the ion source of the MS might be necessary for the analysis of these compounds. In addition, it was found that isobaric interferences were common; for example, the m/z of 16:0/18:1 PC containing two (13)C isotopes is identical to that of 16:0/18:0 PC on a unit mass resolution mass spectrometer. Therefore, a baseline separation of interfering species was required to maintain the selectivity and accuracy of the analysis. In this work, an ultra-high-performance liquid chromatography (UHPLC)-based method was developed for separation of interfering species. Moreover, in order to deal with the different polarities and wide dynamic range of glycerophospholipids and sphingolipids in biological systems, three detection conditions were combined for comprehensive and rational analysis of glycerophospholipids and sphingolipids. The method was utilized to profile glycerophospholipids and sphingolipids in drug resistant tumor cells. Our results showed that many lipids were significantly changed in drug resistant tumor cells compared to paired drug sensitive tumor cells. This is a systematic report on the isobaric and heat-lability interferences encountered when analyzing glycerophospholipids and sphingolipids by ESI-MS/MS, and it rules out one potential source of systematic error to ensure the accuracy of the analysis.

  9. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the use of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent side lobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. With the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high side lobes of the FFT. With our approach, however, clear edges, boundaries, and textures of the vehicles are obtained.

  10. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
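
    The figures of merit quoted above (sensitivity, specificity, accuracy and area under the ROC curve) can be reproduced from raw scores in a few lines. A sketch with made-up scores, not the study data:

```python
import numpy as np

def diagnostic_metrics(scores_patients, scores_controls, cutoff):
    """Sensitivity, specificity, accuracy and ROC area for a 'higher = disease' score."""
    pat = np.asarray(scores_patients, float)
    ctl = np.asarray(scores_controls, float)
    sens = np.mean(pat >= cutoff)
    spec = np.mean(ctl < cutoff)
    acc = (np.sum(pat >= cutoff) + np.sum(ctl < cutoff)) / (pat.size + ctl.size)
    # Area under the ROC curve via the Mann-Whitney U statistic (ties count 1/2).
    wins = (pat[:, None] > ctl[None, :]).sum() + 0.5 * (pat[:, None] == ctl[None, :]).sum()
    auc = wins / (pat.size * ctl.size)
    return sens, spec, acc, auc

# Hypothetical RAZ-score-like values, illustration only.
print(diagnostic_metrics([180, 150, 145, 90], [60, 120, 135, 30], cutoff=140))
```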

  11. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1: methylamine-C12: dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl, catalyst: none, molar ratio: 4.3, time: 8 min and temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. The interfering substances (aromatic amines, aliphatic alcohols and phenols) give derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations for removing impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73-2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples.

  12. Development and validation of a high-performance liquid chromatography-fluorescence detection method for the accurate quantification of colistin in human plasma.

    PubMed

    Chepyala, Divyabharathi; Tsai, I-Lin; Sun, Hsin-Yun; Lin, Shu-Wen; Kuo, Ching-Hua

    2015-02-01

    Recently, colistin has become one of the most important drugs for treating infections caused by multidrug-resistant Gram-negative bacteria. Therapeutic drug monitoring is recommended to ensure the safety and efficacy of colistin and to improve clinical outcomes. This study developed an accurate and sensitive high-performance liquid chromatography-fluorescence detection (HPLC-FLD) method for the quantification of colistin in human plasma. The sample preparation included protein precipitation using trichloroacetic acid (TCA) and methanol, followed by in-solid phase extraction (In-SPE) derivatization with 9-fluorenylmethyl chloroformate (FMOC-Cl). A Poroshell 120 EC-C18 2.1 × 100 mm (2.7 μm) column was used in the HPLC method with a mobile phase composed of acetonitrile (ACN), tetrahydrofuran (THF), and deionized (DI) water (82%, 2%, 16% (v/v), respectively). Polymyxin B1 was used as the internal standard. The total analysis time was 22 min under optimal separation conditions. The HPLC-FLD method was validated over a therapeutic range of 0.3-6.0 μg/mL. The intra-day and inter-day precisions for colistin A and colistin B were below 9.9% and 4.5% relative standard deviations, respectively. The accuracy test results were between 100.2 and 118.4%. The extraction recoveries were between 81.6 and 94.1%. The method was linear over the test range, with a 0.9991 coefficient of determination. The limit of detection was 0.1 μg/mL. The validated HPLC-FLD method was successfully applied to quantify the colistin concentrations in 2 patient samples for therapeutic drug monitoring.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  15. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  16. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  17. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  18. Accurate torque-speed performance prediction for brushless dc motors

    NASA Astrophysics Data System (ADS)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. But effective application of the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
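
    The paper's nonlinear, commutation-level simulation is not reproduced here, but the general shape of a BLDCM torque-speed curve can be sanity-checked with the familiar first-order steady-state model T = kt (V - ke*omega) / R; all parameter values below are illustrative assumptions:

```python
import numpy as np

def torque_speed(v_dc, omega, kt=0.05, ke=0.05, r_phase=0.5):
    """First-order steady-state BLDC torque estimate (textbook model, not the
    nonlinear simulation described in the abstract)."""
    current = (v_dc - ke * omega) / r_phase      # back-EMF limits the supply current
    return kt * np.maximum(current, 0.0)         # clamp at zero torque

omega = np.linspace(0.0, 500.0, 6)               # rad/s
print(torque_speed(28.0, omega))                 # torque falls roughly linearly with speed
```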

  19. Design and performance investigation of a highly accurate apodized fiber Bragg grating-based strain sensor in single and quasi-distributed systems.

    PubMed

    Ali, Taha A; Shehata, Mohamed I; Mohamed, Nazmi A

    2015-06-01

    In this work, fiber Bragg grating (FBG) strain sensors in single and quasi-distributed systems are investigated, seeking high-accuracy measurement. Because FBG-based strain sensors of small length are preferred in medical applications, and a short grating enlarges the full width at half-maximum (FWHM), a new apodization profile is introduced, for the first time to the best of our knowledge, with a remarkably narrow FWHM at small sensor lengths compared to the Gaussian and Nuttall profiles, in addition to a higher mainlobe slope at these lengths. A careful selection of apodization profiles with detailed investigation is performed using sidelobe analysis and the FWHM, which are the primary judgment factors, especially in a quasi-distributed configuration. A comparison between the elite selection of apodization profiles (extracted from the related literature) and the proposed new profile is carried out covering the reflectivity peak, FWHM, and sidelobe analysis. The optimization process concludes that the proposed new profile with a chosen small length (L) of 10 mm and Δn_ac of 1.4×10^-4 is the optimum choice for single-stage and quasi-distributed strain-sensor networks, outperforming even the Gaussian profile at small sensor lengths. The proposed profile achieves the smallest FWHM of 15 GHz (suitable for UDWDM) and the highest mainlobe slope of 130 dB/nm. For the quasi-distributed scenario, a noteworthy high isolation of 6.953 dB is achieved while applying a high strain value of 1500 μstrain (με) for a five-stage strain-sensing network. Further investigation proves that consistency in choosing the apodization profile in the quasi-distributed network is mandatory; this was tested by including a uniform apodized sensor among sensors apodized with the proposed profile in an FBG strain-sensor network.
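
    The two figures of merit stressed above, the FWHM and the sidelobe level of the reflection spectrum, can be extracted numerically from any sampled spectrum. A generic sketch; the synthetic sinc-squared spectrum is only a stand-in, not a coupled-mode FBG model:

```python
import numpy as np

def fwhm_and_sidelobe(wavelength, reflectivity):
    """FWHM (in the wavelength-axis units) and highest sidelobe level (dB) of a spectrum."""
    r = np.asarray(reflectivity, float)
    wl = np.asarray(wavelength, float)
    peak = int(r.argmax())
    half = r[peak] / 2.0
    lo, hi = peak, peak
    while lo > 0 and r[lo] >= half:            # walk out to the half-maximum crossings
        lo -= 1
    while hi < r.size - 1 and r[hi] >= half:
        hi += 1
    fwhm = wl[hi] - wl[lo]
    left, right = peak, peak                   # main lobe ends at the first local minima
    while left > 0 and r[left - 1] <= r[left]:
        left -= 1
    while right < r.size - 1 and r[right + 1] <= r[right]:
        right += 1
    outside = np.r_[r[:left], r[right + 1:]]
    side_db = 10.0 * np.log10(outside.max() / r[peak]) if outside.size else -np.inf
    return fwhm, side_db

# Toy spectrum with ~-13 dB sidelobes, wavelengths in nm (illustrative values only).
wl = np.linspace(1549.5, 1550.5, 4001)
print(fwhm_and_sidelobe(wl, np.sinc((wl - 1550.0) / 0.1) ** 2))
```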

  20. Countercurrent chromatography separation of saponins by skeleton type from Ampelozizyphus amazonicus for off-line ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry analysis and characterisation.

    PubMed

    de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda

    2017-01-20

    Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub, native to the Amazonian region, with jujubogenin glycoside saponins as its main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin-enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with a hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system, yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry) and MS(n). Group 1 presented mainly oleanane-type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane type triterpene saponins and saponins with a C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form.

  1. Accurate and sensitive high-performance liquid chromatographic method for geometrical and structural photoisomers of bilirubin IX alpha using the relative molar absorptivity values.

    PubMed

    Itoh, S; Isobe, K; Onishi, S

    1999-07-02

    It has been reported that considerable differences exist between the relative molar absorptivity values of the geometrical and structural photoisomers of bilirubin. We have devised an accurate HPLC method for photoisomer quantification based on the following principle: the sum of the integrated peak areas corrected by each factor for each photoisomer, and the integrated peak area of unchanged (ZZ)-bilirubin [(ZZ)-B] after an anaerobic photoirradiation, should be constant and equal to the integrated peak area of initial (ZZ)-bilirubin [(ZZ)-Bi] before photoirradiation. On this basis, the following equation can be used to determine each factor. [equation: see text] alpha, beta, gamma and delta represent the factors used to correct the integrated peak areas of the individual bilirubin photoisomers, and they are arranged in the order of the formula. It was demonstrated that the relative 455 nm molar absorptivity values for (ZZ)-bilirubin and all its geometrical and structural photoisomers, i.e., (ZZ)-bilirubin, (ZE)-bilirubin, (EZ)-bilirubin, (EZ)-cyclobilirubin (= lumirubin) and (EE)-cyclobilirubin in the HPLC eluent, are, respectively, 1.0, 0.81 (= alpha), 0.54 (= beta), 0.47 (= gamma) and 0.39 (= delta).
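
    The equation itself appears above only as a placeholder. From the description in the abstract, one plausible (unverified) reconstruction is that the initial (ZZ)-bilirubin peak area equals the residual (ZZ)-bilirubin area plus each photoisomer area corrected by its factor, for example

    $$A_{(ZZ)\text{-}B_i} \;=\; A_{(ZZ)\text{-}B} \;+\; \frac{A_{(ZE)\text{-}B}}{\alpha} \;+\; \frac{A_{(EZ)\text{-}B}}{\beta} \;+\; \frac{A_{(EZ)\text{-}CB}}{\gamma} \;+\; \frac{A_{(EE)\text{-}CB}}{\delta},$$

    where whether each factor divides or multiplies the corresponding area depends on how the authors define "corrected"; the published paper should be consulted for the exact form.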

  2. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.

  3. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  4. High order accurate finite difference schemes based on symmetry preservation

    NASA Astrophysics Data System (ADS)

    Ozbenli, Ersin; Vedula, Prakash

    2016-11-01

    A new algorithm for the development of high order accurate finite difference schemes for the numerical solution of partial differential equations using Lie symmetries is presented. Considering applicable symmetry groups (such as those relevant to space/time translations, Galilean transformation, scaling, rotation and projection) of a partial differential equation, invariant numerical schemes are constructed based on the notions of moving frames and modified equations. Several strategies for the construction of invariant numerical schemes with a desired order of accuracy are analyzed. Performance of the proposed algorithm is demonstrated using analysis of one-dimensional partial differential equations, such as the linear advection-diffusion equation, the inviscid Burgers equation and the viscous Burgers equation, as our test cases. Through numerical simulations based on these examples, the expected improvement in accuracy of invariant numerical schemes (up to fourth order) is demonstrated. Advantages due to implementation and enhanced computational efficiency inherent in our proposed algorithm are presented. Extension of the basic framework to multidimensional partial differential equations is also discussed.
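
    For orientation, a standard (non-invariant) fourth-order central difference of the kind such schemes are typically benchmarked against, with a short check that the error falls by roughly a factor of 16 per grid refinement:

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order central first derivative on a periodic 1-D grid."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * h)

# Order-of-accuracy check on u(x) = sin(x) over a periodic domain.
for n in (32, 64, 128):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    err = np.abs(d1_fourth_order(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
    print(n, err)
```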

  5. Accurate tracking of high dynamic vehicles with translated GPS

    NASA Astrophysics Data System (ADS)

    Blankshain, Kenneth M.

    The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with a 1-sigma velocity measurement error of less than 1 ft/sec.
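
    The FFT stage of such a tracking loop reduces to locating the spectral peak of the received (translated) carrier in each processing block. A minimal sketch with assumed sampling and signal parameters, not those of the actual TPS:

```python
import numpy as np

def fft_frequency_estimate(signal, fs):
    """Coarse frequency estimate: location of the FFT magnitude peak, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs[spectrum.argmax()]

fs, f0, n = 50_000.0, 12_345.0, 4096            # illustrative values only
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.5 * np.random.default_rng(1).standard_normal(n)
print(fft_frequency_estimate(x, fs))            # close to 12345 Hz (one bin is ~12.2 Hz)
```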

  6. Fast and accurate methods for the performance testing of highly-efficient c-Si photovoltaic modules using a 10 ms single-pulse solar simulator and customized voltage profiles

    NASA Astrophysics Data System (ADS)

    Virtuani, A.; Rigamonti, G.; Friesen, G.; Chianese, D.; Beljean, P.

    2012-11-01

    Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices in fact usually require a steady-state solar simulator or pulse durations longer than 100-200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single-pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I-V (current-voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles, in place of a conventional voltage ramp, to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I-V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V profile method provides excellent results, with differences in the estimation of Pmax (as well as of Isc, Voc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes, possibly one, required) and cost-effective, and it has proven its validity with several technologies, making it particularly interesting for in-line testing.

  7. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study performance of other types of sparse factorizations, such as Cholesky or QR.

  8. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan, was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus, Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
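
    The key point of the method is that a (near) universal refractive index increment dn/dc lets the RID response be converted to concentration without a per-polysaccharide calibration curve, via Delta(n) = (dn/dc) * c. A one-line illustration with assumed values (the dn/dc below is a typical polysaccharide figure, not taken from the paper):

```python
def rid_concentration(delta_ri, dn_dc=0.145):
    """Concentration (g/mL) from a differential refractive-index reading,
    using Delta(n) = (dn/dc) * c with dn/dc in mL/g. Values are illustrative only."""
    return delta_ri / dn_dc

print(rid_concentration(2.9e-5))   # ~2.0e-4 g/mL, i.e. about 0.2 mg/mL
```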

  9. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    SciTech Connect

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been an increased emphasis on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.

  10. Uniformly high order accurate essentially non-oscillatory schemes 3

    NASA Technical Reports Server (NTRS)

    Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.

    1986-01-01

    In this paper (a third in a series) the construction and the analysis of essentially non-oscillatory shock capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that when applied to piecewise smooth data gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods this procedure uses an adaptive stencil of grid points and consequently the resulting schemes are highly nonlinear.

  11. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g., 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U LIII edge at 17.2 keV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U-La detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mb). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  12. Library preparation for highly accurate population sequencing of RNA viruses

    PubMed Central

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
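
    The consensus step at the heart of CirSeq is essentially a per-position vote across the tandem repeat copies inside one read. A toy sketch of the idea only; alignment, quality weighting and all real-world details of the published protocol are omitted:

```python
from collections import Counter

def consensus_from_repeats(repeats):
    """Majority-vote consensus across equal-length, pre-aligned repeat copies."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*repeats))

# A sequencing error in the second copy (G -> T) is voted out by the other copies.
print(consensus_from_repeats(["ACGTACGT", "ACGTACTT", "ACGTACGT"]))   # ACGTACGT
```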

  13. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, seeing that a low-resolution facial image size (64 × 64 pixels) has been successfully implemented. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.

  14. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    SciTech Connect

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.

  15. RTbox: a device for highly accurate response time measurements.

    PubMed

    Li, Xiangrui; Liang, Zhen; Kleiner, Mario; Lu, Zhong-Lin

    2010-02-01

    Although computer keyboards and mice are frequently used in measuring response times (RTs), the accuracy of these measurements is quite low. Specialized RT collection devices must be used to obtain more accurate measurements. However, all the existing devices have some shortcomings. We have developed and implemented a new, commercially available device, the RTbox, for highly accurate RT measurements. The RTbox has its own microprocessor and high-resolution clock. It can record the identities and timing of button events with high accuracy, unaffected by potential timing uncertainty or biases during data transmission and processing in the host computer. It stores button events until the host computer chooses to retrieve them. The asynchronous storage greatly simplifies the design of user programs. The RTbox can also receive and record external signals as triggers and can measure RTs with respect to external events. The internal clock of the RTbox can be synchronized with the computer clock, so the device can be used without external triggers. A simple USB connection is sufficient to integrate the RTbox with any standard computer and operating system.
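
    Synchronizing a device clock with the host clock is commonly done with an NTP-style round-trip exchange; the sketch below mirrors that general idea only, not the RTbox's actual protocol or API, and the numbers are illustrative:

```python
def clock_offset(t_host_send, t_device, t_host_recv):
    """Estimate device-minus-host clock offset from one sync exchange.

    t_host_send / t_host_recv : host timestamps just before sending and just after
    receiving the reply; t_device : the device's timestamp for that exchange.
    Assumes a roughly symmetric link delay.
    """
    round_trip = t_host_recv - t_host_send
    offset = t_device - (t_host_send + round_trip / 2.0)
    return offset, round_trip

offset, rtt = clock_offset(10.0000, 3.5021, 10.0040)   # seconds, made-up values
print(offset, rtt)   # device clock ~6.4999 s behind the host; 4 ms round trip
```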

  16. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
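
    For reference, the nonrelativistic statements the abstract starts from (standard results; the paper's contribution is their highly relativistic counterparts) are the Thomas-Reiche-Kuhn and Bethe sum rules,

    $$\sum_n f_{n0} = N, \qquad \sum_n (E_n - E_0)\,\Bigl|\Bigl\langle n\Bigm|\sum_{j=1}^{N} e^{i\mathbf{q}\cdot\mathbf{r}_j}\Bigm|0\Bigr\rangle\Bigr|^2 = \frac{\hbar^2 q^2}{2m}\,N,$$

    where N is the number of electrons and the f_{n0} are dipole oscillator strengths.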

  17. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  18. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  19. A sensitive and accurate method for the determination of perfluoroalkyl and polyfluoroalkyl substances in human serum using a high performance liquid chromatography-online solid phase extraction-tandem mass spectrometry.

    PubMed

    Yu, Chang Ho; Patel, Bhupendra; Palencia, Marilou; Fan, Zhihua Tina

    2017-01-13

    A selective, sensitive, and accurate analytical method for the measurement of perfluoroalkyl and polyfluoroalkyl substances (PFASs) in human serum, utilizing LC-MS/MS (liquid chromatography-tandem mass spectrometry), was developed and validated according to the Centers for Disease Control and Prevention (CDC) guidelines for biological sample analysis. Tests were conducted to determine the optimal analytical column, mobile phase composition and pH, gradient program, and cleaning procedure. The final analytical column selected for analysis was an extra densely bonded silica-packed reverse-phase column (Agilent XDB-C8, 3.0 × 100 mm, 3.5 μm). Mobile phase A was an aqueous buffer solution containing 10 mM ammonium acetate (pH = 4.3). Mobile phase B was a mixture of methanol and acetonitrile (1:1, v/v). The gradient program was programmed by initiating a fast elution (%B, from 40 to 65%) between 1.0 and 1.5 min, followed by a slow elution (%B: 65-80%) in the period of 1.5-7.5 min. The cleanup procedures were augmented by cleaning with (1) various solvents (isopropyl alcohol, methanol, acetonitrile, and reverse osmosis-purified water); (2) extensive washing steps for the autosampler and solid phase extraction (SPE) cartridge; and (3) a post-analysis cleaning step for the whole system. Under the above conditions, the resolution and sensitivity were significantly improved. Twelve target PFASs were baseline-separated (2.5-7.0 min) within a 10-min acquisition time. The limits of detection (LODs) were 0.01 ng/mL or lower for all of the target compounds, making this method 5 times more sensitive than previously published methods. The newly developed method was validated in the linear range of 0.01-50 ng/mL, and the accuracy (recovery between 80 and 120%) and precision (RSD < 20%) were acceptable at three spiked levels (0.25, 2.5, and 25 ng/mL). The method development and validation results demonstrated that this method was precise, accurate, and robust, with high-throughput (∼10 min per

  20. CgWind: A high-order accurate simulation tool for wind turbines and wind farms

    SciTech Connect

    Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A

    2010-02-22

    CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.

  1. A highly accurate ab initio potential energy surface for methane

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  2. Accurate protein crystallography at ultra-high resolution: Valence electron distribution in crambin

    PubMed Central

    Jelsch, Christian; Teeter, Martha M.; Lamzin, Victor; Pichon-Pesme, Virginie; Blessing, Robert H.; Lecomte, Claude

    2000-01-01

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 Å) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules. PMID:10737790

  3. Accurate protein crystallography at ultra-high resolution: valence electron distribution in crambin.

    PubMed

    Jelsch, C; Teeter, M M; Lamzin, V; Pichon-Pesme, V; Blessing, R H; Lecomte, C

    2000-03-28

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 A) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules.

  4. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  5. Highly accurate boronimeter assay of concentrated boric acid solutions

    SciTech Connect

    Ball, R.M.

    1992-01-01

    The Random-Walk Boronimeter has successfully been used as an on-line indicator of boric acid concentration in an operating commercial pressurized water reactor. The principle has been adapted for measurement of discrete samples to high accuracy and to concentrations up to 6000 ppm natural boron in light water. Boric acid concentration in an aqueous solution is a necessary measurement in many nuclear power plants, particularly those that use boric acid dissolved in the reactor coolant as a reactivity control system. Other nuclear plants use a high-concentration boric acid solution as a backup shutdown system. Such a shutdown system depends on rapid injection of the solution and frequent surveillance of the fluid to ensure the presence of the neutron absorber. The two methods typically used to measure boric acid are the chemical and the physical methods. The chemical method uses titration to determine the ionic concentration of the BO3 ions and infers the boron concentration. The physical method uses the attenuation of neutrons by the solution and infers the boron concentration from the neutron absorption properties. This paper describes the Random-Walk Boronimeter configured to measure discrete samples to high accuracy and high concentration.

  6. Generating highly accurate prediction hypotheses through collaborative ensemble learning

    PubMed Central

    Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco

    2017-01-01

    Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination. PMID:28304378
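
    As a rough illustration of the bagged-boosting idea (not the authors' collaborative scheme), the sketch below bags boosted ensembles with scikit-learn. Gentle Boost is not available there, so AdaBoost stands in for the base boosted learner, and the collaboration between constituent learners described above is not reproduced; the dataset, parameters, and scores are placeholders.

```python
# Minimal bagged-boosting sketch for binary classification: each bag trains its own
# boosted ensemble on a bootstrap sample, so bagging reduces variance while boosting
# reduces bias. AdaBoost is a stand-in for Gentle Boost, which scikit-learn lacks.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

bagged_boosting = BaggingClassifier(
    AdaBoostClassifier(n_estimators=50, random_state=0),  # base boosted learner
    n_estimators=10,                                       # number of bags
    random_state=0,
)

print("mean ROC AUC:", cross_val_score(bagged_boosting, X, y, cv=5, scoring="roc_auc").mean())
```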

  7. Generating highly accurate prediction hypotheses through collaborative ensemble learning

    NASA Astrophysics Data System (ADS)

    Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco

    2017-03-01

    Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination.

  8. An accurate continuous calibration system for high voltage current transformer

    SciTech Connect

    Tong Yue; Li Binhong

    2011-02-15

    A continuous calibration system for high-voltage current transformers is presented in this paper. The sensor of this system is based on an electronic instrument current transformer, a clamp-shaped air-core coil. The system uses an optical fiber link for signal transmission and power supply. Finally, the digital integrator and a fourth-order convolution window algorithm are implemented as error-calculation methods in a virtual instrument on a personal computer. It is found that this system can calibrate a high-voltage current transformer while it remains energized, avoiding a long calibration outage in the power system and the expense associated with the loss of power metering. At the same time, it has a wide dynamic range and frequency band and can achieve high-accuracy measurement in a complex electromagnetic environment. The experimental results and the on-site operation results presented in the last part of the paper prove that it can reach the 0.05 accuracy class and is easy to operate on site.
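
    The air-core (Rogowski-type) coil mentioned above outputs a voltage proportional to the derivative of the primary current, so a digital integrator is needed to recover the current itself. The sketch below shows a generic trapezoidal digital integrator on synthetic data; the coil parameters are hypothetical and the paper's fourth-order convolution-window error calculation is not reproduced.

```python
# Generic digital-integrator sketch: an air-core coil outputs v ~ M * di/dt, so the
# primary current is recovered by numerical (trapezoidal) integration of v / M.
import numpy as np

fs = 10_000.0                  # sampling rate, Hz (hypothetical)
f_line = 50.0                  # power frequency, Hz
M = 1e-6                       # coil mutual inductance, H (hypothetical)
t = np.arange(0.0, 0.2, 1.0 / fs)

i_primary = 1000.0 * np.sin(2 * np.pi * f_line * t)   # "true" primary current, A
v_coil = M * np.gradient(i_primary, t)                # coil output voltage

# Trapezoidal integration of v_coil / M recovers the current (the integration
# constant is zero here because the primary current starts at 0 A).
i_recovered = np.concatenate(
    ([0.0], np.cumsum((v_coil[1:] + v_coil[:-1]) / 2.0) / fs)
) / M

print("peak reconstruction error (A):", np.max(np.abs(i_recovered - i_primary)))
```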

  9. Theory of High-TC Superconductivity: Accurate Predictions of TC

    NASA Astrophysics Data System (ADS)

    Harshman, Dale; Fiory, Anthony

    2012-02-01

    The superconducting transition temperatures of high-TC compounds based on copper, iron, ruthenium and certain organic molecules are discovered to depend on bond lengths, ionic valences, and Coulomb coupling between electronic bands in adjacent, spatially separated layers [1]. The optimal transition temperature, denoted TC0, is given by the universal expression kB TC0 = e^2 λ/(l ζ); l is the spacing between interacting charges within the layers, ζ is the distance between interacting layers and λ is a universal constant, equal to about twice the reduced electron Compton wavelength (suggesting that Compton scattering plays a role in pairing). Non-optimum compounds in which sample degradation is evident typically exhibit TC < TC0. For the 31+ optimum compounds tested, the theoretical and experimental TC0 agree statistically to within ±1.4 K. The elemental high-TC building block comprises two adjacent and spatially separated charge layers; the factor e^2/ζ arises from Coulomb forces between them. The theoretical charge structure representing a room-temperature superconductor is also presented. [1] doi:10.1088/0953-8984/23/29/295701
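
    Written out explicitly, the optimal transition temperature quoted above is:

```latex
% k_B T_C0 = e^2 lambda / (l * zeta), with
%   l      : spacing between interacting charges within the layers
%   zeta   : distance between the interacting layers
%   lambda : universal constant, about twice the reduced electron Compton wavelength
\[
  k_{\mathrm{B}}\,T_{C0} \;=\; \frac{e^{2}\,\lambda}{l\,\zeta}
\]
```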

  10. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  11. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  12. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, once combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae at an average operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  13. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  14. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics and the recent introduction of fast sequencing capabilities has expanded its use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping, instead of ion beam, further refinement of the technique can be expected.

  15. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  16. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    SciTech Connect

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs Cowperthwaite Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii, with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  17. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  18. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  19. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

    With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  20. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

    DOE has launched a program to make a step change in power plant technology by advancing to 1500 F steam, since the highest possible performance gains can be achieved in a 1500 F steam system when a topping turbine is used with a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  1. Highly accurate nitrogen dioxide (NO2) in nitrogen standards based on permeation.

    PubMed

    Flores, Edgar; Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert Ian

    2012-12-04

    The development and operation of a highly accurate primary gas facility for the dynamic production of mixtures of nitrogen dioxide (NO(2)) in nitrogen (N(2)) based on continuous weighing of a permeation tube and accurate impurity quantification and correction of the gas mixtures using Fourier transform infrared spectroscopy (FT-IR) is described. NO(2) gas mixtures in the range of 5 μmol mol(-1) to 15 μmol mol(-1) with a standard relative uncertainty of 0.4% can be produced with this facility. To achieve an uncertainty at this level, significant efforts were made to reduce, identify and quantify potential impurities present in the gas mixtures, such as nitric acid (HNO(3)). A complete uncertainty budget, based on the analysis of the performance of the facility, including the use of a FT-IR spectrometer and a nondispersive UV analyzer as analytical techniques, is presented in this work. The mixtures produced by this facility were validated and then selected to provide reference values for an international comparison of the Consultative Committee for Amount of Substance (CCQM), number CCQM-K74, (1) which was designed to evaluate the consistency of primary NO(2) gas standards from 17 National Metrology Institutes.
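
    For orientation, the amount fraction delivered by a permeation-based dynamic facility follows from the mass permeation rate and the diluent flow. The sketch below shows the generic calculation with hypothetical numbers; it illustrates the principle only and is not the facility's actual data reduction or uncertainty treatment.

```python
# Generic permeation-dilution calculation: the NO2 amount fraction equals the molar
# permeation rate divided by the molar flow of the nitrogen diluent (ideal gas assumed).
R = 8.314462618      # J mol^-1 K^-1
M_NO2 = 46.0055      # g mol^-1

def no2_amount_fraction(perm_rate_ug_per_min: float,
                        diluent_flow_l_per_min: float,
                        t_kelvin: float = 293.15,
                        p_pa: float = 101325.0) -> float:
    """Return the NO2 amount fraction (mol/mol) produced by a permeation tube."""
    n_no2 = perm_rate_ug_per_min * 1e-6 / M_NO2            # mol NO2 per minute
    molar_volume_l = R * t_kelvin / p_pa * 1000.0           # L per mol of diluent gas
    n_diluent = diluent_flow_l_per_min / molar_volume_l     # mol N2 per minute
    return n_no2 / n_diluent

# Hypothetical example: 10 ug/min permeating into 0.5 L/min of N2 gives ~10.5 umol/mol,
# i.e. within the 5-15 umol/mol range covered by the facility.
print(f"{no2_amount_fraction(10.0, 0.5) * 1e6:.2f} umol/mol")
```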

  2. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers because of either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process that allocates bit rate to match the available network bandwidth is needed. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar-model-articulation-controller (CMAC) through artificial neural networks to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76

  3. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions for FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures including SIMD, MIMD shared and distributed memory machines.

  4. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance.

    PubMed

    Talamas, Sean N; Mavor, Kenneth I; Perrett, David I

    2016-01-01

    Despite the old adage not to 'judge a book by its cover', facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone's health or intelligence, but such cues are overshadowed by an 'attractiveness halo' whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has been shown to influence students' future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in the faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli.

  5. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance

    PubMed Central

    Talamas, Sean N.; Mavor, Kenneth I.; Perrett, David I.

    2016-01-01

    Despite the old adage not to ‘judge a book by its cover’, facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone’s health or intelligence, but such cues are overshadowed by an ‘attractiveness halo’ whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has been shown to influence students’ future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in the faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  6. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits from high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.

  7. Rapid infrared mapping for highly accurate automated histology in Barrett's oesophagus.

    PubMed

    Old, O J; Lloyd, G R; Nallala, J; Isabelle, M; Almond, L M; Shepherd, N A; Kendall, C A; Shore, A C; Barr, H; Stone, N

    2016-10-07

    Barrett's oesophagus (BE) is a premalignant condition that can progress to oesophageal adenocarcinoma. Endoscopic surveillance aims to identify potential progression at an early, treatable stage, but generates large numbers of tissue biopsies. Fourier transform infrared (FTIR) mapping was used to develop an automated histology tool for detection of BE and Barrett's neoplasia in tissue biopsies. 22 oesophageal tissue samples were collected from 19 patients. Contiguous frozen tissue sections were taken for pathology review and FTIR imaging. 45 mid-IR images were measured on an Agilent 620 FTIR microscope with an Agilent 670 spectrometer. Each image covering a 140 μm × 140 μm region was measured in 5 minutes, using a 1.1 μm² pixel size and 64 scans per pixel. Principal component fed linear discriminant analysis was used to build classification models based on spectral differences, which were then tested using leave-one-sample-out cross validation. Key biochemical differences were identified by their spectral signatures: high glycogen content was seen in normal squamous (NSQ) tissue, high glycoprotein content was observed in glandular BE tissue, and high DNA content in dysplasia/adenocarcinoma samples. Classification of normal squamous samples versus 'abnormal' samples (any stage of Barrett's) was performed with 100% sensitivity and specificity. Neoplastic Barrett's (dysplasia or adenocarcinoma) was identified with 95.6% sensitivity and 86.4% specificity. Highly accurate pathology classification can be achieved with FTIR measurement of frozen tissue sections in a clinically applicable timeframe.
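
    The classification step described above (principal component-fed linear discriminant analysis with leave-one-sample-out cross-validation) can be sketched compactly with scikit-learn. Synthetic spectra stand in for the FTIR data below; the array shapes, class labels, and component counts are placeholders, not the study's settings.

```python
# PCA-fed LDA with leave-one-sample-out cross-validation: all spectra from one tissue
# sample are held out together, so the model is never tested on a sample it has seen.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_samples, spectra_per_sample, n_wavenumbers = 22, 40, 400
X = rng.normal(size=(n_samples * spectra_per_sample, n_wavenumbers))   # stand-in spectra
groups = np.repeat(np.arange(n_samples), spectra_per_sample)           # tissue sample IDs
y = np.repeat(rng.integers(0, 2, size=n_samples), spectra_per_sample)  # 0 = NSQ, 1 = Barrett's

pipeline = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(pipeline, X, y, groups=groups, cv=LeaveOneGroupOut())
print("mean held-out accuracy:", scores.mean())
```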

  8. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  9. Robust high-resolution cloth using parallelism, history-based collisions, and accurate friction.

    PubMed

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2009-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high resolution and high-fidelity simulations.

  10. Accurate determination of succinimide degradation products using high fidelity trypsin digestion peptide map analysis.

    PubMed

    Yu, X Christopher; Joe, Koman; Zhang, Yu; Adriano, Andrea; Wang, Yaning; Gazzano-Santoro, Helene; Keck, Rodney G; Deperalta, Galahad; Ling, Victor

    2011-08-01

    We report an efficient, high fidelity trypsin digestion method for peptide map analysis. This method minimizes artifacts caused by the sample preparation process, and we show its utility for the accurate determination of succinimide formation in a degraded monoclonal antibody product. A basic charge variant was detected by imaged capillary isoelectric focusing and was shown to have reduced antigen binding and biological activity. Samples were reduced under denaturing conditions at pH 5.0, and digestion of the reduced protein with porcine trypsin was performed at pH 7.0 for 1 h. Following reversed phase high-performance liquid chromatography and online mass spectrometric analysis, succinimide formation was identified at Asp30 in the light chain. This result contrasts with the observation of only iso-Asp and Asp residues under conventional sample preparation conditions, which are therefore concluded to be artificially generated. The Asp30 residue is seen in the cocrystal structure model to participate in a favorable charge interaction with an antigen molecule. Formation of succinimide and the resulting loss of negative charge are therefore hypothesized to be the degradation mechanism. After treatment of the degraded antibody sample at mildly alkaline pH, we observed only the Asp residue as the succinimide hydrolysis product and a concurrent recovery of biological activity.

  11. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  12. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  13. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
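
    The core of a data-parallel PCA like the one described above is that each node can accumulate its own partial sums for the mean and covariance of its block of pixels, after which a single reduction and eigendecomposition finish the job. The sketch below emulates this with in-process chunks of random data; on a cluster the summation would be an MPI reduction, and the array sizes are placeholders.

```python
# Data-parallel PCA sketch: each "node" (here just a chunk) computes partial sums;
# the partials are reduced, the band covariance is formed once, and its eigenvectors
# give the principal components used to reduce the spectral dimension.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 10_000, 64                 # hyperspectral cube flattened to pixels x bands
data = rng.normal(size=(n_pixels, n_bands))
chunks = np.array_split(data, 4)               # pretend these live on 4 cluster nodes

counts = [c.shape[0] for c in chunks]          # computed independently per node
sums = [c.sum(axis=0) for c in chunks]
outer_sums = [c.T @ c for c in chunks]

n = sum(counts)                                # reduction step (allreduce on a cluster)
mean = sum(sums) / n
cov = (sum(outer_sums) - n * np.outer(mean, mean)) / (n - 1)

eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
components = eigvecs[:, ::-1]                  # largest-variance directions first
projected = (data - mean) @ components[:, :10] # keep the 10 leading components
print(projected.shape)
```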

  14. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  15. Development of highly accurate approximate scheme for computing the charge transfer integral

    NASA Astrophysics Data System (ADS)

    Pershin, Anton; Szalay, Péter G.

    2015-08-01

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
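
    The Taylor-expansion idea above can be illustrated numerically: evaluate the transfer integral at a few reference displacements, build finite-difference derivatives, and use the resulting polynomial over the rest of the coordinate range. The function standing in for the transfer integral below is hypothetical, not an actual electronic-structure calculation.

```python
# Second-order Taylor approximation of a transfer integral J(q) along a displacement
# coordinate q, built from three reference evaluations at q = -h, 0, +h.
import numpy as np

def j_exact(q: float) -> float:
    """Hypothetical stand-in for an expensive transfer-integral calculation (eV)."""
    return 0.10 * np.exp(-0.8 * q) * np.cos(0.5 * q)

h = 0.05                                         # reference displacement step
j0, j_plus, j_minus = j_exact(0.0), j_exact(h), j_exact(-h)
dj = (j_plus - j_minus) / (2 * h)                # central first derivative at q = 0
d2j = (j_plus - 2 * j0 + j_minus) / h**2         # central second derivative at q = 0

def j_taylor(q: float) -> float:
    return j0 + dj * q + 0.5 * d2j * q**2

for q in (0.1, 0.2, 0.4):
    print(f"q = {q:.2f}:  exact = {j_exact(q):+.5f}   Taylor = {j_taylor(q):+.5f}")
```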

  16. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  17. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  18. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  19. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner

    PubMed Central

    Lu, David V.; Brown, Randall H.; Arumugam, Manimozhiyan; Brent, Michael R.

    2009-01-01

    Motivation: The most accurate way to determine the intron–exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. Results: We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created ‘perfect’ simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Availability: Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/ Contact: davidlu@wustl.edu; brent@cse.wustl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19414532

  20. High performing micromachined retroreflector

    NASA Astrophysics Data System (ADS)

    Lundvall, Axel; Nikolajeff, Fredrik; Lindstrom, Tomas

    2003-10-01

    This paper reports on the realization of a type of micromachined retroreflecting sheeting material. The geometry presented has high reflection efficiency even at large incident angles, and it can be manufactured through polymer replication techniques. The paper consists of two parts: A theoretical section outlining the design parameters and their impact on the optical performance, and secondly, an experimental part comprising both manufacturing and optical evaluation for a candidate retroreflecting sheet material in traffic control devices. Experimental data show that the retroreflecting properties are promising. The retroreflector consists of a front layer of densely packed spherical microlenses, a back surface of densely packed spherical micromirrors, and a transparent spacer layer. The thickness of the spacer layer determines in part the optical characteristics of the retroreflector.

  1. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    PubMed Central

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-01-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor-based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127 day continuous test performed in the presence of volatile organic compounds and high humidity levels showed that the sensor with an electrodeposited sensitive layer had a 260% higher response magnitude, a 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs 35%) than an unmodified Au control-based QCM when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospike sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor. PMID:25338965

  2. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  3. An Improved Method for Accurate and Rapid Measurement of Flight Performance in Drosophila

    PubMed Central

    Babcock, Daniel T.; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  4. A comparison of two formulations for high-order accurate essentially non-oscillatory schemes

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Shu, Chi-Wang; Atkins, H. L.

    1993-01-01

    The finite-volume and finite-difference implementations of high-order accurate essentially non-oscillatory shock-capturing schemes are discussed and compared. Results obtained with fourth-order accurate algorithms based on both formulations are examined for accuracy, sensitivity to grid irregularities, resolution of waves that are oblique to the mesh, and computational efficiency. Some algorithm modifications that may be required for a given application are suggested. Conclusions that pertain to the relative merits of both formulations are drawn, and some circumstances for which each might be useful are noted.

  5. High-accurate nonlocal timing and positioning using entangled photon pairs

    NASA Astrophysics Data System (ADS)

    Valencia Gonzalez, Alejandra C.

    One of the most surprising consequences of quantum mechanics is the concept of entanglement. This concept has intrigued the scientific community since it was first proposed by Einstein, Podolsky and Rosen in 1935 because of its connection to fundamental aspects regarding our conception of the universe. Nowadays, there are still open questions about the fundamental issues of quantum mechanics. Nevertheless, the unique characteristics of entanglement have been proposed for practical applications in the last years. Spontaneous Parametric Down Conversion (SPDC) has been recognized as a convenient source of entangled photon pairs. SPDC is a nonlinear optical process in which a pump laser beam is shone into a nonlinear crystal and occasionally one pump photon is down-converted to a pair of lower frequency photons that are entangled. Two photons in an entangled state are characterized by a single two-photon effective wavefunction, or Biphoton. They cannot be considered as the simple juxtaposition of two individual systems. This is a consequence of the quantum correlations between the two photons and implies that a measurement in one of the subsystems affects the total state of the composite system and, therefore, affects the output of a measurement performed in the other photon. The purpose of this dissertation is to show the potential of entangled photon pairs for high-accurate timing and positioning measurements. The entangled nature of the two-photon states allows, in principle, precise space-time correlation measurements to the femtosecond level, providing the physical foundations for high-accurate nonlocal distant clock synchronization. In this dissertation, the proof-of-principle demonstration of a "one-way" distant clock synchronization protocol is presented. The novel method is based on the measurements of the second order correlation function of entangled photon pairs. An experimental study of the behavior of the Biphoton when it travels through a dispersive

  6. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  7. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  8. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to the third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in the curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
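
    As a rough illustration of the flux-difference-splitting idea referenced above (Roe, 1981), the sketch below applies a Roe-type numerical flux to the 1D inviscid Burgers equation in a first-order finite-volume update. It is only a minimal, assumed example of the upwind-split flux construction; it does not reproduce the paper's three-dimensional, high-order Navier-Stokes scheme, its viscous-term treatment, or its boundary-cell procedure.

```python
# Minimal sketch: Roe-type flux-difference splitting for the 1D inviscid
# Burgers equation u_t + (u^2/2)_x = 0 (no entropy fix, first-order update).
import numpy as np

def roe_flux(uL, uR):
    """Roe numerical flux for Burgers' equation."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    a_roe = 0.5 * (uL + uR)                      # Roe-averaged wave speed
    return 0.5 * (fL + fR) - 0.5 * np.abs(a_roe) * (uR - uL)

def step(u, dx, dt):
    """One forward-Euler finite-volume step."""
    F = roe_flux(u[:-1], u[1:])                  # interface fluxes F_{i+1/2}
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return unew

# Riemann problem: right-moving shock from u=1 into u=0
N = 200
x = np.linspace(0.0, 1.0, N)
u = np.where(x < 0.3, 1.0, 0.0)
dx = x[1] - x[0]
dt = 0.4 * dx                                    # CFL-limited time step
for _ in range(int(0.4 / dt)):
    u = step(u, dx, dt)
print("shock position ~", x[np.argmin(np.abs(u - 0.5))])  # expect ~0.5
```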

  9. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  10. Detailed and Highly Accurate 3d Models of High Mountain Areas by the Macs-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can in part no longer be imaged by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches for resolution of these problems, the camera system and the state of evaluation are presented with examples.

  11. Highly accurate spectral retardance characterization of a liquid crystal retarder including Fabry-Perot interference effects

    SciTech Connect

    Vargas, Asticio; Mar Sánchez-López, María del; García-Martínez, Pascuala; Arias, Julia; Moreno, Ignacio

    2014-01-21

    Multiple-beam Fabry-Perot (FP) interferences occur in liquid crystal retarders (LCR) devoid of an antireflective coating. In this work, a highly accurate method to obtain the spectral retardance of such devices is presented. On the basis of a simple model of the LCR that includes FP effects and by using a voltage transfer function, we show how the FP features in the transmission spectrum can be used to accurately retrieve the ordinary and extraordinary spectral phase delays, and the voltage dependence of the latter. As a consequence, the modulation characteristics of the device are fully determined with high accuracy by means of a few off-state physical parameters which are wavelength-dependent, and a single voltage transfer function that is valid within the spectral range of characterization.

  12. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement. PMID:27713496
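
    The propagation model described above couples steady-state rate equations with the generalized nonlinear Schrödinger equation. The sketch below shows only the pulse-propagation half of such a model: a standard split-step Fourier integration of a simplified NLSE with a constant, assumed gain coefficient. All fibre and pulse parameters are illustrative placeholders, and the paper's spatially and spectrally resolved gain solver is not reproduced.

```python
# Minimal sketch: symmetric split-step Fourier solution of a simplified NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A + (g/2)*A
import numpy as np

# Illustrative parameters (assumed, not the paper's amplifier)
beta2 = -20e-27          # group-velocity dispersion [s^2/m]
gamma = 1.5e-3           # Kerr nonlinearity [1/(W m)]
g     = 1.0              # constant gain coefficient [1/m]
L, nz = 2.0, 2000        # fibre length [m] and number of z steps
dz    = L / nz

nt = 1024
t  = np.linspace(-5e-12, 5e-12, nt, endpoint=False)          # 10 ps time window
w  = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])            # angular frequency grid

A = np.sqrt(10.0) / np.cosh(t / 0.5e-12)                      # 10 W peak sech seed pulse
half_lin = np.exp((1j * beta2 / 2 * w**2 + g / 2) * dz / 2)   # linear half-step operator

for _ in range(nz):                                           # split-step Fourier loop
    A = np.fft.ifft(half_lin * np.fft.fft(A))                 # dispersion + gain, half step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)            # Kerr nonlinearity, full step
    A = np.fft.ifft(half_lin * np.fft.fft(A))                 # dispersion + gain, half step

print("input peak power  [W]:", 10.0)
print("output peak power [W]:", float(np.max(np.abs(A)**2)))
```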

  13. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    PubMed Central

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-01-01

    A percolation theory based on variation of conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails in properly analyzing the materials under the large deformation since the assumption may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson’s ratio, and strain-dependent percolation threshold. Digital image correlation (DIC) method was used to determine actual Poisson’s ratios at various strain levels, which were used to accurately estimate variation of conductive filler volume fraction under deformation. We also adopted strain-dependent percolation threshold caused by the filler re-location with deformation. When three key factors were considered, electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856

  14. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    NASA Astrophysics Data System (ADS)

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-10-01

    A percolation theory based on variation of conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails in properly analyzing the materials under the large deformation since the assumption may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson’s ratio, and strain-dependent percolation threshold. Digital image correlation (DIC) method was used to determine actual Poisson’s ratios at various strain levels, which were used to accurately estimate variation of conductive filler volume fraction under deformation. We also adopted strain-dependent percolation threshold caused by the filler re-location with deformation. When three key factors were considered, electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties.
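
    The abstract above (and its duplicate record) hinges on a simple geometric step: recomputing the conductive-filler volume fraction under strain using a strain-dependent Poisson's ratio, then feeding it into a percolation law with a strain-dependent threshold. The sketch below illustrates that bookkeeping with assumed, illustrative numbers; it is not the paper's calibrated model or measured DIC data.

```python
# Minimal sketch: filler volume fraction under uniaxial strain with a
# strain-dependent Poisson's ratio, fed into a classical percolation law
# with a strain-dependent threshold. All numbers are placeholders.
import numpy as np

def filler_fraction(phi0, strain, nu):
    """Filler volume fraction after uniaxial strain (filler volume conserved)."""
    stretched_volume = (1 + strain) * (1 - nu * strain) ** 2
    return phi0 / stretched_volume

def percolation_conductivity(phi, phi_c, sigma0=1.0, t=2.0):
    """sigma = sigma0 * (phi - phi_c)^t above threshold, 0 below."""
    excess = np.maximum(phi - phi_c, 0.0)
    return sigma0 * excess ** t

strains = np.linspace(0.0, 1.0, 11)            # 0-100 % tensile strain
nu      = 0.40 - 0.10 * strains                # assumed strain-dependent Poisson's ratio
phi_c   = 0.15 + 0.05 * strains                # assumed strain-dependent threshold
phi     = filler_fraction(phi0=0.25, strain=strains, nu=nu)

for eps, p, s in zip(strains, phi, percolation_conductivity(phi, phi_c)):
    print(f"strain={eps:4.1f}  phi={p:5.3f}  sigma={s:8.5f}")
```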

  15. High performance collectors

    NASA Astrophysics Data System (ADS)

    Ogawa, H.; Hozumi, S.; Mitsumata, T.; Yoshino, K.; Aso, S.; Ebisu, K.

    1983-04-01

    Materials and structures used for flat plate solar collectors and evacuated tubular collectors were examined relative to their overall performance to project effectiveness for building heating and cooling and the feasibility of use for generating industrial process heat. Thermal efficiencies were calculated for black paint single glazed, selective surface single glazed, and selective surface double glazed flat plate collectors. The efficiencies of a single tube and central tube accompanied by two side tube collectors were also studied. Techniques for extending the lifetimes of the collectors were defined. The selective surface collectors proved to have a performance superior to other collectors in terms of the average annual energy delivered. Addition of a black chrome-coated fin system to the evacuated collectors produced significant collection efficiency increases.

  16. High Performance Arcjet Engines

    NASA Technical Reports Server (NTRS)

    Kennel, Elliot B.; Ivanov, Alexey Nikolayevich; Nikolayev, Yuri Vyacheslavovich

    1994-01-01

    This effort sought to exploit advanced single crystal tungsten-tantalum alloy material for fabrication of a high strength, high temperature arcjet anode. The use of this material is expected to result in improved strength, temperature resistance, and lifetime compared to state of the art polycrystalline alloys. In addition, the use of high electrical and thermal conductivity carbon-carbon composites was considered, and is believed to be a feasible approach. Highly conductive carbon-carbon composite anode capability represents enabling technology for rotating-arc designs derived from the Russian Scientific Research Institute of Thermal Processes (NIITP) because of high heat fluxes at the anode surface. However, for US designs the anode heat flux is much smaller, and thus the benefits are not as great as in the case of NIITP-derived designs. Still, it does appear that the tensile properties of carbon-carbon can be even better than those of single crystal tungsten alloys, especially when nearly-single-crystal fibers such as vapor grown carbon fiber (VGCF) are used. Composites fabricated from such materials must be coated with a refractory carbide coating in order to ensure compatibility with high temperature hydrogen. Fabrication of tungsten alloy single crystals in the sizes required for fabrication of an arcjet anode has been shown to be feasible. Test data indicate that the material can be expected to be at least the equal of W-Re-HfC polycrystalline alloy in terms of its tensile properties, and possibly superior. We are also informed by our colleagues at Scientific Production Association Luch (NP0 Luch) that it is possible to use Russian technology to fabricate polycrystalline W-Re-HfC or other high strength alloys if desired. This is important because existing engines must rely on previously accumulated stocks of these materials, and a fabrication capability for future requirements is not assured.

  17. Performing Accurate Rigid Kinematics Measurements from 3D in vivo Image Sequences through Median Consensus Simultaneous Registration.

    PubMed

    Cresson, T; Jacq, J; Burdin, V; Roux, Ch

    2005-01-01

    While focusing on accurate 3D joint kinematics, this paper explores the problem of how to perform a robust rigid registration for a sequence of object surfaces observed using standard 3D medical imaging techniques. Each object instance is assumed to give access to a polyhedral encoding of its boundary. We consider the case where object instances are corrupted by significant truncations and segmentation errors. The proposed method aims to tackle this problem in a global way, fully exploiting the duality between redundancy and complementarity of the available instance set. The algorithm operates through robust and simultaneous registration of all geometrical instances on a virtual instance accounting for their median consensus. When compared with standard robust techniques, trials reveal significant gains, as much in robustness as in accuracy. The considered applications are mainly focused on generating highly accurate kinematics in relation to the bone structures of the most complex joints - the tarsus and the carpus - for which no alternative examination techniques exist, enabling fine morphological analysis as well as access to internal joint motions.

  18. High performance cyclone development

    SciTech Connect

    Giles, W.B.

    1981-01-01

    The results of cold flow experiments at atmospheric conditions of an air-shielded 18 in-dia electrocyclone with a central cusped electrode are reported using fine test dusts of both flyash and nickel powder. These results are found to confirm expectations of enhanced performance, similar to earlier work on a 12 in-dia model. An analysis of the combined inertial-electrostatic force field is also presented which identifies general design goals and scaling laws. From this, it is found that electrostatic enhancement will be particularly beneficial for fine dusts in large cyclones. Recommendations for further improvement in cyclone collection efficiency are proposed.

  19. High Performance Magnets

    DTIC Science & Technology

    2000-03-29

    Our efforts in this project were focused on three different materials, namely; interstitial Sm-Fe carbides and nitrides, high energy product Nd2Fe14B ...magnets with MgO addition, and nanocomposite Nd2Fe14B /alpha-Fe consisting of a fine mixture of hard and soft phases. In the Sm-Fe carbides and

  20. High Performance Biocomputation

    DTIC Science & Technology

    2005-03-01

    view, are failed grand challenges include the "War on Cancer " (circa 1970) and the "Decade of the Brain" in which an NIH report in 1990 argued that...ancestors possible. There have been claims made that DNA may be found in preserved ancient bacteria or even in dinosaur bones, but these claims remain highly

  1. A high-order accurate embedded boundary method for first order hyperbolic equations

    NASA Astrophysics Data System (ADS)

    Mattsson, Ken; Almquist, Martin

    2017-04-01

    A stable and high-order accurate embedded boundary method for first order hyperbolic equations is derived. Where the grid-boundaries and the physical boundaries do not coincide, high order interpolation is used. The boundary stencils are based on a summation-by-parts framework, and the boundary conditions are imposed by the SAT penalty method, which guarantees linear stability for one-dimensional problems. Second-, fourth-, and sixth-order finite difference schemes are considered. The resulting schemes are fully explicit. Accuracy and numerical stability of the proposed schemes are demonstrated for both linear and nonlinear hyperbolic systems in one and two spatial dimensions.
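
    The scheme above is built from summation-by-parts (SBP) operators with boundary conditions imposed weakly through SAT penalty terms. The sketch below shows those two building blocks in their simplest setting: a second-order SBP first-derivative operator for 1D advection with an SAT inflow condition. The paper's high-order operators and embedded-boundary interpolation are not reproduced, and the penalty choice tau = -a is one standard stable option.

```python
# Minimal sketch: 1D advection u_t + a u_x = 0 on [0,1] with a second-order
# SBP first-derivative operator and a weak (SAT) inflow boundary condition.
import numpy as np

a, N = 1.0, 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Second-order SBP operator D = H^{-1} Q with diagonal norm H
h = np.full(N, dx); h[0] = h[-1] = dx / 2.0
Q = np.zeros((N, N))
for i in range(1, N - 1):
    Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
Q[0, 0], Q[0, 1] = -0.5, 0.5
Q[-1, -2], Q[-1, -1] = -0.5, 0.5
D = Q / h[:, None]

def rhs(u, t):
    g = np.sin(-2 * np.pi * a * t)                 # exact inflow data u(0,t)
    du = -a * (D @ u)
    du[0] += -a / h[0] * (u[0] - g)                # SAT penalty term, tau = -a
    return du

u = np.sin(2 * np.pi * x)                          # initial condition
dt, T = 0.2 * dx, 1.0
for n in range(int(round(T / dt))):                # classical RK4 time stepping
    t = n * dt
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("max error vs exact:", np.max(np.abs(u - np.sin(2 * np.pi * (x - a * T)))))
```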

  2. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistances.

  3. Techniques for determining propulsion system forces for accurate high speed vehicle drag measurements in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.

    1975-01-01

    As part of a NASA program to evaluate current methods of predicting the performance of large, supersonic airplanes, the drag of the XB-70 airplane was measured accurately in flight at Mach numbers from 0.75 to 2.5. This paper describes the techniques used to determine engine net thrust and the drag forces charged to the propulsion system that were required for the in-flight drag measurements. The accuracy of the measurements and the application of the measurement techniques to aircraft with different propulsion systems are discussed. Examples of results obtained for the XB-70 airplane are presented.

  4. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-10-01

    Over 30 years ago U.S. industry introduced the world's highest temperature (1200°F at 5000 psig) and most efficient power plant, the Eddystone coal-burning steam plant. The highest alloy material used in the plant was 316 stainless steel. Problems during the first few years of operation caused a reduction in operating temperature to 1100°F which has generally become the highest temperature used in plants around the world. Leadership in high temperature steam has moved to Japan and Europe over the last 30 years.

  5. High Performance YBCO Films

    DTIC Science & Technology

    1992-07-01

    growing high quality MgO films on SrF2 substrates is the oxygen partial pressure during the growth. The x-ray data presented in Fig. 13 indicates a...fluoride and quartz substrates. The best result with two buffer layers (MgO and YSZ) on SrF2 was an onset temperature (Tc) of 82K and a transition...With a YSZ buffer an onset temperature of 85K and a transition width of 5K was achieved. Recent success was demonstrated by Neocera ( under a NASA

  6. High Voltage SPT Performance

    NASA Technical Reports Server (NTRS)

    Manzella, David; Jacobson, David; Jankovsky, Robert

    2001-01-01

    A 2.3 kW stationary plasma thruster designed to operate at high voltage was tested at discharge voltages between 300 and 1250 V. Discharge specific impulses between 1600 and 3700 sec were demonstrated with thrust between 40 and 145 mN. Test data indicated that discharge voltage can be optimized for maximum discharge efficiency. The optimum discharge voltage was between 500 and 700 V for the various anode mass flow rates considered. The effect of operating voltage on optimal magnet field strength was investigated. The effect of cathode flow rate on thruster efficiency was considered for an 800 V discharge.

  7. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  8. Children Can Accurately Monitor and Control Their Number-Line Estimation Performance

    ERIC Educational Resources Information Center

    Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.

    2016-01-01

    Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…

  9. High performance alloy electroforming

    NASA Technical Reports Server (NTRS)

    Malone, G. A.; Winkelman, D. M.

    1989-01-01

    Electroformed copper and nickel are used in structural applications for advanced propellant combustion chambers. An improved process has been developed by Bell Aerospace Textron, Inc. wherein electroformed nickel-manganese alloy has demonstrated superior mechanical and thermal stability when compared to previously reported deposits from known nickel plating processes. Solution chemistry and parametric operating procedures are now established, and material property data have been established for deposition of thick, large, complex shapes such as the Space Shuttle Main Engine. The critical operating variables are those governing the ratio of codeposited nickel and manganese. The deposition uniformity, which in turn affects the manganese concentration distribution, is affected by solution resistance and geometric effects as well as solution agitation. The manganese concentration in the deposit must be between 2000 and 3000 ppm for optimum physical properties to be realized. The study also includes data regarding deposition procedures for achieving excellent bond strength at an interface with copper, nickel-manganese or INCONEL 718. Applications for this electroformed material include fabrication of complex or re-entry shapes which would be difficult or impossible to form from high strength alloys such as INCONEL 718.

  10. Defining allowable physical property variations for high accurate measurements on polymer parts

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Sonne, M. R.; Madruga, D. G.; De Chiffre, L.; Hattel, J. H.

    2016-06-01

    Measurement conditions and material properties have a significant impact on the dimensions of a part, especially for polymer parts. Temperature variation causes part deformations that increase the uncertainty of the measurement process. Current industrial tolerances of a few micrometres demand highly accurate measurements in non-controlled ambient conditions. Most polymer parts are manufactured by injection moulding and their inspection is carried out after stabilization, around 200 hours. The overall goal of this work is to reach an uncertainty of ±5 μm in measurements of polymer products, which is a challenge in today's production and metrology environments. The residual deformations in polymer products at room temperature after injection moulding are important when micrometre accuracy needs to be achieved. Numerical modelling can give valuable insight into what is happening in the polymer during cooling after injection moulding. In order to obtain accurate simulations, accurate inputs to the model are crucial. In reality, however, the material and physical properties will have some variations. Although these variations may be small, they can act as a source of uncertainty for the measurement. In this paper, we investigated how large the variations in material and physical properties can be while still reaching the 5 μm uncertainty target.

  11. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  12. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  13. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions for Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  14. A highly accurate wireless digital sun sensor based on profile detecting and detector multiplexing technologies

    NASA Astrophysics Data System (ADS)

    Wei, Minsong; Xing, Fei; You, Zheng

    2017-01-01

    The rapid growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed; it transforms a two-dimensional image into two linear profile outputs so that a high update rate can be realized at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method and improve the accuracy of the sun sensor within a large field of view (FOV). A FOV determination principle based on the concept of FOV regions was also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, was used to achieve wireless and self-powered functionality. The prototype sun sensor is approximately 10 times smaller in size and weight than the conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be flexibly mounted on a micro- or nano-satellite, especially for highly accurate remote sensing applications.

  15. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    PubMed

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described.

  16. Highly Accurate Structure-Based Prediction of HIV-1 Coreceptor Usage Suggests Intermolecular Interactions Driving Tropism.

    PubMed

    Kieslich, Chris A; Tamamis, Phanourios; Guzman, Yannis A; Onel, Melis; Floudas, Christodoulos A

    2016-01-01

    HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/.
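
    The prediction pipeline described above rests on a nonlinear support vector machine evaluated by cross-validated area under the ROC curve. The sketch below shows that generic machinery on synthetic features; the paper's actual inputs, structure-derived V3-loop:coreceptor interaction energies and sequence-rule features, are not reproduced here.

```python
# Minimal sketch: RBF-kernel SVM evaluated by 10-fold cross-validated ROC AUC.
# Synthetic features stand in for structure-derived interaction energies.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_features = 400, 25                   # e.g. residue-pair energy terms
X = rng.normal(size=(n_samples, n_features))
w = rng.normal(size=n_features)
y = (X @ w + 0.5 * rng.normal(size=n_samples) > 0).astype(int)  # synthetic binary labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```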

  17. Simplified yet highly accurate enzyme kinetics for cases of low substrate concentrations.

    PubMed

    Härdin, Hanna M; Zagaris, Antonios; Krab, Klaas; Westerhoff, Hans V

    2009-10-01

    Much of enzyme kinetics builds on simplifications enabled by the quasi-steady-state approximation and is highly useful when the concentration of the enzyme is much lower than that of its substrate. However, in vivo, this condition is often violated. In the present study, we show that, under conditions of realistic yet high enzyme concentrations, the quasi-steady-state approximation may readily be off by more than a factor of four when predicting concentrations. We then present a novel extension of the quasi-steady-state approximation based on the zero-derivative principle, which requires considerably less theoretical work than did previous such extensions. We show that the first-order zero-derivative principle already describes the true enzyme dynamics much more accurately at enzyme concentrations close to the concentration of their substrates. This should be particularly relevant for enzyme kinetics where the substrate is an enzyme, such as in phosphorelay and mitogen-activated protein kinase pathways. We illustrate this for the important example of the phosphotransferase system involved in glucose uptake, metabolism and signaling. We find that this system, with a potential complexity of nine dimensions, can be understood accurately using the first-order zero-derivative principle in terms of the behavior of a single variable with all other concentrations constrained to follow that behavior.
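
    The failure mode discussed above is easy to reproduce numerically: when the enzyme concentration approaches the substrate concentration, the standard quasi-steady-state (Michaelis-Menten) rate law drifts away from the full mass-action kinetics. The sketch below makes that comparison with assumed rate constants; the zero-derivative-principle extension introduced in the paper is not implemented here.

```python
# Minimal sketch: full mass-action enzyme kinetics E + S <-> C -> E + P versus
# the standard quasi-steady-state (Michaelis-Menten) approximation, at an
# enzyme concentration comparable to the substrate concentration.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 1.0, 0.5          # mass-action rate constants (assumed)
E0, S0 = 1.0, 1.0                    # enzyme ~ substrate: QSSA is strained here
Km = (km1 + k2) / k1

def full(t, y):                      # y = [S, C]
    S, C = y
    E = E0 - C
    return [-k1 * E * S + km1 * C, k1 * E * S - (km1 + k2) * C]

def qssa(t, y):                      # y = [S]
    S = y[0]
    return [-k2 * E0 * S / (Km + S)]

t_eval = np.linspace(0.0, 20.0, 201)
sol_full = solve_ivp(full, (0.0, 20.0), [S0, 0.0], t_eval=t_eval, rtol=1e-8)
sol_qssa = solve_ivp(qssa, (0.0, 20.0), [S0], t_eval=t_eval, rtol=1e-8)

err = np.max(np.abs(sol_full.y[0] - sol_qssa.y[0]))
print("max |S_full - S_QSSA| =", round(err, 3), "for S0 =", S0)
```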

  18. Highly accurate measurements of the spontaneous fission half-life of 240,242Pu

    NASA Astrophysics Data System (ADS)

    Salvador-Castiñeira, P.; Bryś, T.; Eykens, R.; Hambsch, F.-J.; Moens, A.; Oberstedt, S.; Sibbens, G.; Vanleeuw, D.; Vidali, M.; Pretel, C.

    2013-12-01

    Fast spectrum neutron-induced fission cross-section data for transuranic isotopes are of special demand from the nuclear data community. In particular highly accurate data are needed for the new generation IV nuclear applications. The aim is to obtain precise neutron-induced fission cross sections for 240Pu and 242Pu. To do so, accurate data on spontaneous fission half-lives must be available. Also, minimizing uncertainties in the detector efficiency is a key point. We studied both isotopes by means of a twin Frisch-grid ionization chamber with the goal of improving the present data on the neutron-induced fission cross section. For the two plutonium isotopes the high α-particle decay rates pose a particular problem to experiments due to piling-up events in the counting gas. Argon methane and methane were employed as counting gases, the latter showed considerable improvement in signal generation due to its higher drift velocity. The detection efficiency for both samples was determined, and improved spontaneous fission half-lives were obtained with very low statistical uncertainty (0.13% for 240Pu and 0.04% for 242Pu): for 240Pu, T1/2,SF=1.165×1011 yr (1.1%), and for 242Pu, T1/2,SF=6.74×1010 yr (1.3%). Systematic uncertainties are due to sample mass (0.4% for 240Pu and 0.9% for 242Pu) and efficiency (1%).

  19. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  20. High- and low-pressure pneumotachometers measure respiration rates accurately in adverse environments

    NASA Technical Reports Server (NTRS)

    Fagot, R. J.; Mc Donald, R. T.; Roman, J. A.

    1968-01-01

    Respiration-rate transducers in the form of pneumotachometers measure respiration rates of pilots operating high performance research aircraft. In each low pressure or high pressure oxygen system a sensor is placed in series with the pilots oxygen supply line to detect gas flow accompanying respiration.

  1. A fast and accurate algorithm for high-frequency trans-ionospheric path length determination

    NASA Astrophysics Data System (ADS)

    Wijaya, Dudy D.

    2015-12-01

    This paper presents a fast and accurate algorithm for high-frequency trans-ionospheric path length determination. The algorithm is merely based on the solution of the Eikonal equation that is solved using the conformal theory of refraction. The main advantages of the algorithm are summarized as follows. First, the algorithm can determine the optical path length without iteratively adjusting both elevation and azimuth angles and, hence, the computational time can be reduced. Second, for the same elevation and azimuth angles, the algorithm can simultaneously determine the phase and group of both ordinary and extra-ordinary optical path lengths for different frequencies. Results from numerical simulations show that the computational time required by the proposed algorithm to accurately determine 8 different optical path lengths is almost 17 times faster than that required by a 3D ionospheric ray-tracing algorithm. It is found that the computational time to determine multiple optical path lengths is the same with that for determining a single optical path length. It is also found that the proposed algorithm is capable of determining the optical path lengths with millimeter level of accuracies, if the magnitude of the squared ratio of the plasma frequency to the transmitted frequency is less than 1.33× 10^{-3}, and hence the proposed algorithm is applicable for geodetic applications.

  2. Use of Monocrystalline Silicon as Tool Material for Highly Accurate Blanking of Thin Metal Foils

    SciTech Connect

    Hildering, Sven; Engel, Ulf; Merklein, Marion

    2011-05-04

    The trend towards miniaturisation of metallic mass production components combined with increased component functionality is still unbroken. Manufacturing these components by forming and blanking offers economical and ecological advantages combined with the needed accuracy. The complexity of producing tools with geometries below 50 μm by conventional manufacturing methods becomes disproportionately higher. Expensive serial finishing operations are required to achieve an adequate surface roughness combined with accurate geometry details. A novel approach for producing such tools is the use of advanced etching technologies for monocrystalline silicon that are well-established in the microsystems technology. High-precision vertical geometries with a width down to 5 μm are possible. The present study shows a novel concept using this potential for the blanking of thin copper foils with monocrystalline silicon as a tool material. A self-contained machine-tool with compact outer dimensions was designed to avoid tensile stresses in the brittle silicon punch by an accurate, careful alignment of the punch, die and metal foil. A microscopic analysis of the monocrystalline silicon punch shows appropriate properties regarding flank angle, edge geometry and surface quality for the blanking process. Using a monocrystalline silicon punch with a width of 70 μm, blanking experiments on as-rolled copper foils with a thickness of 20 μm demonstrate the general applicability of this material for micro production processes.

  3. High Performance Flexible Thermal Link

    NASA Astrophysics Data System (ADS)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high performance and flexible carbon fibre thermal link. The project goal was to design a space qualified thermal link combining low mass, flexibility and high thermal conductivity with new approaches regarding selected materials and processes. The idea was to combine the advantages of existing metallic links regarding flexibility and the thermal performance of highly conductive carbon pitch fibres. Special focus is laid on the thermal performance improvement of matrix systems by means of nano-scaled carbon materials in order to improve the thermal performance also perpendicular to the direction of the unidirectional fibres. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix and performing the implementation into an interface component using unconventional process steps like thermal bonding of fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  4. Can Young Children Be More Accurate Predictors of Their Recall Performance?

    ERIC Educational Resources Information Center

    Lipko-Speed, Amanda R.

    2013-01-01

    Preschoolers persistently predict that they will perform better than they actually can perform on a picture recall task. The current investigation sought to explore a condition under which young children might be able to improve their predictive accuracy. Namely, children were asked to predict their recall twice for the same set of items.…

  5. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  6. Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events

    PubMed Central

    Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420

  7. High-accurate optical vector analysis based on optical single-sideband modulation

    NASA Astrophysics Data System (ADS)

    Xue, Min; Pan, Shilong

    2016-11-01

    Most of the efforts devoted to the area of optical communications were on the improvement of the optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve sub-Hz resolution. However, the accuracy is restricted by the measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical conversion and optical-to-electrical conversion are essentially required to achieve high-resolution frequency sweeping and extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the influence of the unwanted-sideband induced measurement errors and techniques for implementing high-accurate OSSB-based OVAs are discussed.

  8. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  9. Multilayer high performance insulation materials

    NASA Technical Reports Server (NTRS)

    Stuckey, J. M.

    1971-01-01

    A number of tests are required to evaluate both multilayer high performance insulation samples and the materials that comprise them. Some of the techniques and tests being employed for these evaluations and some of the results obtained from thermal conductivity tests, outgassing studies, effect of pressure on layer density tests, hypervelocity impact tests, and a multilayer high performance insulation ambient storage program at the Kennedy Space Center are presented.

  10. Conservative high-order-accurate finite-difference methods for curvilinear grids

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Chakrvarthy, Sukumar

    1993-01-01

    Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form of the conservation equations. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher order accuracy leads to increased computational efficiency.
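
    For the differential-form approach discussed above, a conservative flux-form discretization is the key ingredient: interface fluxes are defined so that their difference reproduces a high-order derivative approximation while discrete conservation holds by telescoping. The sketch below illustrates this with the classical fourth-order central formula on a periodic 1D grid; it does not include the paper's curvilinear-grid metrics, boundary modifications, or TVD mechanism.

```python
# Minimal sketch: conservative, fourth-order flux-form finite difference on a
# periodic 1D grid. The interface flux
#   F_{i+1/2} = (7*(f_i + f_{i+1}) - (f_{i-1} + f_{i+2})) / 12
# differenced across a cell reproduces the classical fourth-order central
# derivative, while sum over the grid telescopes to zero (discrete conservation).
import numpy as np

def flux_divergence(f, dx):
    """Fourth-order conservative approximation of d f / d x (periodic)."""
    F = (7.0 * (f + np.roll(f, -1)) - (np.roll(f, 1) + np.roll(f, -2))) / 12.0
    return (F - np.roll(F, 1)) / dx            # (F_{i+1/2} - F_{i-1/2}) / dx

for N in (32, 64, 128):
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    err = np.max(np.abs(flux_divergence(np.sin(x), dx) - np.cos(x)))
    print(f"N={N:4d}  max error={err:.2e}")    # error drops ~16x per grid doubling
```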

  11. High-precision topography measurement through accurate in-focus plane detection with hybrid digital holographic microscope and white light interferometer module.

    PubMed

    Liżewski, Kamil; Tomczewski, Sławomir; Kozacki, Tomasz; Kostencka, Julianna

    2014-04-10

    High-precision topography measurement of micro-objects using interferometric and holographic techniques can be realized provided that the in-focus plane of an imaging system is very accurately determined. Therefore, in this paper we propose an accurate technique for in-focus plane determination, which is based on coherent and incoherent light. The proposed method consists of two major steps. First, a calibration of the imaging system with an amplitude object is performed with a common autofocusing method using coherent illumination, which allows for accurate localization of the in-focus plane position. In the second step, the position of the detected in-focus plane with respect to the imaging system is measured with white light interferometry. The obtained distance is used to accurately adjust a sample with the precision required for the measurement. The experimental validation of the proposed method is given for measurement of high-numerical-aperture microlenses with subwavelength accuracy.

  12. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  13. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
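
    The two ingredients described in this record (and its companion above) are a datasheet-driven panel model and MPPT algorithms exercised against it. The sketch below pairs a generic single-diode panel model, solved numerically for current at each voltage, with a basic perturb-and-observe tracker; the parameters are placeholders rather than those of the referenced commercial panel, and the paper's parameter-extraction procedure is not reproduced.

```python
# Minimal sketch: single-diode PV panel model plus a perturb-and-observe MPPT loop.
# All panel parameters are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

# Assumed single-diode parameters at standard test conditions
Iph, I0, Rs, Rsh, n, Ns = 8.2, 1e-9, 0.35, 300.0, 1.2, 60
Vt = 0.02585                                   # thermal voltage per cell [V]

def panel_current(V):
    """Solve I = Iph - I0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh for I."""
    f = lambda I: Iph - I0 * np.expm1((V + I * Rs) / (n * Ns * Vt)) - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, Iph + 1.0)

def panel_power(V):
    return V * panel_current(V)

# Perturb-and-observe MPPT: step the operating voltage, keep the direction
# that increased power, reverse it otherwise.
V, dV = 20.0, 0.2
P_prev = panel_power(V)
for _ in range(200):
    V += dV
    P = panel_power(V)
    if P < P_prev:
        dV = -dV                               # wrong direction: reverse perturbation
    P_prev = P

print(f"P&O operating point: V = {V:.2f} V, P = {panel_power(V):.1f} W")
```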

  14. Using a highly accurate self-stop Cu-CMP model in the design flow

    NASA Astrophysics Data System (ADS)

    Izuha, Kyoko; Sakairi, Takashi; Shibuki, Shunichi; Bora, Monalisa; Hatem, Osama; Ghulghazaryan, Ruben; Strecker, Norbert; Wilson, Jeff; Takeshita, Noritsugu

    2010-03-01

    An accurate model for the self-stop copper chemical mechanical polishing (Cu-CMP) process has been developed using CMP modeling technology from Mentor Graphics. This technology was applied on data from Sony to create and optimize copper electroplating (ECD), Cu-CMP, and barrier metal polishing (BM-CMP) process models. These models take into account layout pattern dependency, long range diffusion and planarization effects, as well as microloading from local pattern density. The developed ECD model accurately predicted erosion and dishing over the entire range of width and space combinations present on the test chip. Then, the results of the ECD model were used as an initial structure to model the Cu-CMP step. Subsequently, the result of Cu-CMP was used for the BM-CMP model creation. The created model was successful in reproducing the measured data, including trends for a broad range of metal width and densities. Its robustness is demonstrated by the fact that it gives acceptable prediction of final copper thickness data although the calibration data included noise from line scan measurements. Accuracy of the Cu-CMP model has a great impact on the prediction results for BM-CMP. This is a critical feature for the modeling of high precision CMP such as self-stop Cu-CMP. Finally, the developed model could successfully extract planarity hotspots that helped identify potential problems in production chips before they were manufactured. The output thickness values of metal and dielectric can be used to drive layout enhancement tools and improve the accuracy of timing analysis.

  15. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, make it vulnerable to spoilage at all stages of production and storage even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in a prediction of the sample age far more accurately than any report in the literature.
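
    The classification step named above, linear discriminant analysis with cross-validation over 2-day age classes, can be sketched generically as below. Synthetic "spectra" with an age-dependent band stand in for the Raman data; the paper's preprocessing and its differential-equation age estimator are not reproduced.

```python
# Minimal sketch: LDA classification of spectra into 2-day age classes,
# evaluated by stratified cross-validation. The spectra below are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
ages = np.repeat([0, 2, 4, 6, 8, 10, 12, 14], 18)       # 2-day classes, 18 spectra each
wavenumbers = np.linspace(600, 1800, 60)

def toy_spectrum(age):
    """Toy spectrum: one band whose intensity grows with age, plus noise."""
    band = np.exp(-((wavenumbers - 1200) / 40.0) ** 2)
    return (1.0 + 0.05 * age) * band + 0.05 * rng.normal(size=wavenumbers.size)

X = np.array([toy_spectrum(a) for a in ages])
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=1)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, ages, cv=cv)
print(f"cross-validated classification accuracy: {scores.mean():.2f}")
```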

  16. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies such as flexible screen arteries and high-performance circumferential distribution wicks within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt meters at an operating temperature from 0 to 100 C.

  17. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…

  18. Can medical students accurately predict their learning? A study comparing perceived and actual performance in neuroanatomy.

    PubMed

    Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott

    2016-10-01

    It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects however this ability in neuroanatomy has yet to be observed. Second year medical students attending neuroanatomy revision sessions at the University of Southampton and the competitors of the National Undergraduate Neuroanatomy Competition were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten item multiple choice question examination and the latter group were compared to their performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly to their performance in their respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, rS = 0.68; MCQ, rS = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.

  19. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    PubMed

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng mL(-1) for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng mL(-1) and 21-1000 ng mL(-1), respectively.

  20. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd degree NURBS curve and can interpolate 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
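
    One plausible way to form and solve such a quartic is sketched below, under the assumption that the chord length covered in one interpolation period should equal the feedrate times the sampling period; the second-order Taylor construction and the placeholder derivative values are illustrative and may differ from the paper's exact formulation:

      # Hedged sketch: a second-order Taylor expansion of the curve C(u) gives
      # |C'(u)*du + 0.5*C''(u)*du^2|^2 = (F*Ts)^2, which is a quartic in du.
      import numpy as np

      def next_parameter_increment(C1, C2, step_length):
          """Smallest positive root of the quartic in du (C1, C2 = curve derivatives)."""
          a4 = 0.25 * np.dot(C2, C2)
          a3 = np.dot(C1, C2)
          a2 = np.dot(C1, C1)
          roots = np.roots([a4, a3, a2, 0.0, -step_length ** 2])
          real_pos = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0]
          return min(real_pos) if real_pos else step_length / np.sqrt(a2)  # 1st-order fallback

      # placeholder derivatives at the current u; feedrate 100 mm/s, Ts = 1 ms
      du = next_parameter_increment(np.array([20.0, 5.0, 0.0]),
                                    np.array([-3.0, 1.0, 0.0]), 100.0 * 1e-3)
      print(du)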

  1. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  2. Assessing temporal flux of plant hormones in stored processing potatoes using high definition accurate mass spectrometry

    PubMed Central

    Ordaz-Ortiz, José Juan; Foukaraki, Sofia; Terry, Leon Alexander

    2015-01-01

    Plant hormones are important molecules which at low concentration can regulate various physiological processes. Mass spectrometry has become a powerful technique for the quantification of multiple classes of plant hormones because of its high sensitivity and selectivity. We developed a new ultrahigh pressure liquid chromatography–full-scan high-definition accurate mass spectrometry method, for simultaneous determination of abscisic acid and four metabolites phaseic acid, dihydrophaseic acid, 7′-hydroxy-abscisic acid and abscisic acid glucose ester, cytokinins zeatin, zeatin riboside, gibberellins (GA1, GA3, GA4 and GA7) and indole-3-acetyl-L-aspartic acid. We measured the amount of plant hormones in the flesh and skin of two processing potato cvs. Sylvana and Russet Burbank stored for up to 30 weeks at 6 °C under ambient air conditions. Herein, we report for the first time that abscisic acid glucose ester seems to accumulate in the skin of potato tubers throughout storage time. The method achieved a lowest limit of detection of 0.22 ng g−1 of dry weight and a limit of quantification of 0.74 ng g−1 dry weight (zeatin riboside), and was able to recover, detect and quantify a total of 12 plant hormones spiked on flesh and skin of potato tubers. In addition, the mass accuracy for all compounds (<5 ppm) was evaluated. PMID:26504563

  3. Assessing temporal flux of plant hormones in stored processing potatoes using high definition accurate mass spectrometry.

    PubMed

    Ordaz-Ortiz, José Juan; Foukaraki, Sofia; Terry, Leon Alexander

    2015-01-01

    Plant hormones are important molecules which at low concentration can regulate various physiological processes. Mass spectrometry has become a powerful technique for the quantification of multiple classes of plant hormones because of its high sensitivity and selectivity. We developed a new ultrahigh pressure liquid chromatography-full-scan high-definition accurate mass spectrometry method, for simultaneous determination of abscisic acid and four metabolites phaseic acid, dihydrophaseic acid, 7'-hydroxy-abscisic acid and abscisic acid glucose ester, cytokinins zeatin, zeatin riboside, gibberellins (GA1, GA3, GA4 and GA7) and indole-3-acetyl-L-aspartic acid. We measured the amount of plant hormones in the flesh and skin of two processing potato cvs. Sylvana and Russet Burbank stored for up to 30 weeks at 6 °C under ambient air conditions. Herein, we report for the first time that abscisic acid glucose ester seems to accumulate in the skin of potato tubers throughout storage time. The method achieved a lowest limit of detection of 0.22 ng g(-1) of dry weight and a limit of quantification of 0.74 ng g(-1) dry weight (zeatin riboside), and was able to recover, detect and quantify a total of 12 plant hormones spiked on flesh and skin of potato tubers. In addition, the mass accuracy for all compounds (<5 ppm) was evaluated.

  4. High resolution DEM from Tandem-X interferometry: an accurate tool to characterize volcanic activity

    NASA Astrophysics Data System (ADS)

    Albino, Fabien; Kervyn, Francois

    2013-04-01

    The Tandem-X mission was launched by the German space agency (DLR) in June 2010. It is a new-generation high resolution SAR sensor mainly dedicated to topographic applications. For the purposes of our research on the volcano-tectonic activity of the Kivu Rift area, a set of Tandem-X bistatic radar images was used to produce a high resolution InSAR DEM of the Virunga Volcanic Province (VVP). The VVP is part of the Western branch of the African rift, situated at the boundary between D.R. Congo, Rwanda and Uganda. It hosts two highly active volcanoes, Nyiragongo and Nyamulagira. A first task concerns the quantitative assessment of the vertical accuracy that can be achieved with these new data. The new DEMs are compared to other spaceborne datasets (SRTM, ASTER) as well as to field measurements from differential GPS. Multi-temporal radar acquisitions allow us to produce several DEMs of the same area. This proved very useful in an active volcanic setting where new geomorphological features (faults, fissures, volcanic cones and lava flows) appear continuously through time. For example, since the year 2000, the time of the SRTM acquisition, there has been one eruption at Nyiragongo (2002) and six eruptions at Nyamulagira (2001, 2002, 2004, 2006, 2010 and 2011), all of which induced large changes in the landscape through the emplacement of new lava fields and scoria cones. The repeated Tandem-X DEM production gives us a tool to identify and quantify, in terms of size and volume, all the topographic changes related to this past volcanic activity. These parameters provide high-value information for improving the understanding of the Virunga volcanoes; accurate estimation of erupted volumes and knowledge of the structural features associated with past eruptions are key to understanding the volcanic system, improving hazard assessment, and ultimately contributing to risk mitigation in a densely populated area.

  5. High performance dielectric materials development

    NASA Astrophysics Data System (ADS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-09-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  6. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  7. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the amount of added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho

  8. Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis

    NASA Astrophysics Data System (ADS)

    Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.

    2013-08-01

    It has already been shown that the shape of the current pulse produced by a SiPM in response to an incident photon is significantly affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading edge discriminator (LED) technique.

  9. Ion chromatography as highly suitable method for rapid and accurate determination of antibiotic fosfomycin in pharmaceutical wastewater.

    PubMed

    Zeng, Ping; Xie, Xiaolin; Song, Yonghui; Liu, Ruixia; Zhu, Chaowei; Galarneau, Anne; Pic, Jean-Stéphane

    2014-01-01

    A rapid and accurate ion chromatography (IC) method (limit of detection as low as 0.06 mg L(-1)) for fosfomycin concentration determination in pharmaceutical industrial wastewater was developed. This method was compared with the performance of high performance liquid chromatography determination (with a high detection limit of 96.0 mg L(-1)) and ultraviolet spectrometry after reaction with alizarin (difficult to perform in colored solutions). The accuracy of the IC method was established in the linear range of 1.0-15.0 mg L(-1), and a linear correlation was found with a correlation coefficient of 0.9998. The recoveries of fosfomycin from industrial pharmaceutical wastewater at spiking concentrations of 2.0, 5.0 and 8.0 mg L(-1) ranged from 81.91 to 94.74%, with a relative standard deviation (RSD) from 1 to 4%. The recoveries from the effluent of a sequencing batch reactor treating fosfomycin with activated sludge, at spiking concentrations of 5.0, 8.0 and 10.0 mg L(-1), ranged from 98.25 to 99.91%, with a RSD from 1 to 2%. The developed IC procedure provides a rapid, reliable and sensitive method for the determination of fosfomycin concentration in industrial pharmaceutical wastewater and in samples containing complex components.

  10. Highly accurate analytic formulae for projectile motion subjected to quadratic drag

    NASA Astrophysics Data System (ADS)

    Turkyilmazoglu, Mustafa

    2016-05-01

    The classical phenomenon of the motion of a projectile fired (thrown) toward the horizon through resistive air that exerts a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. Finding elegant analytical approximations for the most interesting engineering features of the dynamical behavior of the projectile is the principal target. To this end, explicit analytical expressions are derived that accurately predict the maximum height, the time to reach it, and the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted to particular values of the initial speed and firing angle of the object, nor of the drag coefficient of the medium. In combination with the available approximations in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
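
    As a numerical reference for the quantities discussed (not the paper's closed-form approximations), the equations of motion with quadratic drag can be integrated directly; the drag coefficient, launch speed and firing angle below are assumed values:

      # Integrate projectile motion with quadratic drag and read off apex height,
      # apex time and range (illustrative parameters, SI units).
      import numpy as np
      from scipy.integrate import solve_ivp

      g, k = 9.81, 0.02                      # gravity [m/s^2], drag coefficient [1/m]
      v0, angle = 50.0, np.radians(40.0)     # launch speed [m/s] and firing angle

      def rhs(t, s):
          x, y, vx, vy = s
          v = np.hypot(vx, vy)
          return [vx, vy, -k * v * vx, -g - k * v * vy]

      def hit_ground(t, s):                  # stop the integration at ground impact
          return s[1]
      hit_ground.terminal, hit_ground.direction = True, -1

      sol = solve_ivp(rhs, [0, 60], [0, 0, v0 * np.cos(angle), v0 * np.sin(angle)],
                      events=hit_ground, max_step=0.01)
      apex = np.argmax(sol.y[1])
      print("max height:", sol.y[1][apex], "apex time:", sol.t[apex], "range:", sol.y[0][-1])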

  11. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  12. Honey bees can perform accurately directed waggle dances based solely on information from a homeward trip.

    PubMed

    Edrich, Wolfgang

    2015-10-01

    Honey bees were displaced several hundred metres from their hive to an unfamiliar site and provisioned with honey. After feeding, almost two-thirds of the bees flew home to their hive within a 50 min observation time. About half of these returning bees signalled the direction of the release site in waggle dances, thus demonstrating that the dance can be guided entirely by information gathered on a single homeward trip. The likely reason for the bees' enthusiastic dancing on their initial return from this new site was the highly rewarding honeycomb that they were given there. The attractive nature of the site is confirmed by many of these bees revisiting the site and continuing to forage there.

  13. Automated construction of highly accurate meiotic mapping panels for human chromosome 7 using BINS

    SciTech Connect

    Liu, L.; Helms, C.; Dutchik, J.

    1994-09-01

    Development of a set of highly accurate meiotic breakpoint panels for the human genome based on CEPH reference pedigree genotypes and highly informative microsatellite markers will provide a valuable resource for the efficient mapping of new markers and will promote the rapid integration of physical and genetic map information. Key to the development of such a panel is the availability of a reliable set of genotypic data and automated methods for panel construction and verification. We have recently completed construction of comprehensive, microsatellite, and index linkage maps for human chromosome 7 using CEPH pedigree genotypes and CRI-MAP (with odds for marker order of 1000:1). A subset of markers used to build these maps, typed on 40 CEPH families and rigorously checked for errors (e.g. using the Chrompics option of CRI-MAP), was selected to develop a set of meiotic breakpoint panels. The BINS program has been developed to determine the locations of reliable crossovers using primary genotype data for every individual of each pedigree, with the aim of creating crossover mapping panels. BINS utilizes a set of algorithms that parses out reliable and consistent data and uses these data to construct a crossover-based map. BINS has been utilized to construct a primary meiotic mapping panel for human chromosome 7. A graphical display of the breakpoint data provides an easily interpretable image and specifically highlights possible data inconsistencies (e.g. questionable double crossovers). We have used BINS and the CEPH genotypes to construct a preliminary set of panels for chromosome 7. Refinement of the panels is in progress.

  14. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI networks; further, those approaches have inherent disadvantages, such as being time-consuming, expensive, and prone to high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, it is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061

  15. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity

    PubMed Central

    Yee, Lydia T. S.; Hannula, Deborah E.; Tranel, Daniel; Cohen, Neal J.

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than the length of a retention interval. PMID:24478681

  16. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity.

    PubMed

    Yee, Lydia T S; Hannula, Deborah E; Tranel, Daniel; Cohen, Neal J

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than the length of a retention interval.

  17. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  18. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system.

  19. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
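
    A conceptual sketch of the two-channel weighting idea behind the VideoSwitcher: a high-resolution luminance value is split into a coarse component (carried on the heavily weighted blue channel) and a fine component (carried on the lightly weighted red channel). The 128:1 weight and the absence of any monitor calibration are simplifying assumptions, not the device's actual specification:

      # Hypothetical model of combining two 8-bit channels into a ~15-bit grey level.
      def split_luminance(value, w=128):
          """Split a 0..(256*w - 1) grey level into (blue, red) 8-bit drive values."""
          blue, red = divmod(int(value), w)
          return blue, red

      def recombine(blue, red, w=128):
          """Model of the analog mixing: blue carries weight w, red carries weight 1."""
          return blue * w + red

      value = 23456                       # a ~15-bit grey level
      b, r = split_luminance(value)
      assert recombine(b, r) == value
      print(b, r)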

  20. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed.
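
    A minimal fitting sketch for a Weibull-type saccharification curve of the form y(t) = A·(1 − exp(−(t/λ)^n)); the time-course data points below are invented for illustration, and the exact parameterisation may differ in detail from the one used in the paper:

      # Fit a Weibull-type yield curve and report the characteristic time lambda.
      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_yield(t, A, lam, n):
          return A * (1.0 - np.exp(-(t / lam) ** n))

      t = np.array([2, 4, 8, 12, 24, 48, 72.0])        # hydrolysis time [h] (synthetic)
      y = np.array([8, 15, 27, 36, 55, 70, 76.0])      # glucose yield [%] (synthetic)

      (A, lam, n), _ = curve_fit(weibull_yield, t, y, p0=[80, 24, 1])
      print(f"A = {A:.1f} %, characteristic time lambda = {lam:.1f} h, shape n = {n:.2f}")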

  1. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    NASA Astrophysics Data System (ADS)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
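
    The automated matching step described above can be illustrated with a simple residual-minimisation sketch: each computed spectrum is interpolated onto a common wavelength grid together with every experimental spectrum, intensities are normalised, and the library entry with the smallest sum of squared residuals is selected. The spectra here are synthetic single Gaussian bands; the authors' actual scoring and shift correction may differ:

      # Pick the experimental library spectrum that best matches a computed spectrum.
      import numpy as np

      def residual(theory, experiment, grid):
          th = np.interp(grid, theory[:, 0], theory[:, 1])
          ex = np.interp(grid, experiment[:, 0], experiment[:, 1])
          th, ex = th / th.max(), ex / ex.max()        # normalise intensities
          return np.sum((th - ex) ** 2)

      def best_match(theory, library, grid=np.linspace(380.0, 480.0, 1000)):
          scores = [residual(theory, exp, grid) for exp in library]
          return int(np.argmin(scores)), scores

      lam = np.linspace(380.0, 480.0, 200)
      def band(center):                                # toy single-band "spectrum"
          return np.column_stack([lam, np.exp(-(lam - center) ** 2 / 8.0)])

      idx, _ = best_match(band(420.0), [band(405.0), band(421.0), band(450.0)])
      print(idx)                                       # -> 1, the closest library entry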

  2. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
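
    An illustrative Python sketch of the basic RGLM recipe (the authors' implementation is the randomGLM R package; the forward variable selection inside each bag is omitted here for brevity): bootstrap the samples, draw a random feature subspace for each bag, fit a GLM, and average the predicted probabilities across bags:

      # Bagged logistic-regression ensemble with random feature subspaces (toy data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def rglm_fit_predict(X, y, X_new, n_bags=100, n_features=20, seed=0):
          rng = np.random.default_rng(seed)
          preds = np.zeros(len(X_new))
          for _ in range(n_bags):
              rows = rng.integers(0, len(X), size=len(X))            # bootstrap sample
              cols = rng.choice(X.shape[1], size=min(n_features, X.shape[1]), replace=False)
              glm = LogisticRegression(max_iter=1000).fit(X[rows][:, cols], y[rows])
              preds += glm.predict_proba(X_new[:, cols])[:, 1]
          return preds / n_bags                                      # bagged probability

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 50))
      y = (X[:, 0] + X[:, 1] > 0).astype(int)
      print(rglm_fit_predict(X, y, X[:5]))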

  3. In-Depth Glycoproteomic Characterization of γ-Conglutin by High-Resolution Accurate Mass Spectrometry

    PubMed Central

    Schiarea, Silvia; Arnoldi, Lolita; Fanelli, Roberto; De Combarieu, Eric; Chiabrando, Chiara

    2013-01-01

    The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography–multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl) (Fuc) GlcNAc2, Man3(Xyl) (Fuc) GlcNAc2, GlcNAcMan3(Xyl) (Fuc) GlcNAc2 and GlcNAc 2Man3(Xyl) (Fuc) GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, that were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit. PMID:24069245

  4. Highly accurate spatial mode generation using spatial cross modulation method for mode division multiplexing

    NASA Astrophysics Data System (ADS)

    Sakuma, Hiroki; Okamoto, Atsushi; Shibukawa, Atsushi; Goto, Yuta; Tomita, Akihisa

    2016-02-01

    We propose a spatial mode generation technology using spatial cross modulation (SCM) for mode division multiplexing (MDM). The most well-known method for generating arbitrary complex amplitude fields is to display an off-axis computer-generated hologram (CGH) on a spatial light modulator (SLM). However, in this method, the desired complex amplitude field is obtained with the first order diffraction light, which critically lowers the light utilization efficiency. In the SCM, on the other hand, the desired complex field is provided with the zeroth order diffraction light. For this reason, our technology can generate spatial modes with large light utilization efficiency in addition to high accuracy. In this study, first, a numerical simulation was performed to verify that the SCM is applicable to spatial mode generation. Next, we compared our technology with a technology based on an off-axis amplitude hologram, a representative complex amplitude generation method, in terms of coupling efficiency and light utilization efficiency. The simulation results showed that our technology can achieve considerably higher light utilization efficiency while maintaining a coupling efficiency comparable to that of the off-axis amplitude hologram. Finally, we performed an experiment on spatial mode generation using the SCM. The experimental results showed that our technology has great potential to realize spatial mode generation with high accuracy.

  5. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. This first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available for using these tools and technologies to help in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environment.

  6. Accurate crab cavity modeling for the high luminosity Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Brett, D. R.; Appleby, R. B.; De Maria, R.; Garcia, J. Barranco; Garcia, R. Tomás; Hall, B.; Burt, G.

    2014-10-01

    As part of the Large Hadron Collider high luminosity upgrade it is proposed to include crab cavities in the lattice in order to enhance the luminosity. For one proposed cavity design the dynamics of the cavity is considered in terms of its impact upon the dynamic aperture of the machine. Taylor maps of the cavity are created and used to perform this analysis with a full assessment of their validity. Furthermore from these Taylor maps, symplectic methods are developed further, guided by the knowledge gained in the study of the physics contained in them.

  7. Highly-accurate metabolomic detection of early-stage ovarian cancer

    PubMed Central

    Gaul, David A.; Mezencev, Roman; Long, Tran Q.; Jones, Christina M.; Benigno, Benedict B.; Gray, Alexander; Fernández, Facundo M.; McDonald, John F.

    2015-01-01

    High performance mass spectrometry was employed to interrogate the serum metabolome of early-stage ovarian cancer (OC) patients and age-matched control women. The resulting spectral features were used to establish a linear support vector machine (SVM) model of sixteen diagnostic metabolites that are able to identify early-stage OC with 100% accuracy in our patient cohort. The results provide evidence for the importance of lipid and fatty acid metabolism in OC and serve as the foundation of a clinically significant diagnostic test. PMID:26573008
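
    A sketch of the kind of model described above: a linear support vector machine over a small metabolite panel, with cross-validated accuracy. The feature matrix, cohort size and injected group difference are synthetic stand-ins for the sixteen diagnostic metabolites, not the study's data:

      # Linear SVM over 16 synthetic "metabolite" features with 5-fold cross-validation.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(92, 16))          # 46 synthetic cases + 46 synthetic controls
      y = np.repeat([1, 0], 46)
      X[y == 1, :4] += 1.5                   # give the cases a detectable signature

      model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      print(cross_val_score(model, X, y, cv=5).mean())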

  8. High performance pyroelectric infrared detector

    NASA Astrophysics Data System (ADS)

    Hu, Xu; Luo, Haosu; Ji, Yulong; Yang, Chunli

    2015-10-01

    Single-element infrared detectors made with the relaxor ferroelectric crystal PMNT exhibit excellent performance. This paper covers the detector capacitance, the frequency-response characteristic and the detectivity characteristic. The measurement results show that the detectivity of detectors made with the relaxor ferroelectric crystal (PMNT) is more than three times that of detectors made with LT, with D* exceeding 1×10(9) cm Hz(0.5) W(-1). The detector will be applied in NDIR spectrographs, FFT spectrographs and similar instruments. The development of this high performance pyroelectric infrared detector will broaden the application area of infrared detectors.

  9. Toward high performance graphene fibers.

    PubMed

    Chen, Li; He, Yuling; Chai, Songgang; Qiang, Hong; Chen, Feng; Fu, Qiang

    2013-07-07

    Two-dimensional graphene and graphene-based materials have attracted tremendous interest, and much attention has been drawn to exploring and applying their exceptional characteristics and properties. Integration of graphene sheets into macroscopic fibers is a very important route to their application and has received increasing interest. In this study, neat, macroscopic graphene fibers were continuously spun from graphene oxide (GO) suspensions followed by chemical reduction. By varying the wet-spinning conditions, a series of graphene fibers was prepared, and the structural features and the mechanical and electrical performance of the fibers were investigated. We found that the orientation of the graphene sheets, the interaction between graphene sheets within the fibers, and the defects in the fibers have a pronounced effect on the properties of the fibers. Graphene fibers with excellent mechanical and electrical properties will yield great advances in high-tech applications. These findings provide guidance for the future production of high performance graphene fibers.

  10. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  11. High-performance sports medicine.

    PubMed

    Speed, Cathy

    2013-02-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition. The physician has a broad remit and acts as a 'medical guardian' to optimise health while minimising risks. This review describes this interesting field of medicine, its unique challenges and priorities for the physician in delivering best healthcare.

  12. Reduced Toxicity High Performance Monopropellant

    DTIC Science & Technology

    2011-09-01

    Slide excerpts (fragmentary): propellant performance characteristics for LMP-103S, AF-M315E and hydrazine — flame temperatures of 1600 °C, 1900 °C and 600 °C, respectively; Isp 252 s (theoretical), 235 s. Compatibility and handling notes cover thruster materials compatibility and detonation testing (gauges indicate >103 kPa at 50 ft; fragments thrown >185 m; punched hole in end cap). Distribution A: approved for public release; distribution unlimited.

  13. High-performance permanent magnets.

    PubMed

    Goll, D; Kronmüller, H

    2000-10-01

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE = Nd, Pr, Sm) with transition metals (TM = Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of > 15 compared with conventional ferrite pms, therefore contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.

  14. High-performance permanent magnets

    NASA Astrophysics Data System (ADS)

    Goll, D.; Kronmüller, H.

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE=Nd, Pr, Sm) with transition metals (TM=Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of >15 compared with conventional ferrite pms therefore contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.

  15. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically when we used to manufacture semiconductor devices for 45 nm or above design rules, IC manufacturing yield was mainly determined by global random variations and therefore the chip manufacturers / manufacturing team were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2] and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners and no relief is expected from the equipment side to prevent / eliminate these systematic hotspots. Hence we have seen a lot of design houses coming up with innovative design products to check hotspots based on model based lithography checks to validate design manufacturability, which will also account for complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a., weak points) are especially seen on Back End of the Line (BEOL) process levels like Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging as there are lots of wafer noises or nuisances that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noises. This new approach is a paradigm shift in wafer inspection

  16. Design and highly accurate 3D displacement characterization of monolithic SMA microgripper using computer vision

    NASA Astrophysics Data System (ADS)

    Bellouard, Yves; Sulzmann, Armin; Jacot, Jacques; Clavel, Reymond

    1998-01-01

    In the robotics field, several grippers have been developed using SMA technologies, but so far SMA has only been used as the actuating part of the mechanical device. However, mechanical devices require assembly and, in some cases, this means friction. In the case of micro-grippers, this becomes a major problem due to the small size of the components. In this paper, a new monolithic concept of micro-gripper is presented. This concept is applied to the grasping of sub-millimeter optical elements such as Selfoc lenses and the fastening of optical fibers. Measurements are performed using a newly developed high precision 3D computer vision tracking system to characterize the spatial positions of the micro-gripper in action. To characterize the relative motion of the micro-gripper, the natural texture of the micro-gripper is used to compute the 3D displacement. The microscope CCD receives high-frequency changes in light intensity from the surface of the gripper. Using high resolution camera calibration, passive autofocus algorithms and 2D object recognition, the position of the micro-gripper can be characterized in the 3D workspace and it can be guided in future micro-assembly tasks.

  17. On high-order accurate weighted essentially non-oscillatory and discontinuous Galerkin schemes for compressible turbulence simulations.

    PubMed

    Shu, Chi-Wang

    2013-01-13

    In this article, we give a brief overview on high-order accurate shock capturing schemes with the aim of applications in compressible turbulence simulations. The emphasis is on the basic methodology and recent algorithm developments for two classes of high-order methods: the weighted essentially non-oscillatory and discontinuous Galerkin methods.

  18. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  19. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    NASA Astrophysics Data System (ADS)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and expensive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, moreover offering a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flooding, requires modelling the behavior of different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps
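
    For illustration only, the sketch below shows the kind of pixel-wise Bayesian fusion such a tool performs: combining a SAR backscatter observation with ancillary elevation data into a posterior flood probability. The network structure, variable names and probability values are invented placeholders, not the networks or parameters used in the paper.

```python
# Minimal sketch of pixel-wise Bayesian fusion for flood mapping.
# All structure and probability values below are illustrative assumptions.

# Prior probability that a pixel is flooded
P_FLOOD = {True: 0.2, False: 0.8}
# P(low SAR backscatter | flood state): open water usually appears dark in SAR
P_LOW_BACKSCATTER = {True: 0.85, False: 0.15}
# P(low terrain elevation | flood state): flooded pixels tend to lie low
P_LOW_ELEVATION = {True: 0.90, False: 0.40}

def posterior_flood(low_backscatter: bool, low_elevation: bool) -> float:
    """P(flood | observations), assuming the two pieces of evidence are
    conditionally independent given the flood state (naive-Bayes-style fusion)."""
    def joint(flood: bool) -> float:
        p_b = P_LOW_BACKSCATTER[flood] if low_backscatter else 1 - P_LOW_BACKSCATTER[flood]
        p_e = P_LOW_ELEVATION[flood] if low_elevation else 1 - P_LOW_ELEVATION[flood]
        return P_FLOOD[flood] * p_b * p_e
    return joint(True) / (joint(True) + joint(False))

# A dark SAR pixel in a low-lying area is very likely flooded ...
print(f"P(flood | dark, low)    = {posterior_flood(True, True):.2f}")
# ... while a bright pixel on high ground is very unlikely to be.
print(f"P(flood | bright, high) = {posterior_flood(False, False):.2f}")
```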

  20. The Influence Relevance Voter: An Accurate And Interpretable Virtual High Throughput Screening Method

    PubMed Central

    Swamidass, S. Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Sheryl; Baldi, Pierre

    2009-01-01

    Given activity training data from High-Throughput Screening (HTS) experiments, virtual High-Throughput Screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by non-linearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also non-linearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open competitions, and its performance compared to the performance of other participating methods, as well as of an in-house Support Vector Machine (SVM) method. On these benchmark datasets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have probabilistic semantics; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS. PMID:19391629
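
    As a rough illustration of the idea described above (not the authors' implementation), the sketch below scores a query fingerprint by non-linearly combining the influences of its k nearest training neighbors, each influence being a relevance term times a class vote. The Tanimoto similarity, the weight values and the toy data are all assumptions.

```python
# Minimal sketch of an Influence Relevance Voter (IRV)-style predictor.
# Similarity measure, weights and toy data are illustrative assumptions;
# the published IRV learns its parameters from training data.
import numpy as np

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto similarity between two binary fingerprints."""
    both = np.sum(a & b)
    either = np.sum(a | b)
    return both / either if either else 0.0

def irv_predict(x, train_X, train_y, k=5, w_sim=2.0, w_rank=-0.1, w_vote=1.5, bias=-1.0):
    """Score a query by combining its k nearest training neighbors.
    Each neighbor's influence = relevance(similarity, rank) * vote(label)."""
    sims = np.array([tanimoto(x, t) for t in train_X])
    order = np.argsort(-sims)[:k]                              # k most similar training examples
    total = bias
    for rank, idx in enumerate(order):
        relevance = np.tanh(w_sim * sims[idx] + w_rank * rank)  # non-linear relevance term
        vote = w_vote if train_y[idx] == 1 else -w_vote         # active vs. inactive vote
        total += relevance * vote                               # influence of this neighbor
    return 1.0 / (1.0 + np.exp(-total))                        # logistic output, probabilistic score

# Tiny toy example with 8-bit fingerprints (hypothetical data)
rng = np.random.default_rng(0)
train_X = rng.integers(0, 2, size=(20, 8))
train_y = rng.integers(0, 2, size=20)
query = rng.integers(0, 2, size=8)
print(f"Predicted activity probability: {irv_predict(query, train_X, train_y):.2f}")
```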

  1. High Performance Pulse Tube Cryocoolers

    NASA Astrophysics Data System (ADS)

    Olson, J. R.; Roth, E.; Champagne, P.; Evtimov, B.; Nast, T. C.

    2008-03-01

    Lockheed Martin's Advanced Technology Center has been developing pulse tube cryocoolers for more than ten years. Recent innovations include successful testing of four-stage coldheads, no-load temperature below 4 K, and the recent development of a high-efficiency compressor. This paper discusses the predicted performance of single and multiple stage pulse tube coldheads driven by our new 6 kg "M5Midi" compressor, which is capable of 90% efficiency with 200 W input power, and a maximum input power of 1000 W. This compressor retains the simplicity of earlier LM-ATC compressors: it has a moving magnet and an external electrical coil, minimizing organics in the working gas and requiring no electrical penetrations through the pressure wall. Motor losses were minimized during design, resulting in a simple, easily-manufactured compressor with state-of-the-art motor efficiency. The predicted cryocooler performance is presented as simple formulae, allowing an engineer to include the impact of a highly-optimized cryocooler into a full system analysis. Performance is given as a function of the heat rejection temperature and the cold tip temperatures and cooling loads.

  2. Aptamer-Conjugated Graphene Oxide Membranes for Highly Efficient Capture and Accurate Identification of Multiple Types of Circulating Tumor Cells

    PubMed Central

    2016-01-01

    Tumor metastasis is responsible for 1 in 4 deaths in the United States. Though it has been well-documented over the past two decades that circulating tumor cells (CTCs) in blood can be used as a biomarker for metastatic cancer, there are enormous challenges in capturing and identifying CTCs with sufficient sensitivity and specificity. Because of the heterogeneous expression of CTC markers, it is now well understood that a single CTC marker is insufficient to capture all CTCs from the blood. Driven by the clear need, this study reports for the first time highly efficient capture and accurate identification of multiple types of CTCs from infected blood using aptamer-modified porous graphene oxide membranes. The results demonstrate that dye-modified S6, A9, and YJ-1 aptamers attached to 20–40 μm porous graphene oxide membranes are capable of capturing multiple types of tumor cells (SKBR3 breast cancer cells, LNCaP prostate cancer cells, and SW-948 colon cancer cells) selectively and simultaneously from infected blood. Our results show that the capture efficiency of the graphene oxide membranes is ∼95% for multiple types of tumor cells at a concentration of 10 tumor cells per milliliter of blood sample. The selectivity of our assay for capturing targeted tumor cells has been demonstrated using membranes without an antibody. Blood infected with different cells also has been used to demonstrate the targeted tumor cell capturing ability of aptamer-conjugated membranes. Our data also demonstrate that accurate analysis of multiple types of captured CTCs can be performed using multicolor fluorescence imaging. Aptamer-conjugated membranes reported here have good potential for the early diagnosis of diseases that are currently being detected by means of cell capture technologies. PMID:25565372

  3. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance failures.

  4. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  5. Accurate and efficient correction of adjacency effects for high resolution imagery: comparison to the Lambertian correction for Landsat

    NASA Astrophysics Data System (ADS)

    Sei, Alain

    2016-10-01

    The state of the art of atmospheric correction for moderate resolution and high resolution sensors is based on assuming that the surface reflectance at the bottom of the atmosphere is uniform. This assumption accounts for multiple scattering but ignores the contribution of neighboring pixels; that is, it ignores adjacency effects. Its great advantage, however, is to substantially reduce the computational cost of performing atmospheric correction and make the problem computationally tractable. In a recent paper (Sei, 2015), a computationally efficient method was introduced for the correction of adjacency effects through the use of fast FFT-based evaluations of singular integrals and the use of analytic continuation. It was shown that divergent Neumann series can be avoided and accurate results can be obtained for clear and turbid atmospheres. We analyze in this paper the error of the standard state-of-the-art Lambertian atmospheric correction method on Landsat imagery and compare it to our newly introduced method. We show that for high contrast scenes the state-of-the-art atmospheric correction yields much larger errors than our method.
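
    For orientation, the adjacency (environment) term is essentially a neighborhood average of the surface reflectance weighted by an atmospheric point-spread function, which is what makes FFT-based evaluation attractive. The sketch below computes such an environment image by circular convolution with a normalized Gaussian PSF; the Gaussian shape and its width are illustrative assumptions and only stand in for the singular kernels treated in the paper.

```python
# Minimal sketch of an FFT-based evaluation of the neighborhood-averaged
# ("environment") reflectance used in adjacency-effect corrections.
# The Gaussian PSF and its width are illustrative assumptions.
import numpy as np

def environment_reflectance(reflectance: np.ndarray, psf_sigma_px: float) -> np.ndarray:
    """Circularly convolve a reflectance image with a normalized Gaussian PSF via FFT."""
    ny, nx = reflectance.shape
    y = np.fft.fftfreq(ny)[:, None] * ny      # pixel offsets in wrap-around ordering
    x = np.fft.fftfreq(nx)[None, :] * nx
    psf = np.exp(-(x**2 + y**2) / (2.0 * psf_sigma_px**2))
    psf /= psf.sum()                          # normalize so a flat scene is unchanged
    return np.real(np.fft.ifft2(np.fft.fft2(reflectance) * np.fft.fft2(psf)))

# Toy scene: a bright square on a dark background (a high-contrast case)
scene = np.zeros((128, 128))
scene[48:80, 48:80] = 0.6
env = environment_reflectance(scene, psf_sigma_px=10.0)
print(f"target pixel: {scene[64, 64]:.2f}, neighborhood average: {env[64, 64]:.2f}")
```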

  6. A systematic approach for the accurate and rapid measurement of water vapor transmission through ultra-high barrier films

    NASA Astrophysics Data System (ADS)

    Kiese, Sandra; Kücükpinar, Esra; Reinelt, Matthias; Miesbauer, Oliver; Ewender, Johann; Langowski, Horst-Christian

    2017-02-01

    Flexible organic electronic devices are often protected from degradation by encapsulation in multilayered films with very high barrier properties against moisture and oxygen. However, metrology must be improved to detect such low quantities of permeants. We therefore developed a modified ultra-low permeation measurement device based on a constant-flow carrier-gas system to measure both the transient and stationary water vapor permeation through high-performance barrier films. The accumulation of permeated water vapor before its transport to the detector allows the measurement of very low water vapor transmission rates (WVTRs) down to 2 × 10⁻⁵ g m⁻² d⁻¹. The measurement cells are stored in a temperature-controlled chamber, allowing WVTR measurements within the temperature range 23-80 °C. Differences in relative humidity can be controlled within the range 15%-90%. The WVTR values determined using the novel measurement device agree with those measured using a commercially available carrier-gas device from MOCON®. Depending on the structure and quality of the barrier film, it may take a long time for the WVTR to reach a steady-state value. However, by using a combination of the time-dependent measurement and the finite element method, we were able to estimate the steady-state WVTR accurately with significantly shorter measurement times.
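
    As background for the transient analysis mentioned above, a minimal one-dimensional Fickian idealization (assumed here for illustration, not taken from the paper) gives the familiar approach of the permeated flux to its steady-state value for a single homogeneous film of thickness L and diffusion coefficient D:

```latex
% One-dimensional Fickian idealization of the transient flux through a single
% homogeneous film (thickness L, diffusion coefficient D); an assumption used
% only to illustrate why the steady state can be extrapolated from the
% time-dependent signal.
\frac{J(t)}{J_{\infty}} = 1 + 2 \sum_{n=1}^{\infty} (-1)^{n}
    \exp\!\left(-\frac{D\, n^{2} \pi^{2}\, t}{L^{2}}\right)
```

    Fitting the measured transient to such a model, or to a finite element analogue for multilayer films as in the paper, yields the steady-state WVTR without waiting for full equilibration.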

  7. Highly sensitive and accurate screening of 40 dyes in soft drinks by liquid chromatography-electrospray tandem mass spectrometry.

    PubMed

    Feng, Feng; Zhao, Yansheng; Yong, Wei; Sun, Li; Jiang, Guibin; Chu, Xiaogang

    2011-06-15

    A method combining solid phase extraction with high performance liquid chromatography-electrospray ionization tandem mass spectrometry was developed for the highly sensitive and accurate screening of 40 dyes, most of which are banned in foods. Electrospray ionization tandem mass spectrometry was used to identify and quantify a large number of dyes for the first time, and demonstrated greater accuracy and sensitivity than the conventional liquid chromatography-ultraviolet/visible methods. The limits of detection at a signal-to-noise ratio of 3 for the dyes are 0.0001-0.01 mg/L except for Tartrazine, Amaranth, New Red and Ponceau 4R, with detection limits of 0.5, 0.25, 0.125 and 0.125 mg/L, respectively. When this method was applied to screening of dyes in soft drinks, the recoveries ranged from 91.1 to 105%. This method has been successfully applied to screening of illegal dyes in commercial soft drink samples, and it is valuable to ensure the safety of food.

  8. Accurate determination of specific heat at high temperatures using the flash diffusivity method

    NASA Technical Reports Server (NTRS)

    Vandersande, J. W.; Zoltan, A.; Wood, C.

    1989-01-01

    The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with the thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
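
    For reference, the flash-method relations of Parker et al. (1961) connect the quantities involved under the idealized assumption of an adiabatic, uniformly pulse-heated sample; here L is the sample thickness, t_1/2 the half-rise time of the rear-face temperature, Q the absorbed pulse energy, m the sample mass, ΔT_max the peak temperature rise and ρ the density (a textbook summary, not taken from the report):

```latex
% Standard flash-method relations (Parker et al., 1961) for an ideal, adiabatic,
% uniformly pulse-heated sample; symbols as defined in the preceding paragraph.
\alpha \approx \frac{0.1388\, L^{2}}{t_{1/2}},
\qquad
c_{p} = \frac{Q}{m\, \Delta T_{\max}},
\qquad
k = \alpha\, \rho\, c_{p}
```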

  9. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  10. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009) and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  11. HIGH PERFORMANCE EBIS FOR RHIC.

    SciTech Connect

    ALESSI,J.; BEEBE, E.; GOULD, O.; KPONOU, A.; LOCKEY, R.; PIKIN, A.; RAPARIA, D.; RITTER, J.; SNYDSTRUP, L.

    2007-06-25

    An Electron Beam Ion Source (EBIS), capable of producing high charge states and high beam currents of any heavy ion species in short pulses, is ideally suited for injection into a synchrotron. An EBIS-based, high current, heavy ion preinjector is now being built at Brookhaven to provide increased capabilities for the Relativistic Heavy Ion Collider (RHIC), and the NASA Space Radiation Laboratory (NSRL). Benefits of the new preinjector include the ability to produce ions of any species, fast switching between species to serve the simultaneous needs of multiple programs, and lower operating and maintenance costs. A state-of-the-art EBIS, operating with an electron beam current of up to 10 A, and producing multi-milliamperes of high charge state heavy ions, has been developed at Brookhaven, and has been operating very successfully on a test bench for several years. The present performance of this high-current EBIS is presented, along with details of the design of the scaled-up EBIS for RHIC, and the status of its construction. Other aspects of the project, including design and construction of the heavy ion RFQ, Linac, and matching beamlines, are also mentioned.

  12. High-Frequency CTD Measurements for Accurate GPS/acoustic Sea-floor Crustal Deformation Measurement System

    NASA Astrophysics Data System (ADS)

    Tadokoro, K.; Yasuda, K.; Taniguchi, S.; Uemura, Y.; Matsuhiro, K.

    2015-12-01

    The GPS/acoustic sea-floor crustal deformation measurement system has been developed as a useful tool to observe tectonic deformation, especially at subduction zones. One of the factors preventing accurate GPS/acoustic sea-floor crustal deformation measurement is the horizontal heterogeneity of sound speed in the ocean. It is therefore necessary to measure the gradient directly from the sound speed structure. We report results of high-frequency CTD measurements using Underway CTD (UCTD) in the Kuroshio region. We performed the UCTD measurements on May 2nd, 2015, at two stations (TCA and TOA) above the sea-floor benchmarks installed across the Nankai Trough, off the south-east of the Kii Peninsula, middle Japan. The number of measurement points is six at each station, along circles with a diameter of 1.8 nautical miles around the sea-floor benchmark. The stations TCA and TOA are located on the edge and in the interior of the Kuroshio current, respectively, judging from differences in sea water density measured at the two stations, as well as a satellite image of the sea-surface temperature distribution. We detect a sound speed gradient of high speeds in the southern part and low speeds in the northern part at the two stations. At the TCA station, the gradient is noticeable down to 300 m in depth; the maximum difference in sound speed is +/- 5 m/s. The sound speed difference is as small as +/- 1.3 m/s at depths below 300 m, which causes a seafloor benchmark positioning error as large as 1 m. At the TOA station, the gradient is extremely small down to 100 m in depth. The maximum difference in sound speed is less than +/- 0.3 m/s, which is negligibly small for the seafloor benchmark positioning error. A clear gradient of high speed is observed at greater depths; the maximum difference in sound speed is +/- 0.8-0.9 m/s, causing a seafloor benchmark positioning error of several tens of centimeters. The UCTD measurement is an effective tool to detect sound speed gradients. We establish a method for accurate sea

  13. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of DC brushless motors.

  14. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of DC brushless motors.

  15. High-order accurate solution of the incompressible Navier-Stokes equations on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Henniger, R.; Obrist, D.; Kleiser, L.

    2010-05-01

    The emergence of "petascale" supercomputers requires us to replace today's simulation codes for (incompressible) flows with codes that use numerical schemes and methods better able to exploit the offered computational power. In that spirit, we present a massively parallel high-order Navier-Stokes solver for large incompressible flow problems in three dimensions. The governing equations are discretized with finite differences in space and a semi-implicit time integration scheme. This discretization leads to a large linear system of equations which is solved with a cascade of iterative solvers. The iterative solver for the pressure uses a highly efficient commutation-based preconditioner which is robust with respect to grid stretching. The efficiency of the implementation is further enhanced by carefully setting the (adaptive) termination criteria for the different iterative solvers. The computational work is distributed to different processing units by a geometric data decomposition in all three dimensions. This decomposition scheme ensures a low communication overhead and excellent scaling capabilities. The discretization is thoroughly validated. First, we verify the convergence orders of the spatial and temporal discretizations for a forced channel flow. Second, we analyze the iterative solution technique by investigating the absolute accuracy of the implementation with respect to the different termination criteria. Third, Orr-Sommerfeld and Squire eigenmodes for plane Poiseuille flow are simulated and compared to analytical results. Fourth, the practical applicability of the implementation is tested for transitional and turbulent channel flow. The results are compared to solutions from a pseudospectral solver. Subsequently, the performance of the commutation-based preconditioner for the pressure iteration is demonstrated. Finally, the excellent parallel scalability of the proposed method is demonstrated with a weak and a strong scaling test on up to

  16. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with substantially better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: - the Gallo-Roman Theatre of Mandeure (France); - The

  17. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
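
    The performance characteristics quoted above follow from simple 2x2 confusion-table arithmetic; a minimal sketch of that arithmetic (with hypothetical counts, not the study data) is:

```python
# Diagnostic performance characteristics from a 2x2 confusion table.
# The example counts are hypothetical, not the study's data.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),               # true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 75-patient cohort
print(diagnostic_metrics(tp=9, fp=3, tn=62, fn=1))
```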

  18. Toward a highly accurate ambulatory system for clinical gait analysis via UWB radios.

    PubMed

    Shaban, Heba A; Abou el-Nasr, Mohamad; Buehrer, R Michael

    2010-03-01

    In this paper, we propose and investigate a low-cost and low-complexity wireless ambulatory human locomotion tracking system that provides a high ranging accuracy (intersensor distance) suitable for the assessment of clinical gait analysis using wearable ultra wideband (UWB) transceivers. The system design and transceiver performance are presented in additive white Gaussian noise (AWGN) and realistic channels, using industry-accepted channel models for body area networks. The proposed system is theoretically capable of providing a ranging accuracy of 0.11 cm at distances equivalent to intermarker distances, at an 18 dB SNR in realistic on-body UWB channels. Based on real measurements, it provides the target ranging accuracy at an SNR = 20 dB. The achievable accuracy is ten times better than the accuracy reported in the literature for the intermarker-distance measurement. This makes it suitable for use in clinical gait analysis, and for the characterization and assessment of unstable mobility diseases, such as Parkinson's disease.

  19. Transrectal high-intensity focused ultrasound ablation of prostate cancer: effective treatment requiring accurate imaging.

    PubMed

    Rouvière, Olivier; Souchon, Rémi; Salomir, Rarès; Gelet, Albert; Chapelon, Jean-Yves; Lyonnet, Denis

    2007-09-01

    Transrectal HIFU ablation has become a reasonable option for the treatment of localized prostate cancer in non-surgical patients, with 5-year disease-free survival similar to that of radiation therapy. It is also a promising salvage therapy for local recurrence after radiation therapy. These favourable results are partly due to recent improvements in prostate cancer imaging. However, further improvements are needed in patient selection, pre-operative localization of the tumor foci, assessment of the volume treated and early detection of recurrence. A better knowledge of the factors influencing HIFU-induced tissue destruction, and a better pre-operative assessment of these factors by imaging techniques, should improve treatment outcomes. Whereas prostate HIFU ablation is currently performed under transrectal ultrasound guidance, MR guidance with real-time operative monitoring of temperature will be available in the near future. If this technique gives better targeting and more uniform tissue destruction, its cost-effectiveness will still have to be carefully evaluated. Finally, a recently reported synergistic effect between HIFU ablation and chemotherapy opens possibilities for treatment in high-risk or clinically advanced tumors.

  20. Accurate High-Temperature Reaction Networks for Alternative Fuels: Butanol Isomers

    SciTech Connect

    Van Geem, K. M.; Pyl, S. P.; Marin, G. B.; Harper, M. R.; Green, W. H.

    2010-11-03

    Oxygenated hydrocarbons, particularly alcohol compounds, are being studied extensively as alternatives and additives to conventional fuels due to their propensity of decreasing soot formation and improving the octane number of gasoline. However, oxygenated fuels also increase the production of toxic byproducts, such as formaldehyde. To gain a better understanding of the oxygenated functional group’s influence on combustion properties—e.g., ignition delay at temperatures above the negative temperature coefficient regime, and the rate of benzene production, which is the common precursor to soot formation—a detailed pressure-dependent reaction network for n-butanol, sec-butanol, and tert-butanol consisting of 281 species and 3608 reactions is presented. The reaction network is validated against shock tube ignition delays and doped methane flame concentration profiles reported previously in the literature, in addition to newly acquired pyrolysis data. Good agreement between simulated and experimental data is achieved in all cases. Flux and sensitivity analyses for each set of experiments have been performed, and high-pressure-limit reaction rate coefficients for important pathways, e.g., the dehydration reactions of the butanol isomers, have been computed using statistical mechanics and quantum chemistry. The different alcohol decomposition pathways, i.e., the pathways from primary, secondary, and tertiary alcohols, are discussed. Furthermore, comparisons between ethanol and n-butanol, two primary alcohols, are presented, as they relate to ignition delay.

  1. How accurate is Poisson-Boltzmann theory for monovalent ions near highly charged interfaces?

    PubMed

    Bu, Wei; Vaknin, David; Travesset, Alex

    2006-06-20

    Surface sensitive synchrotron X-ray scattering studies were performed to obtain the distribution of monovalent ions next to a highly charged interface. A lipid phosphate (dihexadecyl hydrogen-phosphate) was spread as a monolayer at the air-water interface to control surface charge density. Using anomalous reflectivity off and at the L3 Cs+ resonance, we provide spatial counterion (Cs+) distributions next to the negatively charged interfaces. Five decades in bulk concentrations are investigated, demonstrating that the interfacial distribution is strongly dependent on bulk concentration. We show that this is due to the strong binding constant of hydronium H3O+ to the phosphate group, leading to proton-transfer back to the phosphate group and to a reduced surface charge. The increase of Cs+ concentration modifies the contact value potential, thereby causing proton release. This process effectively modifies surface charge density and enables exploration of ion distributions as a function of effective surface charge-density. The experimentally obtained ion distributions are compared to distributions calculated by Poisson-Boltzmann theory accounting for the variation of surface charge density due to proton release and binding. We also discuss the accuracy of our experimental results in discriminating possible deviations from Poisson-Boltzmann theory.
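
    For context, the mean-field theory being tested is, for a planar interface and a 1:1 electrolyte, the Gouy-Chapman form of the Poisson-Boltzmann equation; here ψ(z) is the electrostatic potential at distance z from the interface, n₀ the bulk salt concentration, ε the solvent permittivity, e the elementary charge and k_B T the thermal energy (a textbook statement, not a result of the paper):

```latex
% Mean-field Poisson-Boltzmann (Gouy-Chapman) relations for a planar interface
% and a 1:1 electrolyte; the baseline theory the measured Cs+ profiles are
% compared against (symbols defined in the preceding paragraph).
\frac{\mathrm{d}^{2}\psi}{\mathrm{d}z^{2}}
  = \frac{2\, e\, n_{0}}{\varepsilon}\,
    \sinh\!\left(\frac{e\,\psi(z)}{k_{B} T}\right),
\qquad
n_{\pm}(z) = n_{0}\, \exp\!\left(\mp\,\frac{e\,\psi(z)}{k_{B} T}\right)
```

    Deviations such as proton transfer to the phosphate group enter this picture as a change in the effective surface charge density that sets the boundary condition on ψ.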

  2. Internal Mammary Sentinel Lymph Node Biopsy With Modified Injection Technique: High Visualization Rate and Accurate Staging.

    PubMed

    Qiu, Peng-Fei; Cong, Bin-Bin; Zhao, Rong-Rong; Yang, Guo-Ren; Liu, Yan-Bing; Chen, Peng; Wang, Yong-Sheng

    2015-10-01

    Although the 2009 American Joint Committee on Cancer incorporated the internal mammary sentinel lymph node biopsy (IM-SLNB) concept, there has been little change in surgical practice patterns because of the low visualization rate of internal mammary sentinel lymph nodes (IMSLN) with the traditional radiotracer injection technique. In this study, various injection techniques were evaluated in terms of the IMSLN visualization rate, and the impact of IM-SLNB on diagnostic and prognostic value was analyzed. Clinically axillary lymph node (ALN)-negative patients (n = 407) were divided into group A (traditional peritumoral intraparenchymal injection) and group B (modified periareolar intraparenchymal injection). Group B was then separated into group B1 (low volume) and group B2 (high volume) according to the injection volume. Clinically ALN-positive patients (n = 63) were managed as group B2. Internal mammary sentinel lymph node biopsy was performed for patients with IMSLN visualized. The IMSLN visualization rate was significantly higher in group B than that in group A (71.1% versus 15.5%, P < 0.001), whereas the axillary sentinel lymph nodes were reliably identified in both groups (98.9% versus 98.3%, P = 0.712). With high injection volume, group B2 was found to have a higher IMSLN visualization rate than group B1 (75.1% versus 45.8%, P < 0.001). The IMSLN metastasis rate was only 8.1% (12/149) in clinically ALN-negative patients with successful IM-SLNB, and adjuvant treatment was altered in a small proportion. The IMSLN visualization rate was 69.8% (44/63) in clinically ALN-positive patients with the IMSLN metastasis rate up to 20.5% (9/44), and individual radiotherapy strategy could be guided with the IM-SLNB results. The modified injection technique (periareolar intraparenchymal, high volume, and ultrasound guidance) significantly improved the IMSLN visualization rate, making the routine IM-SLNB possible in daily practice. Internal mammary

  3. Accurate taxonomy assignments from 16S rRNA sequences produced by highly parallel pyrosequencers

    PubMed Central

    Liu, Zongzhi; DeSantis, Todd Z.; Andersen, Gary L.; Knight, Rob

    2008-01-01

    The recent introduction of massively parallel pyrosequencers allows rapid, inexpensive analysis of microbial community composition using 16S ribosomal RNA (rRNA) sequences. However, a major challenge is to design a workflow so that taxonomic information can be accurately and rapidly assigned to each read, so that the composition of each community can be linked back to likely ecological roles played by members of each species, genus, family or phylum. Here, we use three large 16S rRNA datasets to test whether taxonomic information based on the full-length sequences can be recaptured by short reads that simulate the pyrosequencer outputs. We find that different taxonomic assignment methods vary radically in their ability to recapture the taxonomic information in full-length 16S rRNA sequences: most methods are sensitive to the region of the 16S rRNA gene that is targeted for sequencing, but many combinations of methods and rRNA regions produce consistent and accurate results. To process large datasets of partial 16S rRNA sequences obtained from surveys of various microbial communities, including those from human body habitats, we recommend the use of Greengenes or RDP classifier with fragments of at least 250 bases, starting from one of the primers R357, R534, R798, F343 or F517. PMID:18723574

  4. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions such that the correct free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (+/- 0.05 degrees C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement.
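
    A minimal restatement of the working relations implied by the abstract, assuming the ideally dilute law is applied to the corrected free-water mass, with masses in grams, μs the solute molar mass and Kf the cryoscopic constant of water (≈1.86 K·kg·mol⁻¹):

```latex
% Working relations implied by the abstract, under the stated dilute-law
% assumption; symbols defined in the preceding paragraph.
M_{wc} = M_{w} - I\, M_{s},
\qquad
\Delta T = K_{f}\, \frac{1000\, M_{s}}{\mu_{s}\, M_{wc}}
\;\;\Longrightarrow\;\;
\frac{M_{w}}{M_{s}} = \frac{1000\, K_{f}}{\mu_{s}} \cdot \frac{1}{\Delta T} + I
```

    The intercept of the linear fit of Mw/Ms against 1/ΔT therefore gives I directly, as described above.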

  5. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    PubMed Central

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810

  6. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  7. Development of an unmanned aerial vehicle-based spray system for highly accurate site-specific application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Application of crop production and protection materials is a crucial component in the high productivity of American agriculture. Agricultural chemical application is frequently needed at a specific time and location for accurate site-specific management of crop pests. Piloted aircraft that carry ...

  8. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate and commercialize technology for the storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal System Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  9. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    NASA Astrophysics Data System (ADS)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations in geodetic and geophysical issues. Therefore, for the first time in history, absolute gravity measurements were performed in these countries. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and in Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving the geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015. Finally, all 8 scheduled stations were successfully measured: three stations are located in Montenegro, two stations in Kosovo and three stations in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some absolute gravity stations allows them to be used for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements range around 2.5 µGal at all stations (1 µGal = 10⁻⁸ m/s²). In Montenegro, the large gravity difference of 200 mGal between the stations Zabljak and Podgorica can even be used for the calibration of relative gravimeters

  10. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a significant number of numerical drop profiles with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to below a given threshold, even for different types of liquids.
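
    A minimal sketch of the core operation (not the authors' code): fit a general conic to the drop profile by least squares and read the contact angle from the implicit tangent at the contact line. The synthetic elliptical-cap profile, horizontal baseline and noise level are assumptions, and the routine returns the acute tangent angle, so it only covers caps with contact angles below 90 degrees.

```python
# Least-squares conic fit to a drop profile and contact angle at the contact line.
# Synthetic data and the horizontal baseline are illustrative assumptions.
import numpy as np

def fit_conic(x, y):
    """Fit a x^2 + b xy + c y^2 + d x + e y + f = 0 by least squares (SVD)."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]                           # coefficient vector of the smallest singular value

def right_contact_point(coeffs, y0=0.0):
    """x-coordinate where the fitted conic meets the baseline y = y0 (right side)."""
    a, b, c, d, e, f = coeffs
    roots = np.roots([a, b * y0 + d, c * y0**2 + e * y0 + f])
    return float(max(roots.real))

def contact_angle_deg(coeffs, x0, y0=0.0):
    """Acute angle between the implicit tangent and the baseline at (x0, y0)."""
    a, b, c, d, e, f = coeffs
    fx = 2 * a * x0 + b * y0 + d            # dF/dx
    fy = b * x0 + 2 * c * y0 + e            # dF/dy; slope dy/dx = -fx/fy
    return float(np.degrees(np.arctan2(abs(fx), abs(fy))))

# Synthetic sessile-drop profile: elliptical cap above the baseline y = 0
rng = np.random.default_rng(1)
theta = np.linspace(0.30, np.pi - 0.30, 200)
x = 1.0 * np.cos(theta)
y = 0.7 * np.sin(theta) - 0.2 + 0.002 * rng.normal(size=theta.size)

coeffs = fit_conic(x, y)
x0 = right_contact_point(coeffs)
print(f"right contact angle ~ {contact_angle_deg(coeffs, x0):.1f} deg")  # about 67 deg for this cap
```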

  11. High Performance Perovskite Solar Cells.

    PubMed

    Tong, Xin; Lin, Feng; Wu, Jiang; Wang, Zhiming M

    2016-05-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole-transporting materials (HTMs) and electron-transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  12. High performance phenolic pultrusion resin

    SciTech Connect

    Qureshi, S.P.; Ingram, W.H.; Smith, C.

    1996-11-01

    Today, Phenol-Formaldehyde (PF) resins are the materials of choice for aerospace interior applications, primarily due to low FST (flame, smoke and toxicity). Since 1990, growth of PF resins has been steadily increasing in non-aerospace applications (which include mass transit, construction, marine, mine ducting and offshore oil) due to low FST and reasonable cost. This paper describes a one-component phenol-formaldehyde resin that was jointly developed with Morrison Molded Fiber Glass for their pultrusion process. Physical properties of the resin, along with the flame/smoke/toxicity, chemical resistance and mechanical performance of the pultruded RP, are discussed. Neat resin screening tests to identify high-temperature formulations are explored. Research continues at Georgia-Pacific to investigate the effect of formulation variables on processing and mechanical properties.

  13. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction. PMID:27774402

  14. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts.

    PubMed

    Stephanou, Pavlos S; Mavrantzas, Vlasis G

    2014-06-07

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.

  15. Rapid and Highly Accurate Prediction of Poor Loop Diuretic Natriuretic Response in Patients With Heart Failure

    PubMed Central

    Testani, Jeffrey M.; Hanberg, Jennifer S.; Cheng, Susan; Rao, Veena; Onyebeke, Chukwuma; Laur, Olga; Kula, Alexander; Chen, Michael; Wilson, F. Perry; Darlington, Andrew; Bellumkonda, Lavanya; Jacoby, Daniel; Tang, W. H. Wilson; Parikh, Chirag R.

    2015-01-01

    Background Removal of excess sodium and fluid is a primary therapeutic objective in acute decompensated heart failure (ADHF) and commonly monitored with fluid balance and weight loss. However, these parameters are frequently inaccurate or not collected and require a delay of several hours after diuretic administration before they are available. Accessible tools for rapid and accurate prediction of diuretic response are needed. Methods and Results Based on well-established renal physiologic principles an equation was derived to predict net sodium output using a spot urine sample obtained one or two hours following loop diuretic administration. This equation was then prospectively validated in 50 ADHF patients using meticulously obtained timed 6-hour urine collections to quantitate loop diuretic induced cumulative sodium output. Poor natriuretic response was defined as a cumulative sodium output of <50 mmol, a threshold that would result in a positive sodium balance with twice-daily diuretic dosing. Following a median dose of 3 mg (2–4 mg) of intravenous bumetanide, 40% of the population had a poor natriuretic response. The correlation between measured and predicted sodium output was excellent (r=0.91, p<0.0001). Poor natriuretic response could be accurately predicted with the sodium prediction equation (AUC=0.95, 95% CI 0.89–1.0, p<0.0001). Clinically recorded net fluid output had a weaker correlation (r=0.66, p<0.001) and lesser ability to predict poor natriuretic response (AUC=0.76, 95% CI 0.63–0.89, p=0.002). Conclusions In patients being treated for ADHF, poor natriuretic response can be predicted soon after diuretic administration with excellent accuracy using a spot urine sample. PMID:26721915

  16. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  17. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  18. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
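
    As a rough illustration of what a "learned weighted sum of firing rates" means in practice, the sketch below fits a cross-validated linear (ridge) readout from hypothetical trial-averaged firing rates to a per-image behavioral score. All data are synthetic, and the pipeline is only a schematic stand-in for the analyses reported in the paper.

        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n_images, n_neurons = 500, 200
        rates = rng.poisson(5.0, (n_images, n_neurons)).astype(float)    # hypothetical IT firing rates
        behavior = rates @ rng.normal(size=n_neurons) + rng.normal(scale=5.0, size=n_images)  # hypothetical scores

        readout = RidgeCV(alphas=np.logspace(-3, 3, 13))                 # learned weighted sum (L2-regularized)
        pred = cross_val_predict(readout, rates, behavior, cv=10)        # cross-validated per-image predictions
        print(f"correlation with behavior: r = {np.corrcoef(pred, behavior)[0, 1]:.2f}")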

  19. MULTEM: A new multislice program to perform accurate and fast electron diffraction and imaging simulations using Graphics Processing Units with CUDA.

    PubMed

    Lobato, I; Van Dyck, D

    2015-09-01

    The main features and the GPU implementation of the MULTEM program are presented and described. This new program performs accurate and fast multislice simulations by including a higher-order expansion of the multislice solution of the high-energy Schrödinger equation, correct subslicing of the three-dimensional potential, and top-bottom surfaces. The program implements different kinds of simulations for CTEM, STEM, ED, PED, CBED, ADF-TEM and ABF-HC with proper treatment of the spatial and temporal incoherences. The multislice approach described here treats the specimen as an amorphous material, which allows a straightforward implementation of the frozen phonon approximation. The generalized transmission function for each slice is calculated when it is needed and then discarded. This allows us to perform large simulations that can include millions of atoms while keeping the computer memory requirements at a reasonable level.
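
    For context, the core of any multislice calculation is the alternation of transmission through a thin slice and Fresnel propagation to the next slice, usually carried out with FFTs. The sketch below shows that conventional first-order recursion in Python; it is not MULTEM's higher-order, GPU-accelerated implementation, and the sign of the propagator phase depends on the chosen plane-wave convention.

        import numpy as np

        def multislice(psi, t_slices, dz, wavelength, dx):
            """Conventional multislice recursion: transmit through each slice, then
            propagate to the next one with a Fresnel propagator applied in Fourier space."""
            n = psi.shape[0]
            k = np.fft.fftfreq(n, d=dx)                                  # spatial frequencies (1/length)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
            for t in t_slices:                                           # one 2D transmission function per slice
                psi = np.fft.ifft2(propagator * np.fft.fft2(t * psi))
            return psi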

  20. Automatic generation of reaction energy databases from highly accurate atomization energy benchmark sets.

    PubMed

    Margraf, Johannes T; Ranasinghe, Duminda S; Bartlett, Rodney J

    2017-03-31

    In this contribution, we discuss how reaction energy benchmark sets can automatically be created from arbitrary atomization energy databases. As an example, over 11 000 reaction energies derived from the W4-11 database, as well as some relevant subsets are reported. Importantly, there is only very modest computational overhead involved in computing >11 000 reaction energies compared to 140 atomization energies, since the rate-determining step for either benchmark is performing the same 140 quantum chemical calculations. The performance of commonly used electronic structure methods for the new database is analyzed. This allows investigating the relationship between the performances for atomization and reaction energy benchmarks based on an identical set of molecules. The atomization energy is found to be a weak predictor for the overall usefulness of a method. The performance of density functional approximations in light of the number of empirically optimized parameters used in their design is also discussed.
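
    The bookkeeping behind such automatically generated reaction energies is simple: in a balanced reaction the atomic reference energies cancel, so the reaction energy is the difference between the summed atomization energies of reactants and products. The sketch below illustrates this with placeholder species and values, not entries from W4-11.

        # Atomization energies (kcal/mol); species and values are placeholders, not W4-11 entries.
        AE = {"CH4": 420.0, "O2": 120.0, "CO2": 390.0, "H2O": 232.0}

        def reaction_energy(reactants, products):
            """Reaction energy for a balanced reaction, from atomization energies alone.
            Since the same atoms appear on both sides, the atomic reference energies cancel:
            dE_rxn = sum(AE_reactants) - sum(AE_products)."""
            return (sum(n * AE[s] for s, n in reactants.items())
                    - sum(n * AE[s] for s, n in products.items()))

        # Example: CH4 + 2 O2 -> CO2 + 2 H2O
        print(reaction_energy({"CH4": 1, "O2": 2}, {"CO2": 1, "H2O": 2}))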

  1. A highly accurate method for the determination of mass and center of mass of a spacecraft

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Trubert, M. R.; Egwuatu, A.

    1978-01-01

    An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
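
    The statics underlying the measurement can be sketched as a weighted average of the load-cell readings over their known positions; the cyclic rotations described above are what remove the alignment, leveling, and calibration errors, which this bare-bones illustration (with invented readings) does not attempt to model.

        import numpy as np

        # Three load cells at 120 degree spacing on a circle of radius R (m); readings in newtons.
        R = 1.0
        angles = np.deg2rad([0.0, 120.0, 240.0])
        cell_xy = R * np.column_stack([np.cos(angles), np.sin(angles)])
        F = np.array([2452.0, 2448.0, 2455.0])         # invented readings for a ~750 kg spacecraft

        W = F.sum()                                     # total weight (N)
        mass = W / 9.80665                              # mass (kg)
        x_cm, y_cm = (F[:, None] * cell_xy).sum(axis=0) / W   # lateral center of mass (m)
        print(f"mass = {mass:.1f} kg, lateral CM = ({x_cm*1e3:.2f} mm, {y_cm*1e3:.2f} mm)")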

  2. Efficient and Accurate Multiple-Phenotype Regression Method for High Dimensional Data Considering Population Structure.

    PubMed

    Joo, Jong Wha J; Kang, Eun Yong; Org, Elin; Furlotte, Nick; Parks, Brian; Hormozdiari, Farhad; Lusis, Aldons J; Eskin, Eleazar

    2016-12-01

    A typical genome-wide association study tests correlation between a single phenotype and each genotype one at a time. However, single-phenotype analysis might miss unmeasured aspects of complex biological networks. Analyzing many phenotypes simultaneously may increase the power to capture these unmeasured aspects and detect more variants. Several multivariate approaches aim to detect variants related to more than one phenotype, but these current approaches do not consider the effects of population structure. As a result, these approaches may result in a significant amount of false positive identifications. Here, we introduce a new methodology, referred to as GAMMA for generalized analysis of molecular variance for mixed-model analysis, which is capable of simultaneously analyzing many phenotypes and correcting for population structure. In a simulated study using data implanted with true genetic effects, GAMMA accurately identifies these true effects without producing false positives induced by population structure. In simulations with this data, GAMMA is an improvement over other methods which either fail to detect true effects or produce many false positive identifications. We further apply our method to genetic studies of yeast and gut microbiome from mice and show that GAMMA identifies several variants that are likely to have true biological mechanisms.
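
    Methods of this kind build on the standard mixed-model device of decorrelating the data with the covariance implied by a kinship matrix before testing associations. The sketch below shows that generic single-phenotype generalized-least-squares step on synthetic data; it is not the GAMMA algorithm, and the kinship matrix and variance components are assumed purely for illustration.

        import numpy as np
        from scipy import linalg, stats

        rng = np.random.default_rng(1)
        n = 300
        K = np.corrcoef(rng.normal(size=(n, 50)))       # stand-in kinship matrix (normally estimated from genotypes)
        V = 0.6 * K + 0.4 * np.eye(n)                   # phenotype covariance with assumed variance components
        W = linalg.inv(linalg.sqrtm(V)).real            # whitening transform V^(-1/2)

        snp = rng.binomial(2, 0.3, size=n).astype(float)                # synthetic genotype (allele counts)
        pheno = 0.2 * snp + rng.multivariate_normal(np.zeros(n), V)     # synthetic structured phenotype

        # Generalized least squares: whiten design and phenotype, then ordinary least squares with a t-test.
        X = np.column_stack([np.ones(n), snp])
        Xw, yw = W @ X, W @ pheno
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        resid = yw - Xw @ beta
        se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(Xw.T @ Xw)[1, 1])
        p = 2 * stats.t.sf(abs(beta[1] / se), n - 2)
        print(f"beta = {beta[1]:.3f}, p = {p:.3g}")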

  3. A Low-Cost, Accurate, and High-Precision Fluid Dispensing System for Microscale Application.

    PubMed

    Das, Champak; Wang, Guochun; Nguyen, Chien

    2017-04-01

    We present here the development of a low-cost, accurate, and precise fluid dispensing system. It can be used with peristaltic or any other pump to improve the flow characteristics. The dispensing system has a range of 1 to 100 µL with accuracy of ~99.5% and standard deviation at ~150 nL over the entire range. The system developed does not depend on the accuracy or precision of the driving pump; therefore, any positive displacement pump can be used to get similar accuracy and precision, which gives an opportunity to reduce the cost of the system. The dispensing system does not require periodic calibration and can also be miniaturized for microfluidic application. Although primarily designed for aqueous liquid, it can be extended for different nonconductive liquids as well with modifications. The unit is further used for near real-time measurement of lactate from microdialysate. The individual components can easily be made disposable or sterilized for use in biomedical applications.

  4. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  5. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single axis mechanism can switch the antenna's optical axis by +/-1.5" within 10 ms or +/-5" within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The light weight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots and the motor magnets counter rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.

  6. Computing Highly Accurate Spectroscopic Line Lists that Cover a Large Temperature Range for Characterization of Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Lee, T. J.; Huang, X.; Schwenke, D. W.

    2013-12-01

    Over the last decade, it has become apparent that the most effective approach for determining highly accurate rotational and rovibrational line lists for molecules of interest in planetary atmospheres is a combination of high-resolution laboratory experiments coupled with state-of-the-art ab initio quantum chemistry methods. The approach involves computing the most accurate potential energy surface (PES) possible using state-of-the-art electronic structure methods, followed by computing rotational and rovibrational energy levels using an exact variational method to solve the nuclear Schrödinger equation. Then, reliable experimental data from high-resolution experiments are used to refine the ab initio PES in order to improve the accuracy of the computed energy levels and transition energies. From the refinement step, we have been able to achieve an accuracy of approximately 0.015 cm-1 for rovibrational transition energies, and even better for purely rotational transitions. This combined 'experiment / theory' approach allows for determination of an essentially complete line list, with hundreds of millions of transitions, whose transition energies and intensities are highly accurate. Our group has successfully applied this approach to determine highly accurate line lists for NH3 and CO2 (and isotopologues), and very recently for SO2 and isotopologues. Here I will report our latest results for SO2, including all isotopologues. Comparisons to the available data in HITRAN2012 and other available databases will be shown, though we note that our SO2 line lists are significantly more complete than those in any other database. Since it is important to span a large temperature range in order to model the spectral signature of exoplanets, we will also demonstrate how the spectra change on going from low temperatures (100 K) to higher temperatures (500 K).

  7. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes the results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed, whereas defects in critical areas, such as clear areas, can impact yield dramatically and demand closer attention. As integrated circuit dimensions shrink, thousands of mask defects, or even more, are typically detected during a single inspection, and traditional manual or simple classification approaches cannot meet the efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution that uses the image output of Lasertec inspection equipment together with Anchor pattern-centric image processing technology. The system can handle large numbers of defects and deliver quick, accurate classification results. Our experiments include Die-to-Die and Single-Die modes, with classification accuracies of 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases; the fractions of defects with missing classifications were 0.25% in Die-to-Die mode and 0.24% in Single-Die mode, a rate that is encouraging and acceptable for use on a production line. The results can be exported and reloaded into the inspection machine for further review, which helps users validate uncertain defects with clear, magnified images when the captured images do not provide enough information for a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation and scoring functions to guide wafer-level defect inspection.

  8. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860

  9. High Order Accurate Algorithms for Shocks, Rapidly Changing Solutions and Multiscale Problems

    DTIC Science & Technology

    2014-11-13

    Work under this project developed high order accurate methods for front propagation with obstacles and a homotopy method for steady states. Applications include high order simulations of 3D gaseous detonations and sound generation studies via detonation waves (related publication: Combustion and Flame, 2013, doi: 10.1016/j.combustflame.2012.10.002).

  10. Endoscopic Ultrasound Does Not Accurately Stage Early Adenocarcinoma or High-Grade Dysplasia of the Esophagus

    DTIC Science & Technology

    2010-01-01

    MeSH search terms: "endoscopic ultrasound," "Barrett's esophagus," "adenocarcinoma," "Barrett's esophagus and high grade dysplasia." Abbreviations: EMR, endoscopic mucosal resection; EUS, endoscopic ultrasound; HGD, high-grade dysplasia. © 2010 by the AGA Institute.

  11. A Polymer Visualization System with Accurate Heating and Cooling Control and High-Speed Imaging

    PubMed Central

    Wong, Anson; Guo, Yanting; Park, Chul B.; Zhou, Nan Q.

    2015-01-01

    A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system’s capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals’ boundaries due to CO2 exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals. PMID:25915031

  12. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 1: Cell embedding method and performance

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Kiesel, Peter; Sommer, Lars Wilko; Schwartz, Julian; Lochbaum, Alexander; Hegyi, Alex; Schuh, Andreas; Arakaki, Kyle; Saha, Bhaskar; Ganguli, Anurag; Kim, Kyung Ho; Kim, ChaeAh; Hah, Hoe Jin; Kim, SeokKoo; Hwang, Gyu-Ok; Chung, Geun-Chang; Choi, Bokkyu; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic sensors. High-performance large-format pouch cells with embedded fiber-optic sensors were fabricated. The first of this two-part paper focuses on the embedding method details and performance of these cells. The seal integrity, capacity retention, cycle life, compatibility with existing module designs, and mass-volume cost estimates indicate their suitability for xEV and other advanced battery applications. The second part of the paper focuses on the internal strain and temperature signals obtained from these sensors under various conditions and their utility for high-accuracy cell state estimation algorithms.

  13. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including single sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles.

  14. High-Resolution Photoionization, Photoelectron and Photodissociation Studies. Determination of Accurate Energetic and Spectroscopic Database for Combustion Radicals and Molecules

    SciTech Connect

    Ng, Cheuk-Yiu

    2016-04-25

    The main goal of this research program was to obtain accurate thermochemical and spectroscopic data, such as ionization energies (IEs), 0 K bond dissociation energies, 0 K heats of formation, and spectroscopic constants for radicals and molecules and their ions of relevance to combustion chemistry. Two unique, generally applicable vacuum ultraviolet (VUV) laser photoion-photoelectron apparatuses have been developed in our group, which have been used for high-resolution photoionization, photoelectron, and photodissociation studies of many small molecules of combustion relevance.

  15. High-throughput Accurate-wavelength Lens-based Visible Spectrometer

    SciTech Connect

    Ronald E. Bell and Filippo Scotti

    2010-06-04

    A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm-1 grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arcsec, corresponding to a wavelength error ≤ 0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.

  16. Implementing an Inexpensive and Accurate Introductory Gas Density Activity with High School Students

    ERIC Educational Resources Information Center

    Cunningham, W. Patrick; Joseph, Christopher; Morey, Samantha; Santos Romo, Ana; Shope, Cullen; Strang, Jonathan; Yang, Kevin

    2015-01-01

    A simplified activity examined gas density while employing cost-efficient syringes in place of traditional glass bulbs. The exercise measured the density of methane, with very good accuracy and precision, in both first-year high school and AP chemistry settings. The participating students were tasked with finding the density of a gas. The…

  17. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The United States Department of Agriculture’s high-speed wind tunnel facility in College Station, Texas, USA was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds comparable to those used in aerial dispersant application. A laser particle size anal...

  18. Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    NASA Technical Reports Server (NTRS)

    Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.

    1991-01-01

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.

  19. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation.

    PubMed

    Augustin, Christoph M; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J; Niederer, Steven A; Haase, Gundolf; Plank, Gernot

    2016-01-15

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  20. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  1. Reconstruction of high resolution MLC leaf positions using a low resolution detector for accurate 3D dose reconstruction in IMRT

    NASA Astrophysics Data System (ADS)

    Visser, R.; Godart, J.; Wauben, D. J. L.; Langendijk, J. A.; van't Veld, A. A.; Korevaar, E. W.

    2016-12-01

    In pre-treatment dose verification, low resolution detector systems are unable to identify shifts of individual leafs of high resolution multi leaf collimator (MLC) systems from detected changes in the dose deposition. The goal of this study was to introduce an alternative approach (the shutter technique), combined with a previously described iterative reconstruction method, to accurately reconstruct high resolution MLC leaf positions based on low resolution measurements. For the shutter technique, two additional radiotherapy treatment plans (RT-plans) were generated in addition to the original RT-plan: one with even MLC leafs closed for reconstructing uneven leaf positions and one with uneven MLC leafs closed for reconstructing even leaf positions. Reconstructed leaf positions were then implemented in the original RT-plan for 3D dose reconstruction. The shutter technique was evaluated for a 6 MV Elekta SLi linac with 5 mm MLC leafs (Agility™) in combination with the MatriXX Evolution detector with detector spacing of 7.62 mm. Dose reconstruction was performed with the COMPASS system (v2.0). The measurement setup allowed one row of ionization chambers to be affected by two adjacent leaf pairs. Measurements were obtained for various field sizes with MLC leaf position errors ranging from 1.0 mm to 10.0 mm. Furthermore, one clinical head and neck IMRT treatment beam with MLC introduced leaf position errors of 5.0 mm was evaluated to illustrate the impact of the shutter technique on 3D dose reconstruction. Without the shutter technique, MLC leaf position reconstruction showed reconstruction errors up to 6.0 mm. Introduction of the shutter technique allowed MLC leaf position reconstruction for the majority of leafs with sub-millimeter accuracy, resulting in a reduction of dose reconstruction errors. The shutter technique in combination with the iterative reconstruction method allows high resolution MLC leaf position reconstruction using low resolution

  2. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  3. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    SciTech Connect

    Bell, Ronald E.

    2014-11-15

    A high-throughput spectrometer for the 400–820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  4. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    DOE PAGES

    Bell, Ronald E.

    2014-07-11

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤ 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  5. Development and Operation of High-throughput Accurate-wavelength Lens-based Spectrometer

    SciTech Connect

    Bell, Ronald E

    2014-07-01

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy < 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  6. Isomorphism and solid solution as shown by an accurate high-resolution diffraction experiment.

    PubMed

    Poulain, Agnieszka; Kubicki, Maciej; Lecomte, Claude

    2014-12-01

    High-resolution crystal structure determination with spherical and multipolar refinement enabled the identification of an organic solid solution of 1-(4'-chlorophenyl)-2-methyl-4-nitro-1H-imidazole-5-carbonitrile and 5-bromo-1-(4'-chlorophenyl)-2-methyl-4-nitro-1H-imidazole, which would not normally be revealed using only standard-resolution data (ca 0.8 Å), as the disordered part is only visible at high resolution. This new structure would therefore have been reported as just another polymorphic form, or, even more reasonably, as isostructural with other derivatives. To the best of our knowledge this is the first example of an organic solid solution modelled via the Hansen-Coppens charge density formalism and analysed by means of the quantum theory of atoms in molecules (QTAIM).

  7. Kashima RAy-Tracing Service (KARATS) for high accurate GNSS positioning

    NASA Astrophysics Data System (ADS)

    Ichikawa, R.; Hobiger, T.; Hasegawa, S.; Tsutsumi, M.; Koyama, Y.; Kondo, T.

    2010-12-01

    Radio signal delays associated with the neutral atmosphere are one of the major error sources in space geodesy techniques such as GPS, GLONASS, GALILEO, VLBI, and In-SAR measurements. We have developed a state-of-the-art tool to estimate atmospheric path delays by ray-tracing through the JMA meso-scale analysis (MANAL) data. The tools, which we have named 'KAshima RAytracing Tools (KARAT)', are capable of calculating total slant delays and ray-bending angles considering real atmospheric phenomena. Numerical weather models such as the MANAL data have undergone a significant improvement in accuracy and spatial resolution, which makes it feasible to utilize them for the correction of atmospheric excess path delays. In previous studies evaluating KARAT performance, the KARAT solutions were slightly better than solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Based on these results, we started the web-based online service 'KAshima RAytracing Service (KARATS)', which provides atmospheric delay corrections of RINEX files, on Jan 27th, 2010. KARATS receives users' RINEX data via a dedicated web site (http://vps.nict.go.jp/karats/index.html) and processes the data files using KARAT to reduce atmospheric slant delays. The reduced RINEX files are archived in a specific directory for each user on the KARATS server. Once the processing is finished, the archive information is sent privately via email to each user. Users who want to process a large number of data files can prepare their own server to archive them; KARATS then retrieves these files from the user's server using GNU wget and performs the ray-traced corrections. We will present a brief status of KARATS and summarize the first experiences gained after the service went operational in December 2009. In addition, we will also demonstrate the newest KARAT performance based on the 5 km MANAL data, which has been operational since April 7th, 2009, and an outlook on

  8. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test using a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  9. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and so does the demand for their maintenance and testing. Simulated testing of a pulsed range finder rests on the idea that a time delay emulates a spatial distance, so the precision of the simulated distance is determined by the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber delay and circuit delay, a method is proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the accuracy of the circuit delay. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from +/- 0.75 m to +/- 0.15 m, a substantial improvement for simulated testing of high-precision pulsed range finders.
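
    The "time emulates distance" relation at the heart of such simulators is simply the round-trip delay of light: a target at range d corresponds to a delay of 2d/c, and a delay generated purely by counting clock cycles quantizes the simulated range in steps of c/(2*f_clk), which is why sub-clock compensation is needed to reach +/- 0.15 m. A minimal illustration in Python, with an arbitrarily chosen clock frequency:

        C = 299_792_458.0                # speed of light (m/s)

        def delay_for_range(d_m):
            """Round-trip time delay (s) that emulates a target at range d_m."""
            return 2.0 * d_m / C

        def range_step(f_clk_hz):
            """Distance quantization (m) of a delay generated purely by counting clock cycles."""
            return C / (2.0 * f_clk_hz)

        print(f"10 km target  -> {delay_for_range(10_000) * 1e6:.3f} us delay")
        print(f"200 MHz count -> {range_step(200e6):.2f} m range steps")   # about 0.75 m without compensation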

  10. An experimental device for accurate ultrasounds measurements in liquid foods at high pressure

    NASA Astrophysics Data System (ADS)

    Hidalgo-Baltasar, E.; Taravillo, M.; Baonza, V. G.; Sanz, P. D.; Guignon, B.

    2012-12-01

    The use of high hydrostatic pressure to ensure safe and high-quality products has markedly increased in the food industry during the last decade. Ultrasonic sensors can be employed to control such processes in the same way as they are currently used in processes carried out at atmospheric pressure. However, their installation, calibration, and use are particularly challenging in a high-pressure environment. Moreover, data on the acoustic properties of foods under pressure, and even of water, are quite scarce in the pressure range of interest for food treatment (namely, above 200 MPa). The objective of this work was to establish a methodology to determine the speed of sound in foods under pressure. An ultrasonic sensor using the multiple-reflections method was adapted to a lab-scale HHP equipment to determine the speed of sound in water between 253.15 and 348.15 K and at pressures up to 700 MPa. The experimental speed-of-sound data were compared with values calculated from the equation of state of water (IAPWS-95 formulation), and this comparison validated the procedure for calibrating the acoustic path length of the cell. After this calibration procedure, the sensor allows the speed of sound in liquid foods to be determined with a relative uncertainty between 0.22% and 0.32% at a confidence level of 95% over the whole pressure domain.
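
    The multiple-reflections principle itself is compact: successive echoes in a cell of acoustic path length L are separated by the round-trip time 2L/c, so c follows from the echo spacing, and L can in turn be calibrated against a fluid of known sound speed (the role played above by the IAPWS-95 water data). The sketch below uses placeholder numbers and is only meant to show the two relations.

        def speed_of_sound(path_length_m, echo_spacing_s):
            """Speed of sound from the time between successive echoes (round trip = 2L)."""
            return 2.0 * path_length_m / echo_spacing_s

        def calibrate_path_length(c_reference, echo_spacing_s):
            """Acoustic path length inferred in a reference fluid of known sound speed."""
            return 0.5 * c_reference * echo_spacing_s

        # Placeholder numbers: calibrate in water at ambient conditions, then measure a pressurized sample.
        L = calibrate_path_length(1482.3, 2.70e-5)
        print(f"L = {L*1e3:.2f} mm, c(sample) = {speed_of_sound(L, 2.40e-5):.0f} m/s")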

  11. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis

    PubMed Central

    Smith, William L.; Chadwick, Sean G.; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E.; Aguin, Tina J.; Sobel, Jack D.

    2016-01-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
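
    A schematic of the kind of model described, a logistic regression on quantities of the informative organisms, is sketched below with entirely synthetic data; the feature scales, labels, and the in-sample sensitivity/specificity computed here are invented and are not the study's figures.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(2)
        n = 385
        # Hypothetical log10 quantities for G. vaginalis, A. vaginae, Megasphaera phylotypes 1 and 2.
        X = rng.normal(loc=[4.0, 3.0, 2.0, 2.0], scale=1.5, size=(n, 4))
        y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > 11).astype(int)   # synthetic BV labels

        clf = LogisticRegression().fit(X, y)
        tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
        print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")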

  12. How Accurate Is a Single Cutpoint to Identify High Blood Pressure in Adolescents?

    PubMed

    Brambilla, Paolo; Andreano, Anita; Antolini, Laura; Bedogni, Giorgio; Salvatoni, Alessandro; Iughetti, Lorenzo; Moreno, Luis Alberto; Pietrobelli, Angelo; Genovesi, Simonetta

    2017-01-13

    In 2007 the International Diabetes Federation (IDF) proposed single blood pressure (BP) cutpoints (systolic: ≥130 mm Hg and diastolic: ≥85 mm Hg) for the diagnosis of high blood pressure (HBP) in adolescents. Before this proposal, HBP had been defined as BP at or above the 95th percentile for age, sex, and height percentile (reference standard). In this study, we evaluated the risk for misclassification when using the IDF single-cutpoints criteria. We first applied the IDF criteria to a reconstructed population with the same age, sex, and height distribution as the population used to develop the reference standard. The proposed single cutpoints corresponded to percentiles from the 81.6th to 99.9th for systolic BP and from the 92.9th to 98.9th for diastolic BP in the reconstructed population. Using IDF criteria, there were high false-negative fractions for both systolic and diastolic BP (from 54% to 93%) in 10- to 12-year-olds and a false-positive fraction up to 35% in older subjects. We then applied the IDF criteria to 1,162 overweight/obese adolescents recruited during 1998-2000 from pediatric clinical centers in Milano, Varese, and Modena in Italy and in Zaragoza, Spain. Overall false-negative and false-positive fractions were 22% and 2%, respectively; negative predictive values were especially low for 10- to 12-year-old subjects. The use of IDF's single cutpoints carries a high risk of misclassification, mostly due to false negatives in younger subjects. The benefit of a simplified diagnosis may be outweighed by the risk of undiagnosed HBP.

  13. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++, we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
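
    The BASH-table idea can be illustrated directly: store only the occupied bins, keyed by their integer bin-index tuple, so memory scales with the number of occupied bins rather than with the full d-dimensional grid. The sketch below is a Python illustration (the paper's implementation is in C++), with an arbitrary bin width and random data.

        import numpy as np
        from collections import defaultdict

        def bash_table(points, bin_width):
            """Map each occupied bin (an integer index tuple) to its count; empty bins are never stored."""
            counts = defaultdict(int)
            for idx in np.floor(points / bin_width).astype(int):
                counts[tuple(idx)] += 1
            return counts

        def density(counts, point, bin_width, n_total):
            """Histogram density estimate at a query point from the sparse bin counts."""
            key = tuple(np.floor(point / bin_width).astype(int))
            return counts.get(key, 0) / (n_total * bin_width ** point.size)

        data = np.random.default_rng(3).normal(size=(100_000, 6))    # toy 6-dimensional data set
        table = bash_table(data, bin_width=0.5)
        print(len(table), "occupied bins;", density(table, np.zeros(6), 0.5, len(data)))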

  14. Accurate calculation of the pKa of trifluoroacetic acid using high-level ab initio calculations

    NASA Astrophysics Data System (ADS)

    Namazian, Mansoor; Zakery, Maryam; Noorbala, Mohammad R.; Coote, Michelle L.

    2008-01-01

    The pKa value of trifluoroacetic acid has been successfully calculated using high-level ab initio methods such as G3 and CBS-QB3. Solvation energies have been calculated using the CPCM continuum model of solvation at the HF and B3-LYP levels of theory with various basis sets. Excellent agreement with experiment (to within 0.4 pKa units) was obtained using CPCM solvation energies at the B3-LYP/6-31+G(d) level (or larger) in conjunction with CBS-QB3 or G3 gas-phase energies of trifluoroacetic acid and its anion.
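
    Such calculations typically combine the gas-phase deprotonation free energy with solvation free energies through a thermodynamic cycle, with pKa = dG_aq / (RT ln 10). The sketch below shows that arithmetic only; every number in it, including the proton solvation free energy and the standard-state correction, is an assumed placeholder rather than a value from this paper.

        import math

        R, T = 1.987204e-3, 298.15       # kcal/(mol K), K
        dG_gas     = 317.0               # assumed gas-phase deprotonation free energy of HA (kcal/mol)
        dG_solv_A  = -60.0               # assumed solvation free energy of the anion (kcal/mol)
        dG_solv_HA = -8.0                # assumed solvation free energy of the neutral acid (kcal/mol)
        dG_solv_H  = -265.9              # commonly used proton value, taken here as an assumption (kcal/mol)
        dG_ss      = 1.89                # 1 atm -> 1 mol/L standard-state correction (kcal/mol)

        dG_aq = dG_gas + dG_solv_A + dG_solv_H - dG_solv_HA + dG_ss
        pKa = dG_aq / (R * T * math.log(10))
        print(f"pKa = {pKa:.1f}")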

  15. In vivo investigation of homocysteine metabolism to polyamines by high-resolution accurate mass spectrometry and stable isotope labeling.

    PubMed

    Ruseva, Silviya; Lozanov, Valentin; Markova, Petia; Girchev, Radoslav; Mitev, Vanio

    2014-07-15

    Polyamines are essential polycations, playing important roles in mammalian physiology. Theoretically, the involvement of homocysteine in polyamine synthesis via S-adenosylmethionine is possible; however, to our knowledge, it has not been established experimentally. Here, we propose an original approach for investigation of homocysteine metabolites in an animal model. The method is based on the combination of isotope-labeled homocysteine supplementation and high-resolution accurate mass spectrometry analysis. Structural identity of the isotope-labeled metabolites was confirmed by accurate mass measurements of molecular and fragment ions and comparison of the retention times and tandem mass spectrometry fragmentation patterns. Isotope-labeled methionine, spermidine, and spermine were detected in all investigated plasma and tissue samples. The induction of moderate hyperhomocysteinemia leads to an alteration in polyamine levels in a different manner. The involvement of homocysteine in polyamine synthesis and modulation of polyamine levels could contribute to a better understanding of the mechanisms connected with homocysteine toxicity.

  16. High-Performance Phylogeny Reconstruction

    SciTech Connect

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics, one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
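
    The "13 billion trees for 13 organisms" remark follows from the count of distinct unrooted binary tree topologies on n labeled taxa, (2n-5)!!, which grows super-exponentially and is the reason exhaustive search is hopeless. A quick check in Python:

        def unrooted_binary_trees(n_taxa):
            """Number of distinct unrooted binary tree topologies on n labeled taxa: (2n-5)!!"""
            count = 1
            for k in range(3, 2 * n_taxa - 4, 2):    # product of the odd numbers 3, 5, ..., 2n-5
                count *= k
            return count

        print(unrooted_binary_trees(13))             # 13,749,310,575 -- "over 13 billion" for 13 organisms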

  17. Nucleobase-functionalized graphene nanoribbons for accurate high-speed DNA sequencing

    NASA Astrophysics Data System (ADS)

    Paulechka, Eugene; Wassenaar, Tsjerk A.; Kroenlein, Kenneth; Kazakov, Andrei; Smolyanitsky, Alex

    2016-01-01

    We propose a water-immersed nucleobase-functionalized suspended graphene nanoribbon as an intrinsically selective device for nucleotide detection. The proposed sensing method combines Watson-Crick selective base pairing with graphene's capacity for converting anisotropic lattice strain to changes in an electrical current at the nanoscale. Using detailed atomistic molecular dynamics (MD) simulations, we study sensor operation at ambient conditions. We combine simulated data with theoretical arguments to estimate the levels of measurable electrical signal variation in response to strains and determine that the proposed sensing mechanism shows significant promise for realistic DNA sensing devices without the need for advanced data processing, or highly restrictive operational conditions.

  18. High-throughput automated microfluidic sample preparation for accurate microbial genomics

    PubMed Central

    Kim, Soohong; De Jonghe, Joachim; Kulesa, Anthony B.; Feldman, David; Vatanen, Tommi; Bhattacharyya, Roby P.; Berdy, Brittany; Gomez, James; Nolan, Jill; Epstein, Slava; Blainey, Paul C.

    2017-01-01

    Low-cost shotgun DNA sequencing is transforming the microbial sciences. Sequencing instruments are so effective that sample preparation is now the key limiting factor. Here, we introduce a microfluidic sample preparation platform that integrates the key steps of cells-to-sequence-library sample preparation for up to 96 samples and reduces DNA input requirements 100-fold while maintaining or improving data quality. The general-purpose microarchitecture we demonstrate supports workflows with arbitrary numbers of reaction and clean-up or capture steps. By reducing the sample quantity requirements, we enabled low-input (∼10,000 cells) whole-genome shotgun (WGS) sequencing of Mycobacterium tuberculosis and soil micro-colonies with superior results. We also leveraged the enhanced throughput to sequence ∼400 clinical Pseudomonas aeruginosa libraries and demonstrate excellent single-nucleotide polymorphism detection performance that explained phenotypically observed antibiotic resistance. Fully-integrated lab-on-chip sample preparation overcomes technical barriers to enable broader deployment of genomics across many basic research and translational applications. PMID:28128213

  19. High Performance Torso Cooling Garment

    NASA Technical Reports Server (NTRS)

    Conger, Bruce

    2016-01-01

    The concept proposed in this paper is to improve thermal efficiencies of the liquid cooling and ventilation garment (LCVG) in the torso area, which could facilitate removal of LCVG tubing from the arms and legs, thereby increasing suited crew member mobility. EVA space suit mobility in micro-gravity is challenging, and it becomes even more challenging in the gravity of Mars. By using shaped water tubes that greatly increase the contact area with the skin in the torso region of the body, the heat transfer efficiency can be increased. This increase in efficiency could provide the required liquid cooling via torso tubing only; no arm or leg LCVG tubing would be required. Benefits of this approach include increased crewmember mobility, reduced LCVG mass, enhanced evaporation cooling, increased comfort during Mars EVA tasks, and easing of the overly dry condition in the helmet associated with the Advanced Extravehicular Mobility Unit (EMU) ventilation loop currently under development. This report describes analysis and test activities performed to evaluate the potential improvements to the thermal performance of the LCVG. Analyses evaluated potential tube shapes for improving the thermal performance of the LCVG. The analysis results fed into the selection of flat flow strips to improve thermal contact with the skin of the suited test subject. Testing of small segments was performed to compare thermal performance of the tubing approach of the current LCVG to the flat flow strips proposed as the new concept. Results of the testing are presented along with recommendations for future development of this new concept.

  20. High Performance Torso Cooling Garment

    NASA Technical Reports Server (NTRS)

    Conger, Bruce; Makinen, Janice

    2016-01-01

    The concept proposed in this paper is to improve thermal efficiencies of the liquid cooling and ventilation garment (LCVG) in the torso area, which could facilitate removal of LCVG tubing from the arms and legs, thereby increasing suited crew member mobility. EVA space suit mobility in micro-gravity is challenging, and it becomes even more challenging in the gravity of Mars. By using shaped water tubes that greatly increase the contact area with the skin in the torso region of the body, the heat transfer efficiency can be increased. This increase in efficiency could provide the required liquid cooling via torso tubing only; no arm or leg LCVG tubing would be required. Benefits of this approach include increased crewmember mobility, enhanced evaporation cooling, increased comfort during Mars EVA tasks, and easing of the overly dry condition in the helmet associated with the Advanced Extravehicular Mobility Unit (EMU) ventilation loop currently under development. This report describes analysis and test activities performed to evaluate the potential improvements to the thermal performance of the LCVG. Analyses evaluated potential tube shapes for improving the thermal performance of the LCVG. The analysis results fed into the selection of flat flow strips to improve thermal contact with the skin of the suited test subject. Testing of small segments was performed to compare thermal performance of the tubing approach of the current LCVG to the flat flow strips proposed as the new concept. Results of the testing are presented along with recommendations for future development of this new concept.

  1. Assessment of a high-order accurate Discontinuous Galerkin method for turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Bassi, F.; Botti, L.; Colombo, A.; Crivellini, A.; Franchina, N.; Ghidoni, A.

    2016-04-01

    In this work the capabilities of a high-order Discontinuous Galerkin (DG) method applied to the computation of turbomachinery flows are investigated. The Reynolds averaged Navier-Stokes equations coupled with the two-equation k-ω turbulence model are solved to predict the flow features, either in a fixed or rotating reference frame, to simulate the fluid flow around bodies that operate under an imposed steady rotation. To ensure, by design, the positivity of all thermodynamic variables at a discrete level, a set of primitive variables based on pressure and temperature logarithms is used. The flow fields through the MTU T106A low-pressure turbine cascade and the NASA Rotor 37 axial compressor have been computed up to fourth-order of accuracy and compared to the experimental and numerical data available in the literature.
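
    The positivity argument above can be made explicit with a schematic form of the change of variables; this conveys the general idea only, not the paper's exact variable set.

        % Working with \tilde{p} = \ln p and \tilde{T} = \ln T, any polynomial DG
        % approximation of (\tilde{p}, \tilde{T}) maps back to strictly positive
        % thermodynamic fields:
        \[
          p = e^{\tilde{p}} > 0, \qquad
          T = e^{\tilde{T}} > 0, \qquad
          \rho = \frac{p}{R\,T} = \frac{1}{R}\, e^{\tilde{p} - \tilde{T}} > 0 .
        \]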

  2. Can we accurately quantify nanoparticle associated proteins when constructing high-affinity MRI molecular imaging probes?

    PubMed

    Rimkus, Gabriella; Bremer-Streck, Sibylle; Grüttner, Cordula; Kaiser, Werner Alois; Hilger, Ingrid

    2011-01-01

    Targeted magnetic resonance contrast agents (e.g. iron oxide nanoparticles) have the potential to become highly selective imaging tools. In this context, quantification of the coupled amount of protein is essential for the design of antibody- or antibody fragment-conjugated nanoparticles. Nevertheless, the presence of magnetic iron oxide nanoparticles is still an unsolved problem for this task. The aim of the present work was to clarify whether proteins can be reliably quantified directly in the presence of magnetic iron oxide nanoparticles without the use of fluorescence or radioactivity. Protein quantification via Bradford was not influenced by the presence of magnetic iron oxide nanoparticles (0-17.2 mmol Fe l(-1)). In contrast, the bicinchoninic acid-based assay was distinctly affected by the presence of nanoparticle iron in suspension (0.1-17.2 mmol Fe l(-1)), although the influence was linear. This observation allowed for adequate mathematical corrections with known iron content of a given nanoparticle. The applicability of our approach was demonstrated by the determination of bovine serum albumin (BSA) content coupled to dextran-coated magnetic nanoparticles, which was found with the QuantiPro bicinchoninic acid assay to be 1.5 ± 0.2 µg BSA per 1 mg nanoparticle. Both the Bradford and the bicinchoninic acid protein assays allow for direct quantification of proteins in the presence of iron oxide containing magnetic nanoparticles, without the need for the introduction of radioactivity or fluorescence modules. Thus, in the future, it should be possible to make more precise estimates of the coupled protein amount in high-affinity targeted MRI probes for the identification of specific molecules in living organisms, an aspect which is lacking in corresponding works published so far. Additionally, the present protein coupling procedures can be drastically improved by our proposed protein quantification method.
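
    The linear iron interference described above lends itself to a simple correction. The sketch below is a hypothetical illustration of that idea only; the slope, blank, and calibration values are placeholders, not the authors' published calibration.

        # Hypothetical sketch of a linear iron correction for a BCA-type protein assay.
        # All coefficients are assumed to come from separate calibration curves.
        def corrected_protein_ug_per_ml(absorbance: float,
                                        iron_mmol_per_l: float,
                                        iron_slope: float,
                                        protein_slope: float,
                                        blank: float) -> float:
            iron_contribution = iron_slope * iron_mmol_per_l      # linear interference by nanoparticle iron
            protein_signal = absorbance - blank - iron_contribution
            return protein_signal / protein_slope                 # invert the protein calibration curve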

  3. High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells

    PubMed Central

    Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C

    2015-01-01

    Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1–2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1–2+ MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared effector functions of cytolytic effector CD8+ T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8+ subsets, we demonstrated that high expression of CD26 on CD8+ TRAV1–2+ cells identified, with high specificity and sensitivity, bacteria-reactive MR1-restricted T cells from human blood. CD161hi was also specific for but lacked sensitivity in identifying all bacteria-reactive MR1-restricted T cells, some of which were CD161dim. Using cell surface expression of CD8, TRAV1–2, and CD26hi in the absence of stimulation, we confirm that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis. PMID:25752900

  4. High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells.

    PubMed

    Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C

    2015-07-01

    Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1-2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1-2(+) MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared effector functions of cytolytic effector CD8(+) T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8(+) subsets, we demonstrated that high expression of CD26 on CD8(+) TRAV1-2(+) cells identified, with high specificity and sensitivity, bacteria-reactive MR1-restricted T cells from human blood. CD161(hi) was also specific for but lacked sensitivity in identifying all bacteria-reactive MR1-restricted T cells, some of which were CD161(dim). Using cell surface expression of CD8, TRAV1-2, and CD26(hi) in the absence of stimulation, we confirm that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis.

  5. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
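
    For reference, the Q10 metabolic temperature coefficient calculated in the study above is conventionally defined as follows (standard definition, not restated in the abstract):

        \[
          Q_{10} \;=\; \left( \frac{R_2}{R_1} \right)^{\tfrac{10}{T_2 - T_1}},
        \]
        % where R_1 and R_2 are metabolic rates measured at temperatures T_1 and T_2 (in °C).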

  6. Indoor Air Quality in High Performance Schools

    EPA Pesticide Factsheets

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  7. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as working medium and have no moving parts which makes the thermoacoustic technology a serious alternative to produce mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine is designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine is presented. The engine has no moving parts and is made up of few simple components.

  8. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  9. A new direct absorption measurement for high precision and accurate measurement of water vapor in the UT/LS

    NASA Astrophysics Data System (ADS)

    Sargent, M. R.; Sayres, D. S.; Smith, J. B.; Anderson, J.

    2011-12-01

    Highly accurate and precise water vapor measurements in the upper troposphere and lower stratosphere are critical to understanding the climate feedbacks of water vapor and clouds in that region. However, the continued disagreement among water vapor measurements (~1 - 2 ppmv) is too large to constrain the role of different hydration and dehydration mechanisms operating in the UT/LS, with model validation dependent upon which dataset is chosen. In response to these issues, we present a new instrument for measurement of water vapor in the UT/LS that was flown during the April 2011 MACPEX mission out of Houston, TX. The dual axis instrument combines the heritage and validated accuracy of the Harvard Lyman-alpha instrument with a newly designed direct IR absorption instrument, the Harvard Herriott Hygrometer (HHH). The Lyman-alpha detection axis has flown aboard NASA's WB-57 and ER2 aircraft since 1994, and provides a requisite link between the new HHH instrument and the long history of Harvard water vapor measurements. The instrument utilizes the highly sensitive Lyman-alpha photo-fragment fluorescence detection method; its accuracy has been demonstrated through rigorous laboratory calibrations and in situ diagnostic procedures. The Harvard Herriott Hygrometer employs a fiber coupled near-IR laser with state-of-the-art electronics to measure water vapor via direct absorption in a spherical Herriott cell of 10 cm length. The instrument demonstrated in-flight precision of 0.1 ppmv (1-sec, 1-sigma) at mixing ratios as low as 5 ppmv with accuracies of 10% based on careful laboratory calibrations and in-flight performance. We present a description of the measurement technique along with our methodology for calibration and details of the measurement uncertainties. The simultaneous utilization of radically different measurement techniques in a single duct in the new Harvard Water Vapor (HWV) instrument allows for the constraint of systematic errors inherent in each technique.
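
    As a point of reference for how a direct-absorption channel like HHH converts a measured absorbance into a mixing ratio, the sketch below applies the Beer-Lambert law; the cross section, path length, and conditions are illustrative placeholders, not the instrument's actual calibration.

        import math

        # Hedged Beer-Lambert retrieval: I = I0 * exp(-sigma * N * L), solved for the
        # water vapor number density N and converted to a ppmv mixing ratio.
        def mixing_ratio_ppmv(transmitted: float, incident: float,
                              cross_section_cm2: float, path_cm: float,
                              air_number_density_cm3: float) -> float:
            absorbance = -math.log(transmitted / incident)
            h2o_number_density = absorbance / (cross_section_cm2 * path_cm)
            return 1e6 * h2o_number_density / air_number_density_cm3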

  10. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    NASA Astrophysics Data System (ADS)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 wn), corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 wn, whereas the agreement is on the order of 0.01 - 0.1 wn for higher levels which are closely rivaling the uncertainties on the experimental data. As method development for calculating these various corrections progresses it becomes necessary for the uncertainties on the experimental data to be improved in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which have arisen from low-lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson Phys. Rev. A (2013), 88, 032506 O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár Phil. Trans. R. Soc. A (2012), 370, 5014 J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall J. Chem. Phys. (2013), 139, 164201 A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall J. Molec. Spectrosc. (2015), 317, 71-73.
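
    The combination-difference idea mentioned above can be stated compactly (schematic form only):

        % Two rovibrational lines \nu_1 and \nu_2 that share the same upper level but
        % originate from lower rotational levels J''_1 and J''_2 differ by a purely
        % rotational interval of the lower vibrational state,
        \[
          \nu_1 - \nu_2 \;=\; E''(J''_2) - E''(J''_1),
        \]
        % so MHz-level line positions translate into MHz-level ground-state rotational
        % spacings, including intervals corresponding to the dipole-forbidden rotational
        % transitions of H_3^+.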

  11. Obtaining Accurate Change Detection Results from High-Resolution Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Bryant, N.; Bunch, W.; Fretz, R.; Kim, P.; Logan, T.; Smyth, M.; Zobrist, A.

    2012-01-01

    Multi-date acquisitions of high-resolution imaging satellites (e.g. GeoEye and WorldView), can display local changes of current economic interest. However, their large data volume precludes effective manual analysis, requiring image co-registration followed by image-to-image change detection, preferably with minimal analyst attention. We have recently developed an automatic change detection procedure that minimizes false-positives. The processing steps include: (a) Conversion of both the pre- and post- images to reflectance values (this step is of critical importance when different sensors are involved); reflectance values can be either top-of-atmosphere units or have full aerosol optical depth calibration applied using bi-directional reflectance knowledge. (b) Panchromatic band image-to-image co-registration, using an orthorectified base reference image (e.g. Digital Orthophoto Quadrangle) and a digital elevation model; this step can be improved if a stereo-pair of images have been acquired on one of the image dates. (c) Pan-sharpening of the multispectral data to assure recognition of change objects at the highest resolution. (d) Characterization of multispectral data in the post-image ( i.e. the background) using unsupervised cluster analysis. (e) Band ratio selection in the post-image to separate surface materials of interest from the background. (f) Preparing a pre-to-post change image. (g) Identifying locations where change has occurred involving materials of interest.
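
    Steps (e) and (f) above amount to computing a band ratio in each epoch and differencing the results. The numpy sketch below illustrates the idea; the band indices and threshold are placeholders, not the authors' parameters.

        import numpy as np

        # Illustrative band-ratio change detection on co-registered reflectance cubes
        # with shape (bands, rows, cols).
        def change_mask(pre: np.ndarray, post: np.ndarray,
                        band_a: int, band_b: int, threshold: float) -> np.ndarray:
            eps = 1e-6
            ratio_pre = pre[band_a] / (pre[band_b] + eps)     # ratio highlighting the material of interest
            ratio_post = post[band_a] / (post[band_b] + eps)  # same ratio in the post-image
            change = ratio_post - ratio_pre                   # pre-to-post change image
            return np.abs(change) > threshold                 # candidate change locations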

  12. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data

    PubMed Central

    White, Amelia G.; Cipriani, Patricia G.; Kao, Huey-Ling; Lees, Brandon; Geiger, Davi; Sontag, Eduardo; Gunsalus, Kristin C.; Piano, Fabio

    2011-01-01

    We present a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The object recognition machine consists of four hierarchical layers, each composed of units upon which evaluation functions output a label score, followed by a grouping mechanism that resolves ambiguities in the score by imposing local consistency constraints. Each layer then outputs groups of units, from which the units of the next layer are derived. Using this hierarchical principle, the machine builds up successively more sophisticated representations of the objects to be classified. The algorithm segments large and small objects, decomposes objects into parts, extracts features from these parts, and classifies them by SVM. We are using this system to analyze phenotypic data from C. elegans high-throughput genetic screens, and our system overcomes a previous bottleneck in image analysis by achieving near real-time scoring of image data. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users. PMID:22053146

  13. High-Performance Composite Chocolate

    ERIC Educational Resources Information Center

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  14. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  15. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  16. High performance, high density hydrocarbon fuels

    NASA Technical Reports Server (NTRS)

    Frankenfeld, J. W.; Hastings, T. W.; Lieberman, M.; Taylor, W. F.

    1978-01-01

    The fuels were selected from 77 original candidates on the basis of estimated merit index and cost effectiveness. The ten candidates consisted of 3 pure compounds, 4 chemical plant streams and 3 refinery streams. Critical physical and chemical properties of the candidate fuels were measured, including heat of combustion, density and viscosity as a function of temperature, freezing point, vapor pressure, boiling point, and thermal stability. The best all-around candidate was found to be a chemical plant olefin stream rich in dicyclopentadiene. This material has a high merit index and is available at low cost. Possible problem areas were identified as low temperature flow properties and thermal stability. An economic analysis was carried out to determine the production costs of the top candidates. The chemical plant and refinery streams were all less than 44 cent/kg while the pure compounds were greater than 44 cent/kg. A literature survey was conducted on the state of the art of advanced hydrocarbon fuel technology as applied to high-energy propellants. Several areas for additional research were identified.

  17. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
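
    One hedged way to turn the model-versus-machine discrepancies into buffer distances with the quoted collision probabilities is a one-sided tail estimate; the Gaussian assumption below is purely illustrative, and the authors' actual statistical treatment is not detailed in the abstract.

        import numpy as np
        from scipy.stats import norm

        # Buffer distance exceeded by a discrepancy with probability collision_prob,
        # assuming approximately Gaussian discrepancies (illustrative assumption).
        def safety_buffer_cm(discrepancies_cm: np.ndarray, collision_prob: float) -> float:
            mu = discrepancies_cm.mean()
            sigma = discrepancies_cm.std(ddof=1)
            return float(mu + sigma * norm.ppf(1.0 - collision_prob))

        # e.g. safety_buffer_cm(d, 1e-3) gives a buffer with a 0.1% exceedance probability.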

  18. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  19. Nanocrystalline high performance permanent magnets

    NASA Astrophysics Data System (ADS)

    Gutfleisch, O.; Bollero, A.; Handstein, A.; Hinz, D.; Kirchner, A.; Yan, A.; Müller, K.-H.; Schultz, L.

    2002-04-01

    Recent developments in nanocrystalline rare earth-transition metal magnets are reviewed and emphasis is placed on research work at IFW Dresden. Principal synthesis methods include high energy ball milling, melt spinning and hydrogen assisted methods such as reactive milling and hydrogenation-disproportionation-desorption-recombination. These techniques are applied to NdFeB-, PrFeB- and SmCo-type systems with the aim to produce high remanence magnets with high coercivity. Concepts of maximizing the energy density in nanostructured magnets by either inducing a texture via anisotropic HDDR or hot deformation or enhancing the remanence via magnetic exchange coupling are evaluated.

  20. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  1. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require much space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, and is limited in the number of channels, often with channel crosstalk, and intermittent as the shaft rotates. Non-real-time digital communication is very popular, but complex, with long development time, and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, times the bit resolution, times the update rate, gives a data rate higher than available by older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provides instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to

  2. High-Performance Miniature Hygrometer

    NASA Technical Reports Server (NTRS)

    Van Zandt, Thomas R.; Kaiser, William J.; Kenny, Thomas W.; Crisp, David

    1994-01-01

    Relatively inexpensive hygrometer that occupies a volume of less than 4 in³ measures dewpoints as much as 100 degrees C below ambient temperatures, with accuracy of 0.1 degrees C. Field tests indicate accuracy and repeatability identical to those of state-of-the-art larger dewpoint hygrometers. Operates up to 100 times as fast as older hygrometers, and offers simplicity and small size needed to meet cost and performance requirements of many applications.

  3. CONDENSED MATTER: STRUCTURE, MECHANICAL AND THERMAL PROPERTIES: An Accurate Image Simulation Method for High-Order Laue Zone Effects

    NASA Astrophysics Data System (ADS)

    Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin

    2008-05-01

    A completely different formulation for simulation of the high-order Laue zone (HOLZ) diffractions is derived. This new method is referred to as the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example to calculate the exit wavefunction by the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thicknesses. Moreover, the pure first order Laue zone wavefunction by the TS method can reflect the major potential distribution of the first reciprocal plane.

  4. High performance Vernier racetrack resonators.

    PubMed

    Boeck, Robert; Flueckiger, Jonas; Yun, Han; Chrostowski, Lukas; Jaeger, Nicolas A F

    2012-12-15

    We demonstrate record performance of series-coupled silicon racetrack resonators exhibiting the Vernier effect. Our device has an interstitial peak suppression (IPS) of 25.5 dB, which is 14.5 dB larger than previously reported results. We also demonstrate the relationship between the inter-ring gap distance and the IPS as well as the 3 dB bandwidth (BW) both theoretically and experimentally. Namely, we show that as the inter-ring gap distance increases, the IPS increases and the 3 dB BW decreases.

  5. High-performance solar collector

    NASA Technical Reports Server (NTRS)

    Beekley, D. C.; Mather, G. R., Jr.

    1979-01-01

    Evacuated all-glass concentric tube collector using air or liquid transfer mediums is very efficient at high temperatures. Collector can directly drive existing heating systems that are presently driven by fossil fuel with relative ease of conversion and less expense than installation of complete solar heating systems.

  6. High-performance magnetic gears

    NASA Astrophysics Data System (ADS)

    Atallah, Kais; Calverley, Stuart D.; Howe, David

    2004-05-01

    Magnetic gearing may offer significant advantages such as reduced maintenance and improved reliability, inherent overload protection, and physical isolation between input and output shafts. Despite these advantages, it has received relatively little attention, to date, probably due to the poor torque transmission capability of proposed magnetic gears. The paper describes a magnetic gear topology, which combines a significantly higher torque transmission capability and a very high efficiency.

  7. High performance rotational vibration isolator

    NASA Astrophysics Data System (ADS)

    Sunderland, Andrew; Blair, David G.; Ju, Li; Golden, Howard; Torres, Francis; Chen, Xu; Lockwood, Ray; Wolfgram, Peter

    2013-10-01

    We present a new rotational vibration isolator with an extremely low resonant frequency of 0.055 ± 0.002 Hz. The isolator consists of two concentric spheres separated by a layer of water and joined by very soft silicone springs. The isolator reduces rotation noise at all frequencies above its resonance which is very important for airborne mineral detection. We show that more than 40 dB of isolation is achieved in a helicopter survey for rotations at frequencies between 2 Hz and 20 Hz. Issues affecting performance such as translation to rotation coupling and temperature are discussed. The isolator contains almost no metal, making it particularly suitable for electromagnetic sensors.

  8. High performance rotational vibration isolator.

    PubMed

    Sunderland, Andrew; Blair, David G; Ju, Li; Golden, Howard; Torres, Francis; Chen, Xu; Lockwood, Ray; Wolfgram, Peter

    2013-10-01

    We present a new rotational vibration isolator with an extremely low resonant frequency of 0.055 ± 0.002 Hz. The isolator consists of two concentric spheres separated by a layer of water and joined by very soft silicone springs. The isolator reduces rotation noise at all frequencies above its resonance which is very important for airborne mineral detection. We show that more than 40 dB of isolation is achieved in a helicopter survey for rotations at frequencies between 2 Hz and 20 Hz. Issues affecting performance such as translation to rotation coupling and temperature are discussed. The isolator contains almost no metal, making it particularly suitable for electromagnetic sensors.

  9. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV.
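
    As an illustration of the kind of model described above (not the authors' fitted coefficients), the sketch below regresses BV status on qPCR quantities of the four informative organisms and reports sensitivity and specificity on the same data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # qpcr_log_copies: (n_samples, 4) array of log10 qPCR quantities for
        # G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2;
        # bv_status: 0/1 labels from Amsel's criteria plus Nugent scoring.
        def fit_and_evaluate(qpcr_log_copies: np.ndarray, bv_status: np.ndarray):
            model = LogisticRegression().fit(qpcr_log_copies, bv_status)
            predicted = model.predict(qpcr_log_copies)
            tp = np.sum((predicted == 1) & (bv_status == 1))
            tn = np.sum((predicted == 0) & (bv_status == 0))
            fp = np.sum((predicted == 1) & (bv_status == 0))
            fn = np.sum((predicted == 0) & (bv_status == 1))
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return model, sensitivity, specificity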

  10. Accurate calculation and assignment of highly excited vibrational levels of floppy triatomic molecules in a basis of adiabatic vibrational eigenstates

    NASA Astrophysics Data System (ADS)

    Bačić, Z.

    1991-09-01

    We show that the triatomic adiabatic vibrational eigenstates (AVES) provide a convenient basis for accurate discrete variable representation (DVR) calculation and automatic assignment of highly excited, large amplitude motion vibrational states of floppy triatomic molecules. The DVR-AVES states are eigenvectors of the diagonal (in the stretch states) blocks of the adiabatically rearranged triatomic DVR-ray eigenvector (DVR-REV) Hamiltonian [J. C. Light and Z. Bačić, J. Chem. Phys. 87, 4008 (1987)]. The transformation of the full triatomic vibrational Hamiltonian from the DVR-REV basis to the new DVR-AVES basis is simple, and does not involve calculation of any new matrix elements. No dynamical approximation is made in the energy level calculation by the DVR-AVES approach; its accuracy and efficiency are identical to those of the DVR-REV method. The DVR-AVES states, as the adiabatic approximation to the vibrational states of a triatomic molecule, are labeled by three vibrational quantum numbers. Consequently, accurate large amplitude motion vibrational levels obtained by diagonalizing the full vibrational Hamiltonian transformed to the DVR-AVES basis, can be assigned automatically by the code, with the three quantum numbers of the dominant DVR-AVES state associated with the largest (by modulus) eigenvector element in the DVR-AVES basis. The DVR-AVES approach is used to calculate accurate highly excited localized and delocalized vibrational levels of HCN/HNC and LiCN/LiNC. A significant fraction of localized states of both systems, below and above the isomerization barrier, is assigned automatically, without inspection of wave function plots or separate approximate calculations.

  11. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
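
    The 0.84 area under the ROC curve quoted above is a threshold-free summary of how well a single biomarker value separates the two outcomes. A minimal sketch of that computation (illustrative only, not the study's analysis code):

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # timp2_igfbp7: urinary [TIMP-2]*[IGFBP7] values; aki_within_12h: 0/1 outcome.
        def biomarker_auc(timp2_igfbp7: np.ndarray, aki_within_12h: np.ndarray):
            auc = roc_auc_score(aki_within_12h, timp2_igfbp7)
            fpr, tpr, thresholds = roc_curve(aki_within_12h, timp2_igfbp7)
            return auc, fpr, tpr, thresholds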

  12. Accurate Coregistration between Ultra-High-Resolution Micro-SPECT and Circular Cone-Beam Micro-CT Scanners.

    PubMed

    Ji, Changguo; van der Have, Frans; Gratama van Andel, Hugo; Ramakers, Ruud; Beekman, Freek

    2010-01-01

    Introduction. Spatially registering SPECT with CT makes it possible to anatomically localize SPECT tracers. In this study, an accurate method for the coregistration of ultra-high-resolution SPECT volumes and multiple cone-beam CT volumes is developed and validated, which does not require markers during animal scanning. Methods. Transferable animal beds were developed with an accurate mounting interface. Simple calibration phantoms make it possible to obtain both the spatial transformation matrix for stitching multiple CT scans of different parts of the animal and to register SPECT and CT. The spatial transformation for image coregistration is calculated once using Horn's matching algorithm. Animal images can then be coregistered without using markers. Results. For mouse-sized objects, average coregistration errors between SPECT and CT in X, Y, and Z directions are within 0.04 mm, 0.10 mm, and 0.19 mm, respectively. For rat-sized objects, these numbers are 0.22 mm, 0.14 mm, and 0.28 mm. Average 3D coregistration errors were within 0.24 mm and 0.42 mm for mouse and rat imaging, respectively. Conclusion. Extending the field-of-view of cone-beam CT by stitching is improved by prior registration of the CT volumes. The accuracy of registration between SPECT and CT is typically better than the image resolution of current ultra-high-resolution SPECT.
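
    Horn's closed-form absolute-orientation solution referenced above estimates a rigid transform from matched point pairs. The SVD-based form below produces the same rotation and translation for point correspondences and is shown only as an illustration of the registration step, not as the authors' implementation.

        import numpy as np

        # src, dst: (n, 3) matched calibration-point coordinates in SPECT and CT space.
        # Returns R (3x3) and t (3,) such that dst ~ R @ src + t.
        def rigid_transform(src: np.ndarray, dst: np.ndarray):
            src_c = src - src.mean(axis=0)
            dst_c = dst - dst.mean(axis=0)
            u, _, vt = np.linalg.svd(src_c.T @ dst_c)        # cross-covariance decomposition
            d = np.sign(np.linalg.det(vt.T @ u.T))           # guard against reflections
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            t = dst.mean(axis=0) - r @ src.mean(axis=0)
            return r, t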

  13. High performance light emitting transistors

    NASA Astrophysics Data System (ADS)

    Namdas, Ebinazar B.; Ledochowitsch, Peter; Yuen, Jonathan D.; Moses, Daniel; Heeger, Alan J.

    2008-05-01

    Solution processed light emitting field-effect transistors (LEFETs) with peak brightness exceeding 2500 cd/m2 and external quantum efficiency of 0.15% are demonstrated. The devices utilized a bilayer film comprising a hole transporting polymer, poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene) and a light emitting polymer, Super Yellow, a polyphenylenevinylene derivative. The LEFETs were fabricated in the bottom gate architecture with top-contact Ca/Ag as source/drain electrodes. Light emission was controlled by the gate voltage which controls the hole current. These results indicate that high brightness LEFETs can be made by using the bilayer film (hole transporting layer and a light emitting polymer).

  14. Designing high-performance jobs.

    PubMed

    Simons, Robert

    2005-01-01

    Tales of great strategies derailed by poor execution are all too common. That's because some organizations are designed to fail. For a company to achieve its potential, each employee's supply of organizational resources should equal the demand, and the same balance must apply to every business unit and to the company as a whole. To carry out his or her job, each employee has to know the answers to four basic questions: What resources do I control to accomplish my tasks? What measures will be used to evaluate my performance? Who do I need to interact with and influence to achieve my goals? And how much support can I expect when I reach out to others for help? The questions correspond to what the author calls the four basic spans of a job: control, accountability, influence, and support. Each span can be adjusted so that it is narrow or wide or somewhere in between. If you get the settings right, you can design a job in which a talented individual can successfully execute on your company's strategy. If you get the settings wrong, it will be difficult for an employee to be effective. The first step is to set the span of control to reflect the resources allocated to each position and unit that plays an important role in delivering customer value. This setting, like the others, is determined by how the business creates value for customers and differentiates its products and services. Next, you can dial in different levels of entrepreneurial behavior and creative tension by widening or narrowing spans of accountability and influence. Finally, you must adjust the span of support to ensure that the job or unit will get the informal help it needs.

  15. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda

    PubMed Central

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I.; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne MQ; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy operative diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip with 20,944 microchambers (105 μm width and 50 μm depth) was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2-nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039–2.3438% by linear regression analysis (R2 = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125

  16. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel commonly are employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales, and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation essential for reusing the components is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective high-hydrothermal temperature stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  17. Flow simulation and high performance computing

    NASA Astrophysics Data System (ADS)

    Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Kalro, V.; Litke, M.

    1996-10-01

    Flow simulation is a computational tool for exploring science and technology involving flow applications. It can provide cost-effective alternatives or complements to laboratory experiments, field tests and prototyping. Flow simulation relies heavily on high performance computing (HPC). We view HPC as having two major components. One is advanced algorithms capable of accurately simulating complex, real-world problems. The other is advanced computer hardware and networking with sufficient power, memory and bandwidth to execute those simulations. While HPC enables flow simulation, flow simulation motivates development of novel HPC techniques. This paper focuses on demonstrating that flow simulation has come a long way and is being applied to many complex, real-world problems in different fields of engineering and applied sciences, particularly in aerospace engineering and applied fluid mechanics. Flow simulation has come a long way because HPC has come a long way. This paper also provides a brief review of some of the recently-developed HPC methods and tools that have played a major role in bringing flow simulation to where it is today. A number of 3D flow simulations are presented in this paper as examples of the level of computational capability reached with recent HPC methods and hardware. These examples are: flow around a fighter aircraft, flow around two trains passing in a tunnel, large ram-air parachutes, flow over hydraulic structures, contaminant dispersion in a model subway station, airflow past an automobile, multiple spheres falling in a liquid-filled tube, and dynamics of a paratrooper jumping from a cargo aircraft.

  18. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases

    PubMed Central

    Rota, Jennifer S.; Hickman, Carole J.; Mercader, Sara; Redd, Susan; McNall, Rebecca J.; Williams, Nobia; McGrew, Marcia; Walls, M. Laura; Rota, Paul A.; Bellini, William J.

    2016-01-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386

  19. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.
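
    Frequency stability of this kind is usually quantified with the Allan deviation as a function of averaging time. The sketch below is an illustrative reconstruction of that calculation on synthetic fractional-frequency data, not the original USNO analysis; the noise level and 1 s sampling interval are assumptions made for the example.

```python
# Illustrative Allan-deviation computation on synthetic white-FM noise.
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    (taken here to be one sample per second), averaged over m samples
    (averaging time tau = m seconds)."""
    n = len(y) // m
    y_avg = y[: n * m].reshape(n, m).mean(axis=1)    # frequency averages over tau
    return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2))

rng = np.random.default_rng(1)
y = 1e-13 * rng.standard_normal(100_000)             # hypothetical noise level
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s   sigma_y = {allan_deviation(y, m):.2e}")
```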

  20. Accurate measurement of dispersion data through short and narrow tubes used in very high-pressure liquid chromatography.

    PubMed

    Gritti, Fabrice; McDonald, Thomas; Gilar, Martin

    2015-09-04

    An original method is proposed for the accurate and reproducible measurement of the time-based dispersion properties of short (L < 50 cm) and narrow (rc < 50 μm) tubes at mobile phase flow rates typically used in very high-pressure liquid chromatography (vHPLC). Such tubes are used to minimize sample dispersion in vHPLC; however, their dispersion characteristics cannot be accurately measured at such flow rates because of the dispersion contributions of the vHPLC injector and detector. It is shown that using longer and wider tubes (volumes > 10 μL) enables a reliable measurement of the dispersion data. We confirmed that the dimensionless plot of the reduced dispersion coefficient versus the reduced linear velocity (Peclet number) depends on the aspect ratio, L/rc, of the tube, and unexpectedly also on the diffusion coefficient of the analyte. This dimensionless plot can easily be obtained for a large-volume tube that has the same aspect ratio as the short and narrow tube, and for the same diffusion coefficient. The dispersion data for the small-volume tube are then directly extrapolated from this plot. For instance, it is found that the maximum volume variances of 75 μm × 30.5 cm and 100 μm × 30.5 cm prototype finger-tightened connecting tubes are 0.10 and 0.30 μL², respectively, with an accuracy of a few percent and a precision better than seven percent.

  1. Highly accurate relativistic universal Gaussian basis set: Dirac-Fock-Coulomb calculations for atomic systems up to nobelium

    NASA Astrophysics Data System (ADS)

    Malli, G. L.; Da Silva, A. B. F.; Ishikawa, Yasuyuki

    1994-10-01

    A universal Gaussian basis set is developed that leads to relativistic Dirac-Fock SCF energies of accuracy comparable to that obtained with the accurate numerical finite-difference method (GRASP2 package) [J. Phys. B 25, 1 (1992)]. The Gaussian-type functions of our universal basis set satisfy the relativistic boundary conditions associated with the finite nuclear model for a finite speed of light and conform to the so-called kinetic balance at the nonrelativistic limit. We attribute the exceptionally high accuracy obtained in our calculations to the fact that the representation of the relativistic dynamics of an electron in a spherical ball finite nucleus near the origin in terms of our universal Gaussian basis set is as accurate as that provided by the numerical finite-difference method. Results of the Dirac-Fock-Coulomb energies for a number of atoms up to No (Z=102) and some negative ions are presented and compared with the recent results obtained with the numerical finite-difference method and geometrical Gaussian basis sets by Parpia, Mohanty, and Clementi [J. Phys. B 25, 1 (1992)]. The accuracy of our calculations is estimated to be within a few parts in 10^9 for all the atomic systems studied.

  2. Multi-stencils fast marching methods: a highly accurate solution to the eikonal equation on cartesian domains.

    PubMed

    Hassouna, M Sabry; Farag, A A

    2007-09-01

    A wide range of computer vision applications require an accurate solution of a particular Hamilton-Jacobi (HJ) equation, known as the Eikonal equation. In this paper, we propose an improved version of the fast marching method (FMM) that is highly accurate for both 2D and 3D Cartesian domains. The new method is called multi-stencils fast marching (MSFM); it computes the solution at each grid point by solving the Eikonal equation along several stencils and then picking the solution that satisfies the upwind condition. The stencils are centered at each grid point and cover all of its nearest neighbors. In 2D space, 2 stencils cover the 8-neighbors of the point, while in 3D space, 6 stencils cover its 26-neighbors. For those stencils that are not aligned with the natural coordinate system, the Eikonal equation is derived using directional derivatives and then solved using higher-order finite difference schemes. The accuracy of the proposed method over state-of-the-art FMM-based techniques has been demonstrated through comprehensive numerical experiments.
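
    At the core of any fast-marching scheme is a local upwind solve of the Eikonal equation at one grid point from its already-accepted neighbors; MSFM repeats such a solve along several (possibly rotated) stencils and keeps the value that satisfies the upwind condition. The sketch below shows only the single axis-aligned, first-order version of that local solve, as an illustration rather than the authors' implementation.

```python
# Local upwind update for |grad T| = 1/F at one 2D grid point, given the
# smaller accepted neighbor values along x and y (math.inf if unknown).
import math

def local_update(t_x, t_y, f, h=1.0):
    """Solve (T - t_x)^2 + (T - t_y)^2 = (h/f)^2 with the upwind/causality
    condition; falls back to a one-sided update when only one neighbor helps."""
    a, b = sorted((t_x, t_y))
    rhs = h / f
    if b - a >= rhs:                 # only the smaller neighbor is usable
        return a + rhs
    # Both neighbors contribute: 2T^2 - 2(a+b)T + a^2 + b^2 - rhs^2 = 0
    disc = 2.0 * rhs * rhs - (b - a) ** 2
    return 0.5 * (a + b + math.sqrt(disc))

print(local_update(0.0, 0.0, 1.0))        # diagonal point: ~0.707
print(local_update(0.0, math.inf, 1.0))   # one-sided update: 1.0
```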

  3. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly built with multiple compilers and with special instructions that vary with the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open-source scientific software distribution.
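
    The following toy sketch illustrates the physics/numerics separation in spirit only; it is not the actual Proteus API. The class names, the 1-D diffusion problem, and the finite-difference solver are all invented for illustration. The point is that the same "physics" object could be handed to a different "numerics" object without change.

```python
# Conceptual sketch of decoupled "physics" (PDE data) and "numerics" (solver).
import numpy as np

class LinearDiffusionPhysics:
    """Physics: steady 1-D diffusion  -(k u')' = f  with u(0) = u(1) = 0."""
    def __init__(self, k, f):
        self.k, self.f = k, f

class FiniteDifferenceNumerics:
    """Numerics: second-order central differences on a uniform grid."""
    def __init__(self, n):
        self.n = n

    def solve(self, physics):
        h = 1.0 / (self.n + 1)
        x = np.linspace(h, 1.0 - h, self.n)
        main = 2.0 * physics.k / h**2 * np.ones(self.n)
        off = -physics.k / h**2 * np.ones(self.n - 1)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return x, np.linalg.solve(A, physics.f(x))

physics = LinearDiffusionPhysics(k=1.0, f=lambda x: np.ones_like(x))
x, u = FiniteDifferenceNumerics(n=50).solve(physics)
print(u.max())   # analytic peak for k=1, f=1 is 1/8 = 0.125
```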

  4. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  5. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  6. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  7. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  8. Distribution of high-stability 10 GHz local oscillator over 100 km optical fiber with accurate phase-correction system.

    PubMed

    Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng

    2014-02-15

    We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70  dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π.

  9. Impact of interfacial high-density water layer on accurate estimation of adsorption free energy by Jarzynski's equality

    NASA Astrophysics Data System (ADS)

    Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang

    2014-01-01

    The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free energy profiles obtained by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies arise only when the target molecule moves in the high-density water layer on a material surface. A hybrid scheme was then adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, and points to an approach for fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.
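
    For reference, Jarzynski's equality estimates the free-energy difference from nonequilibrium work values as ΔF = -kT ln⟨exp(-W/kT)⟩. The sketch below applies that estimator to synthetic Gaussian-distributed work values (invented numbers, not the authors' simulations), for which the exact answer ⟨W⟩ - var(W)/(2kT) is known; the slow convergence of the exponential average when work fluctuations are large is exactly the kind of sampling problem behind the discrepancies discussed above.

```python
# Toy Jarzynski estimator on synthetic work values (illustrative only).
import numpy as np

kT = 0.6  # kcal/mol, roughly room temperature
rng = np.random.default_rng(2)
work = rng.normal(loc=5.0, scale=1.5, size=2000)   # hypothetical pulling work values

# Jarzynski estimate: dominated by rare low-work trajectories, hence noisy/biased
# when the work distribution is broad relative to kT.
dF_jarzynski = -kT * np.log(np.mean(np.exp(-work / kT)))

# Exact result for Gaussian-distributed work, used here as a reference.
dF_gaussian = work.mean() - work.var() / (2.0 * kT)

print(f"Jarzynski estimate: {dF_jarzynski:.2f}   Gaussian reference: {dF_gaussian:.2f}")
```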

  10. A highly sensitive and accurate gene expression analysis by sequencing ("bead-seq") for a single cell.

    PubMed

    Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki

    2015-02-15

    Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes.

  11. Accurate high-resolution measurements of 3-D tissue dynamics with registration-enhanced displacement encoded MRI.

    PubMed

    Gomez, Arnold D; Merchant, Samer S; Hsu, Edward W

    2014-06-01

    Displacement fields are important to analyze deformation, which is associated with functional and material tissue properties often used as indicators of health. Magnetic resonance imaging (MRI) techniques like DENSE and image registration methods like Hyperelastic Warping have been used to produce pixel-level deformation fields that are desirable in high-resolution analysis. However, DENSE can be complicated by challenges associated with image phase unwrapping, in particular offset determination. On the other hand, Hyperelastic Warping can be hampered by low local image contrast. The current work proposes a novel approach for measuring tissue displacement with both DENSE and Hyperelastic Warping, incorporating physically accurate displacements obtained by the latter to improve phase characterization in DENSE. The validity of the proposed technique is demonstrated using numerical and physical phantoms, and in vivo small animal cardiac MRI.

  12. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
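
    The deformation-error metric itself is simple to state in code: at every voxel, take the Euclidean norm of the difference between the known (applied) and predicted displacement vectors and summarize the result. The sketch below uses random stand-in fields rather than MIM or Velocity output.

```python
# Per-voxel deformation error = |predicted - known| displacement, then summarized.
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64, 3)                                     # (rows, cols, vector components)
known = rng.normal(size=shape)                          # known (applied) deformation, mm
predicted = known + rng.normal(scale=0.5, size=shape)   # stand-in for an algorithm's estimate

error = np.linalg.norm(predicted - known, axis=-1)      # per-voxel error magnitude
print(f"mean deformation error: {error.mean():.2f} mm, max: {error.max():.2f} mm")
```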

  13. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  14. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  15. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

    High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable value available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  16. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  17. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

    The advent of high spatial resolution satellite imagery like QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm that has been implemented on a high performance computing environment to extract urban regions from high resolution images using texture and spectral features.

  18. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    NASA Astrophysics Data System (ADS)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high-density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase the programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulse) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high-speed, accurate, and repeatable programming of the cells, such as in neural networks and analog data processing.
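
    A closed-loop, gate-controlled programming scheme of this general kind can be sketched as a program-and-verify loop: read the conductance, compare with the target, and apply another gate-limited SET pulse (or a small RESET) until the target is reached. The code below is a toy model with invented device behavior and parameters, meant only to illustrate the control flow, not the published algorithm.

```python
# Hedged sketch of a program-and-verify loop (toy 1T1R device model).
def program_cell(read_conductance, apply_set_pulse, target_g, tol=0.005,
                 v_gate=0.9, v_gate_step=0.02, max_pulses=64):
    """Pulse until the read conductance is within tol (relative) of target_g,
    nudging the transistor gate voltage up when the cell under-programs."""
    for pulse in range(max_pulses):
        g = read_conductance()
        if abs(g - target_g) / target_g <= tol:
            return pulse, g                       # converged
        if g < target_g:
            apply_set_pulse(v_gate)               # SET: gate voltage limits the step
            v_gate += v_gate_step                 # adaptive gate step
        else:
            apply_set_pulse(-1.2)                 # crude RESET-like pulse to back off
    return max_pulses, read_conductance()

# Toy device standing in for a 1T1R cell, just to make the sketch runnable.
state = {"g": 50e-6}
def read_conductance():
    return state["g"]
def apply_set_pulse(v):
    state["g"] += 8e-6 * v if v > 0 else -5e-6    # invented, monotone response

pulses, g = program_cell(read_conductance, apply_set_pulse, target_g=120e-6)
print(f"reached {g * 1e6:.1f} uS in {pulses} pulses")
```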

  19. LANL High-Performance Data System (HPDS)

    NASA Technical Reports Server (NTRS)

    Collins, M. William; Cook, Danny; Jones, Lynn; Kluegel, Lynn; Ramsey, Cheryl

    1993-01-01

    The Los Alamos High-Performance Data System (HPDS) is being developed to meet the very large data storage and data handling requirements of a high-performance computing environment. The HPDS will consist of fast, large-capacity storage devices that are directly connected to a high-speed network and managed by software distributed in workstations. The HPDS model, the HPDS implementation approach, and experiences with a prototype disk array storage system are presented.

  20. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  1. Development of a Ground-Based Differential Absorption Lidar for High Accurate Measurements of Vertical CO2 Concentration Profiles

    NASA Astrophysics Data System (ADS)

    Nagasawa, Chikao; Abo, Makoto; Shibata, Yasukuni; Nagai, Tomohiro; Nakazato, Masahisa; Sakai, Tetsu; Tsukamoto, Makoto; Sakaizawa, Daisuku

    2010-05-01

    Highly accurate vertical carbon dioxide (CO2) profiles are highly desirable in the inverse method to improve quantification and understanding of the global sinks and sources of CO2, and also of global climate change. We have developed a ground-based 1.6 μm differential absorption lidar (DIAL) to achieve highly accurate measurements of vertical CO2 profiles in the atmosphere. The DIAL system consists of an optical parametric oscillator (OPO) transmitter and a direct-detection receiving system that includes a near-infrared photomultiplier tube operating in photon counting mode. The first DIAL measurements successfully retrieved the vertical CO2 profile up to 7 km altitude with an error of less than 1.0% for an integration time of 50 minutes and a vertical resolution of 150 m. We are developing the next-generation 1.6 μm DIAL that can simultaneously measure the vertical CO2 concentration, temperature and pressure profiles in the atmosphere. The OPO output energy is 20 mJ at a 500 Hz repetition rate, and a 600 mm diameter telescope is employed for this measurement. A very narrow interference filter (0.5 nm FWHM) is used for daytime measurement. Because the spectra of molecular absorption lines are fundamentally influenced by the temperature and pressure of the atmosphere, it is important to measure them simultaneously so that better accuracy of the DIAL measurement may be realized. Moreover, the retrieved CO2 concentration will be improved remarkably by iteratively processing the CO2 concentration, temperature and pressure measured by the DIAL techniques. This work was financially supported by the Japan EOS Promotion Program of MEXT Japan and the System Development Program for Advanced Measurement and Analysis of the JST. Reference: D. Sakaizawa, C. Nagasawa, T. Nagai, M. Abo, Y. Shibata, H. Nagai, M. Nakazato, and T. Sakai, Development of a 1.6μm differential absorption lidar with a quasi-phase-matching optical parametric oscillator and
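
    The DIAL retrieval underlying such measurements uses the ratio of on-line and off-line return powers in adjacent range bins: N(R) = ln[P_off(R+ΔR) P_on(R) / (P_off(R) P_on(R+ΔR))] / (2 Δσ ΔR). The sketch below evaluates that textbook relation on synthetic returns; the differential cross-section and density are invented numbers, and this is not the authors' processing chain.

```python
# Standard two-wavelength DIAL number-density retrieval on synthetic returns.
import numpy as np

def dial_number_density(p_on, p_off, delta_sigma, delta_r):
    """N(R) from range-binned on/off-line return powers; delta_sigma is the
    differential absorption cross-section (m^2), delta_r the bin width (m)."""
    ratio = (p_off[1:] * p_on[:-1]) / (p_off[:-1] * p_on[1:])
    return np.log(ratio) / (2.0 * delta_sigma * delta_r)

delta_r = 150.0        # m, vertical resolution quoted in the abstract
delta_sigma = 1e-26    # m^2, hypothetical differential cross-section
n_true = 1.0e22        # m^-3, assumed constant density for the test

r = np.arange(0.0, 7000.0 + delta_r, delta_r)
p_off = 1.0 / np.maximum(r, delta_r) ** 2                 # geometric 1/R^2 falloff
p_on = p_off * np.exp(-2.0 * delta_sigma * n_true * r)    # extra on-line absorption

print(dial_number_density(p_on, p_off, delta_sigma, delta_r)[:3])  # ~1e22 each
```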

  2. Development of Ground-Based DIAL Techniques for High Accurate Measurements of CO2 Concentration Profiles in the Atmosphere

    NASA Astrophysics Data System (ADS)

    Nagasawa, C.; Abo, M.; Shibata, Y.; Nagai, T.; Nakazato, M.; Sakai, T.; Tsukamoto, M.; Sakaizawa, D.

    2009-12-01

    Highly accurate vertical carbon dioxide (CO2) profiles are highly desirable in the inverse method to improve quantification and understanding of the global sinks and sources of CO2, and also of global climate change. We have developed a ground-based 1.6 μm differential absorption lidar (DIAL) to achieve highly accurate measurements of vertical CO2 profiles in the atmosphere. The DIAL system consists of an optical parametric oscillator (OPO) transmitter and a direct-detection receiving system that includes a near-infrared photomultiplier tube operating in photon counting mode (Fig. 1). The first DIAL measurements successfully retrieved the vertical CO2 profile up to 7 km altitude with an error of less than 1.0% for an integration time of 50 minutes and a vertical resolution of 150 m. We are developing the next-generation 1.6 μm DIAL that can simultaneously measure the vertical CO2 concentration, temperature and pressure profiles in the atmosphere. The characteristics of the 1.6 μm DIALs of the primitive and next generations are shown in Table 1. Because the spectra of molecular absorption lines are fundamentally influenced by the temperature and pressure of the atmosphere, it is important to measure them simultaneously so that better accuracy of the DIAL measurement may be realized. Moreover, the retrieved CO2 concentration will be improved remarkably by iteratively processing the CO2 concentration, temperature and pressure measured by the DIAL techniques. This work was financially supported by the Japan EOS Promotion Program of MEXT Japan and the System Development Program for Advanced Measurement and Analysis of the JST. Reference: D. Sakaizawa et al., Development of a 1.6μm differential absorption lidar with a quasi-phase-matching optical parametric oscillator and photon-counting detector for the vertical CO2 profile, Applied Optics, Vol. 48, No. 4, pp. 748-757, 2009. Fig. 1: Experimental setup of the 1.6 μm CO2 DIAL.

  3. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  4. Accurate and High-Coverage Immune Repertoire Sequencing Reveals Characteristics of Antibody Repertoire Diversification in Young Children with Malaria

    NASA Astrophysics Data System (ADS)

    Jiang, Ning

    Accurately measuring immune repertoire sequence composition, diversity, and abundance is important in studying the repertoire response in infections, vaccinations, and cancer immunology. Using molecular identifiers (MIDs) to tag mRNA molecules is an effective method for improving the accuracy of immune repertoire sequencing (IR-seq). However, it is still difficult to apply IR-seq to small amounts of clinical sample material while achieving high coverage of the repertoire diversity. This is especially challenging in studying infections and vaccinations, where B cell subpopulations with fewer cells, such as memory B cells or plasmablasts, are often of greatest interest for studying somatic mutation patterns and diversity changes. Here, we describe an IR-seq approach based on the use of MIDs in combination with a clustering method that can reveal more than 80% of the antibody diversity in a sample and can be applied to as few as 1,000 B cells. We applied this to study the antibody repertoires of young children before and during an acute malaria infection. We discovered unexpectedly high levels of somatic hypermutation (SHM) in infants and revealed characteristics of antibody repertoire development in young children that would have a profound impact on immunization in children.
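
    The central trick of MID (UMI) tagging is that all reads carrying the same identifier originate from one mRNA molecule, so grouping reads by identifier and taking a per-position consensus suppresses most sequencing errors before diversity is counted. The sketch below illustrates that grouping step on invented toy reads; it is not the authors' clustering pipeline.

```python
# Toy MID/UMI grouping and per-position consensus (illustrative only).
from collections import Counter, defaultdict

reads = [  # (molecular identifier, read sequence) -- invented data
    ("AAGT", "ACGTAC"), ("AAGT", "ACGTAC"), ("AAGT", "ACGAAC"),  # one read carries an error
    ("CCTA", "TTGCAG"), ("CCTA", "TTGCAG"),
]

groups = defaultdict(list)
for mid, seq in reads:
    groups[mid].append(seq)

def consensus(seqs):
    """Majority base at each position across reads sharing the same MID."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

molecules = {mid: consensus(seqs) for mid, seqs in groups.items()}
print(molecules)                        # {'AAGT': 'ACGTAC', 'CCTA': 'TTGCAG'}
print(len(molecules), "distinct molecules recovered")
```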

  5. Highly Accurate Semi-Empirical IR Line Lists of Asymmetric SO2 Isotopologues: SO18O and SO17O

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2015-12-01

    Atmosphere models and simulations of Venus, Mars, and exoplanets will greatly benefit from complete and accurate infrared spectral data for important molecules such as SO2 and CO2. Currently, high-resolution spectroscopic data for SO2 are very limited at 296 K and mainly cover the primary isotopologue 626, which cannot effectively support observed data analysis and simulations. Recently we published a semi-empirically refined potential energy surface, denoted Ames-1, and Ames-296K IR line lists for SO2 626 and a few symmetric isotopologues including 646, 636, 666 and 828. The accuracy of the line positions is around 0.01-0.03 cm-1 for most transitions. For intensities, most deviations are less than 5-15%. We have now carried out new potential energy surface refinements that include the latest experimental data, including those of the isotopologues. On the newly fitted surface, we have computed, for the first time, 296 K line lists for the two most abundant asymmetric isotopologues, SO2 628 and SO2 627. We will present the spectral simulations of SO2 628 and SO2 627 and compare them with the latest high-resolution experimental spectroscopy of SO2 628. A composite "natural" line list at 296 K with terrestrial abundances is also available. These line lists will be available to download at http://huang.seti.org.

  6. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

    Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.

  7. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task

    PubMed Central

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H.; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; and (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly through improved consistency and repeatability. However, the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly

  8. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High-temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity and low-activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of a high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advances in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  9. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    The convergence of computer systems and communication technologies is moving toward switched high-performance modular system architectures based on high-speed switched interconnections. Multi-core processors are becoming a more promising route to high-performance systems, and traditional parallel bus system architectures (VME/VXI, cPCI/PXI) are moving to new, higher-speed serial switched interconnections. The fundamentals of system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

  10. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain, one's competitive position in the product markets one serves, and the most likely drivers and direction of future market forces. The author discusses all three points and then describes how to measure the company's performance. Becoming a true high performance refiner often involves redesigning the organization as well as the business processes, and the author discusses such redesign. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  11. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose: Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods: We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations (IRDs), optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, and reproducibility, as well as the clinical sensitivity, of the test. Results: The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of the targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion: The data suggest that, based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400

  12. Rapidly Reconfigurable High Performance Computing Cluster

    DTIC Science & Technology

    2005-07-01

    [No abstract available; the indexed text is table-of-contents front matter covering: Background and Objectives; High Performance Computing Trends; Georgia Tech Activity in HPEC.]

  13. High Performance Split-Stirling Cooler Program

    DTIC Science & Technology

    1982-09-01

    [No abstract available; the indexed text is report front matter identifying this as the final technical report (September 1979 - September 1982) of the High Performance Split-Stirling Cooler Program, prepared for the Night Vision and Electro-Optics Laboratories.]

  14. Architecture Analysis of High Performance Capacitors (POSTPRINT)

    DTIC Science & Technology

    2009-07-01

    The work includes the measurement of heat dissipated from a recently developed fluorenyl polyester (FPE) capacitor under AC excitation. [Remaining indexed text is report front matter: report number AFRL-RZ-WP-TP-2010-2100; authors Hiroyuki Kosai and Tyler Bixel, UES, Inc.]

  15. Quantitation of Insulin-Like Growth Factor 1 in Serum by Liquid Chromatography High Resolution Accurate-Mass Mass Spectrometry.

    PubMed

    Ketha, Hemamalini; Singh, Ravinder J

    2016-01-01

    Insulin-like growth factor 1 (IGF-1) is a 70 amino acid peptide hormone which acts as the principal mediator of the effects of growth hormone (GH). Due to the wide variability in circulating concentrations of GH, IGF-1 quantitation is the first step in the diagnosis of GH excess or deficiency. The majority (>95%) of IGF-1 circulates as a ternary complex along with its principal binding protein, insulin-like growth factor binding protein 3 (IGFBP-3), and the acid labile subunit. The assay design approach for IGF-1 quantitation has to include a step to dissociate IGF-1 from this ternary complex. Several commercial assays employ a buffer containing acidified ethanol to achieve this. Despite several modifications, commercially available immunoassays have been shown to suffer from interference from IGFBP-3. Additionally, inter-method comparison between IGF-1 immunoassays has been shown to be suboptimal. Mass spectrometry has been utilized for quantitation of IGF-1. In this chapter, a liquid chromatography high resolution accurate-mass mass spectrometry (LC-HRAMS) based method for IGF-1 quantitation is described.

  16. Nuclear Quantum Effects in Liquid Water: A Highly Accurate ab initio Path-Integral Molecular Dynamics Study

    NASA Astrophysics Data System (ADS)

    Distasio, Robert A., Jr.; Santra, Biswajit; Ko, Hsin-Yu; Car, Roberto

    2014-03-01

    In this work, we report highly accurate ab initio path-integral molecular dynamics (AI-PIMD) simulations on liquid water at ambient conditions utilizing the recently developed PBE0+vdW(SC) exchange-correlation functional, which accounts for exact exchange and a self-consistent pairwise treatment of van der Waals (vdW) or dispersion interactions, combined with nuclear quantum effects (via the colored-noise generalized Langevin equation). The importance of each of these effects in the theoretical prediction of the structure of liquid water will be demonstrated by a detailed comparative analysis of the predicted and experimental oxygen-oxygen (O-O), oxygen-hydrogen (O-H), and hydrogen-hydrogen (H-H) radial distribution functions as well as other structural properties. In addition, we will discuss the theoretically obtained proton momentum distribution, computed using the recently developed Feynman path formulation, in light of the experimental deep inelastic neutron scattering (DINS) measurements. DOE: DE-SC0008626, DOE: DE-SC0005180.

  17. A portable analog lock-in amplifier for accurate phase measurement and application in high-precision optical oxygen concentration detection

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Wei, Wei; Liu, Yuanyuan; Qin, Zengguang

    2017-03-01

    A portable analog lock-in amplifier capable of accurate phase detection is proposed in this paper. The proposed lock-in amplifier, which uses dual-channel orthogonal signals as the references to build an xy coordinate system, can detect the relative phase between the input and the x-axis based on trigonometric functions. The sensitivity of the phase measurement reaches 0.014 degree, and a detection precision of 0.1 degree is achieved. The performance of the lock-in amplifier is also verified in high-precision optical oxygen concentration detection. Experimental results reveal that the portable analog lock-in amplifier is accurate for phase detection applications. In the oxygen sensing experiments, a 0.058% oxygen concentration resulted in a 0.1 degree phase shift that was precisely detected by the lock-in amplifier. In addition, the lock-in amplifier is small and economical compared with commercial lock-in equipment, so it can be easily integrated into many portable devices for industrial applications.
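
    The dual-reference phase measurement described above can be illustrated digitally: mix the input with two orthogonal references, low-pass the products, and take the arctangent of the quadrature over the in-phase component. The sketch below does this with NumPy rather than analog hardware; the sample rate, reference frequency, noise level, and phase are invented for the example.

```python
# Digital illustration of dual-reference (quadrature) lock-in phase detection.
import numpy as np

fs, f0 = 100_000.0, 1_000.0            # sample rate and reference frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
true_phase = np.deg2rad(37.3)

rng = np.random.default_rng(4)
signal = 0.5 * np.sin(2 * np.pi * f0 * t + true_phase) + 0.2 * rng.standard_normal(t.size)

ref_x = np.sin(2 * np.pi * f0 * t)     # x-axis reference
ref_y = np.cos(2 * np.pi * f0 * t)     # orthogonal (y-axis) reference

x = np.mean(signal * ref_x)            # in-phase product, low-passed by averaging
y = np.mean(signal * ref_y)            # quadrature product

print(f"recovered phase: {np.degrees(np.arctan2(y, x)):.2f} deg (true 37.30 deg)")
```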

  18. A portable analog lock-in amplifier for accurate phase measurement and application in high-precision optical oxygen concentration detection

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Wei, Wei; Liu, Yuanyuan; Qin, Zengguang

    2016-10-01

    A portable analog lock-in amplifier capable of accurate phase detection is proposed in this paper. The proposed lock-in amplifier, which uses dual-channel orthogonal signals as the references to build an xy coordinate system, can detect the relative phase between the input and the x-axis based on trigonometric functions. The sensitivity of the phase measurement reaches 0.014 degree, and a detection precision of 0.1 degree is achieved. The performance of the lock-in amplifier is also verified in high-precision optical oxygen concentration detection. Experimental results reveal that the portable analog lock-in amplifier is accurate for phase detection applications. In the oxygen sensing experiments, a 0.058% oxygen concentration resulted in a 0.1 degree phase shift that was precisely detected by the lock-in amplifier. In addition, the lock-in amplifier is small and economical compared with commercial lock-in equipment, so it can be easily integrated into many portable devices for industrial applications.

  19. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background: Expression of programmed death-ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may offer a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods: To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results: Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The number of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells, at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions: Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria

  20. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  1. A standardized framework for accurate, high-throughput genotyping of recombinant and non-recombinant viral sequences.

    PubMed

    Alcantara, Luiz Carlos Junior; Cassol, Sharon; Libin, Pieter; Deforche, Koen; Pybus, Oliver G; Van Ranst, Marc; Galvão-Castro, Bernardo; Vandamme, Anne-Mieke; de Oliveira, Tulio

    2009-07-01

    Human immunodeficiency virus type-1 (HIV-1), hepatitis B and C, and other rapidly evolving viruses are characterized by extremely high levels of genetic diversity. To facilitate diagnosis and the development of prevention and treatment strategies that efficiently target the diversity of these viruses, and of other pathogens such as human T-lymphotropic virus type-1 (HTLV-1), human herpes virus type-8 (HHV8) and human papillomavirus (HPV), we developed a rapid high-throughput genotyping system. The method involves the alignment of a query sequence with a carefully selected set of pre-defined reference strains, followed by phylogenetic analysis of multiple overlapping segments of the alignment using a sliding window. Each segment of the query sequence is assigned the genotype and sub-genotype of the reference strain with the highest bootstrap (>70%) and bootscanning (>90%) scores. Results from all windows are combined and displayed graphically using color-coded genotypes. The new Virus-Genotyping Tools provide accurate classification of recombinant and non-recombinant viruses and are currently being assessed for their diagnostic utility. They have been incorporated into several HIV drug resistance algorithms, including the Stanford (http://hivdb.stanford.edu) database and two European databases (http://www.umcutrecht.nl/subsite/spread-programme/ and http://www.hivrdb.org.uk/), and have been successfully used to genotype a large number of sequences in these and other databases. The tools are a PHP/JAVA web application and are freely accessible on a number of servers including: http://bioafrica.mrc.ac.za/rega-genotype/html/, http://lasp.cpqgm.fiocruz.br/virus-genotype/html/, http://jose.med.kuleuven.be/genotypetool/html/.
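
    The windowed assignment idea can be sketched in a few lines, with simple sequence identity standing in for the phylogenetic bootstrap and bootscanning scores the real tool computes. The sequences, window size, and acceptance threshold below are all invented for illustration; this is not the published tool's logic.

```python
# Toy sliding-window genotype assignment by best per-window sequence identity.
refs = {
    "genotype_A": "ACGTACGTACGTACGTACGTACGTACGTACGT",
    "genotype_B": "ACGTTTTTACGTTTTTACGTTTTTACGTTTTT",
}
query = "ACGTACGTACGTACGTACGTTTTTACGTTTTT"   # recombinant-like: A-segment then B-segment

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

window, step, support = 8, 4, 0.90            # window size, step, acceptance threshold
for start in range(0, len(query) - window + 1, step):
    segment = query[start:start + window]
    best, score = max(((name, identity(segment, ref[start:start + window]))
                       for name, ref in refs.items()), key=lambda kv: kv[1])
    call = best if score >= support else "unassigned"
    print(f"{start:3d}-{start + window:3d}: {call} (identity {score:.2f})")
```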

  2. Highly accurate isotope composition measurements by a miniature laser ablation mass spectrometer designed for in situ investigations on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Riedo, A.; Meyer, S.; Heredia, B.; Neuland, M. B.; Bieler, A.; Tulej, M.; Leya, I.; Iakovleva, M.; Mezger, K.; Wurz, P.

    2013-10-01

    An experimental procedure for precise and accurate measurements of isotope abundances with a miniature laser ablation mass spectrometer for space research is described. The measurements were conducted on different untreated NIST standards and galena samples by applying pulsed UV laser radiation (266 nm, 3 ns, 20 Hz) for ablation, atomisation, and ionisation of the sample material. Mass spectra of the released ions are measured by a reflectron-type time-of-flight mass analyser. A computer-controlled performance optimiser was used to operate the system at maximum ion transmission and mass resolution. Under optimal experimental conditions, the best relative accuracy and precision achieved for Pb isotope compositions are at the per mill level and were obtained over a range of applied laser irradiances and a defined number of accumulated spectra. Similar relative accuracy and precision were achieved in the study of Pb isotope compositions in terrestrial galena samples; the results for the galena samples are similar to those obtained with a thermal ionisation mass spectrometer (TIMS). Studies of the isotope composition of other elements also yielded relative accuracy and precision at the per mill level, with characteristic instrument parameters for each element. The relative accuracy and precision of the measurements degrade with lower element/isotope concentrations in a sample; for elements with abundances below 100 ppm these values drop to the percent level. Depending on the isotopic abundances of Pb in minerals, 207Pb/206Pb ages with an accuracy in the range of tens of millions of years can be achieved.

  3. Identification of mycobacteria by high-performance liquid chromatography.

    PubMed Central

    Butler, W R; Jost, K C; Kilburn, J O

    1991-01-01

    Mycolic acids extracted from saponified mycobacterial cells were examined as p-bromophenacyl esters by high-performance liquid chromatography (HPLC). Standard HPLC patterns were developed for species of Mycobacterium by examination of strains from culture collections and other well-characterized isolates. Relative retention times of peaks and peak height comparisons were used to develop a differentiation scheme that was 98% accurate for the species examined. A rapid, cost-effective HPLC method which offers an alternative approach to the identification of mycobacteria is described. PMID:1774251

  4. Evaluation of high-definition television for remote task performance

    SciTech Connect

    Draper, J.V.; Fujita, Y.; Herndon, J.N.

    1987-04-01

    High-definition television (HDTV) transmits a video image with more than twice the number (1125 for HDTV to 525 for standard-resolution TV) of horizontal scan lines that standard-resolution TV provides. The improvement in picture quality (compared to standard-resolution TV) that the extra scan lines provide is impressive. Objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate reproduction of shading and color patterns than do those in the standard-resolution TV picture. Because the TV viewing system is a key component for teleoperator performance, an improvement in TV picture quality could mean an improvement in the speed and accuracy with which teleoperators perform tasks. This report describes three experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. The performance of HDTV was compared to that of standard-resolution, monochromatic TV and standard-resolution, stereoscopic, monochromatic TV in the context of judgment of depth in a televised scene, visual inspection of an object, and performance of a typical remote handling task. The results of the three experiments show that in some areas HDTV can lead to improvement in teleoperator performance. Observers inspecting a small object for a flaw were more accurate with HDTV than with either of the standard-resolution systems. High resolution is critical for detection of small-scale flaws of the type in the experiment (a scratch on a glass bottle). These experiments provided an evaluation of HDTV television for use in tasks that must be routinely performed to remotely maintain a nuclear fuel reprocessing facility. 5 refs., 7 figs., 9 tabs.

  5. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  6. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  7. Teacher Accountability at High Performing Charter Schools

    ERIC Educational Resources Information Center

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  8. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.

  9. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10^10 erg cm^-2 s^-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  10. Rachis morphology cannot accurately predict the mechanical performance of primary feathers in extant (and therefore fossil) feathered flyers

    PubMed Central

    Garner, Terence; Cooper, Glen; Nudds, Robert

    2017-01-01

    It was previously suggested that the flight ability of feathered fossils could be hypothesized from the diameter of their feather rachises. Central to the idea is the unvalidated assumption that the strength of a primary flight feather (i.e. its material and structural properties) may be consistently calculated from the external diameter of the feather rachis, which is the only dimension that is likely to relate to structural properties available from fossils. Here, using three-point bending tests, the relationship between feather structural properties (maximum bending moment, M_max, and Young's modulus, E_bend) and external morphological parameters (primary feather rachis length, diameter and second moment of area at the calamus) in 180 primary feathers from four species of bird of differing flight style was investigated. Intraspecifically, both E_bend and M_max were strongly correlated with morphology, decreasing and increasing, respectively, with all three morphological measures. Without accounting for species, however, external morphology was a poor predictor of rachis structural properties, meaning that precise determination of aerial performance in extinct, feathered species from external rachis dimensions alone is not possible. Even if it were possible to calculate the second moment of area of the rachis, our data suggest that feather strength could still not be reliably estimated. PMID:28386445
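
    For context, the two structural quantities reported above are the standard outputs of a three-point bending test on a simply supported beam: the maximum bending moment is M_max = F_max·L/4 for a central load over span L, and the bending modulus follows from E_bend = k·L^3/(48·I), where k is the slope of the load-deflection curve and I the second moment of area. The sketch below applies those textbook relations; the hollow circular cross-section and every numerical value are illustrative assumptions, not data from the paper.

      # Standard three-point bending relations (textbook beam theory), with a
      # hollow circular cross-section standing in for the rachis; all numbers
      # are hypothetical and only illustrate the calculation.
      import math

      def second_moment_tube(d_outer, wall):
          """Second moment of area I of a hollow circular section (m^4)."""
          d_inner = d_outer - 2.0 * wall
          return math.pi * (d_outer**4 - d_inner**4) / 64.0

      def max_bending_moment(f_max, span):
          """M_max = F_max * L / 4 for a central point load over span L (N*m)."""
          return f_max * span / 4.0

      def bending_modulus(stiffness, span, i_area):
          """E_bend = k * L^3 / (48 * I), with k = dF/d(deflection) in N/m."""
          return stiffness * span**3 / (48.0 * i_area)

      if __name__ == "__main__":
          I = second_moment_tube(d_outer=3.0e-3, wall=0.3e-3)   # hypothetical rachis
          print(f"M_max  = {max_bending_moment(f_max=2.5, span=0.04):.4e} N*m")
          print(f"E_bend = {bending_modulus(stiffness=4500.0, span=0.04, i_area=I) / 1e9:.2f} GPa")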

  11. Appraisal of Artificial Screening Techniques of Tomato to Accurately Reflect Field Performance of the Late Blight Resistance

    PubMed Central

    Nowakowska, Marzena; Nowicki, Marcin; Kłosińska, Urszula; Maciorowski, Robert; Kozik, Elżbieta U.

    2014-01-01

    Late blight (LB) caused by the oomycete Phytophthora infestans continues to thwart global tomato production, while only a few resistant cultivars have been introduced locally. In order to gain from the released tomato germplasm with LB resistance, we compared the 5-year field performance of LB resistance in several tomato cultigens with the results of controlled-conditions testing (i.e., detached leaflet/leaf, whole plant). In the case of these artificial screening techniques, the effects of plant age and inoculum concentration were additionally considered. In the field trials, LA 1033, L 3707, L 3708 displayed the highest LB resistance, and could be used for cultivar development under Polish conditions. Of the three methods using controlled conditions, the detached leaf and the whole plant tests had the highest correlation with the field experiments. The plant age effect on LB resistance in tomato reported here, irrespective of the cultigen tested or inoculum concentration used, makes it important to standardize the test parameters when screening for resistance. Our results help show why other reports disagree on LB resistance in tomato. PMID:25279467

  12. Performing accurate joint kinematics from 3-D in vivo image sequences through consensus-driven simultaneous registration.

    PubMed

    Jacq, Jean-José; Cresson, Thierry; Burdin, Valérie; Roux, Christian

    2008-05-01

    This paper addresses the problem of the robust registration of multiple observations of the same object. Such a problem typically arises whenever it becomes necessary to recover the trajectory of an evolving object observed through standard 3-D medical imaging techniques. The instances of the tracked object are assumed to be variously truncated, locally subject to morphological evolutions throughout the sequence, and imprinted with significant segmentation errors as well as significant noise perturbations. The algorithm operates through the robust and simultaneous registration of all surface instances of a given object through median consensus. This operation consists of two interwoven processes set up to work in close collaboration. The first one progressively generates a median and implicit shape computed with respect to current estimations of the registration transformations, while the other refines these transformations with respect to the current estimation of their median shape. When compared with standard robust techniques, tests reveal significant improvements, both in robustness and precision. The algorithm is based on widely-used techniques, and proves highly effective while offering great flexibility of utilization.

  13. High-performance computing and communications

    SciTech Connect

    Stevens, R.

    1993-11-01

    This presentation has two parts. The first part discusses the US High-Performance Computing and Communications program -- its goals, funding, process, revisions, and research in high-performance computing systems, advanced software technology, and basic research and human resources. The second part of the presentation covers specific work conducted under this program at Argonne National Laboratory. Argonne's efforts focus on computational science research, software tool development, and evaluation of experimental computer architectures. In addition, the author describes collaborative activities at Argonne in high-performance computing, including an Argonne/IBM project to evaluate and test IBM's newest parallel computers and the Scalable I/O Initiative being spearheaded by the Concurrent Supercomputing Consortium.

  14. Engineering high-performance vertical cavity lasers

    SciTech Connect

    Lear, K.L.; Hou, H.Q.; Hietala, V.M.; Choquette, K.D.; Schneider, R.P. Jr.

    1996-12-31

    The cw and high-speed performance of vertical cavity surface emitting laser diodes (VCSELs) is affected by both electrical and optical issues arising from the geometry and fabrication of these devices. Structures with low resistance semiconductor mirrors and Al-oxide confinement layers address these issues and have produced record performance, including 50% power conversion efficiency and modulation bandwidths up to 20 GHz at small bias currents.

  15. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-09

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
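
    The abstract describes an adaptive program-and-verify scheme in which the transistor gate voltage throttles each SET pulse. The sketch below shows the general shape of such a closed loop; the device response is a toy model, and the gate-voltage heuristic and tolerances are illustrative assumptions, not the published algorithm or measured TaOx behaviour.

      # Minimal closed-loop program-and-verify sketch for targeting a conductance
      # level on a 1T1R cell. The device model and the control heuristic below
      # are assumptions made for illustration only.
      import random

      def apply_set_pulse(g_now, v_gate):
          """Toy device response: a SET pulse raises conductance roughly in
          proportion to how far the gate voltage sits above its threshold."""
          gain = 12e-6 * max(v_gate - 0.6, 0.0)      # siemens gained per pulse
          noise = random.gauss(0.0, 0.2e-6)          # cycle-to-cycle variability
          return max(g_now + gain + noise, 0.0)

      def program_conductance(g_target, tol=0.005, max_pulses=64):
          """Program-and-verify loop: after each pulse the cell is read back and
          the next gate voltage is chosen so the expected increment closes about
          70% of the remaining gap (an assumed heuristic, not the published one)."""
          g = 5e-6                                   # assumed starting (low) state
          for pulse in range(1, max_pulses + 1):
              gap = g_target - g
              if abs(gap) <= tol * g_target:
                  return g, pulse - 1                # verified within tolerance
              if gap < 0:
                  return g, pulse - 1                # overshoot: a real controller would RESET
              v_gate = 0.6 + 0.7 * gap / 12e-6       # invert the toy device model
              g = apply_set_pulse(g, min(v_gate, 1.8))
          return g, max_pulses

      if __name__ == "__main__":
          g_final, n_pulses = program_conductance(g_target=60e-6)
          print(f"reached {g_final * 1e6:.2f} uS in {n_pulses} pulses")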

  16. Massive Contingency Analysis with High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu; Nieplocha, Jaroslaw

    2009-07-26

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimates. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. Faster analysis of more cases is required to safely and reliably operate today's power grids with less margin and more intermittent renewable energy sources. Enabled by the latest developments in the computer industry, high performance computing holds the promise of meeting this need in the power industry. This paper investigates the potential of high performance computing for massive contingency analysis. The framework of "N-x" contingency analysis is established, and computational load balancing schemes are studied and implemented with high performance computers. Case studies of massive 300,000-contingency-case analysis using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing and demonstrate the performance of the framework and computational load balancing schemes.
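
    The computational pattern described here is embarrassingly parallel: each contingency case is an independent power-flow solve, and load balancing amounts to handing cases to whichever worker is free. The sketch below illustrates that pattern with Python worker processes; the power-flow routine is a stub and all identifiers are hypothetical, so it mirrors only the structure, not the paper's framework.

      # Minimal sketch of farming out "N-x" contingency cases to worker processes.
      # run_power_flow() is a stub returning a fake violation count; the case
      # labels and pool size are illustrative.
      from multiprocessing import Pool
      from itertools import combinations

      def run_power_flow(outage):
          """Stub for a contingency power-flow solve: returns (case, violations)."""
          acc = 0
          for name in outage:
              for ch in name:
                  acc = (acc * 31 + ord(ch)) % 97    # deterministic fake workload
          return outage, acc % 7

      def contingency_screening(branches, x=1, workers=4):
          """Enumerate all N-x outage combinations and evaluate them in parallel.
          Dynamic load balancing comes from the pool handing out one case at a
          time (chunksize=1) as workers become free."""
          cases = list(combinations(branches, x))
          with Pool(processes=workers) as pool:
              results = pool.map(run_power_flow, cases, chunksize=1)
          return sorted(results, key=lambda r: r[1], reverse=True)

      if __name__ == "__main__":
          branches = [f"line_{i}" for i in range(30)]     # toy system
          for outage, violations in contingency_screening(branches, x=2)[:5]:
              print(outage, violations)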

  17. Performance variability of highly parallel architectures

    SciTech Connect

    Kramer, William T.C.; Ryan, Clint

    2003-05-01

    The design and evaluation of high performance computers has concentrated on increasing computational speed for applications. This performance is often measured on a well configured dedicated system to show the best case. In the real environment, resources are not always dedicated to a single task, and systems run tasks that may influence each other, so run times vary, sometimes to an unreasonably large extent. This paper explores the amount of variation seen across four large distributed memory systems in a systematic manner. It then analyzes the causes for the variations seen and discusses what can be done to decrease the variation without impacting performance.

  18. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with their rather low raw materials cost, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin-film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architecture, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  19. Simulating the Cranfield geological carbon sequestration project with high-resolution static models and an accurate equation of state

    SciTech Connect

    Soltanian, Mohamad Reza; Amooie, Mohammad Amin; Cole, David R.; Graham, David E.; Hosseini, Seyyed Abolfazl; Hovorka, Susan; Pfiffner, Susan M.; Phelps, Tommy Joe; Moortgat, Joachim

    2016-10-11

    In this study, a field-scale carbon dioxide (CO2) injection pilot project was conducted as part of the Southeast Regional Sequestration Partnership (SECARB) at Cranfield, Mississippi. We present higher-order finite element simulations of the compositional two-phase CO2-brine flow and transport during the experiment. High-resolution static models of the formation geology in the Detailed Area Study (DAS) located below the oil-water contact (brine saturated) are used to capture the impact of connected flow paths on breakthrough times in two observation wells. Phase behavior is described by the cubic-plus-association (CPA) equation of state, which takes into account the polar nature of water molecules. Parameter studies are performed to investigate the importance of Fickian diffusion, permeability heterogeneity, relative permeabilities, and capillarity. Simulation results for the pressure response in the injection well and the CO2 breakthrough times at the observation wells show good agreement with the field data. For the high injection rates and short duration of the experiment, diffusion is relatively unimportant (high Péclet numbers), while relative permeabilities have a profound impact on the pressure response. High-permeability pathways, created by fluvial deposits, strongly affect the CO2 transport and highlight the importance of properly characterizing the formation heterogeneity in future carbon sequestration projects.

  20. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    SciTech Connect

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-15

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers have a higher failure rate than traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without power off. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy. So the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to the 0.05 class.

  1. Simulating the Cranfield geological carbon sequestration project with high-resolution static models and an accurate equation of state

    DOE PAGES

    Soltanian, Mohamad Reza; Amooie, Mohammad Amin; Cole, David R.; ...

    2016-10-11

    In this study, a field-scale carbon dioxide (CO2) injection pilot project was conducted as part of the Southeast Regional Sequestration Partnership (SECARB) at Cranfield, Mississippi. We present higher-order finite element simulations of the compositional two-phase CO2-brine flow and transport during the experiment. High-resolution static models of the formation geology in the Detailed Area Study (DAS) located below the oil-water contact (brine saturated) are used to capture the impact of connected flow paths on breakthrough times in two observation wells. Phase behavior is described by the cubic-plus-association (CPA) equation of state, which takes into account the polar nature of water molecules. Parameter studies are performed to investigate the importance of Fickian diffusion, permeability heterogeneity, relative permeabilities, and capillarity. Simulation results for the pressure response in the injection well and the CO2 breakthrough times at the observation wells show good agreement with the field data. For the high injection rates and short duration of the experiment, diffusion is relatively unimportant (high Péclet numbers), while relative permeabilities have a profound impact on the pressure response. High-permeability pathways, created by fluvial deposits, strongly affect the CO2 transport and highlight the importance of properly characterizing the formation heterogeneity in future carbon sequestration projects.

  2. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers.

    PubMed

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers have a higher failure rate than traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without power off. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy. So the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to the 0.05 class.

  3. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    NASA Astrophysics Data System (ADS)

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers have a higher failure rate than traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without power off. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy. So the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to the 0.05 class.

  4. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  5. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  6. High Performance Multiwall Carbon Nanotube Bolometers

    DTIC Science & Technology

    2010-10-21

    High infrared bolometric photoresponse has been observed in multiwall carbon nanotube (MWCNT) films at room temperature. The observed detectivity D exceeds 3.3 × 10^6 cm Hz^1/2/W on MWCNT films… (Sponsor: U.S. Army Research Office, Research Triangle Park, NC.) Subject terms: carbon nanotube, infrared detector, bolometer.

  7. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  8. Accurate high-pressure and high-temperature effective pair potentials for the systems N2-N and O2-O

    NASA Astrophysics Data System (ADS)

    van Thiel, M.; Ree, F. H.

    1996-04-01

    Statistical mechanical chemical equilibrium calculations of N2 and O2 show that these molecules dissociate behind strong shock waves. Our determination of accurate intermolecular potentials has required the consideration of the dissociation products N and O. Our previous theoretical efforts to predict the thermodynamic properties of these molecules relied in part on corresponding states theory and shock wave data of argon, without consideration of the dissociation products. Recent high-pressure Hugoniot measurements, however, allowed a more accurate determination of the potentials and the explicit inclusion of the dissociation products. The best fit to the data is obtained with the exponential-6 coefficients, for O2-O2: ɛ/k=125 K, r*=3.86 Å, α=13.2; for O-O: ɛ/k=700 K, r*=2.40 Å, α=11.0; for N2-N2: ɛ/k=293 K, r*=3.91 Å, α=11.5; and for N-N: ɛ/k=600 K, r*=2.47 Å, α=10.0. The unlike pair interactions are obtained from these like interactions with a modified Lorentz-Berthelot rule. The coefficients in the modified Lorentz-Berthelot equations are k/l/m=1/1/0.93 for O2-O and k/l/m=1/1/0.90 for N2-N interactions.
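
    The coefficients quoted above plug into the exponential-6 (exp-6) pair potential, which in its conventional form reads phi(r) = ε/(α−6)·[6·exp(α(1−r/r*)) − α·(r*/r)^6] and takes its minimum value −ε at r = r*. The sketch below evaluates that form for the like-pair parameters listed in the abstract; the functional form is the standard exp-6 convention and may differ in detail from the one used in the paper.

      # Minimal sketch: evaluate an exponential-6 (exp-6) pair potential with the
      # like-pair coefficients quoted in the abstract. Treat the functional form
      # as the conventional exp-6 expression, not necessarily the paper's exact one.
      import math

      EXP6_PARAMS = {            # eps/k [K], r* [Angstrom], alpha (from the abstract)
          "O2-O2": (125.0, 3.86, 13.2),
          "O-O":   (700.0, 2.40, 11.0),
          "N2-N2": (293.0, 3.91, 11.5),
          "N-N":   (600.0, 2.47, 10.0),
      }

      def exp6(r_angstrom, eps_over_k, r_star, alpha):
          """Pair energy in Kelvin units (multiply by Boltzmann's constant for J)."""
          x = r_angstrom / r_star
          return eps_over_k / (alpha - 6.0) * (6.0 * math.exp(alpha * (1.0 - x))
                                               - alpha / x**6)

      if __name__ == "__main__":
          # Sanity check: at the potential minimum r = r*, phi should equal -eps/k.
          for pair, (eps, r_star, alpha) in EXP6_PARAMS.items():
              print(f"{pair:6s}  phi(r*) = {exp6(r_star, eps, r_star, alpha):8.1f} K")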

  9. Task parallelism and high-performance languages

    SciTech Connect

    Foster, I.

    1996-03-01

    The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is the incorporation of support for task parallelism. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with for example a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit to both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.
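
    As a language-neutral illustration of the task-parallel pattern discussed here (explicitly created tasks that communicate and synchronize under programmer control, coordinating otherwise independent computations), the sketch below uses Python threads and queues; it is not Fortran M and makes no claim about that language's constructs.

      # Illustration of task parallelism: two explicitly created tasks that
      # communicate through a channel (queue) and synchronize at the end.
      import threading
      import queue

      def producer(chan, n_items):
          """Task 1: generate blocks of work and signal completion with a sentinel."""
          for i in range(n_items):
              chan.put(list(range(i, i + 8)))       # a small block of data
          chan.put(None)                            # sentinel: no more work

      def consumer(chan, results):
          """Task 2: receive blocks and run a (trivially) data-parallel reduction."""
          while True:
              block = chan.get()
              if block is None:
                  break
              results.append(sum(x * x for x in block))

      if __name__ == "__main__":
          chan, results = queue.Queue(maxsize=4), []
          tasks = [threading.Thread(target=producer, args=(chan, 16)),
                   threading.Thread(target=consumer, args=(chan, results))]
          for t in tasks:
              t.start()
          for t in tasks:                           # synchronize: wait for both tasks
              t.join()
          print(len(results), "blocks reduced, total =", sum(results))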

  10. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  11. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  12. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  13. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  14. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
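
    The grouping step described in the two patent abstracts above can be pictured as bucketing threads by the call-site address gathered from each one, so that small, anomalous groups stand out. The sketch below shows only that bucketing and reporting step on fabricated addresses; how the addresses are collected from a running job is outside its scope.

      # Minimal sketch of grouping threads by their calling-instruction address so
      # that small, anomalous groups (likely hung or defective threads) stand out.
      # The address data below is fabricated for illustration.
      from collections import defaultdict

      def group_threads_by_call_address(thread_addresses):
          """Map call-site address -> list of thread ids observed at that address."""
          groups = defaultdict(list)
          for tid, addr in thread_addresses.items():
              groups[addr].append(tid)
          return dict(groups)

      def report(groups):
          """Print groups smallest-first; lone threads are the usual suspects."""
          for addr, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
              print(f"0x{addr:012x}: {len(tids):4d} thread(s) -> {tids[:8]}")

      if __name__ == "__main__":
          # 1022 threads waiting at one call site, 2 stuck somewhere else (toy data)
          addresses = {tid: 0x400a3f10 for tid in range(1024)}
          addresses[17] = 0x400b7c04
          addresses[900] = 0x400b7c04
          report(group_threads_by_call_address(addresses))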

  15. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon® refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  16. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  17. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  18. High voltage electric substation performance in earthquakes

    SciTech Connect

    Eidinger, J.; Ostrom, D.; Matsuda, E.

    1995-12-31

    This paper examines the performance of several types of high voltage substation equipment in past earthquakes. Damage data is provided in chart form. This data is then developed into a tool for estimating the performance of a substation subjected to an earthquake. First, suggestions are made about the development of equipment class fragility curves that represent the expected earthquake performance of different voltages and types of equipment. Second, suggestions are made about how damage to individual pieces of equipment at a substation likely affects the post-earthquake performance of the substation as a whole. Finally, estimates are provided as to how quickly a substation, at various levels of damage, can be restored to operational service after the earthquake.

  19. High Performance Woven Mesh Heat Exchangers

    NASA Astrophysics Data System (ADS)

    Wirtz, Richard A.; Li, Chen; Park, Ji-Wook; Xu, Jun

    2002-07-01

    Simple-to-fabricate woven mesh structures, consisting of bonded laminates of two-dimensional plain-weave conductive screens, or three-dimensional orthogonal weaves are described. Geometric equations show that these porous matrices can be fabricated to have a wide range of porosity and a highly anisotropic thermal conductivity vector. A mathematical model of the thermal performance of such a mesh, deployed as a heat exchange surface, is developed. Measurements of pressure drop and overall heat transfer rate are reported and used with the performance model to develop correlation equations of mesh friction factor and Colburn j-factor as a function of coolant properties, mesh characteristics and flow rate through the mesh. A heat exchanger performance analysis delineates conditions where the two mesh technologies offer superior performance.

  20. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers. TEM analysis of the fiber failure mechanisms was also attempted, though without

  1. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual and class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their score to within 2% of the actual score and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracies and exam performance of the students, as well as trends over the semester.

  2. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon, deposited on a nano-scale carbon filament, to achieve the benefits of high cycle life and high charge capacity without the fading of, or failure in, capacity that results from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  3. Combining symmetry breaking and restoration with configuration interaction: A highly accurate many-body scheme applied to the pairing Hamiltonian

    NASA Astrophysics Data System (ADS)

    Ripoche, J.; Lacroix, D.; Gambacurta, D.; Ebran, J.-P.; Duguet, T.

    2017-01-01

    internucleon coupling defining the pairing Hamiltonian and driving the normal-to-superfluid quantum phase transition. The presently proposed method offers the advantage of automatic access to the low-lying spectroscopy, which it does with high accuracy. Conclusions: The numerical cost of the newly designed variational method is polynomial (N^6) in system size. This method achieves unprecedented accuracy for the ground-state correlation energy, effective pairing gap, and one-body entropy as well as for the excitation energy of low-lying states of the attractive pairing Hamiltonian. This constitutes a sufficiently strong motivation to envision its application to realistic nuclear Hamiltonians in view of providing a complementary, accurate, and versatile ab initio description of mid-mass open-shell nuclei in the future.

  4. Retrospective screening of relevant pesticide metabolites in food using liquid chromatography high resolution mass spectrometry and accurate-mass databases of parent molecules and diagnostic fragment ions.

    PubMed

    Polgár, László; García-Reyes, Juan F; Fodor, Péter; Gyepes, Attila; Dernovics, Mihály; Abrankó, László; Gilbert-López, Bienvenida; Molina-Díaz, Antonio

    2012-08-03

    In recent years, the detection and characterization of relevant pesticide metabolites in food has become an important task in order to evaluate their formation, kinetics, stability, and toxicity. In this article, a methodology for the systematic screening of pesticides and their main metabolites in fruit and vegetable samples is described, using LC-HRMS and accurate-mass database search of parent compounds and their diagnostic fragment ions. The approach is based on (i) search for parent pesticide molecules; (ii) search for their metabolites in the positive samples, assuming common fragmentation pathways between the metabolites and parent pesticide molecules; and (iii) search for pesticide conjugates using the data from both parent species and diagnostic fragment ions. An accurate-mass database was constructed consisting of 1396 compounds (850 parent compounds, 447 fragment ions and 99 metabolites). The screening process was performed by the software in an automated fashion. The proposed methodology was evaluated with 29 incurred samples and the output obtained was compared to standard pesticide testing methods (targeted LC-MS/MS). Examples of the application of the proposed approach are shown, including the detection of several pesticide glycoside derivatives, which were found with significantly relevant intensities. Glucose-conjugated forms of parent compounds (e.g., fenhexamid-O-glucoside) and those of metabolites (e.g., despropyl-iprodione-N-glycoside) were detected. Facing the lack of standards for glycosylated pesticides, the study was completed with the synthesis of fenhexamid-O-glucoside for quantification purposes. In some cases the pesticide derivatives were found in a relatively high ratio, drawing attention to these kinds of metabolites and showing that they should not be neglected in multi-residue methods. The global coverage obtained on the 29 analyzed samples showed the usefulness and benefits of the proposed approach and highlights the practical
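
    The core of such accurate-mass screening is a tolerance test: a measured m/z matches a database entry when it lies within a few parts per million of the theoretical value for an assumed adduct. The sketch below shows that matching step for [M+H]+ ions; the two database entries, their masses, and the 5 ppm window are illustrative stand-ins for the 1396-compound database described above.

      # Minimal sketch of accurate-mass database screening: match measured m/z
      # values (assumed [M+H]+ ions) against monoisotopic masses within a ppm
      # tolerance. Entries and tolerance are illustrative, not the paper's database.
      PROTON = 1.007276  # Da

      DATABASE = {              # compound -> monoisotopic mass of the neutral (Da)
          "carbendazim":   191.069477,
          "thiabendazole": 201.036068,
      }

      def ppm_error(measured_mz, exact_neutral_mass):
          """Signed mass error (ppm) relative to the theoretical [M+H]+ value."""
          theoretical = exact_neutral_mass + PROTON
          return (measured_mz - theoretical) / theoretical * 1e6

      def screen(peaks, tol_ppm=5.0):
          """Return (peak m/z, compound, ppm error) for every match within tolerance."""
          hits = []
          for mz in peaks:
              for name, mass in DATABASE.items():
                  err = ppm_error(mz, mass)
                  if abs(err) <= tol_ppm:
                      hits.append((mz, name, round(err, 2)))
          return hits

      if __name__ == "__main__":
          # Two peaks should match; the third is left unassigned.
          print(screen([192.0770, 202.0436, 310.1000]))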

  5. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  6. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the choice material for commercial framing systems and dominates the commercial/architectural fenestration market because of the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  7. Determination of accurate protein monoisotopic mass with the most abundant mass measurable using high-resolution mass spectrometry.

    PubMed

    Chen, Ya-Fen; Chang, C Allen; Lin, Yu-Hsuan; Tsay, Yeou-Guang

    2013-09-01

    While recent developments in mass spectrometry enable direct evaluation of monoisotopic masses (M_mi) of smaller compounds, protein M_mi is mostly determined based on its relationship to the average mass (M_av). Here, we propose an alternative approach to determining protein M_mi based on its correlation with the most abundant mass (M_ma) measurable using high-resolution mass spectrometry. To test this supposition, we first empirically calculated M_mi and M_ma of 6158 Escherichia coli proteins, which helped serendipitously uncover a linear correlation between these two protein masses. With the relationship characterized, liquid chromatography-mass spectrometry was employed to measure M_ma of protein samples in the ion cluster with the highest signal in the mass spectrum. Generally, our method produces a short series of likely M_mi values in 1-Da steps, and the probability of each likely M_mi is assigned statistically. It is remarkable that the mass error of this M_mi is as minuscule as a few parts per million, indicating that our method is capable of determining protein M_mi with high accuracy. Benefitting from the outstanding performance of modern mass spectrometry, our approach is a significant improvement over others and should be of great utility in the rapid assessment of protein primary structures.
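
    Computationally, the workflow described above reduces to two small steps: map a measured M_ma to an estimated M_mi through the fitted linear relation, then enumerate candidate monoisotopic masses in 1-Da steps around that estimate. The sketch below mirrors those steps; the slope and intercept are placeholders rather than the fitted values from the paper, and the statistical weighting of candidates is omitted.

      # Minimal sketch: estimate a protein monoisotopic mass from its measured
      # most-abundant isotopic mass via a linear relation, then list 1-Da-spaced
      # candidates around the estimate. SLOPE and INTERCEPT are placeholders; a
      # real fit would be regressed over a proteome (e.g., E. coli proteins).
      SLOPE = 0.99937
      INTERCEPT = -0.50

      def estimate_monoisotopic(most_abundant_mass):
          """Central estimate of M_mi from a measured M_ma (both in Da)."""
          return SLOPE * most_abundant_mass + INTERCEPT

      def candidate_masses(most_abundant_mass, n_candidates=5):
          """Return candidate M_mi values spaced 1 Da apart, most likely first."""
          center = estimate_monoisotopic(most_abundant_mass)
          half = n_candidates // 2
          return [round(center + k, 2) for k in sorted(range(-half, half + 1), key=abs)]

      if __name__ == "__main__":
          m_ma = 18415.9   # hypothetical measured most-abundant mass (Da)
          print("estimate:", round(estimate_monoisotopic(m_ma), 2))
          print("1-Da candidates (most likely first):", candidate_masses(m_ma))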

  8. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  9. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
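
    For orientation only, the sketch below uses the classic Lax-Wendroff scheme, a textbook single-step explicit method that is second order in both space and time, to propagate a wave governed by the 1-D advection equation; it is a stand-in for the class of schemes discussed, not the high-order algorithm families of the paper.

        import numpy as np

        # 1-D advection u_t + c u_x = 0 on a periodic domain, advanced with the
        # single-step explicit Lax-Wendroff scheme (2nd order in space and time).
        c, nx, cfl = 1.0, 200, 0.8
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        dt = cfl * dx / c
        u = np.sin(2 * np.pi * x)              # one wavelength across the domain

        for _ in range(round(1.0 / dt)):       # propagate for one period (250 steps here)
            up = np.roll(u, -1)                # u_{j+1}, periodic wrap-around
            um = np.roll(u, 1)                 # u_{j-1}
            u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

        print("max error after one period:", np.abs(u - np.sin(2 * np.pi * x)).max())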

  10. High thermoelectric performance of the distorted bismuth(110) layer.

    PubMed

    Cheng, L; Liu, H J; Zhang, J; Wei, J; Liang, J H; Jiang, P H; Fan, D D; Sun, L; Shi, J

    2016-07-14

    The thermoelectric properties of the distorted bismuth(110) layer are investigated using first-principles calculations combined with the Boltzmann transport equation for both electrons and phonons. To accurately predict the electronic and transport properties, the quasiparticle corrections with the GW approximation of many-body effects have been explicitly included. It is found that a maximum ZT value of 6.4 can be achieved for n-type systems, which essentially stems from the weak scattering of electrons. Moreover, we demonstrate that the distorted Bi layer retains high ZT values in relatively broad regions of both temperature and carrier concentration. Our theoretical work emphasizes that the deformation potential constant characterizing the electron-phonon scattering strength is an important paradigm for searching for high thermoelectric performance materials.
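
    For reference, the dimensionless figure of merit quoted above follows the standard definition combining the electronic and lattice (phonon) contributions to the thermal conductivity:

        ZT = \frac{S^{2} \sigma T}{\kappa_{e} + \kappa_{l}}

    where S is the Seebeck coefficient, \sigma the electrical conductivity, T the absolute temperature, and \kappa_{e}, \kappa_{l} the electronic and lattice thermal conductivities.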

  11. High performance FDTD algorithm for GPGPU supercomputers

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2016-10-01

    An implementation of the FDTD method for the solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which is developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithm for the staggered grid (Yee cell) and many-GPU devices are shown. The algorithm is implemented in software for real physics calculations. The software performance is estimated through algorithm parameters and a computer model. The real performance is tested on one GPU device, as well as on a many-GPU cluster. A performance of up to 0.65 x 10^12 cell updates per second for a 3D domain with 0.3 x 10^12 Yee cells total is achieved.
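
    The sketch below is a plain serial 1-D Yee-grid (FDTD) leapfrog update, included only to make the staggered-grid update pattern concrete; it does not reproduce the DiamondTorre LRnLA decomposition or any GPU-specific optimization.

        import numpy as np

        # 1-D FDTD on a staggered (Yee) grid in normalized units: E lives on integer
        # points, H on half-integer points, and the two are advanced leapfrog-style.
        nz, nt = 400, 1000
        ez = np.zeros(nz)
        hy = np.zeros(nz - 1)
        courant = 0.5                                          # normalized time step c*dt/dz

        for n in range(nt):
            hy += courant * (ez[1:] - ez[:-1])                 # update H from the curl of E
            ez[1:-1] += courant * (hy[1:] - hy[:-1])           # update E from the curl of H
            ez[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source

        print("peak |Ez| after propagation:", np.abs(ez).max())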

  12. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  13. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
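
    A minimal sketch of the script-driven idea follows: parse a Ganglia-style XML dump and append the samples to a SQL table. The table layout is hypothetical (not the SLAC schema), and sqlite3 is used as a stand-in for MySQL so the example runs without a database server.

        import sqlite3
        import xml.etree.ElementTree as ET

        # Tiny Ganglia-like XML fragment; a real deployment would read the gmond/gmetad stream.
        SAMPLE_XML = """<GANGLIA_XML><HOST NAME="node01" REPORTED="1133900000">
        <METRIC NAME="load_one" VAL="0.42"/><METRIC NAME="mem_free" VAL="103200"/>
        </HOST></GANGLIA_XML>"""

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE metrics (host TEXT, ts INTEGER, name TEXT, value REAL)")

        root = ET.fromstring(SAMPLE_XML)
        for host in root.iter("HOST"):
            ts = int(host.get("REPORTED"))
            for metric in host.iter("METRIC"):
                db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                           (host.get("NAME"), ts, metric.get("NAME"), float(metric.get("VAL"))))

        for row in db.execute("SELECT host, name, value FROM metrics"):
            print(row)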

  14. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  15. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  16. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal of this task is to develop an Extensible File System (ELFS). ELFS addresses the following problems: 1. Providing high-bandwidth performance across architectures; 2. Reducing the cognitive burden faced by applications programmers when they attempt to optimize; and 3. Seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.

  17. Tough, High-Performance, Thermoplastic Addition Polymers

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Proctor, K. Mason; Gleason, John; Morgan, Cassandra; Partos, Richard

    1991-01-01

    A series of addition-type thermoplastics (ATTs) exhibits useful properties. Because of their addition curing and linear structure, ATT polymers are tough, like thermoplastics, and easily processed, like thermosets. Work was undertaken to develop a chemical reaction forming stable aromatic rings in the backbone of the ATT polymer, combining high-temperature performance and thermo-oxidative stability with toughness and easy processibility, and minimizing or eliminating the necessity for tradeoffs among properties often observed in conventional polymer syntheses.

  18. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2010-01-01

    ...treatments and reconstructive surgeries. High performance computer simulation allows designers to try out numerous mechanical and material... investigating the effect of techniques for simplifying the calculations (sending the projectile through a pre-existing hole, for example) on the accuracy of... semiconductor particles are size-dependent. These properties, including yield strength and resistance to fatigue, are not well predicted by macroscopic...

  19. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2008-01-01

    ...materials “from the atoms up” or to model biological systems at the molecular level. The speed and capacity of massively parallel computers are key... Streamlined, massively parallel high performance computing structural codes allow researchers to examine many relevant physical factors simultaneously... expenditure of energy, so that the drones can carry their load of sensors, communications devices, and fuel. AHPCRC researchers are using massively...

  20. High-performance reactionless scan mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Ellen I.; Summers, Richard T.; Ostaszewski, Miroslaw A.

    1995-01-01

    A high-performance reactionless scan mirror mechanism was developed for space applications to provide thermal images of the Earth. The design incorporates a unique mechanical means of providing reactionless operation that also minimizes weight, mechanical resonance operation to minimize power, combined use of a single optical encoder to sense coarse and fine angular position, and a new kinematic mount of the mirror. A flex pivot hardware failure and current project status are discussed.

  1. Development of high performance BWR spacer

    SciTech Connect

    Morooka, Shinichi; Shirakawa, Kenetu; Mitutake, Tohru; Yamamoto, Yasushi; Yano, Takashi; Kimura, Jiro

    1996-07-01

    The spacer has a significant effect on the thermal hydraulic performance of a BWR fuel assembly. The purpose of this study is to develop a new BWR spacer with high critical power and low pressure drop performance. The developed high performance spacer is a ferrule-type spacer with twisted tape and an improved flow tab. This spacer is called the CYCLONE spacer. Critical power and pressure drop have been measured at BEST (BWR Experimental Loop for Stability and Transient test) of Toshiba Corporation. The test bundle consists of electrically heated rods in a 4x4 array configuration. These heater rods are indirectly heated. The heated length and outer diameter of the heater rods, as well as the number and axial locations of the spacers, are the same as those for a BWR fuel assembly. The axial power shape is a stepped cosine (maximum peaking factor of 1.4). Two test assemblies with different radial power distributions have been used. One test assembly has the maximum power rods at the center of the test assembly and the other has the maximum power rods near the channel wall. The results show that the critical power performance of the CYCLONE spacer is 10 to 25% higher than that of the ferrule spacers, while the pressure drop for the CYCLONE spacer is nearly equal to that of the ferrule spacer.

  2. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a data base on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  3. Towards High-Assurance High-Performance Program Synthesis

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Roach, Steven; vanBaalen, Jeffrey

    1997-01-01

    Domain-specific automatic program synthesis tools, also called application generators, are playing an ever-increasing role in software development. However, high-performance application generators require difficult manual construction, and are very difficult to verify correct. This paper describes research and an implemented system that transforms program synthesis tools based on deductive synthesis into high-performance application generators. Deductive synthesis uses theorem-proving to construct solutions when given problem specifications. The verification condition for a deductive synthesis tool is essentially the soundness of the implemented inference rules. Theory Operationalization for Program Synthesis (TOPS) synergistically combines reformulation, automated mathematical classification, and compilation through partial deduction to decision procedures. It transforms general-purpose deductive synthesis, with exponential performance, into efficient special-purpose deductive synthesis, with near-linear performance. This paper describes our experience with and empirical results of PD(TH) theory-based partial deduction - in which partial deduction of a set of first-order formulae is performed within the context of a background theory. The implemented TOPS system currently performs a special variant of PD(TH) in which the compilation process results in the transformation of a set of first order formulae into the theory of an instantiated library decision procedure augmented by a compiled unit theory.

  4. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational, and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  5. Efficient and accurate local single reference correlation methods for high-spin open-shell molecules using pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank

    2011-12-01

    A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.

  6. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to design scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures as the cost for data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  7. Heavily Doped PbSe with High Thermoelectric Performance

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor); Wang, Heng (Inventor); Pei, Yanzhong (Inventor)

    2015-01-01

    The present invention discloses heavily doped PbSe with high thermoelectric performance. Thermoelectric property measurements disclosed herein indicate that PbSe is a high-zT material for mid-to-high temperature thermoelectric applications. At 850 K a peak zT greater than 1.3 was observed when n_H is approximately 1.0 x 10^20 cm^-3. The present invention also discloses that a number of strategies used to improve the zT of PbTe, such as alloying with other elements, nanostructuring, and band modification, may also be used to further improve zT in PbSe.

  8. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  9. The Hall effect in the organic conductor TTF-TCNQ: choice of geometry for accurate measurements of a highly anisotropic system.

    PubMed

    Tafra, E; Culo, M; Basletić, M; Korin-Hamzić, B; Hamzić, A; Jacobsen, C S

    2012-02-01

    We have measured the Hall effect on recently synthesized single crystals of the quasi-one-dimensional organic conductor TTF-TCNQ (tetrathiafulvalene-tetracyanoquinodimethane), a well known charge transfer complex that has two kinds of conductive stacks: the donor (TTF) and the acceptor (TCNQ) chains. The measurements were performed in the temperature interval 30 K < T < 300 K and for several different magnetic field and current directions through the crystal. By applying the equivalent isotropic sample approach, we have demonstrated the importance of the choice of optimal geometry for accurate Hall effect measurements. Our results show, contrary to past belief, that the Hall coefficient does not depend on the geometry of measurements and that the Hall coefficient value is approximately zero in the high temperature region (T > 150 K), implying that there is no dominance of either the TTF or the TCNQ chain. At lower temperatures our measurements clearly prove that all three phase transitions of TTF-TCNQ could be identified from Hall effect measurements.

  10. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new/next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may presumably lead to dose reduction to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing a target of diagnosis within the examined structure, leading to imaging benefits such as decreased scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, still preserving sharp edges in the images.
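
    As a generic illustration of compressed-sensing recovery from few measurements (not the authors' CT reconstruction pipeline), the sketch below runs ISTA: a gradient step on the data-fidelity term followed by soft thresholding to promote sparsity.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 200, 60, 8                       # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix (stand-in for a CT system matrix)
        y = A @ x_true                             # few "projection" measurements

        lam = 0.01
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the largest singular value
        x = np.zeros(n)
        for _ in range(500):
            x = x + step * (A.T @ (y - A @ x))                      # gradient step on the data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0)  # soft threshold (L1 proximal step)

        print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))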

  11. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  12. High capacity heat pipe performance demonstration

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A high capacity heat pipe which will operate in one-g and in zero-g is investigated. An artery configuration which is self-priming in one-g was emphasized. Two artery modifications were evolved as candidates to achieve one-g priming while providing very high performance: the four-artery and the eight-artery configurations. These were each evaluated analytically for performance and priming capability. The eight-artery configuration was found to be inadequate from a performance standpoint. The four-artery configuration showed promise of working. A five-inch-long priming element test article was fabricated using the four-artery design. Plexiglas viewing windows were placed on each end of the heat pipe to permit viewing of the priming activity. The five-inch priming element would not successfully prime in one-g. Difficulties with priming in one-g raised questions about zero-g priming. Therefore a small test element heat pipe for verifying that the proposed configuration will self-prime in zero-g was fabricated and delivered.

  13. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

    Bulk heterojunction polymer solar cells that can be fabricated by solution processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of a flexible and cost-effective alternative to silicon-based electronics. Despite the envisioned advantages and recent technology advances, so far the performance of polymer solar cells is still inferior to that of their inorganic counterparts in terms of efficiency and stability. There are many factors limiting the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most important determinants of the overall performance of polymer solar cells. In this presentation, I will describe how we approach high performance in polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.

  14. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  15. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
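
    A minimal sketch of per-call frequency scaling around a communication phase, in the spirit of the approach described above, is shown below. It assumes mpi4py and the Linux cpufreq sysfs interface with the userspace governor enabled; the paths, target frequencies, and required permissions are system-dependent, and the runtime described in the abstract is considerably more sophisticated.

        from mpi4py import MPI

        def set_cpu_frequency(khz, cpu=0):
            # Requires root privileges and the "userspace" cpufreq governor.
            path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"
            with open(path, "w") as f:
                f.write(str(khz))

        comm = MPI.COMM_WORLD
        payload = [bytes(1 << 20)] * comm.size     # placeholder 1 MiB message per rank

        set_cpu_frequency(1_200_000)               # drop frequency: the all-to-all is network-bound
        received = comm.alltoall(payload)
        set_cpu_frequency(2_400_000)               # restore nominal frequency for the compute phase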

  16. Performance of the CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Perrotta, Andrea

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved tracking and vertexing algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing energy reconstruction.

  17. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  18. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  19. High Power MPD Thruster Performance Measurements

    NASA Technical Reports Server (NTRS)

    LaPointe, Michael R.; Strzempkowski, Eugene; Pencil, Eric

    2004-01-01

    High power magnetoplasmadynamic (MPD) thrusters are being developed as cost effective propulsion systems for cargo transport to lunar and Mars bases, crewed missions to Mars and the outer planets, and robotic deep space exploration missions. Electromagnetic MPD thrusters have demonstrated, at the laboratory level, the ability to process megawatts of electrical power while providing significantly higher thrust densities than electrostatic electric propulsion systems. The ability to generate higher thrust densities permits a reduction in the number of thrusters required to perform a given mission, and alleviates the system complexity associated with multiple thruster arrays. The specific impulse of an MPD thruster can be optimized to meet given mission requirements, from a few thousand seconds with heavier gas propellants up to 10,000 seconds with hydrogen propellant. In support of programs envisioned by the NASA Office of Exploration Systems, Glenn Research Center is developing and testing quasi-steady MW-class MPD thrusters as a prelude to steady state high power thruster tests. This paper provides an overview of the GRC high power pulsed thruster test facility, and presents preliminary performance data for a quasi-steady baseline MPD thruster geometry.

  20. A high performance architecture for prolog

    SciTech Connect

    Dobry, T.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional general-purpose architectures. This work presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. The ISA is then realized in a microarchitecture and finally in a hardware design.

  1. High-performance architecture for Prolog

    SciTech Connect

    Dobry, T.P.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a tenfold improvement in performance over conventional, general-purpose architectures. This dissertation presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design.

  2. The high performance solar array GSR3

    NASA Astrophysics Data System (ADS)

    Mamode, A.; Bartevian, J.; Bastard, J. L.; Auffray, P.; Plagne, A.

    A foldout solar array for communication satellites was developed. A wing composed of four panels of 1.6 x 1.5 m and a Y-shaped yoke, and a wing with three panels of 2.4 x 2.4 m, were made. The end-of-life performance goal is greater than 35 W/kg with BSR 180 micron solar cells, and 50 W/kg using 50 micron BSFR cells. Analysis shows that all identified requirements can be covered with the current skin made of open-weave very high modulus carbon fiber; reinforcements of unidirectional carbon fiber; honeycomb in the current section; hold-down inserts made of wound carbon fibers; titanium hinge fittings; and Kapton foil (25 or 50 micron thickness). Tests confirm performance predictions.

  3. Proven high-performance display solution

    NASA Astrophysics Data System (ADS)

    Johnson, Rick J.; Shaw, James E.; Mosier, Don; Liss, Raymond L.; Prouty, Todd D.; Davis, Josh; Marzen, Vincent P.; Deloy, Christian T.

    2002-08-01

    Rockwell Collins serves both the military and the commercial segments by exploiting the common elements of these applications. Rockwell Collins has created a liquid crystal display family capable of 100:1 contrast ratio, 40:1 high ambient contrast, 0.25% specular reflectance, 0.1% diffuse reflectance, enhanced color stability over a +/- 55 horizontal by 0-30 vertical field of view, 300 fL with a 10K:1 dimming range, and color NVIS B compliance, while exceeding environmental performance requirements through ruggedization. In order to meet the full range of display requirements at a system level, all the components must be understood and managed to meet the end solution of the final system. This paper details Rockwell Collins' optical performance using an avionics grade panel, third generation custom compensation, and a solid state backlight.

  4. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  5. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments: A PARALLEL FEM STOKES ICE SHEET MODEL

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-04

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
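
    The abstract does not reproduce the governing equations; for reference, a Glen-law nonlinear Stokes system of the kind commonly solved in such ice sheet models can be written as:

        -\nabla \cdot \bigl( 2\mu\, D(\mathbf{u}) \bigr) + \nabla p = \rho \mathbf{g}, \qquad
        \nabla \cdot \mathbf{u} = 0,

        D(\mathbf{u}) = \tfrac{1}{2}\bigl( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \bigr), \qquad
        \mu = \tfrac{1}{2} A^{-1/n}\, \dot{\varepsilon}_{e}^{(1-n)/n},

    where u is the velocity, p the pressure, \dot{\varepsilon}_{e} the effective strain rate, A the temperature-dependent rate factor, and n (approximately 3) the Glen exponent; the no-sliding or sliding conditions enter through the basal boundary condition.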

  6. A highly accurate and efficient algorithm for electrostatic interactions of charged particles confined by parallel metallic plates

    NASA Astrophysics Data System (ADS)

    Rostami, Samare; Ghasemi, S. Alireza; Nedaaee Oskoee, Ehsan

    2016-09-01

    We present an accurate and efficient algorithm to calculate the electrostatic interaction of charged point particles with partially periodic boundary conditions that are confined along the non-periodic direction by two parallel metallic plates. The method preserves the original boundary conditions, leading to an exact solution of the problem. In addition, the scaling complexity is quasilinear, O(N ln(N)), where N is the number of particles in the simulation box. Based on the superposition principle in electrostatics, the problem is split into two electrostatic problems where each can be calculated by the appropriate Poisson solver. The method is applied to NaCl ultra-thin films where its dielectric response with respect to an external bias voltage is investigated. Furthermore, the total charge induced on the metallic boundaries can be calculated to an arbitrary precision.
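
    One common way to read the superposition splitting sketched above (the paper's exact decomposition may differ) is:

        \nabla^{2}\phi_{1} = -\rho/\varepsilon_{0} \quad \text{(slab-periodic, plates ignored)}, \qquad
        \nabla^{2}\phi_{2} = 0 \quad \text{with } \phi_{2} = V_{\pm} - \phi_{1} \text{ on the plates},

    so that \phi = \phi_{1} + \phi_{2} satisfies Poisson's equation in the slab and the prescribed constant potentials on the two metallic boundaries.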

  7. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel, and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight to maintain the advantage of high performance fibers, namely the high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath from Low Density Polyethylene (LDPE) loaded with different rutile TiO2 nanoparticle percentages around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2. The protection here is judged by the strength loss of PBO. This trend was noticed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  8. TMF ultra-high rate discharge performance

    SciTech Connect

    Nelson, B.

    1997-12-01

    BOLDER Technologies Corporation has developed a valve-regulated lead-acid product line termed Thin Metal Film (TMF™) technology. It is characterized by extremely thin plates and close plate spacing that facilitate high rates of charge and discharge with minimal temperature increases, at levels unachievable with other commercially-available battery technologies. This ultra-high rate performance makes TMF technology ideal for such applications as various types of engine start, high drain rate portable devices and high-current pulsing. Data are presented on very high current continuous and pulse discharges. Power and energy relationships at various discharge rates are explored and the fast-response characteristics of the BOLDER® cell are qualitatively defined. Short-duration recharge experiments will show that devices powered by BOLDER batteries can be in operation for more than 90% of an extended usage period with multiple fast recharges. The BOLDER cell is ideal for applications such as engine-start, a wide range of portable devices including power tools, hybrid electric vehicles and pulse-power devices. Applications such as this are very attractive, and are well served by TMF technology, but an area of great interest and excitement is ultrahigh power delivery in excess of 1 kW/kg.

  9. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    The high performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  10. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine that iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine is biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before refilling with fresh iodine crystals becomes necessary.

  11. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  12. High performance thyratron driver with low jitter.

    PubMed

    Verma, Rishi; Lee, P; Springham, S V; Tan, T L; Rawat, R S

    2007-08-01

    We report the design and development of an insulated-gate bipolar transistor (IGBT) based high performance driver for operating thyratrons in grounded-grid mode. With careful design, the driver meets the specifications of a trigger output pulse rise time of less than 30 ns, jitter of less than +/-1 ns, and a time delay of less than 160 ns. It produces a -600 V pulse of 500 ns duration (full width at half maximum) at repetition rates ranging from 1 Hz to 1.14 kHz. The developed module also integrates heating and biasing units along with protection circuitry in one complete package.

  13. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  14. High Performance Polymer Memory and Its Formation

    DTIC Science & Technology

    2007-04-26

    Final Report to AFOSR: High Performance Polymer Memory Device and Its Formation, Fund No. FA9550-04-1-0215, prepared by Prof. Yang Yang... polystyrene (PS). The metal nanoparticles were prepared by the two-phase... [figure residue removed: current-voltage plot with a bias (V) axis and a polymer film/glass device schematic] ...such as copper phthalocyanine (CuPc), zinc phthalocyanine (ZnPc), tetracene, and pentacene have been used as donors combined with...

  15. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
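
    The zero-frequency behavior of the cascaded (linear-systems) model mentioned above can be illustrated with a short sketch that propagates the mean signal and noise power through a chain of gain and selection stages plus additive readout noise; the stage values below are illustrative placeholders, not the MAF-CMOS design numbers, and blurring stages are omitted.

        def dqe_zero_frequency(q0, gains, readout_sigma):
            """Propagate mean quanta and zero-frequency noise power through a cascade."""
            q, nps = q0, q0                              # Poisson input: variance equals the mean
            for g in gains:
                var_g = g * (1.0 - g) if g < 1.0 else g  # binomial selection vs Poisson gain
                nps = g * g * nps + var_g * q            # noise propagation through one stage
                q = g * q
            nps += readout_sigma ** 2                    # additive electronic (readout) noise
            return (q * q) / (nps * q0)                  # DQE(0) = SNR_out^2 / SNR_in^2

        # Hypothetical chain: x-ray absorption, optical photons per absorbed x ray,
        # optical coupling efficiency, and sensor quantum efficiency.
        print(dqe_zero_frequency(q0=1000.0, gains=[0.8, 1000.0, 0.05, 0.6], readout_sigma=50.0))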

  16. High-temperature testing of high performance fiber reinforced concrete

    NASA Astrophysics Data System (ADS)

    Fořt, Jan; Vejmelková, Eva; Pavlíková, Milena; Trník, Anton; Čítek, David; Kolísko, Jiří; Černý, Robert; Pavlík, Zbyšek

    2016-06-01

    The effect of high-temperature exposure on the properties of High Performance Fiber Reinforced Concrete (HPFRC) is investigated in this paper. At first, reference measurements are done on HPFRC samples without high-temperature loading. Then, the HPFRC samples are exposed to temperatures of 200, 400, 600, 800, and 1000 °C. For the temperature-loaded samples, residual mechanical and basic physical properties are measured. The linear thermal expansion coefficient as a function of temperature is assessed on the basis of measured thermal strain data. Additionally, simultaneous differential scanning calorimetry (DSC) and thermogravimetry (TG) analysis is performed in order to observe and explain material changes at elevated temperature. It is found that the applied high temperature loading significantly increases material porosity due to physical, chemical, and combined damage of the material's inner structure, and also negatively affects the mechanical strength. The linear thermal expansion coefficient exhibits a significant dependence on temperature and on changes of the material structure. The obtained data will find use as input material parameters for modelling the damage of HPFRC structures exposed to fire and high-temperature action.
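
    The quantity assessed from the thermal strain data is, in the usual definition, the temperature derivative of the strain:

        \alpha(T) = \frac{d\varepsilon(T)}{dT} = \frac{1}{L_{0}} \frac{dL}{dT},

    where \varepsilon = (L - L_{0})/L_{0} is the measured thermal strain and L_{0} the reference length at the starting temperature.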

  17. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
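
    Because the data representation and API are adopted from LDAP, a client query against such a directory can be sketched with a standard LDAP library. The host, search base, object class and attribute names below are illustrative assumptions, not the actual MDS schema.

        from ldap3 import Server, Connection, ALL

        server = Server("ldap://mds.example.org", get_info=ALL)   # hypothetical directory endpoint
        conn = Connection(server, auto_bind=True)

        # Find compute resources that currently report at least 64 free nodes
        conn.search(
            search_base="o=grid",
            search_filter="(&(objectClass=computeResource)(freeNodes>=64))",
            attributes=["hostName", "freeNodes", "networkLatency"],
        )

        for entry in conn.entries:
            print(entry.hostName, entry.freeNodes, entry.networkLatency)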

  18. Performance Evaluation of Emerging High Performance Computing Technologies using WRF

    NASA Astrophysics Data System (ADS)

    Newby, G. B.; Morton, D.

    2008-12-01

    The Arctic Region Supercomputing Center (ARSC) has evaluated multicore processors and other emerging processor technologies for a variety of high performance computing applications in the earth and space sciences, especially climate and weather applications. A flagship effort has been to assess dual core processor nodes on ARSC's Midnight supercomputer, in which two-socket systems were compared to eight-socket systems. Midnight is utilized for ARSC's twice-daily weather research and forecasting (WRF) model runs, available at weather.arsc.edu. Among other findings on Midnight, the HyperTransport system for interconnecting Opteron processors, memory, and other subsystems was found not to scale as well on eight-socket (sixteen-processor) systems as on two-socket (four-processor) systems. A fundamental limitation is the cache snooping operation performed whenever a computational thread accesses main memory. This increases memory latency as the number of processor sockets increases. This is particularly noticeable on applications such as WRF that are primarily CPU-bound, versus applications that are bound by input/output or communication. The new Cray XT5 supercomputer at ARSC features quad core processors, and will host a variety of scaling experiments for WRF, CCSM4, and other models. Early results will be presented, including a series of WRF runs for Alaska with grid resolutions under 2 km. ARSC will discuss a set of standardized test cases for the Alaska domain, similar to existing test cases for CONUS. These test cases will provide different configuration sizes and resolutions, suitable for single processors up to thousands. Beyond multi-core Opteron-based supercomputers, ARSC has examined WRF and other applications on additional emerging technologies. One such technology is the graphics processing unit, or GPU. The 9800-series nVidia GPU was evaluated with the cuBLAS software library. While in-socket GPUs might be forthcoming in the future, current

  19. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
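
    For readers who want to experiment with the serial form of the algorithm, classical Ruge-Stüben coarsening is available, for example, in the PyAMG package; the snippet below uses it as a preconditioner for conjugate gradients on a model Poisson problem. This is a single-process sketch only, not one of the parallel implementations surveyed in the chapter.

        import numpy as np
        from scipy.sparse.linalg import cg
        from pyamg import ruge_stuben_solver
        from pyamg.gallery import poisson

        A = poisson((500, 500), format="csr")    # 2D Poisson model problem, 250,000 unknowns
        b = np.random.rand(A.shape[0])

        ml = ruge_stuben_solver(A)               # build the AMG hierarchy (coarsening + smoothers)
        M = ml.aspreconditioner(cycle="V")       # one V-cycle per preconditioner application

        x, info = cg(A, b, M=M, maxiter=100)
        print("converged" if info == 0 else f"cg stopped with info={info}")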

  20. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  1. High Performance Data Distribution for Scientific Community

    NASA Astrophysics Data System (ADS)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA must find solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem while aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed over several sites that cooperate to provide the aforementioned features. HIDDRA was highlighted by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
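
    The core idea of combining several sources for a single file download can be sketched as follows: byte ranges of the same file are fetched in parallel from different mirrors over HTTP and reassembled locally. The URLs, chunk size and worker count are illustrative assumptions; the real engine also covers FTP/GridFTP, retries and the publish/subscribe notification layer.

        import concurrent.futures as cf
        import requests

        MIRRORS = [
            "https://archive-a.example.org/mission/file.fits",   # hypothetical mirrors
            "https://archive-b.example.org/mission/file.fits",
        ]
        CHUNK = 4 * 1024 * 1024   # 4 MiB per request

        def fetch_range(task):
            url, start, end = task
            r = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
            r.raise_for_status()
            return start, r.content

        size = int(requests.head(MIRRORS[0], timeout=60).headers["Content-Length"])
        tasks = [(MIRRORS[i % len(MIRRORS)], s, min(s + CHUNK, size) - 1)
                 for i, s in enumerate(range(0, size, CHUNK))]

        with cf.ThreadPoolExecutor(max_workers=8) as pool, open("file.fits", "wb") as out:
            for start, data in pool.map(fetch_range, tasks):
                out.seek(start)
                out.write(data)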

  2. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  3. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  4. High Performance Orbital Propagation Using a Generic Software Architecture

    NASA Astrophysics Data System (ADS)

    Möckel, M.; Bennett, J.; Stoll, E.; Zhang, K.

    2016-09-01

    Orbital propagation is a key element in many fields of space research. Over the decades, scientists have developed numerous orbit propagation algorithms, often tailored to specific use cases that vary in available input data, desired output as well as demands of execution speed and accuracy. Conjunction assessments, for example, require highly accurate propagations of a relatively small number of objects while statistical analyses of the (untracked) space debris population need a propagator that can process large numbers of objects in a short time with only medium accuracy. Especially in the latter case, a significant increase of computation speed can be achieved by using graphics processors, devices that are designed to process hundreds or thousands of calculations in parallel. In this paper, an analytical propagator is introduced that uses graphics processing to reduce the run time for propagating a large space debris population from several hours to minutes with only a minor loss of accuracy. A second performance analysis is conducted on a parallelised version of the popular SGP4 algorithm. It is discussed how these modifications can be applied to more accurate numerical propagators. Both programs are implemented using a generic, plugin-based software architecture designed for straightforward integration of propagators into other software tools. It is shown how this architecture can be used to easily integrate, compare and combine different orbital propagators, both CPU and GPU-based.
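
    A plugin-based propagator interface of the kind described can be sketched in a few lines; the class and registry names below are assumptions for illustration, and the registered example is a trivial two-body propagator rather than SGP4 or a GPU-backed numerical propagator.

        from abc import ABC, abstractmethod
        import numpy as np

        class Propagator(ABC):
            @abstractmethod
            def propagate(self, elements: np.ndarray, dt: float) -> np.ndarray:
                """Advance an array of element sets [a, e, i, RAAN, argp, M] by dt seconds."""

        _REGISTRY = {}

        def register(name):
            def deco(cls):
                _REGISTRY[name] = cls
                return cls
            return deco

        @register("kepler-cpu")
        class TwoBodyPropagator(Propagator):
            def propagate(self, elements, dt):
                out = elements.copy()
                a = elements[:, 0]                                    # semi-major axis (km)
                n = np.sqrt(398600.4418 / a**3)                       # mean motion (rad/s)
                out[:, 5] = (elements[:, 5] + n * dt) % (2 * np.pi)   # advance mean anomaly
                return out

        # A GPU-backed plugin (e.g. a parallel SGP4) would register under another name
        # and be selected at run time by the host tool:
        prop = _REGISTRY["kepler-cpu"]()
        states = np.array([[7000.0, 0.001, 0.9, 0.0, 0.0, 0.0]])
        print(prop.propagate(states, dt=600.0))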

  5. High-performance deployable structures for the support of high-concentration ratio solar array modules

    NASA Technical Reports Server (NTRS)

    Mobrem, M.

    1985-01-01

    A study conducted on high-performance deployable structures for the support of high-concentration ratio solar array modules is discussed. Serious consideration is being given to the use of high-concentration ratio solar array modules for applications such as space stations. These concentrator solar array designs offer the potential of reduced cost, reduced electrical complexity, higher power per unit area, and improved survivability. Arrays of concentrators, such as the miniaturized Cassegrainian concentrator modules, present a serious challenge to the structural design because their mass per unit area (5.7 kg/square meter) is higher than that of flexible solar array blankets, and the requirement for accurate orientation towards the Sun (plus or minus 0.5 degree) requires structures with improved accuracy potentials. In addition, use on a space station requires relatively high structural natural frequencies to avoid deleterious interactions with control systems and other large structural components. The objective here is to identify and evaluate conceptual designs of structures suitable for deploying and accurately supporting high-concentration ratio solar array modules.

  6. Process Performance of Optima XEx Single Wafer High Energy Implanter

    SciTech Connect

    Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.; David, J.; Rubin, L. M.; Jang, I. S.; Cha, J. C.; Joo, Y. H.; Lee, A. B.; Jin, S. W.

    2011-01-07

    To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.

  7. Energy Efficient Graphene Based High Performance Capacitors.

    PubMed

    Bae, Joonwon; Lee, Chang-Soo; Kwon, Oh Seok

    2016-10-27

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been devoted to the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very attractive for practical applications in energy storage/conversion systems. Among these various systems, high-performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, a concise summary of previous research activities on GRP-based capacitors is provided. It is shown that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance, and that diverse devices have been combined with capacitors to broaden their use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future studies.

  8. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  9. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  10. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
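
    As a toy illustration of item (1), an inversion state record can be kept in a small relational table keyed by iteration, event and processing stage; the schema and stage names below are illustrative assumptions, not the actual SISYPHUS design.

        import sqlite3

        db = sqlite3.connect("inversion_state.db")
        db.execute("""CREATE TABLE IF NOT EXISTS tasks (
            iteration INTEGER, event TEXT, stage TEXT, status TEXT,
            PRIMARY KEY (iteration, event, stage))""")

        def mark(iteration, event, stage, status):
            # record (or update) the status of one processing step
            db.execute("INSERT OR REPLACE INTO tasks VALUES (?, ?, ?, ?)",
                       (iteration, event, stage, status))
            db.commit()

        def pending(iteration, stage):
            # list events whose given stage is not yet finished
            cur = db.execute("SELECT event FROM tasks WHERE iteration=? AND stage=? AND status != 'done'",
                             (iteration, stage))
            return [row[0] for row in cur]

        mark(1, "event_042", "forward_simulation", "done")
        mark(1, "event_042", "adjoint_source", "running")
        print(pending(1, "adjoint_source"))   # -> ['event_042']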

  11. Accurate detection for a wide range of mutation and editing sites of microRNAs from small RNA high-throughput sequencing profiles

    PubMed Central

    Zheng, Yun; Ji, Bo; Song, Renhua; Wang, Shengpeng; Li, Ting; Zhang, Xiaotuo; Chen, Kun; Li, Tianqing; Li, Jinyan

    2016-01-01

    Various types of mutation and editing (M/E) events in microRNAs (miRNAs) can change the stabilities of pre-miRNAs and/or complementarities between miRNAs and their targets. Small RNA (sRNA) high-throughput sequencing (HTS) profiles can contain many mutated and edited miRNAs. Systematic detection of miRNA mutation and editing sites from the huge volume of sRNA HTS profiles is computationally difficult, as high sensitivity and low false positive rate (FPR) are both required. We propose a novel method (named MiRME) for accurate and fast detection of miRNA M/E sites using a progressive sequence alignment approach which refines sensitivity and improves FPR step-by-step. From 70 sRNA HTS profiles with over 1.3 billion reads, MiRME has detected thousands of statistically significant M/E sites, including 3′-editing sites, 57 A-to-I editing sites (of which 32 are novel), as well as some putative non-canonical editing sites. By integrating the analysis of genome HTS profiles of two human cell lines, we demonstrated that a few non-canonical editing sites did not result from mutations in the genome, suggesting the existence of new editing types that further diversify the functions of miRNAs. Compared with six existing studies or methods, MiRME showed superior performance for the identification and visualization of miRNA M/E sites from the ever-increasing volume of sRNA HTS profiles. PMID:27229138

  12. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 107) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881

  13. High performance electrospinning system for fabricating highly uniform polymer nanofibers

    NASA Astrophysics Data System (ADS)

    Munir, Muhammad Miftahul; Iskandar, Ferry; Khairurrijal; Okuyama, Kikuo

    2009-02-01

    A high performance electrospinning system has been successfully developed for the production of highly uniform polymer nanofibers. The electrospinning system employed a proportional-integral-derivative control action to maintain a constant current during the production of polyvinyl acetate (PVAc) nanofibers from a precursor solution prepared by dissolution of the PVAc powder in dimethyl formamide, so that high uniformity of the nanofibers was achieved. It was found that the cone jet length observed at the end of the needle during the injection of the precursor solution and the average diameter of the nanofibers decreased with decreasing Q/I, where Q is the flow rate of the precursor solution of the nanofibers and I is the current flowing through the electrospinning system. A power law obtained from the relation between the average diameter and Q/I is in accordance with the theoretical model.
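
    A power-law relation between average fiber diameter and Q/I can be recovered from measurements by a straight-line fit in log-log space; the data points below are invented purely to show the calculation, not taken from the paper.

        import numpy as np

        q_over_i = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # Q/I in arbitrary consistent units
        diameter = np.array([180., 240., 310., 400., 530.])    # average fiber diameter (nm), illustrative

        # fit ln(d) = ln(c) + alpha * ln(Q/I)
        slope, intercept = np.polyfit(np.log(q_over_i), np.log(diameter), 1)
        print(f"d ~ {np.exp(intercept):.0f} nm * (Q/I)^{slope:.2f}")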

  14. Full house of fears: evidence that people high in attachment anxiety are more accurate in detecting deceit.

    PubMed

    Ein-Dor, Tsachi; Perry, Adi

    2014-04-01

    Lying is deep-rooted in our nature, as over 90% of all people lie. Laypeople, however, do only slightly better than chance when detecting lies and deceptions. Recently, attachment anxiety was linked with people's hypervigilance toward threat-related cues. Accordingly, we tested whether attachment anxiety predicts people's ability to detect deceit and to play poker, a game that is based on players' ability to detect cheating. In Study 1, 202 participants watched a series of interpersonal interactions that comprised subtle clues to the honesty or dishonesty of the speakers. In Study 2, 58 participants watched clips in which such cues were absent. Participants were asked to decide whether the main characters were honest or dishonest. In Study 3, we asked 35 semiprofessional poker players to participate in a poker tournament, and then we predicted the amount of money won during the game. Results indicated that attachment anxiety, but not other types of anxiety, predicted more accurate detection of deceitful statements (Studies 1-2) and a greater amount of money won during a game of poker (Study 3). Results are discussed in relation to the possible adaptive functions of certain personality characteristics, such as attachment anxiety, often viewed as undesirable.

  15. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET.

    PubMed

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-06-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system.
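
    The step of deriving per-detector time biases from measured pair differences can be illustrated with a simple iterative averaging scheme: each measurement d_ij is treated as an estimate of b_i - b_j and the biases are updated until they settle. The detector count, pair list and noise level below are synthetic, and this is only a generic sketch of the idea, not the algorithm used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_det = 100
        true_bias = rng.normal(0.0, 200.0, n_det)    # ps, unknown in practice

        # randomly chosen coincidence pairs with noisy time-difference measurements
        pairs = [(i, j) for i in range(n_det) for j in range(i + 1, n_det) if rng.random() < 0.1]
        meas = {(i, j): true_bias[i] - true_bias[j] + rng.normal(0.0, 20.0) for (i, j) in pairs}

        bias = np.zeros(n_det)
        for _ in range(100):
            acc = np.zeros(n_det)
            cnt = np.zeros(n_det)
            for (i, j), d in meas.items():
                acc[i] += bias[j] + d; cnt[i] += 1     # pair suggests b_i = b_j + d_ij
                acc[j] += bias[i] - d; cnt[j] += 1     # and b_j = b_i - d_ij
            bias = np.where(cnt > 0, acc / np.maximum(cnt, 1), bias)
            bias -= bias.mean()                        # remove the arbitrary global offset

        err = bias - (true_bias - true_bias.mean())
        print("RMS residual bias (ps):", np.sqrt(np.mean(err**2)))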

  16. Proteogenomics produces comprehensive and highly accurate protein-coding gene annotation in a complete genome assembly of Malassezia sympodialis.

    PubMed

    Zhu, Yafeng; Engström, Pär G; Tellgren-Roth, Christian; Baudo, Charles D; Kennell, John C; Sun, Sheng; Billmyre, R Blake; Schröder, Markus S; Andersson, Anna; Holm, Tina; Sigurgeirsson, Benjamin; Wu, Guangxi; Sankaranarayanan, Sundar Ram; Siddharthan, Rahul; Sanyal, Kaustuv; Lundeberg, Joakim; Nystedt, Björn; Boekhout, Teun; Dawson, Thomas L; Heitman, Joseph; Scheynius, Annika; Lehtiö, Janne

    2017-01-18

    Complete and accurate genome assembly and annotation is a crucial foundation for comparative and functional genomics. Despite this, few complete eukaryotic genomes are available, and genome annotation remains a major challenge. Here, we present a complete genome assembly of the skin commensal yeast Malassezia sympodialis and demonstrate how proteogenomics can substantially improve gene annotation. Through long-read DNA sequencing, we obtained a gap-free genome assembly for M. sympodialis (ATCC 42132), comprising eight nuclear and one mitochondrial chromosome. We also sequenced and assembled four M. sympodialis clinical isolates, and showed their value for understanding Malassezia reproduction by confirming four alternative allele combinations at the two mating-type loci. Importantly, we demonstrated how proteomics data could be readily integrated with transcriptomics data in standard annotation tools. This increased the number of annotated protein-coding genes by 14% (from 3612 to 4113), compared to using transcriptomics evidence alone. Manual curation further increased the number of protein-coding genes by 9% (to 4493). All of these genes have RNA-seq evidence and 87% were confirmed by proteomics. The M. sympodialis genome assembly and annotation presented here is at a quality yet achieved only for a few eukaryotic organisms, and constitutes an important reference for future host-microbe interaction studies.

  17. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243

  18. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging

    NASA Astrophysics Data System (ADS)

    Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to hexadecupole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.

  19. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to hexadecupole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1).
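
    Kriging is Gaussian-process regression, so the prediction step can be sketched with a standard GP library. The features and energies below are synthetic stand-ins (a toy distance-based energy), not the atomic multipole moments or ab initio energies used in the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

        rng = np.random.default_rng(1)
        X_train = rng.uniform(2.5, 4.0, size=(200, 3))                 # e.g. geometric descriptors
        y_train = -30.0 / X_train[:, 0] + rng.normal(0.0, 0.2, 200)    # kJ/mol, toy model

        gp = GaussianProcessRegressor(kernel=C(1.0) * RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X_train, y_train)

        X_test = rng.uniform(2.5, 4.0, size=(20, 3))
        y_pred, y_std = gp.predict(X_test, return_std=True)
        print("mean unsigned error vs toy model (kJ/mol):",
              np.mean(np.abs(y_pred - (-30.0 / X_test[:, 0]))))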

  20. A modified ELISA accurately measures secretion of high molecular weight hyaluronan (HA) by Graves' disease orbital cells.

    PubMed

    Krieger, Christine C; Gershengorn, Marvin C

    2014-02-01

    Excess production of hyaluronan (hyaluronic acid [HA]) in the retro-orbital space is a major component of Graves' ophthalmopathy, and regulation of HA production by orbital cells is a major research area. In most previous studies, HA was measured by ELISAs that used HA-binding proteins for detection and rooster comb HA as standards. We show that the binding efficiency of HA-binding protein in the ELISA is a function of HA polymer size. Using gel electrophoresis, we show that HA secreted from orbital cells is composed primarily of polymers with molecular weights above 500 000. We modified a commercially available ELISA by using HA of 1 million molecular weight as the standard to accurately measure HA of this size. We demonstrated that IL-1β-stimulated HA secretion is at least 2-fold greater than previously reported, and activation of the TSH receptor by an activating antibody M22 from a patient with Graves' disease led to a more than 3-fold increase in HA production in both fibroblasts/preadipocytes and adipocytes. These effects were not consistently detected with the commercial ELISA using rooster comb HA as standard and suggest that fibroblasts/preadipocytes may play a more prominent role in HA remodeling in Graves' ophthalmopathy than previously appreciated.

  1. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    PubMed

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-07

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases

  2. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases
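
    The soft correspondence step at the heart of robust point matching can be sketched as follows: a Gaussian affinity between the two point sets is normalised alternately over rows and columns (softassign). The TPS warp update and the local-topology term that distinguish TPS-RPM-LTP are omitted, and the point sets below are synthetic.

        import numpy as np

        def soft_correspondence(X, Y, temperature, n_sinkhorn=20):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
            M = np.exp(-d2 / (2.0 * temperature))
            for _ in range(n_sinkhorn):
                M /= M.sum(axis=1, keepdims=True) + 1e-12           # rows approximately sum to 1
                M /= M.sum(axis=0, keepdims=True) + 1e-12           # columns approximately sum to 1
            return M

        rng = np.random.default_rng(0)
        X = rng.random((100, 3))                        # surface points, fraction 1 (synthetic)
        Y = X + 0.02 * rng.standard_normal((100, 3))    # deformed surface, fraction 2 (synthetic)

        M = soft_correspondence(X, Y, temperature=0.005)
        matches = M.argmax(axis=1)
        print("fraction of points matched to their true counterpart:",
              np.mean(matches == np.arange(len(X))))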

  3. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe, Chair of the Scientific Committee; Normand Mousseau, Co-Chair of HPCS 2011; Suzanne Talon, Chair of the Organizing Committee, UQAM. The PDF also contains photographs from the conference banquet.

  4. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  5. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  6. Driving efficiency in a high-throughput metabolic stability assay through a generic high-resolution accurate mass method and automated data mining.

    PubMed

    Shui, Wenqing; Lin, Song; Zhang, Allen; Chen, Yan; Huang, Yingying; Sanders, Mark

    2011-08-01

    Improving analytical throughput is the focus of many quantitative workflows being developed for early drug discovery. For drug candidate screening, it is common practice to use ultra-high performance liquid chromatography (U-HPLC) coupled with triple quadrupole mass spectrometry. This approach certainly results in short analytical run times; however, in assessing the true throughput, all aspects of the workflow need to be considered, including instrument optimization and the necessity to re-run samples when information is missed. Here we describe a high-throughput metabolic stability assay with a simplified instrument set-up which significantly improves the overall assay efficiency. In addition, as the data are acquired in a non-biased manner, high information content on both the parent compound and metabolites is gathered at the same time, facilitating decisions about which compounds to advance through the drug discovery pipeline.
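
    For context, metabolic stability read-outs such as in vitro half-life and percent remaining are typically derived from the parent compound's peak areas over the incubation time course; a minimal calculation with made-up numbers follows (it is not taken from the assay described above).

        import numpy as np

        t = np.array([0., 5., 15., 30., 45.])               # incubation time (min)
        area = np.array([1.00, 0.82, 0.55, 0.31, 0.18])     # normalised parent peak area (illustrative)

        slope, _ = np.polyfit(t, np.log(area), 1)           # first-order decay: ln(area) = ln(A0) - k*t
        k = -slope
        print(f"k = {k:.3f} 1/min, t1/2 = {np.log(2)/k:.1f} min, "
              f"remaining at 30 min = {100*np.exp(-k*30):.0f}%")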

  7. High performance visual display for HENP detectors

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-08-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real-time visual display for events accumulated during simulations.

  8. Low-Cost High-Performance MRI

    PubMed Central

    Sarracanie, Mathieu; LaPierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5–3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  9. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  10. Thermal interface pastes nanostructured for high performance

    NASA Astrophysics Data System (ADS)

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G" and high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  11. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes.

  12. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  13. Optics of high-performance electron microscopes.

    PubMed

    Rose, H H

    2008-01-01

    During recent years, the theory of charged particle optics together with advances in fabrication tolerances and experimental techniques has led to very significant advances in high-performance electron microscopes. Here, we will describe which theoretical tools, inventions and designs have driven this development. We cover the basic theory of higher-order electron optics and of image formation in electron microscopes. This leads to a description of different methods to correct aberrations by multipole fields and to a discussion of the most advanced designs that take advantage of these techniques. The theory of electron mirrors is developed and it is shown how this can be used to correct aberrations and to design energy filters. Finally, different types of energy filters are described.

  14. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  15. High-performance nanostructured MR contrast probes

    PubMed Central

    Hu, Fengqin; Joshi, Hrushikesh M.; Dravid, Vinayak P.; Meade, Thomas J.

    2011-01-01

    Magnetic resonance imaging (MRI) has become a powerful technique in biological molecular imaging and clinical diagnosis. With the rapid progress in nanoscale science and technology, nanostructure-based MR contrast agents are undergoing rapid development. This is in part due to the tuneable magnetic and cellular uptake properties, large surface area for conjugation and favourable biodistribution. In this review, we describe our recent progress in the development of high-performance nanostructured MR contrast agents. Specifically, we report on Gd-enriched nanostructured probes that exhibit T1 MR contrast and superparamagnetic Fe3O4 and CoFe2O4 nanostructures that display T2 MR contrast enhancement. The effects of nanostructure size, shape, assembly and surface modification on relaxivity are described. The potential of these contrast agents for in vitro and in vivo MR imaging with respect to colloidal stability under physiological conditions, biocompatibility, and surface functionality are also evaluated. PMID:20694208

  16. Optics of high-performance electron microscopes*

    PubMed Central

    Rose, H H

    2008-01-01

    During recent years, the theory of charged particle optics together with advances in fabrication tolerances and experimental techniques has led to very significant advances in high-performance electron microscopes. Here, we will describe which theoretical tools, inventions and designs have driven this development. We cover the basic theory of higher-order electron optics and of image formation in electron microscopes. This leads to a description of different methods to correct aberrations by multipole fields and to a discussion of the most advanced designs that take advantage of these techniques. The theory of electron mirrors is developed and it is shown how this can be used to correct aberrations and to design energy filters. Finally, different types of energy filters are described. PMID:27877933

  17. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks preferably with Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked causing a so-called speckle pattern in the image plane of the coronagraph, limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters show better performance in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specification set forth by the sensitivity requirements of the coronagraph.

  18. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an I(sub sp) goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250 sec I(sub sp) goal.
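
    The closing claim can be checked with simple arithmetic: if the delivered specific impulse is approximated as the theoretical value scaled by combustion efficiency (a deliberate simplification, not the authors' performance model), 269 sec at 93 percent already reaches the 250 sec goal.

      # Crude check of the Isp goal: delivered Isp approximated as theoretical
      # Isp times combustion efficiency (a simplification for illustration).
      isp_theoretical_s = 269.0
      combustion_efficiency = 0.93
      isp_delivered_s = isp_theoretical_s * combustion_efficiency
      print(f"approximate delivered Isp: {isp_delivered_s:.0f} sec")   # about 250 sec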

  19. The design of high-performance gliders

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Heuermann, V.

    1985-01-01

    A high-performance glider is defined as a glider which has been designed to carry the pilot a given distance in a minimum of time, making the best possible use of the prevailing conditions. The present investigation has the objective to show approaches for enhancing the cross-country flight cruising speed, giving attention to the difficulties which the design engineer will have to overcome. The characteristics of the cross-country flight and their relation to the cruising speed are discussed, and a description is provided of mathematical expressions concerning the cruising speed, the sinking speed, and the optimum gliding speed. The effect of aspect ratio and wing loading on the cruising speed is illustrated with the aid of a graph. Trends in glider development are explored, taking into consideration the design of laminar profiles, the reduction of profile-related drag by plain flaps, and the variation of wing loading during the flight. A number of suggestions are made for obtaining gliders with improved performance.
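
    The relation between climb rate, sinking speed and cross-country cruising speed referred to above is conventionally written as V_avg = V·w_c / (w_c + w_s(V)), where w_c is the achieved rate of climb and w_s(V) the sinking speed at the inter-thermal glide speed V. The sketch below uses this standard textbook relation with hypothetical numbers; it is not taken from the paper's own expressions.

      # Standard climb-and-glide cross-country speed relation (textbook form,
      # assumed here rather than quoted from the paper); values are hypothetical.
      def cross_country_speed(v_glide, w_sink, w_climb):
          """Average cross-country speed, in the units of v_glide.

          w_sink and w_climb must share the same units (e.g. m/s)."""
          return v_glide * w_climb / (w_climb + w_sink)

      # Example: glide at 120 km/h sinking 1.0 m/s, thermals giving 2.0 m/s climb.
      v_avg = cross_country_speed(v_glide=120.0, w_sink=1.0, w_climb=2.0)
      print(f"average cross-country speed: {v_avg:.0f} km/h")   # 80 km/h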

  20. High performance zinc air fuel cell stack

    NASA Astrophysics Data System (ADS)

    Pei, Pucheng; Ma, Ze; Wang, Keliang; Wang, Xizhong; Song, Mancun; Xu, Huachi

    2014-03-01

    A zinc air fuel cell (ZAFC) stack with inexpensive manganese dioxide (MnO2) as the catalyst is designed, in which the circulating potassium hydroxide (KOH) electrolyte carries the reaction product away and acts as a coolant. Experiments are carried out to investigate the characteristics of polarization, constant current discharge and dynamic response, as well as the factors affecting the performance and uniformity of individual cells in the stack. The results reveal that the peak power density can be as high as 435 mW cm-2 based on the area of the air cathode sheet, and that cell performance and uniformity are influenced by cell location, the filled state of the zinc pellets, contact resistance, and the flow rates of electrolyte and air. It is also shown that the time needed for voltages to reach steady state after a current step-up or step-down is on the order of milliseconds, indicating that the ZAFC is well suited to vehicles with rapid dynamic response demands.

  1. Ultra High Performance, Highly Reliable, Numeric Intensive Processors and Systems

    DTIC Science & Technology

    1989-10-01

    to design high-performance DSP/IP systems using either off-the-shelf components or application-specific integrated circuitry [ASIC]. ...are the chirp-z transform (CZT) [13] and (Rader's) Prime Factor Transform (PFT) [11]. The RNS/CZT is being studied by a group at MITRE [14] and is given... The PFT RNS/CRNS/QRNS implementation has dynamic range requirements on the order of NQ2 (vs NQ4 for the CZT and much higher for the FFT). Therefore, the

  2. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulations in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
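
    The volume calculation mentioned above is commonly done by treating each spheroid as a prolate ellipsoid of revolution, V = (pi/6)·L·W², where L and W are the measured major and minor axial lengths; whether SpheroidSizer uses exactly this formula is an assumption here, not a statement from the paper.

      # Prolate-spheroid volume from major and minor axial lengths; the exact
      # formula used by SpheroidSizer is assumed, not confirmed by the abstract.
      import math

      def spheroid_volume(major_um, minor_um):
          """Volume in cubic micrometres: V = (pi / 6) * L * W**2."""
          return math.pi / 6.0 * major_um * minor_um ** 2

      print(f"{spheroid_volume(major_um=500.0, minor_um=400.0):.3e} cubic um")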

  3. High performance internal reforming unit for high temperature fuel cells

    DOEpatents

    Ma, Zhiwen; Venkataraman, Ramakrishnan; Novacco, Lawrence J.

    2008-10-07

    A fuel reformer having an enclosure with first and second opposing surfaces, a sidewall connecting the first and second opposing surfaces and an inlet port and an outlet port in the sidewall. A plate assembly supporting a catalyst and baffles are also disposed in the enclosure. A main baffle extends into the enclosure from a point of the sidewall between the inlet and outlet ports. The main baffle cooperates with the enclosure and the plate assembly to establish a path for the flow of fuel gas through the reformer from the inlet port to the outlet port. At least a first directing baffle extends in the enclosure from one of the sidewall and the main baffle and cooperates with the plate assembly and the enclosure to alter the gas flow path. A desired graded catalyst loading pattern has been defined for optimized thermal management of the internal reforming high temperature fuel cells so as to achieve high cell performance.

  4. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to easily and accurately evaluate energy-efficient measures for K-5 schools, which would contribute to the…

  5. High Power Flex-Propellant Arcjet Performance

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  6. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  7. High performance BGMI circuit for VLWIR FPAs

    NASA Astrophysics Data System (ADS)

    Hao, Li-chao; Chen, Hong-lei; Huang, Ai-bo; Zhang, Jun-ling; Ding, Rui-jun

    2013-09-01

    An improved CMOS readout integrated circuit (ROIC) for N-on-P very long wavelength (VLWIR) detectors is designed, which has the ability to operate with simple background suppression. It increases the integration time and the signal-to-noise ratio (SNR) of the image data. A buffered gate modulation input (BGMI) cell as the input circuit provides a low input resistance, high injection efficiency, and a precise biasing voltage to the photodiode. By theoretically analyzing the characteristic parameters of MOS devices at low temperature, a high-gain feedback amplifier is devised, which uses a differential stage to provide the inverting gain, improving linearity and giving tight control of the detector bias. The final chip is fabricated with HHNEC 0.35 μm 1P4M process technology. The measurement results of the fabricated readout chip at 50 K have successfully verified both the readout function and the performance improvement. With a 5.0 V power supply, the ROIC provides an output dynamic range of over 2.5 V. At the same time, the total power dissipation is less than 200 mW, and the maximum readout speed is more than 2.5 MHz.
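
    The benefit of the buffering described above can be illustrated with the generic buffered-injection relation, in which the amplifier gain A effectively multiplies the input transistor's transconductance: eta = A·g_m·R_det / (1 + A·g_m·R_det). This textbook relation and the numbers below are assumptions for illustration only; the paper's BGMI cell is only assumed to behave in a qualitatively similar way.

      # Generic buffered-injection efficiency (illustrative assumption only):
      # the feedback amplifier gain effectively multiplies g_m.
      def injection_efficiency(g_m_siemens, r_det_ohm, gain=1.0):
          loop = gain * g_m_siemens * r_det_ohm
          return loop / (1.0 + loop)

      g_m, r_det = 1e-6, 1e7   # hypothetical: 1 uS transconductance, 10 Mohm detector
      print(f"unbuffered:         {injection_efficiency(g_m, r_det):.3f}")
      print(f"buffered (A = 100): {injection_efficiency(g_m, r_det, gain=100.0):.4f}")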

  8. New high performance Si for optical devices

    NASA Astrophysics Data System (ADS)

    Tenma, T.; Matsuzaka, M.; Sako, R.; Takase, K.; Chiba, K.

    2016-05-01

    Against the backdrop of a growing demand in the areas of smart buildings, security, vehicle installation, and other applications, the market for far infrared cameras is expected to grow significantly in the future. However, since germanium (Ge) and chalcogenide glass, which have been used as the lens materials of far infrared cameras, are very expensive or highly toxic, there are some problems supporting the growing demand. We have therefore focused attention on silicon, which is inexpensive and less toxic. Although silicon has been used as a lens material of far infrared cameras, there are some problems remaining to be solved: Cz silicon is inexpensive but delivers low transmittance, and Fz silicon delivers sufficient transmittance but is expensive. We have developed New Cz silicon, which delivers transmittance as high as Fz silicon and is as inexpensive as conventional Cz silicon. We have already started sample evaluation work with companies both in Japan and overseas and have obtained excellent performance results. Mass production is scheduled to start in this fiscal year.

  9. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  10. High Performance Anion Chromatography of Gadolinium Chelates.

    PubMed

    Hajós, Peter; Lukács, Diana; Farsang, Evelin; Horváth, Krisztian

    2016-11-01

    A high performance anion chromatography (HPIC) method to separate ionic Gd chelates, [Formula: see text], [Formula: see text], [Formula: see text] and free matrix anions was developed. At alkaline pHs, polydentate complexing agents such as ethylene-diamine-tetraacetate, diethylene-triamine pentaacetate and trans-1,2-diamine-cyclohexane-tetraacetate tend to form stable Gd chelate anions and can be separated by anion exchange. Separations were studied in simple isocratic chromatographic runs over a wide range of pH and concentration of the carbonate eluent using suppressed conductivity detection. The ion exchange and complex forming equilibria were quantitatively described and demonstrated in order to understand the major factors in the control of selectivity of Gd chelates. Parameters of optimized resolution between concurrent ions were presented on a 3D resolution surface. The applicability of the developed method is represented by the simultaneous analysis of Gd chelates and organic/inorganic anions. Inductively coupled plasma atomic emission spectroscopy (ICP-AES) analysis was used for confirmation of the HPIC results for Gd. Collection protocols for the heart-cutting procedure of chromatograms were applied. SPE procedures were also developed not only to extract traces of free gadolinium ions from samples, but also to remove the high level of interfering anions of the complex matrices. The limit of detection, the recoverability and the linearity of the method were also presented.
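
    The resolution optimization referred to above is conventionally quantified with the baseline-width definition of chromatographic resolution, Rs = 2(t_R2 - t_R1)/(w_1 + w_2). The sketch below illustrates only this standard definition with hypothetical retention data; it is not the authors' 3D resolution surface.

      # Standard chromatographic resolution between two adjacent peaks
      # (baseline-width definition); retention times and widths are hypothetical.
      def resolution(t_r1, t_r2, w1, w2):
          return 2.0 * (t_r2 - t_r1) / (w1 + w2)

      rs = resolution(t_r1=6.2, t_r2=7.1, w1=0.5, w2=0.6)
      print(f"Rs = {rs:.2f}")   # Rs >= 1.5 is usually taken as baseline resolution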

  11. High Performance Circularly Polarized Microstrip Antenna

    NASA Technical Reports Server (NTRS)

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation, and the radiator elements reside in a square unit cell area of sides equal to 2d, thereby permitting the array to be used as a phased array antenna for electronic scanning; the antenna is realizable in a high temperature superconducting thin film material for high efficiency.
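
    A minimal geometric sketch of the 2x2 sequentially rotated unit cell described above is given below: four elements at the corners of a square of side d (taken here as 0.8 free-space wavelengths, within the quoted 0.7 to 0.9 range), each fed with an additional 90 deg of phase relative to its neighbour. The 0/90/180/270 deg sequence is a common convention for sequential rotation and is assumed here rather than quoted from the patent.

      # Sketch of a 2x2 sequentially rotated subarray: element positions and
      # feed phases (the phase sequence is an assumed, common convention).
      wavelength = 1.0            # normalised free-space wavelength
      d = 0.8 * wavelength        # element spacing, within the quoted 0.7-0.9 range

      positions = [(0.0, 0.0), (d, 0.0), (d, d), (0.0, d)]   # corners of a square
      feed_phases_deg = [0.0, 90.0, 180.0, 270.0]            # sequential rotation

      for (x, y), phase in zip(positions, feed_phases_deg):
          print(f"element at ({x:.2f}, {y:.2f}) wavelengths, feed phase {phase:5.1f} deg")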

  12. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the work-piece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir-zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir-zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.
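
    The role of the independent pressure unknown can be summarised by the usual volumetric/deviatoric split of the stress together with the isochoric character of the plastic strain rate. The notation below is generic and only sketches the ingredients named in the abstract; it is not the paper's full three-field variational statement.

    \[
    \boldsymbol{\sigma} = \mathbf{s} + p\,\mathbf{I},
    \qquad p = \tfrac{1}{3}\operatorname{tr}\boldsymbol{\sigma},
    \qquad \operatorname{tr}\dot{\boldsymbol{\varepsilon}}^{p} = 0,
    \]

    with the displacement, the stress and the pressure p all interpolated by linear, VMS-stabilized finite element functions, so that the incompressibility of the plastic flow is enforced through the independent nodal pressure rather than through the displacement field alone.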

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  14. High-Throughput Accurate Single-Cell Screening of Euglena gracilis with Fluorescence-Assisted Optofluidic Time-Stretch Microscopy.

    PubMed

    Guo, Baoshan; Lei, Cheng; Ito, Takuro; Jiang, Yiyue; Ozeki, Yasuyuki; Goda, Keisuke

    2016-01-01

    The development of reliable, sustainable, and economical sources of alternative fuels is an important, but challenging goal for the world. As an alternative to liquid fossil fuels, algal biofuel is expected to play a key role in alleviating global warming since algae absorb atmospheric CO2 via photosynthesis. Among various algae for fuel production, Euglena gracilis is an attractive microalgal species as it is known to produce wax ester (good for biodiesel and aviation fuel) within lipid droplets. To date, while there exist many techniques for inducing microalgal cells to produce and accumulate lipid with high efficiency, few analytical methods are available for characterizing a population of such lipid-accumulated microalgae including E. gracilis with high throughput, high accuracy, and single-cell resolution simultaneously. Here we demonstrate high-throughput, high-accuracy, single-cell screening of E. gracilis with fluorescence-assisted optofluidic time-stretch microscopy, a method that combines the strengths of microfluidic cell focusing, optical time-stretch microscopy, and fluorescence detection used in conventional flow cytometry. Specifically, our fluorescence-assisted optofluidic time-stretch microscope consists of an optical time-stretch microscope and a fluorescence analyzer on top of a hydrodynamically focusing microfluidic device and can detect fluorescence from every E. gracilis cell in a population and simultaneously obtain its image with a high throughput of 10,000 cells/s. With the multi-dimensional information acquired by the system, we classify nitrogen-sufficient (ordinary) and nitrogen-deficient (lipid-accumulated) E. gracilis cells with a low false positive rate of 1.0%. This method holds promise for evaluating cultivation techniques and selective breeding for microalgae-based biofuel production.
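
    The 1.0% figure quoted above is a standard confusion-matrix statistic; a minimal sketch of its computation is shown below with hypothetical counts, not the authors' data.

      # False positive rate from a binary confusion matrix (counts are
      # hypothetical and are not taken from the paper).
      def false_positive_rate(false_positives, true_negatives):
          return false_positives / (false_positives + true_negatives)

      fpr = false_positive_rate(false_positives=10, true_negatives=990)
      print(f"false positive rate: {100.0 * fpr:.1f} %")   # 1.0 %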

  15. High-Throughput Accurate Single-Cell Screening of Euglena gracilis with Fluorescence-Assisted Optofluidic Time-Stretch Microscopy

    PubMed Central

    Guo, Baoshan; Lei, Cheng; Ito, Takuro; Jiang, Yiyue; Ozeki, Yasuyuki; Goda, Keisuke

    2016-01-01

    The development of reliable, sustainable, and economical sources of alternative fuels is an important, but challenging goal for the world. As an alternative to liquid fossil fuels, algal biofuel is expected to play a key role in alleviating global warming since algae absorb atmospheric CO2 via photosynthesis. Among various algae for fuel production, Euglena gracilis is an attractive microalgal species as it is known to produce wax ester (good for biodiesel and aviation fuel) within lipid droplets. To date, while there exist many techniques for inducing microalgal cells to produce and accumulate lipid with high efficiency, few analytical methods are available for characterizing a population of such lipid-accumulated microalgae including E. gracilis with high throughput, high accuracy, and single-cell resolution simultaneously. Here we demonstrate high-throughput, high-accuracy, single-cell screening of E. gracilis with fluorescence-assisted optofluidic time-stretch microscopy, a method that combines the strengths of microfluidic cell focusing, optical time-stretch microscopy, and fluorescence detection used in conventional flow cytometry. Specifically, our fluorescence-assisted optofluidic time-stretch microscope consists of an optical time-stretch microscope and a fluorescence analyzer on top of a hydrodynamically focusing microfluidic device and can detect fluorescence from every E. gracilis cell in a population and simultaneously obtain its image with a high throughput of 10,000 cells/s. With the multi-dimensional information acquired by the system, we classify nitrogen-sufficient (ordinary) and nitrogen-deficient (lipid-accumulated) E. gracilis cells with a low false positive rate of 1.0%. This method holds promise for evaluating cultivation techniques and selective breeding for microalgae-based biofuel production. PMID:27846239

  16. Accurate high-throughput identification of parallel G-quadruplex topology by a new tetraaryl-substituted imidazole.

    PubMed

    Hu, Ming-Hao; Chen, Shuo-Bin; Wang, Yu-Qing; Zeng, You-Mei; Ou, Tian-Miao; Li, Ding; Gu, Lian-Quan; Huang, Zhi-Shu; Tan, Jia-Heng

    2016-09-15

    G-quadruplex nucleic acids are four-stranded DNA or RNA secondary structures that are formed in guanine-rich sequences. These structures exhibit extensive structural polymorphism and play a pivotal role in the control of a variety of cellular processes. To date, diverse approaches for high-throughput identification of G-quadruplex structures have been successfully developed, but high-throughput methods for further characterization of their topologies are still lacking. In this study, we report a new tetra-arylimidazole probe psIZCM-1, which was found to display significant and distinctive changes in both the absorption and the fluorescence spectra in the presence of parallel G-quadruplexes but show insignificant changes upon interactions with anti-parallel G-quadruplexes or other non-quadruplex oligonucleotides. In view of this dual-output feature, we used psIZCM-1 to identify the parallel G-quadruplexes from a large set of 314 oligonucleotides (including 300 G-quadruplex-forming oligonucleotides and 14 non-quadruplex oligonucleotides) via a microplate reader and accordingly established a high-throughput method for the characterization of parallel G-quadruplex topologies. The accuracy of this method was greater than 95%, which was much higher than that of the commercial probe NMM. To make the approach more practical, we further combined psIZCM-1 with another G-quadruplex probe IZCM-7 to realize the high-throughput classification of parallel, anti-parallel G-quadruplexes and non-quadruplex structures.

  17. ElVis: A System for the Accurate and Interactive Visualization of High-Order Finite Element Solutions.

    PubMed

    Nelson, B; Liu, E; Kirby, R M; Haimes, R

    2012-12-01

    This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations (high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular) will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of
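
    The pixel-exact argument above can be illustrated with a one-dimensional toy example: a high-order polynomial "solution" is evaluated exactly at every output sample, and then again through piecewise-linear resampling on a few nodes, as a conventional linear visualization pipeline would do. The sketch is illustrative only and does not use ElVis or its plugin interface.

      # Toy 1D illustration of the pixel-exact argument: evaluate a degree-4
      # polynomial exactly, and compare with piecewise-linear resampling
      # through 5 equally spaced nodes.
      import numpy as np
      from numpy.polynomial import Polynomial

      p = Polynomial([0.0, 1.0, -2.0, 0.0, 1.5])   # p(x) = x - 2x^2 + 1.5x^4

      nodes = np.linspace(0.0, 1.0, 5)             # coarse nodal sampling
      pixels = np.linspace(0.0, 1.0, 1000)         # "pixel" sample points

      exact = p(pixels)
      linear = np.interp(pixels, nodes, p(nodes))  # what a linear pipeline shows
      print(f"max linear-interpolation error: {np.abs(exact - linear).max():.3e}")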

  18. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
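
    A minimal sketch of the auto cross covariance transformation of a PSSM is given below: for a lag g and a column pair (j1, j2), the feature is the average over sequence positions of the product of mean-centred scores. The exact normalisation and lag range used by the authors are assumptions here.

      # Sketch of the auto cross covariance (ACC) transform of an L x 20 PSSM;
      # the exact normalisation used in the paper is assumed, not confirmed.
      import numpy as np

      def acc_features(pssm, max_lag):
          """Return ACC features for lags 1..max_lag as a flat vector."""
          length, n_cols = pssm.shape
          centred = pssm - pssm.mean(axis=0)
          feats = []
          for lag in range(1, max_lag + 1):
              for j1 in range(n_cols):
                  for j2 in range(n_cols):
                      prod = centred[: length - lag, j1] * centred[lag:, j2]
                      feats.append(prod.mean())
          return np.array(feats)

      rng = np.random.default_rng(0)
      pssm = rng.normal(size=(120, 20))            # stand-in for a real PSSM
      print(acc_features(pssm, max_lag=2).shape)   # (2 * 20 * 20,) = (800,)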

  19. Evaluating performance of high efficiency mist eliminators

    SciTech Connect

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K.

    2013-07-01

    Processing liquid wastes frequently generates off gas streams with high humidity and liquid aerosols. Droplet laden air streams can be produced from tank mixing or sparging and processes such as reforming or evaporative volume reduction. Unfortunately these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications, however they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units and data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and super-micron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow determining filter efficiency, first using dioctyl phthalate and then dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid
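
    The filtering-efficiency figures referred to above are conventionally computed from upstream and downstream aerosol concentrations; a minimal sketch of that calculation is shown below with hypothetical numbers, not data from this test program.

      # Filtering efficiency and penetration from upstream and downstream
      # aerosol concentrations (hypothetical values, not test data).
      def filtering_efficiency(c_upstream, c_downstream):
          return 1.0 - c_downstream / c_upstream

      c_up, c_down = 1.0e6, 2.5e3        # particles per unit volume
      eff = filtering_efficiency(c_up, c_down)
      print(f"efficiency: {100.0 * eff:.2f} %, penetration: {100.0 * (1.0 - eff):.2f} %")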

  20. The curing of high-performance concrete

    NASA Astrophysics Data System (ADS)

    Meeks, Kenneth Wayne

    This dissertation describes the latest information, technology, and research on the curing of high performance concrete (HPC). Expanded somewhat beyond the scope of HPC, it examines the current body of knowledge on the effects of various curing regimes on concrete. The significance and importance of curing are discussed as well as the various definitions of HPC. The current curing requirements, standards, and criteria as proposed by ACI, as well as those of other countries, are reviewed and discussed. The current prescriptive curing requirements may not be applicable to high performance concrete. The research program reported in this dissertation looked at one approach to development of curing criteria for this relatively new class of concrete. The program applies some of the basic concepts of the methodology developed by the German researcher, H. K. Hilsdorf, to the curing of HPC with the objective to determine minimum curing durations for adequate strength development. The approach is to determine what fraction of the standard-cured 28-day strength has to be attained at the end of the curing period to assure that the design strength is attained in the interior of the member. An innovative direct tension test was developed to measure the strength at specific depths from the drying surface of small mortar cylinders (50 x 127 mm (2 x 5 in.)). Two mortar mixtures were investigated, w/c = 0.30 and w/c = 0.45, and three different moist curing regimes, 1-day, 3-day, and 7-day. Specimens were stored in two environmental chambers at 25 °C, 50% RH; and 25 °C, 70% RH, until testing at the age of 28 days. Direct tensile tests were conducted using steel disks epoxied to the ends of the specimens. Also, the penetration of the drying front was calculated from the drying data using porosity and degree of hydration relationships. The major observation from these tests was that adequate strength is attained in both mortar mixtures with only one day of moist curing. The drying