Science.gov

Sample records for accurate high performance

  1. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    PubMed

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. Identification by HPLC/TOF-MS is accomplished with the accurate mass (and the subsequently generated empirical formula) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate masses of their main fragment ions. To obtain sufficient sensitivity for quantitation (using the protonated or deprotonated molecule) and additional qualitative mass spectral information from the fragment ions, a segmented program of fragmentor voltages is designed for positive and negative ion modes, respectively. Accurate mass measurements are highly useful in complex sample analyses because they provide a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg kg(-1) concentration range, with correlation coefficients >0.996. Recoveries at the tested concentrations of 1.0-100 mg kg(-1) are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg kg(-1), far below the required maximum residue levels (MRLs) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
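
    The 3 ppm figure above is the standard accurate-mass acceptance criterion. A minimal sketch of how such an error is computed; the measured value below is hypothetical, and the theoretical [M-H]- m/z of benzoic acid is a textbook reference value:

```python
# Mass accuracy check used in accurate-mass identification: the signed
# ppm error between a measured m/z and the theoretical m/z of a
# candidate formula. Values below are illustrative, not from the paper.

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Signed mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Theoretical [M-H]- of benzoic acid (C7H5O2-) is 121.0295
measured = 121.0293
err = ppm_error(measured, 121.0295)
print(abs(err) < 3.0)  # within a 3 ppm tolerance -> accept the formula
```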

  2. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  3. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters in involute gear measurement. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. Now the National Metrology Institute of Japan (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
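
    The closure technique mentioned above can be sketched on synthetic data: readings taken with the gear rotated through all N positions mix a fixed device error with a cyclically shifting gear error, and averaging separates the two. This is a stylized, noise-free illustration of the idea, not either institute's actual procedure:

```python
# Closure (multi-position) error separation on synthetic data: the
# reading at rotation i, tooth position j mixes a fixed device error
# d[j] with the gear error g[(i + j) % N], which shifts as the gear is
# rotated. Averaging over all N rotations cancels the shifting term.
import numpy as np

rng = np.random.default_rng(0)
N = 12                           # number of teeth / rotation positions
d = rng.normal(0, 0.5, N)        # systematic device error (unknown)
g = rng.normal(0, 0.5, N)        # gear pitch error (unknown)
d -= d.mean(); g -= g.mean()     # errors are defined up to a constant

# m[i, j]: reading at rotation i, angular position j (noise-free sketch)
m = np.array([[d[j] + g[(i + j) % N] for j in range(N)] for i in range(N)])

d_est = m.mean(axis=0)           # gear error averages out -> device error
g_est = np.array([m[(k - np.arange(N)) % N, np.arange(N)].mean()
                  for k in range(N)])

print(np.allclose(d_est, d), np.allclose(g_est, g))
```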

  4. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  5. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750 ± 0.0049 amu and 270.0786 ± 0.0064 amu, respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098 ± 0.0061 amu and 314.1153 ± 0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
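
    The reason accurate mass cannot separate the two ESA isomers is that compounds sharing an empirical formula share a monoisotopic mass exactly. A sketch with a simpler isomer pair (ethanol and dimethyl ether, both C2H6O; monoisotopic atomic masses from standard tables):

```python
# Isomers with the same empirical formula have identical monoisotopic
# mass, so accurate mass alone cannot tell them apart.
MONO = {"C": 12.0, "H": 1.00782503, "O": 15.99491462}

def monoisotopic_mass(formula: dict) -> float:
    return sum(MONO[el] * n for el, n in formula.items())

ethanol = {"C": 2, "H": 6, "O": 1}
dimethyl_ether = {"C": 2, "H": 6, "O": 1}   # same empirical formula

print(monoisotopic_mass(ethanol) == monoisotopic_mass(dimethyl_ether))  # True
print(round(monoisotopic_mass(ethanol), 4))                             # 46.0419
```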

  6. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing…
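
    For reference, the sensitivity, specificity and accuracy figures above follow directly from confusion-matrix counts. The counts below are illustrative values chosen to reproduce the reported 88%/82%/85% with 66 patients and 66 controls; they are not the study's actual tallies:

```python
# Diagnostic statistics from confusion-matrix counts (tp = true
# positives, fn = false negatives, tn = true negatives, fp = false
# positives). Counts are illustrative, not from the study.
def diagnostic_stats(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_stats(tp=58, fn=8, tn=54, fp=12)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.88 0.82 0.85
```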

  7. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1 methylamine to C12 dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl; catalyst: none; molar ratio: 4.3; time: 8 min; temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. Interfering substances (aromatic amines, aliphatic alcohols and phenols) show derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations for impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73-2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. PMID:24925451

  8. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  9. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
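
    One common way a multicolor radiation sensor infers temperature is ratio (two-color) pyrometry. A minimal sketch under Wien's approximation; the wavelengths and test temperature are arbitrary choices for the sketch, as the patent does not specify this algorithm:

```python
# Illustrative ratio (two-color) pyrometry under Wien's approximation:
# temperature from the ratio of spectral radiances at two wavelengths.
import math

C2 = 1.43877e-2  # second radiation constant, m*K

def wien_radiance(lam, T):
    # Wien approximation to Planck's law; the constant prefactor is
    # omitted because it cancels in the ratio of two radiances.
    return lam**-5 * math.exp(-C2 / (lam * T))

def temperature_from_ratio(lam1, lam2, ratio):
    # Invert ratio = (lam2/lam1)**5 * exp(-(C2/T) * (1/lam1 - 1/lam2))
    return C2 * (1 / lam1 - 1 / lam2) / (5 * math.log(lam2 / lam1) - math.log(ratio))

lam1, lam2 = 0.65e-6, 0.90e-6   # metres (assumed sensor wavelengths)
T_true = 2500.0                  # kelvin (~2227 C, inside the patent's range)
r = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
print(temperature_from_ratio(lam1, lam2, r))  # recovers T_true
```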

  10. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  11. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
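
    The coordinate-processing step can be sketched minimally: an encoder mark count maps to a rotation angle, and the probe tip is reported in cylindrical coordinates. The marks-per-revolution, arm length and probe height below are hypothetical values, not from the patent:

```python
# Minimal sketch of converting an encoder mark count to a rotation
# angle and expressing the probe tip in cylindrical coordinates
# (r, theta, z). All constants are assumed for illustration.
import math

MARKS_PER_REV = 36000        # encoder marks per full revolution (assumed)
ARM_LENGTH = 0.500           # probe arm length in metres (assumed)
PROBE_Z = 0.120              # fixed probe tip height in metres (assumed)

def probe_tip_cylindrical(mark_count: int):
    theta = 2 * math.pi * (mark_count % MARKS_PER_REV) / MARKS_PER_REV
    return (ARM_LENGTH, theta, PROBE_Z)   # (r, theta, z)

r, theta, z = probe_tip_cylindrical(9000)  # a quarter revolution
print(r, theta, z)                          # theta == pi/2
```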

  12. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  13. Feasibility of ultra-high performance liquid and gas chromatography coupled to mass spectrometry for accurate determination of primary and secondary phthalate metabolites in urine samples.

    PubMed

    Herrero, Laura; Calvarro, Sagrario; Fernández, Mario A; Quintanilla-López, Jesús Eduardo; González, María José; Gómara, Belén

    2015-01-01

    Phthalates (PAEs) are ubiquitous toxic chemical compounds. During the last few years, some phthalate metabolites (MPAEs) have been proposed as appropriate biomarkers in human urine samples to determine PAE human intake and exposure. It is therefore necessary to have fast, easy, robust and validated analytical methods to determine selected MPAEs in human urine samples. Two different instrumental methods based on gas (GC) and ultra-high performance liquid (UHPLC) chromatography coupled to mass spectrometry (MS) have been optimized, characterized and validated for the simultaneous determination of nine primary and secondary phthalate metabolites in urine samples. Both instrumental methods have similar sensitivity (detection limits ranged from 0.03 to 8.89 pg μL(-1) and from 0.06 to 0.49 pg μL(-1) in GC-MS and UHPLC-MS(2), respectively), precision (repeatability, expressed as relative standard deviation, was lower than 8.4% in both systems, except for 5OH-MEHP in the case of GC-MS) and accuracy. However, advantages of the UHPLC-MS(2) method, such as greater selectivity and shorter chromatographic runs (6.8 min vs. 28.5 min), led to its selection for the analysis of the twenty-one human urine samples from the general Spanish population. In these samples, MEP showed the highest median concentration (68.6 μg L(-1)), followed by MiBP (23.3 μg L(-1)), 5cx-MEPP (22.5 μg L(-1)) and MBP (19.3 μg L(-1)). MMP (6.99 μg L(-1)), 5oxo-MEHP (6.15 μg L(-1)), 5OH-MEHP (5.30 μg L(-1)) and MEHP (4.40 μg L(-1)) showed intermediate levels. Finally, the lowest levels were found for MBzP (2.55 μg L(-1)). These data are within the same order of magnitude as those found in other similar populations. PMID:25467512

  15. The potential of inductively coupled plasma mass spectrometry detection for high-performance liquid chromatography combined with accurate mass measurement of organic pharmaceutical compounds.

    PubMed

    Axelsson, B O; Jörnten-Karlsson, M; Michelsen, P; Abou-Shakra, F

    2001-01-01

    Quantification of unknown components in pharmaceutical, metabolic and environmental samples is an important but difficult task. Most commonly used detectors (like UV, RI or MS) require standards of each analyte for accurate quantification. Even if the chemical structure or elemental composition is known, the response from these detectors is difficult to predict with any accuracy. In inductively coupled plasma mass spectrometry (ICP-MS), compounds are atomised and ionised irrespective of the chemical structure(s) incorporating the element of interest. Liquid chromatography coupled with inductively coupled plasma mass spectrometry (LC/ICP-MS) has been shown to provide a generic detection for structurally non-correlated compounds with common elements like phosphorus and iodine. Detection of selected elements gives a better quantification of tested 'unknowns' than UV and organic mass spectrometric detection. It was shown that the ultrasonic nebuliser did not introduce any measurable dead volume and preserves the separation efficiency of the system. ICP-MS can be used in combination with many different mobile phases, ranging from 0 to 100% organic modifier. The dynamic range was found to exceed 2.5 orders of magnitude. The application of LC/ICP-MS to pharmaceutical drugs and formulations has shown that impurities can be quantified below the 0.1 mol-% level.

  16. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
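
    The normalized gradient fields (NGF) distance named above rewards locations where the two images' gradients are parallel or anti-parallel. A compact NumPy sketch of the distance term only (the edge parameter and test image are assumptions, and the paper's regularization terms are omitted):

```python
# Minimal normalized gradient fields (NGF) distance: small where the
# gradients of the two images are aligned or anti-aligned, large where
# they disagree. eps and the test image are illustrative assumptions.
import numpy as np

def ngf_distance(R, T, eps=1e-3):
    gR = np.gradient(R.astype(float))
    gT = np.gradient(T.astype(float))
    dot = sum(a * b for a, b in zip(gR, gT))
    nR = np.sqrt(sum(a * a for a in gR) + eps**2)
    nT = np.sqrt(sum(b * b for b in gT) + eps**2)
    return np.mean(1.0 - (dot / (nR * nT)) ** 2)

x = np.linspace(0, 1, 64)
img = np.outer(np.sin(6 * x), np.cos(6 * x))
# identical images score lower than a misaligned (rotated) pair
print(ngf_distance(img, img) < ngf_distance(img, np.rot90(img)))  # True
```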

  17. An optimized method for neurotransmitters and their metabolites analysis in mouse hypothalamus by high performance liquid chromatography-Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry.

    PubMed

    Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying

    2016-02-15

    Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, there are many difficulties in detecting NTs together with their metabolites in biological samples. A new method for the detection of NTs and their metabolites by high performance liquid chromatography coupled with Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. This method represents a substantial advance in the application of Q Exactive MS to quantitative analysis. It enabled rapid quantification of ten compounds within 18 min. Good linearity was obtained, with correlation coefficients above 0.99. The limit of detection (LOD) and limit of quantitation (LOQ) concentration ranges were 0.0008-0.05 nmol/mL and 0.002-25.0 nmol/mL, respectively. Precisions (relative standard deviation, RSD) of this method were 0.36-12.70%. Recoveries ranged between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were determined by Q Exactive LC-MS using this method.

  18. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  19. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-02-12

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.
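
    One common way to quantify the link correlation these estimators approximate is the conditional packet reception ratio between two receivers hearing the same broadcasts. The traces below are made up for illustration, and this is the target quantity, not the LACE algorithm itself:

```python
# Pairwise link correlation from packet reception traces (1 = received,
# 0 = lost) at two receivers for the same broadcasts. Traces are made up.
def link_correlation(trace_a, trace_b):
    """P(B receives | A receives): conditional packet reception ratio."""
    both = sum(1 for a, b in zip(trace_a, trace_b) if a == 1 and b == 1)
    a_recv = sum(trace_a)
    return both / a_recv if a_recv else 0.0

a = [1, 1, 0, 1, 1, 0, 1, 1]   # receiver A's reception bitmap
b = [1, 0, 0, 1, 1, 0, 1, 0]   # receiver B's reception bitmap
print(link_correlation(a, b))   # 4 of A's 6 receptions also reached B
```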

  20. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessing the accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, the so-called pulse-to-noise ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in the identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of the accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
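
    A simplified energy-ratio version of the PNR idea, for intuition only: the paper derives the exact definition analytically, whereas this sketch and its synthetic pulse train are illustrative assumptions:

```python
# Simplified pulse-to-noise ratio: the dB ratio between a decomposed
# source's energy at detected discharge instants and its energy
# elsewhere. The synthetic pulse train below is made up.
import numpy as np

def pnr_db(signal, pulse_idx):
    mask = np.zeros(signal.size, dtype=bool)
    mask[np.asarray(pulse_idx)] = True
    pulse_energy = np.mean(signal[mask] ** 2)
    noise_energy = np.mean(signal[~mask] ** 2)
    return 10 * np.log10(pulse_energy / noise_energy)

rng = np.random.default_rng(1)
s = rng.normal(0, 0.02, 2000)          # baseline noise
spikes = np.arange(50, 2000, 100)      # discharge instants
s[spikes] = 1.0                        # unit pulses at discharges
print(pnr_db(s, spikes) > 30)          # a cleanly decomposed unit
```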

  1. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
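
    A generic three-term model of the kind described, with compute, memory and communication costs added together. All machine parameters and operation counts below are hypothetical round numbers, not the paper's calibrated values:

```python
# Sketch of a processor-speed + memory + interconnect performance
# model: predicted time is the sum of three cost terms. All inputs are
# hypothetical, chosen only to illustrate the structure.
def predicted_time(flops, mem_bytes, msgs, msg_bytes,
                   flop_rate, mem_bw, latency, net_bw):
    compute = flops / flop_rate          # arithmetic cost
    memory = mem_bytes / mem_bw          # memory traffic cost
    network = msgs * latency + msg_bytes / net_bw  # communication cost
    return compute + memory + network

t = predicted_time(
    flops=2e11, mem_bytes=8e10, msgs=1e4, msg_bytes=4e9,
    flop_rate=1.5e9,   # 1.5 Gflop/s per processor (POWER3-era figure)
    mem_bw=1.0e9,      # 1 GB/s sustained memory bandwidth
    latency=1e-5,      # 10 us per message
    net_bw=5.0e8,      # 500 MB/s interconnect bandwidth
)
print(round(t, 1))     # predicted seconds
```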

  2. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc is much simpler, faster, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully applied to the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
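
    LOD/LOQ figures like those above are commonly estimated from a calibration curve via the 3.3σ/slope and 10σ/slope convention (an ICH-style assumption here; the paper's exact procedure may differ). A sketch with made-up calibration points:

```python
# ICH-style LOD/LOQ estimate from a linear calibration curve:
# LOD = 3.3*s/slope, LOQ = 10*s/slope, with s the standard deviation
# of the response residuals. Calibration data below are made up.
import numpy as np

conc = np.array([20, 40, 60, 80, 100], dtype=float)   # ug/mL (made up)
resp = np.array([41.0, 79.5, 121.0, 158.5, 201.0])    # detector response

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
s = residuals.std(ddof=2)        # ddof=2: two fitted parameters

lod = 3.3 * s / slope
loq = 10 * s / slope
print(round(lod, 2), round(loq, 2))   # ug/mL
```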

  4. Second-order accurate difference schemes on highly irregular meshes

    SciTech Connect

    Manteuffel, T.A.; White, A.B. Jr.

    1988-01-01

In this paper, compact-as-possible second-order accurate difference schemes are constructed for boundary-value problems of arbitrary order on highly irregular meshes. It is shown that for equations of order K these schemes have truncation error of order (3 - K). This phenomenon is known as supraconvergence. 7 refs.

  5. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    SciTech Connect

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of code for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increasing emphasis has been placed on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
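The method of manufactured solutions mentioned above can be illustrated on a toy problem: choose an exact solution, derive the source term it implies, and confirm that the solver converges at its design order. A minimal sketch (a generic second-order Poisson solver, not RELAP5-3D coding):

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite-difference solve of -u'' = f on (0, pi)
    with u(0) = u(pi) = 0, using n uniform intervals."""
    h = np.pi / n
    x = np.linspace(0.0, np.pi, n + 1)
    # tridiagonal operator for the interior nodes
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

def mms_convergence_order():
    """Method of manufactured solutions: pick u(x) = sin(x), so the
    manufactured source is f = -u'' = sin(x). Halving h should reduce
    the maximum error by ~4x for a second-order scheme."""
    errors = []
    for n in (32, 64):
        x, u = solve_poisson(np.sin, n)
        errors.append(np.max(np.abs(u - np.sin(x))))
    return np.log2(errors[0] / errors[1])  # observed order of accuracy
```

In an automated verification suite, the assertion would be that the observed order matches the documented order of the discretization.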

  6. A high order accurate difference scheme for complex flow fields

    SciTech Connect

    Dexun Fu; Yanwen Ma

    1997-06-01

    A high order accurate finite difference method for direct numerical simulation of coherent structure in the mixing layers is presented. The reason for oscillation production in numerical solutions is analyzed. It is caused by a nonuniform group velocity of wavepackets. A method of group velocity control for the improvement of the shock resolution is presented. In numerical simulation the fifth-order accurate upwind compact difference relation is used to approximate the derivatives in the convection terms of the compressible N-S equations, a sixth-order accurate symmetric compact difference relation is used to approximate the viscous terms, and a three-stage R-K method is used to advance in time. In order to improve the shock resolution the scheme is reconstructed with the method of diffusion analogy which is used to control the group velocity of wavepackets. 18 refs., 12 figs., 1 tab.

  7. Uniformly high order accurate essentially non-oscillatory schemes 3

    NASA Technical Reports Server (NTRS)

    Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.

    1986-01-01

    In this paper (a third in a series) the construction and the analysis of essentially non-oscillatory shock capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that when applied to piecewise smooth data gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods this procedure uses an adaptive stencil of grid points and consequently the resulting schemes are highly nonlinear.
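The adaptive-stencil idea behind ENO reconstruction can be illustrated at second order: for each cell, pick whichever of the two candidate slopes is smaller in magnitude, so the stencil avoids crossing a discontinuity. A minimal sketch (illustrative Python, not the authors' implementation):

```python
import numpy as np

def eno2_interface_values(vbar):
    """Second-order ENO reconstruction from cell averages.

    For each cell, the backward and forward differences are the two
    candidate slopes; the smaller-magnitude one is chosen, mimicking
    ENO's adaptive stencil that avoids the discontinuous side. Returns
    the reconstructed value at the right interface of each cell.
    """
    v = np.asarray(vbar, float)
    dminus = np.diff(v, prepend=v[0])   # backward differences
    dplus = np.diff(v, append=v[-1])    # forward differences
    slope = np.where(np.abs(dminus) <= np.abs(dplus), dminus, dplus)
    return v + 0.5 * slope
```

On smooth data this reproduces the second-order interpolant; at a jump the zero slope on the smooth side is selected, so no Gibbs oscillation is introduced.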

  8. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high-Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U L-III edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th M-IV and Th M-V absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  9. Library preparation for highly accurate population sequencing of RNA viruses

    PubMed Central

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
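The consensus step at the heart of CirSeq amounts to a per-position majority vote across the tandem repeat copies within a read. A minimal sketch (illustrative Python; a real pipeline must also infer the repeat period and boundary, which are assumed known here):

```python
from collections import Counter

def cirseq_consensus(read, period):
    """Collapse a tandem-repeat read (rolling-circle product) into a
    consensus of its repeat unit.

    Each position of the repeat unit is called by majority vote across
    the redundant copies; ties return 'N'. Assumes the read starts at a
    repeat boundary and the repeat length (period) is known.
    """
    consensus = []
    for i in range(period):
        bases = read[i::period]             # the redundant copies
        (base, count), *rest = Counter(bases).most_common()
        if rest and rest[0][1] == count:
            consensus.append('N')           # ambiguous position
        else:
            consensus.append(base)
    return ''.join(consensus)
```

Because an error must recur in the same position of every copy to survive the vote, the consensus error rate falls far below the per-base sequencing error rate, which is what enables ultra-rare variant detection.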

  10. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
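A common way to reach the ~1/20-pixel alignment mentioned above is to locate the cross-correlation peak between the science and reference images and refine it with a parabolic fit. A minimal sketch of one such approach (illustrative Python, not one of the specific algorithms evaluated in the paper):

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the translation (dy, dx) that aligns img to ref.

    The cross-correlation peak gives the integer shift; a 1-D parabolic
    fit through the peak and its neighbours along each axis refines it
    to subpixel precision. Rolling img by (round(dy), round(dx))
    approximately restores ref. Assumes periodic (FFT) boundaries.
    """
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    shift = []
    for axis, p in enumerate(peak):
        n = xc.shape[axis]
        idx = list(peak)
        idx[axis] = (p - 1) % n
        ym = xc[tuple(idx)]
        idx[axis] = (p + 1) % n
        yp = xc[tuple(idx)]
        y0 = xc[peak]
        # vertex of the parabola through the three samples
        denom = ym - 2.0 * y0 + yp
        frac = 0.0 if denom == 0 else 0.5 * (ym - yp) / denom
        s = p + frac
        if s > n / 2:          # wrap to a signed shift
            s -= n
        shift.append(float(s))
    return tuple(shift)
```

In a coronagraphic pipeline the recovered shift would be applied to the reference PSF (e.g. via Fourier-domain shifting) before subtraction.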

  11. Fractionating Polymer Microspheres as Highly Accurate Density Standards.

    PubMed

    Bloxham, William H; Hennek, Jonathan W; Kumar, Ashok A; Whitesides, George M

    2015-07-21

This paper describes a method of isolating small, highly accurate density-standard beads and characterizing their densities using accurate and experimentally traceable techniques. Density standards have a variety of applications, including the characterization of density gradients, which are used to separate objects in a variety of fields. Glass density-standard beads can be very accurate (±0.0001 g cm(-3)) but are too large (3-7 mm in diameter) for many applications. When smaller density standards are needed, commercial polymer microspheres are often used. These microspheres have standard deviations in density ranging from 0.006 to 0.021 g cm(-3); such broad distributions make them impractical for applications demanding small steps in density. In this paper, commercial microspheres are fractionated using aqueous multiphase systems (AMPS), aqueous mixtures of polymers and salts that spontaneously separate into phases having molecularly sharp steps in density, to isolate microspheres having much narrower distributions in density (standard deviations from 0.0003 to 0.0008 g cm(-3)) than the original microspheres. By reducing the heterogeneity in density, this method reduces the uncertainty in the density of any specific bead and therefore improves accuracy within the limits of the calibration standards used to characterize the distributions in density.

  12. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO) able to process several hundred thousand frames per second. The principal advantage of the new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even at a low facial image resolution (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When the software correlation filter was applied to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. The filtering correlation therefore works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  13. Selective accurate-mass-based analysis of 11 oxy-PAHs on atmospheric particulate matter by pressurized liquid extraction followed by high-performance liquid chromatography and magnetic sector mass spectrometry.

    PubMed

    Walgraeve, C; Demeestere, K; De Wispelaere, P; Dewulf, J; Lintelmann, J; Fischer, K; Van Langenhove, H

    2012-02-01

An innovative analytical method based on high-performance liquid chromatography and atmospheric pressure chemical ionization magnetic sector mass spectrometry was developed and optimized to determine trace concentrations of 11 compounds belonging to the group of the seldom-analyzed oxy-PAHs (phenanthrene-9,10-dione, chrysene-5,6-dione, benzo[a]pyrene-4,5-dione, benzo[a]pyrene-1,6-dione, benzo[a]pyrene-3,6-dione, benzo[a]pyrene-6,12-dione, 4-oxa-benzo[def]chrysene-5-one, pyrene-1-carboxaldehyde, benzo[de]anthracene-7-one, benzo[a]anthracene-7,12-dione, and naphthacene-5,12-dione) on airborne particulate matter (PM(10)). The mass spectrometer was operated in multiple ion detection mode, allowing for selective accurate mass detection (mass resolution of 12,000 full width at half maximum) of the oxy-PAHs characteristic ions. Optimization of both the vaporizer (450 °C) and capillary temperature (350 °C) resulted in instrumental detection limits in the range between 7 (benzo[a]pyrene-1,6-dione) and 926 pg (benzo[a]anthracene-7,12-dione). The advanced pressurized liquid extraction (PLE) and the more traditionally used ultrasonic extraction (USE) were compared using ethyl acetate as an extraction solvent. For both techniques, high recoveries from spiked quartz fiber filters (PLE, 82-110%; USE, 67-97%) were obtained. Recoveries obtained from real PM(10) samples were also high (76-107%), and no significant matrix effects (ME) on the ionization process (enhancement or suppression) were found (ME, 89-123%). Method limits of quantification (S/N = 10) were in the range between 2 and 336 pg/m(3). This method was used to analyze real PM samples collected at several urban and rural locations in the Antwerp area. For the first time, concentrations for Belgium are provided. Concentrations of individual oxy-PAHs are in the lower picograms per cubic meter to 6 ng/m(3) range. Large concentration differences between individual compounds are found as exemplified by the 75th percentile

  14. The highly accurate anteriolateral portal for injecting the knee

    PubMed Central

    2011-01-01

    Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device), 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm) 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months) were determined with the 10 cm Visual Analogue Pain Score (VAS). The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77), pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11), responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33), duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69), and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71). The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration ClinicalTrials.gov: NCT00651625 PMID:21447197

  15. A highly accurate heuristic algorithm for the haplotype assembly problem

    PubMed Central

    2013-01-01

Background Single nucleotide polymorphisms (SNPs) are the most common form of genetic variation in human DNA. The sequence of SNPs in each of the two copies of a given chromosome in a diploid organism is referred to as a haplotype. Haplotype information has many applications such as gene disease diagnoses, drug design, etc. The haplotype assembly problem is defined as follows: Given a set of fragments sequenced from the two copies of a chromosome of a single individual, and their locations in the chromosome, which can be pre-determined by aligning the fragments to a reference DNA sequence, the goal here is to reconstruct two haplotypes (h1, h2) from the input fragments. Existing algorithms do not work well when the error rate of fragments is high. Here we design an algorithm that can give accurate solutions, even if the error rate of fragments is high. Results We first give a dynamic programming algorithm that can give exact solutions to the haplotype assembly problem. The time complexity of the algorithm is O(n × 2^t × t), where n is the number of SNPs, and t is the maximum coverage of a SNP site. The algorithm is slow when t is large. To solve the problem when t is large, we further propose a heuristic algorithm on the basis of the dynamic programming algorithm. Experiments show that our heuristic algorithm can give very accurate solutions. Conclusions We have tested our algorithm on a set of benchmark datasets. Experiments show that our algorithm can give very accurate solutions. It outperforms most of the existing programs when the error rate of the input fragments is high. PMID:23445458
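The objective being optimized here is usually minimum error correction (MEC): partition the fragments into two groups so that the total disagreement with the two group consensus haplotypes is smallest. A minimal exhaustive sketch for tiny instances (illustrative Python; the paper's dynamic programming runs in O(n × 2^t × t), whereas this brute force is exponential in the number of fragments):

```python
from itertools import product

def assemble_haplotypes(fragments, n_snps):
    """Exhaustive minimum-error-correction haplotype assembly.

    Each fragment is a dict mapping SNP index -> allele (0/1). Every
    bipartition of the fragments is scored by the number of allele
    corrections needed to make each part consistent with its own
    majority-vote consensus haplotype; the cheapest bipartition wins.
    """
    def consensus(group):
        hap = []
        for j in range(n_snps):
            votes = [f[j] for f in group if j in f]
            hap.append(1 if sum(votes) * 2 > len(votes) else 0)
        return hap

    def errors(group, hap):
        return sum(hap[j] != a for f in group for j, a in f.items())

    best = None
    for mask in product([0, 1], repeat=len(fragments)):
        parts = ([f for f, m in zip(fragments, mask) if m == 0],
                 [f for f, m in zip(fragments, mask) if m == 1])
        haps = [consensus(p) for p in parts]
        cost = sum(errors(p, h) for p, h in zip(parts, haps))
        if best is None or cost < best[0]:
            best = (cost, haps)
    return best  # (MEC cost, [h1, h2])
```

Error-free fragments yield cost 0 and recover the two haplotypes exactly; noisy fragments raise the cost by the number of corrected alleles.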

  16. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    SciTech Connect

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.

  17. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289

  18. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  19. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel^-1 resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10^6 individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss

  1. A novel technique for highly accurate gas exchange measurements

    NASA Astrophysics Data System (ADS)

    Kalkenings, R. K.; Jähne, B. J.

    2003-04-01

The Heidelberg Aeolotron is a circular wind-wave facility for investigating air-sea gas exchange. In this contribution a novel technique for highly accurate measurement of the mass transfer velocity k will be presented. Traditionally, mass balance techniques measure the decay constant of gas concentrations over time. The major drawback of this concept is the long time constant: at low wind speeds and a water height greater than 1 m, the period of observation has to be several days. In a gas-tight facility such as the Aeolotron, the transfer velocity k can be computed from the concentration in the water body and the change of concentration in the gas space. This greatly reduces the measuring time, to less than one hour. The transfer velocity k of a tracer can be parameterized as k = (1/β) · u_* · Sc^n, with the Schmidt number Sc, shear velocity u_* and the dimensionless transfer resistance β. The Schmidt number exponent n can be derived from simultaneous measurements of different tracers: since the tracers have different Schmidt numbers, the shear velocity is not needed. To cover Schmidt numbers spanning a whole decade, He, H_2, N_2O and F12 are used in our experiments. The relative accuracy of the measured transfer velocity was improved to better than 2%. In 9 consecutive experiments conducted at a wind speed of 6.2 m/s, the deviation of the Schmidt number exponent was found to be just under 0.02. This high accuracy will allow the transition of the Schmidt number exponent from n = 2/3 to n = 0.5, from a flat to a wavy water surface, to be determined precisely. Wind speed is not the only parameter relevant to quantifying gas exchange: surfactants have a pronounced effect on the wave field and lead to a drastic reduction in the transfer velocity. In the Aeolotron measurements were conducted with a variety of measuring devices, ranging from an imaging slope gauge (ISG) to thermal techniques with IR
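Because β and u_* are common to simultaneously measured tracers, the Schmidt number exponent follows directly from the ratio of two transfer velocities. A minimal sketch (illustrative Python; written for the sign convention k ∝ Sc^(-n), under which the quoted values n = 1/2 and 2/3 are positive):

```python
import math

def schmidt_exponent(k1, sc1, k2, sc2):
    """Schmidt number exponent n from two simultaneously measured tracers.

    With k = (1/beta) * u_* * Sc**(-n) and identical beta and u_* for
    both tracers, the ratio k1/k2 = (sc1/sc2)**(-n) gives

        n = ln(k1 / k2) / ln(sc2 / sc1)

    so neither the shear velocity nor the transfer resistance is needed.
    """
    return math.log(k1 / k2) / math.log(sc2 / sc1)
```

With more than two tracers, n would instead be fitted by regressing ln k against ln Sc, which also averages down the measurement error.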

  2. Accurate quantification of mercapturic acids of styrene (PHEMAs) in human urine with direct sample injection using automated column-switching high-performance liquid chromatography coupled with tandem mass spectrometry.

    PubMed

    Reska, M; Ochsmann, E; Kraus, T; Schettgen, T

    2010-08-01

    Styrene is one of the most important industrial chemicals, with an enormously high production volume worldwide. The urinary mercapturic acids of its metabolite styrene-7,8-oxide, namely N-acetyl-S-(2-hydroxy-1-phenylethyl)-L-cysteine (PHEMA 1) and N-acetyl-S-(2-hydroxy-2-phenylethyl)-L-cysteine (PHEMA 2), are specific biomarkers for the determination of individual internal exposure to this highly reactive intermediate of styrene. We have developed and validated a fast, specific and very sensitive method for the accurate determination of the sum of phenylhydroxyethyl mercapturic acids (PHEMAs) in human urine with an automated multidimensional liquid chromatography-tandem mass spectrometry method using (13)C(6)-labelled PHEMAs as internal standards. Analytes were stripped from the urinary matrix by online extraction on a restricted access material, transferred to the analytical column and subsequently determined by tandem mass spectrometry. The limit of quantification (LOQ) for the sum of PHEMAs was 0.3 microg/L urine and allowed us to quantify the background exposure of the (smoking) general population. Precision within series and between series ranged from 1.5 to 6.8% at three concentrations ranging from 3 to 30 microg/L urine; the mean accuracy was between 104 and 110%. We applied the method to spot urine samples from 40 subjects of the general population with no known occupational exposure to styrene. The median levels (range) for the sum of PHEMAs in urine of non-smokers (n = 22) were less than 0.3 microg/L (less than 0.3 to 1.1 microg/L), whereas in urine of smokers (n = 18), the median levels were 0.46 microg/L (less than 0.3 to 2.8 microg/L). Smokers showed a significantly higher excretion of the sum of PHEMAs (p = 0.02). Owing to its automation and high sensitivity, our method is well suited for application in occupational or environmental studies.

  3. Highly accurate SNR measurement using the covariance of two SEM images with the identical view.

    PubMed

    Oho, Eisaku; Suzuki, Kazuhiko

    2012-01-01

The quality of an SEM image is strongly influenced by the extent of noise. The covariance method, well known in the SEM field, is applied to measure the signal-to-noise ratio (SNR). This method has the potential for highly accurate SNR measurement, an ability that has gone largely unrecognized until now. If the precautions discussed in this article are adopted, the method can realize this potential. These precautions concern "proper acquisition of two images with the identical view," "alignment of an aperture diaphragm," "reduction of charging phenomena," "elimination of particular noises," and "accurate focusing." Where necessary, the characteristics of SEM signal and noise are investigated from several standpoints. At its full potential, the method accurately measures the SNR of SEM images obtained under a wide variety of operating conditions and specimens.
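The covariance method exploits the fact that the noise in two independently acquired images of the identical view is uncorrelated, so the covariance of the pair estimates the variance of the common signal. A minimal sketch (illustrative Python, in the spirit of the classic cross-correlation estimator of Frank and Al-Ali rather than the authors' exact procedure):

```python
import numpy as np

def snr_from_image_pair(img1, img2):
    """SNR estimate from two images of the identical view.

    The covariance of the two images estimates the signal variance,
    because the independent noise realizations are uncorrelated; the
    remainder of each image's variance is attributed to noise. Returns
    the ratio of signal variance to mean noise variance. Assumes the
    two images are perfectly registered and identically exposed.
    """
    a = np.asarray(img1, float).ravel()
    b = np.asarray(img2, float).ravel()
    cov = np.cov(a, b)
    signal_var = cov[0, 1]
    noise_var = 0.5 * (cov[0, 0] + cov[1, 1]) - signal_var
    return signal_var / noise_var
```

The precautions listed in the abstract (identical view, stable focus, no charging) are exactly the conditions under which the "uncorrelated noise, common signal" assumption of this estimator holds.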

  4. CgWind: A high-order accurate simulation tool for wind turbines and wind farms

    SciTech Connect

    Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A

    2010-02-22

    CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.

  5. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
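A representative TVD building block is the minmod-limited upwind flux, which is second-order accurate where the solution is smooth but reverts to monotone first-order upwinding at discontinuities. A minimal sketch for 1-D linear advection (illustrative Python; the report's scheme for the Navier-Stokes equations is considerably more elaborate):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude argument when both
    have the same sign, else zero (the TVD-enforcing choice)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advection_step(u, c):
    """One forward-Euler step of 1-D linear advection u_t + a u_x = 0
    (a > 0, CFL number c = a*dt/dx <= 1) with a MUSCL/minmod limited
    upwind flux. Periodic boundaries."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    # reconstructed value at the right face of each cell
    face = u + 0.5 * (1.0 - c) * slope
    flux = c * face
    return u - (flux - np.roll(flux, 1))
```

The defining property is that the total variation of the solution never increases, so no new extrema (spurious oscillations) appear near shocks.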

  6. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun D.; Liu, Nan-Suey

    1993-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  7. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  8. A highly accurate ab initio potential energy surface for methane

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  9. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of (12)CH4 reproduced with a root-mean-square error of 0.70 cm(-1). The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement. PMID:27634258

  10. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of (12)CH4 reproduced with a root-mean-square error of 0.70 cm(-1). The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  11. Cerebral cortical activity associated with non-experts' most accurate motor performance.

    PubMed

    Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W

    2014-10-01

    This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623

  12. Highly accurate boronimeter assay of concentrated boric acid solutions

    SciTech Connect

    Ball, R.M.

    1992-01-01

    The Random-Walk Boronimeter has successfully been used as an on-line indicator of boric acid concentration in an operating commercial pressurized water reactor. The principle has been adapted for measurement of discrete samples to high accuracy and to concentrations up to 6000 ppm natural boron in light water. Boric acid concentration in an aqueous solution is a necessary measurement in many nuclear power plants, particularly those that use boric acid dissolved in the reactor coolant as a reactivity control system. Other nuclear plants use a high-concentration boric acid solution as a backup shutdown system. Such a shutdown system depends on rapid injection of the solution and frequent surveillance of the fluid to ensure the presence of the neutron absorber. The two methods typically used to measure boric acid are the chemical and the physical methods. The chemical method uses titration to determine the ionic concentration of the BO3 ions and infers the boron concentration. The physical method uses the attenuation of neutrons by the solution and infers the boron concentration from the neutron absorption properties. This paper describes the Random-Walk Boronimeter configured to measure discrete samples to high accuracy and high concentration.

  13. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures; here it describes polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  14. An accurate continuous calibration system for high voltage current transformer

    NASA Astrophysics Data System (ADS)

    Tong, Yue; Li, Bin Hong

    2011-02-01

    A continuous calibration system for high voltage current transformers is presented in this paper. The sensor of this system is based on a kind of electronic instrument current transformer, which is a clamp-shape air-core coil. This system uses an optical fiber transmission system for its signal transmission and power supply. Finally, a digital integrator and a fourth-order convolution window algorithm are realized as the error-calculation methods in a virtual instrument on a personal computer. This system can calibrate a high voltage current transformer while it is energized, avoiding a long calibration outage in the power system and the associated loss of power metering revenue. At the same time, it has a wide dynamic range and frequency band, and it can achieve a high accuracy measurement in a complex electromagnetic field environment. The experimental results and the on-site operation results presented in the last part of the paper prove that it can reach the 0.05 accuracy class and is easy to operate on site.
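
    An air-core (Rogowski-type) coil outputs a voltage proportional to di/dt, so a digital integrator must recover the primary current. A minimal trapezoidal-rule sketch (the mutual inductance M, sampling step, and waveform are hypothetical; the paper's actual algorithm adds a fourth-order convolution window not shown here):

```python
import numpy as np

def integrate_coil_voltage(v, dt, M):
    """Recover current i(t) from an air-core coil voltage v = M * di/dt
    by cumulative trapezoidal integration (i(0) assumed zero)."""
    steps = 0.5 * (v[1:] + v[:-1]) * dt               # per-sample trapezoids
    return np.concatenate(([0.0], np.cumsum(steps))) / M
```

    Feeding it the coil response to a 50 Hz sinusoidal current reproduces that current to within the trapezoidal-rule error of the chosen sampling step.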

  15. Highly Accurate Photogrammetric Measurements of the Planck Reflectors

    NASA Astrophysics Data System (ADS)

    Amiri Parian, J.; Gruen, Armin; Cozzani, Alessandro

    2006-06-01

    The Planck mission of the European Space Agency (ESA) is designed to image the anisotropies of the Cosmic Background Radiation Field over the whole sky. To achieve this aim, sophisticated reflectors are used as part of the Planck telescope receiving system. The system consists of secondary and primary reflectors which are sections of two different ellipsoids of revolution with mean diameters of 1 and 1.6 meters. Deformations of the reflectors which influence the optical parameters and the gain of receiving signals are investigated in vacuum and at very low temperatures. For this investigation, among the various high accuracy measurement techniques, photogrammetry was selected. With respect to the photogrammetric measurements, special considerations had to be taken into account in design steps, measurement arrangement and data processing to achieve very high accuracies. The determinability of additional parameters of the camera under the given network configuration, datum definition, reliability and precision issues as well as workspace limits and propagating errors from different sources are considered. We have designed an optimal photogrammetric network by heuristic simulation for the flight model of the primary and the secondary reflectors with relative precisions better than 1:1000000 and 1:400000 to achieve the requested accuracies. A least squares best fit ellipsoid method was developed to determine the optical parameters of the reflectors. In this paper we will report about the procedures, the network design and the results of real measurements.

  16. Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems

    NASA Astrophysics Data System (ADS)

    Oden, J. T.

    1992-08-01

    This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of whom are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.

  17. Automated generation of highly accurate, efficient and transferable pseudopotentials

    NASA Astrophysics Data System (ADS)

    Hansel, R. A.; Brock, C. N.; Paikoff, B. C.; Tackett, A. R.; Walker, D. G.

    2015-11-01

    A multi-objective genetic algorithm (MOGA) was used to automate a search for optimized pseudopotential parameters. Pseudopotentials were generated using the atomPAW program and density functional theory (DFT) simulations were conducted using the pwPAW program. The optimized parameters were the cutoff radius and projector energies for the s and p orbitals. The two objectives were low pseudopotential error and low computational work requirements. The error was determined from (1) the root mean square difference between the all-electron and pseudized-electron log derivative, (2) the calculated lattice constant versus reference data of Holzwarth et al., and (3) the calculated bulk modulus versus reference potentials. The computational work was defined as the number of flops required to perform the DFT simulation. Pseudopotential transferability was encouraged by optimizing each element in different lattices: (1) nitrogen in GaN, AlN, and YN, (2) oxygen in NO, ZnO, and SiO4, and (3) fluorine in LiF, NaF, and KF. The optimal solutions were equivalent in error and required significantly less computational work than the reference data. This proof-of-concept study demonstrates that the combination of MOGA and ab-initio simulations is a powerful tool that can generate a set of transferable potentials with a trade-off between accuracy (error) and computational efficiency (work).
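
    The two-objective selection the abstract describes (pseudopotential error vs. computational work) hinges on Pareto non-domination; a minimal sketch of that bookkeeping (generic MOGA machinery, not the atomPAW/pwPAW pipeline itself):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of (error, work) tuples: the candidates a MOGA keeps
    as its trade-off set between accuracy and computational efficiency."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    Candidates beaten on both objectives by some other candidate are discarded; the survivors form the accuracy/work trade-off curve described in the abstract.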

  18. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  19. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, a capability highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae at an average system operating speed of 30 pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  20. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  1. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  2. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    PubMed

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

    High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded their use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement in analytical performance. However, as quantification relies on ion trapping instead of an ion beam, further refinement of the technique can be expected.

  3. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    SciTech Connect

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs Cowperthwaite Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.
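
    The exponential-6 family of potentials underlying the database has a standard closed form; in one common parameterization (well depth eps at separation rm, repulsive steepness alpha; the numbers below are illustrative, not JCZS fit values):

```python
import math

def exp6(r, eps, rm, alpha):
    """Exponential-6 potential, one common form:
    phi(r) = eps/(alpha - 6) * (6*exp(alpha*(1 - r/rm)) - alpha*(rm/r)**6).
    By construction phi(rm) = -eps (the well depth) and phi'(rm) = 0."""
    return eps / (alpha - 6.0) * (
        6.0 * math.exp(alpha * (1.0 - r / rm)) - alpha * (rm / r) ** 6
    )
```

    The exponential term supplies the repulsive wall that a Lennard-Jones r**-12 term would otherwise model, while the r**-6 term keeps the usual dispersion attraction; this is what makes the form better behaved at the high compressions relevant to detonation states.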

  4. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  5. Indirect Terahertz Spectroscopy of Molecular Ions Using Highly Accurate and Precise Mid-Ir Spectroscopy

    NASA Astrophysics Data System (ADS)

    Mills, Andrew A.; Ford, Kyle B.; Kreckel, Holger; Perera, Manori; Crabtree, Kyle N.; McCall, Benjamin J.

    2009-06-01

    With the advent of Herschel and SOFIA, laboratory methods capable of providing molecular rest frequencies in the terahertz and sub-millimeter regime are increasingly important. As of yet, it has been difficult to perform spectroscopy in this wavelength region due to the limited availability of radiation sources, optics, and detectors. Our goal is to provide accurate THz rest frequencies for molecular ions by combining previously recorded microwave transitions with combination differences obtained from high-precision mid-IR spectroscopy. We are constructing a Sensitive Resolved Ion Beam Spectroscopy setup which will harness the benefits of kinematic compression in a molecular ion beam to enable very high resolution spectroscopy. This ion beam is interrogated by continuous-wave cavity ringdown spectroscopy using a home-made widely tunable difference frequency laser that utilizes two near-IR lasers and a periodically-poled lithium niobate crystal. Here, we report our efforts to optimize our ion beam spectrometer and to perform high-precision and high-accuracy frequency measurements using an optical frequency comb.
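
    The combination-difference idea is simple arithmetic: two IR transitions that share an upper level differ by exactly the spacing of their lower levels, which is the pure-rotational (THz) interval being sought. A toy rigid-rotor illustration (B and the term values are hypothetical, not measured lines):

```python
def term(B, J):
    """Rigid-rotor term value E(J) = B * J * (J + 1), same units as B."""
    return B * J * (J + 1)

def combination_difference(nu_a, nu_b):
    """Difference of two lines reaching the same upper level equals the
    interval between their lower rotational levels."""
    return abs(nu_a - nu_b)
```

    For lines out of J = 1 and J = 2 reaching a common upper state, the difference recovers E(2) - E(1) = 4B, which is why mid-IR accuracy at the comb level translates directly into THz rest-frequency accuracy.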

  6. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  7. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  8. Procedure for accurate fabrication of tissue compensators with high-density material

    NASA Astrophysics Data System (ADS)

    Mejaddem, Younes; Lax, Ingmar; Adakkai K, Shamsuddin

    1997-02-01

    An accurate method for producing compensating filters using high-density material (Cerrobend) is described. The procedure consists of two cutting steps in a Styrofoam block: (i) levelling a surface of the block to a reference level; (ii) depth-modulated milling of the levelled block in accordance with pre-calculated thickness profiles of the compensator. The calculated thickness (generated by a dose planning system) can be reproduced within acceptable accuracy. The desired compensator thickness manufactured according to this procedure is reproduced to within 0.1 mm, corresponding to a 0.5% change in dose at a beam quality of 6 MV. The results of our quality control checks performed with the technique of stylus profiling measurements show an accuracy of 0.04 mm in the milling process over an arbitrary profile along the milled-out Styrofoam block.
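
    Under a narrow-beam model the compensator thickness for a desired transmission follows from t = -ln(T)/mu. The abstract's calibration point (0.1 mm of Cerrobend changing dose by 0.5% at 6 MV) implies an effective mu of roughly 0.5 per cm, which this illustrative sketch adopts (the value is inferred, not quoted from the paper):

```python
import math

def compensator_thickness(transmission, mu):
    """Thickness t (cm) satisfying exp(-mu * t) = transmission
    (narrow-beam attenuation model)."""
    return -math.log(transmission) / mu

mu = 0.5  # effective attenuation of Cerrobend at 6 MV, cm^-1 (illustrative)
```

    For small attenuations the relative dose change is approximately mu * delta_t, so a 0.01 cm milling error maps to about a 0.5% dose change, consistent with the tolerance the abstract reports.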

  9. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers because of either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which bit-rate is allocated to match the available network bandwidth is necessitated. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar-model-articulation-controller (CMAC) through artificial neural networks to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
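
    The paper's PBG module is CMAC-based; as a generic stand-in for the underlying idea of a per-pixel probabilistic background (not the authors' network), a running-Gaussian model can be sketched, with the learning rate and threshold below chosen purely for illustration:

```python
import numpy as np

class RunningBackground:
    """Per-pixel running Gaussian background model: flag pixels deviating
    more than k standard deviations from the mean, then blend every frame
    into the model so it adapts to gradual scene change."""
    def __init__(self, first_frame, lr=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(self.mean.shape, 25.0)    # initial variance guess
        self.lr, self.k = lr, k

    def update(self, frame):
        f = frame.astype(float)
        d = f - self.mean
        fg = np.abs(d) > self.k * np.sqrt(self.var)  # foreground mask
        self.mean += self.lr * d                     # adapt mean...
        self.var += self.lr * (d * d - self.var)     # ...and variance
        return fg
```

    A static scene produces an empty mask, while a bright region entering the frame is flagged; the variance term is what lets such a model tolerate the quality fluctuations that variable bit-rate encoding introduces.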

  10. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    PubMed

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers because of either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which bit-rate is allocated to match the available network bandwidth is necessitated. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar-model-articulation-controller (CMAC) through artificial neural networks to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76

  11. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance

    PubMed Central

    Talamas, Sean N.; Mavor, Kenneth I.; Perrett, David I.

    2016-01-01

    Despite the old adage not to ‘judge a book by its cover’, facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone’s health or intelligence, but such cues are overshadowed by an ‘attractiveness halo’ whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has shown to influence students’ future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  14. Highly accurate isotope measurements of surface material on planetary objects in situ

    NASA Astrophysics Data System (ADS)

    Riedo, Andreas; Neuland, Maike; Meyer, Stefan; Tulej, Marek; Wurz, Peter

    2013-04-01

    Studies of isotope variations in solar system objects are of particular interest and importance. Highly accurate isotope measurements provide insight into geochemical processes, constrain the time of formation of planetary material (crystallization ages) and can be robust tracers of pre-solar events and processes. A detailed understanding of the chronology of the early solar system and dating of planetary materials require precise and accurate measurements of isotope ratios, e.g. of lead, and of trace element abundances. However, such measurements are extremely challenging and until now they have never been attempted in space research. Our group designed a highly miniaturized, self-optimizing laser ablation time-of-flight mass spectrometer for space flight, for sensitive and accurate in situ measurements of the elemental and isotopic composition of extraterrestrial materials. Current studies were performed using UV radiation for ablation and ionization of sample material. High spatial resolution is achieved by focusing the laser beam to about Ø 20 μm on the sample surface. The instrument supports a dynamic range of at least 8 orders of magnitude and a mass resolution m/Δm of up to 800-900, measured at the iron peak. We developed a measurement procedure, which will be discussed in detail, that allows the instrument for the first time to measure the isotope distribution of elements, e.g. Ti, Pb, etc., with an accuracy and precision at the per mill and sub-per-mill level, comparable to well-known and accepted measurement techniques such as TIMS, SIMS and LA-ICP-MS. Together with this measurement procedure, the present instrument performance enables in situ measurements of 207Pb/206Pb ages with an age accuracy in the range of tens of millions of years. Furthermore, and in contrast to other space instrumentation, our instrument can measure all elements present in the sample above 10 ppb concentration, which offers versatile applications

  15. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions for FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared and distributed memory machines.

  16. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft²·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a 1980s commercial building, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  17. High performance satellite networks

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Edelson, Burton I.

    1997-06-01

    The high performance satellite communications networks of the future will have to be interoperable with terrestrial fiber cables. These satellite networks will evolve from narrowband analogue formats to broadband digital transmission schemes, with protocols, algorithms and transmission architectures that will segment the data into uniform cells and frames, and then transmit these data via larger and more efficient synchronous optical network (SONET) and asynchronous transfer mode (ATM) networks that are being developed for the information "superhighway". These high performance satellite communications and information networks are required for modern applications, such as electronic commerce, digital libraries, medical imaging, distance learning, and the distribution of science data. In order for satellites to participate in these information superhighway networks, it is essential that they demonstrate their ability to: (1) operate seamlessly with heterogeneous architectures and applications, (2) carry data at SONET rates with the same quality of service as optical fibers, (3) qualify transmission delay as a parameter, not a problem, and (4) show that satellites have several performance and economic advantages over fiber cable networks.

  18. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  19. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  20. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  1. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments that can collect observations at hundreds of bands have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
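The dimension-reduction step described above can be sketched serially (the paper's contribution is its parallel, cluster-based implementation, which this single-process NumPy example does not attempt): center the data, form the band-by-band covariance matrix, and project onto the top-k eigenvectors. The data sizes here are illustrative, not from the report.

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center each spectral band
    cov = np.cov(Xc, rowvar=False)             # band-by-band covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigh returns ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors (largest variance)
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                 # 500 "pixels", 50 "bands"
Y = pca_reduce(X, 3)
print(Y.shape)                                 # (500, 3)
```

In a parallel setting the expensive pieces, the covariance accumulation and the projection, partition naturally across pixels, which is why interconnect bandwidth dominates the reduction step's scalability.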

  2. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117

  4. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  5. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  6. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    PubMed Central

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-01-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127 day continuous test performed in the presence of volatile organic compounds and high humidity levels showed that the sensor with an electrodeposited sensitive layer had 260% higher response magnitude, 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs. 35%) over a Au control based QCM (unmodified) when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospikes sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor. PMID:25338965
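The abstract does not give the sensor's transduction formula, but a QCM's basic mass sensitivity is conventionally described by the Sauerbrey relation, sketched below for an AT-cut quartz crystal (the standard density and shear-modulus constants are assumed; the paper's analysis of sorption on nanospikes is of course more involved than this rigid-mass idealization).

```python
import math

def sauerbrey_df(delta_m_ug, f0_hz=5e6, area_cm2=1.0):
    """Sauerbrey frequency shift (Hz) for a rigid mass load on an AT-cut QCM."""
    rho_q = 2.648      # quartz density, g/cm^3
    mu_q = 2.947e11    # quartz shear modulus, g/(cm*s^2)
    delta_m_g = delta_m_ug * 1e-6
    return -2.0 * f0_hz**2 * delta_m_g / (area_cm2 * math.sqrt(rho_q * mu_q))

# 1 microgram of sorbed mass on a 1 cm^2, 5 MHz crystal:
df = sauerbrey_df(1.0)
print(round(df, 1))    # about -56.6 Hz
```

The negative sign reflects that added mass lowers the resonance frequency; sorbed Hg0 is read out as a downward frequency shift.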

  7. An accurate dynamical electron diffraction algorithm for reflection high-energy electron diffraction

    NASA Astrophysics Data System (ADS)

    Huang, J.; Cai, C. Y.; Lv, C. L.; Zhou, G. W.; Wang, Y. G.

    2015-12-01

    The conventional multislice (CMS) method, one of the most popular dynamical electron diffraction calculation procedures in transmission electron microscopy, was introduced to calculate reflection high-energy electron diffraction (RHEED) as it is well adapted to deal with deviations from periodicity in the direction parallel to the surface. However, in the present work, we show that the CMS method is no longer sufficiently accurate for simulating RHEED at accelerating voltages of 3-100 kV because of the high-energy approximation. An accurate multislice (AMS) method can be an alternative for more accurate RHEED calculations with reasonable computing time. A detailed comparison of the numerical calculations of the AMS method and the CMS method is carried out with respect to different accelerating voltages, surface structure models, Debye-Waller factors and glancing angles.

  8. An improved method for accurate and rapid measurement of flight performance in Drosophila.

    PubMed

    Babcock, Daniel T; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies will fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  9. A Highly Accurate Inclusive Cancer Screening Test Using Caenorhabditis elegans Scent Detection

    PubMed Central

    Uozumi, Takayuki; Shinden, Yoshiaki; Mimori, Koshi; Maehara, Yoshihiko; Ueda, Naoko; Hamakawa, Masayuki

    2015-01-01

    Early detection and treatment are of vital importance to the successful eradication of various cancers, and development of economical and non-invasive novel cancer screening systems is critical. Previous reports using canine scent detection demonstrated the existence of cancer-specific odours. However, it is difficult to introduce canine scent recognition into clinical practice because of the need to maintain accuracy. In this study, we developed a Nematode Scent Detection Test (NSDT) using Caenorhabditis elegans to provide a novel highly accurate cancer detection system that is economical, painless, rapid and convenient. We demonstrated wild-type C. elegans displayed attractive chemotaxis towards human cancer cell secretions, cancer tissues and urine from cancer patients but avoided control urine; in parallel, the response of the olfactory neurons of C. elegans to the urine from cancer patients was significantly stronger than to control urine. In contrast, G protein α mutants and olfactory neurons-ablated animals were not attracted to cancer patient urine, suggesting that C. elegans senses odours in urine. We tested 242 samples to measure the performance of the NSDT, and found the sensitivity was 95.8%; this is markedly higher than that of other existing tumour markers. Furthermore, the specificity was 95.0%. Importantly, this test was able to diagnose various cancer types tested at the early stage (stage 0 or 1). To conclude, C. elegans scent-based analyses might provide a new strategy to detect and study disease-associated scents. PMID:25760772

  10. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds of port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.

  11. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  12. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  13. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  14. Being aware of own performance: how accurately do children with autism spectrum disorder judge own memory performance?

    PubMed

    Elmose, Mette; Happé, Francesca

    2014-12-01

    Self-awareness was investigated by assessing accuracy of judging own memory performance in a group of children with autism spectrum disorder (ASD) compared with a group of typically developing (TD) children. Effects of stimulus type (social vs. nonsocial), and availability of feedback information as the task progressed, were examined. Results overall showed comparable levels and patterns of accuracy in the ASD and TD groups. A trend level effect (p = .061, d = 0.60) was found, with ASD participants being more accurate in judging own memory for nonsocial than social stimuli and the opposite pattern for TD participants. These findings suggest that awareness of own memory can be good in children with ASD. It is discussed how this finding may be interpreted, and it is suggested that further investigation into the relation between content, frequency, and quality of self-awareness, and the context of self-awareness, is needed.

  15. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to the third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in the curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
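The paper's third-order Roe-type scheme for the full Navier-Stokes equations is well beyond a snippet, but the core upwinding idea it builds on can be sketched for scalar linear advection. Everything below is an illustration of flux upwinding in conservative finite-volume form, not the paper's method:

```python
# First-order upwind finite-volume update for linear advection u_t + a u_x = 0.

def upwind_step(u, a, dx, dt):
    """Advance cell averages u one step with upwind interface fluxes (periodic)."""
    n = len(u)
    flux = [0.0] * (n + 1)  # flux[i] sits at the interface left of cell i
    for i in range(n + 1):
        ul = u[(i - 1) % n]
        ur = u[i % n]
        # Upwind flux: take the state from the side the wind blows from.
        flux[i] = a * ul if a >= 0 else a * ur
    return [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

# Advect a step profile one full period on a periodic grid (CFL = 0.5).
n, a = 50, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(a)
u = [1.0 if i < n // 2 else 0.0 for i in range(n)]
for _ in range(int(round(1.0 / (a * dt)))):
    u = upwind_step(u, a, dx, dt)
```

Because the update is written in flux form, the scheme conserves the total of u exactly; the price of first-order upwinding is numerical diffusion, which the paper's high-order interpolation formulas are designed to reduce.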

  16. A High-Order Accurate Parallel Solver for Maxwell's Equations on Overlapping Grids

    SciTech Connect

    Henshaw, W D

    2005-09-23

    A scheme for the solution of the time dependent Maxwell's equations on composite overlapping grids is described. The method uses high-order accurate approximations in space and time for Maxwell's equations written as a second-order vector wave equation. High-order accurate symmetric difference approximations to the generalized Laplace operator are constructed for curvilinear component grids. The modified equation approach is used to develop high-order accurate approximations that only use three time levels and have the same time-stepping restriction as the second-order scheme. Discrete boundary conditions for perfect electrical conductors and for material interfaces are developed and analyzed. The implementation is optimized for component grids that are Cartesian, resulting in a fast and efficient method. The solver runs on parallel machines with each component grid distributed across one or more processors. Numerical results in two- and three-dimensions are presented for the fourth-order accurate version of the method. These results demonstrate the accuracy and efficiency of the approach.
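The "three time levels" ingredient can be illustrated on the 1-D scalar wave equation u_tt = c^2 u_xx, the simplest analogue of the second-order vector wave form of Maxwell's equations used here. This is a generic second-order leapfrog sketch, not the paper's high-order overlapping-grid solver:

```python
import math

def leapfrog(u_prev, u_curr, c, dx, dt):
    """Next time level of u_tt = c^2 u_xx from the two previous levels,
    with homogeneous Dirichlet ends (a crude perfect-conductor analogue)."""
    r2 = (c * dt / dx) ** 2
    u_next = [0.0] * len(u_curr)
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next

# Standing wave u(x, t) = sin(pi x) cos(pi c t) on [0, 1].
n, c = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                     # CFL = 0.5, within the stability limit
x = [i * dx for i in range(n + 1)]
u0 = [math.sin(math.pi * xi) for xi in x]
u1 = [math.sin(math.pi * xi) * math.cos(math.pi * c * dt) for xi in x]
for _ in range(200):
    u0, u1 = u1, leapfrog(u0, u1, c, dx, dt)
```

The modified-equation idea in the paper amounts to correcting the truncation error of exactly this kind of three-level update so that it reaches fourth-order accuracy without extra time levels.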

  17. Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments

    SciTech Connect

    Kuruganti, Phani Teja

    2007-01-01

As network centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater, and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.

  18. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  19. High Performance Computing Today

    SciTech Connect

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes the evolution of performance on a large scale however seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot in Figure 1 the peak performance of computers from the last five decades that could have been called the supercomputers of their time, they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.

  20. Highly accurate spectral retardance characterization of a liquid crystal retarder including Fabry-Perot interference effects

    SciTech Connect

    Vargas, Asticio; Mar Sánchez-López, María del; García-Martínez, Pascuala; Arias, Julia; Moreno, Ignacio

    2014-01-21

    Multiple-beam Fabry-Perot (FP) interferences occur in liquid crystal retarders (LCR) devoid of an antireflective coating. In this work, a highly accurate method to obtain the spectral retardance of such devices is presented. On the basis of a simple model of the LCR that includes FP effects and by using a voltage transfer function, we show how the FP features in the transmission spectrum can be used to accurately retrieve the ordinary and extraordinary spectral phase delays, and the voltage dependence of the latter. As a consequence, the modulation characteristics of the device are fully determined with high accuracy by means of a few off-state physical parameters which are wavelength-dependent, and a single voltage transfer function that is valid within the spectral range of characterization.

  1. Detailed and Highly Accurate 3d Models of High Mountain Areas by the Macs-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can in part no longer be imaged by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches for resolution of these problems, the camera system and the state of evaluation are presented with examples.

  2. Optimal monotonization of a high-order accurate bicompact scheme for the nonstationary multidimensional transport equation

    NASA Astrophysics Data System (ADS)

    Aristova, E. N.; Rogov, B. V.; Chikitkin, A. V.

    2016-06-01

    A hybrid scheme is proposed for solving the nonstationary inhomogeneous transport equation. The hybridization procedure is based on two baseline schemes: (1) a bicompact one that is fourth-order accurate in all space variables and third-order accurate in time and (2) a monotone first-order accurate scheme from the family of short characteristic methods with interpolation over illuminated faces. It is shown that the first-order accurate scheme has minimal dissipation, so it is called optimal. The solution of the hybrid scheme depends locally on the solutions of the baseline schemes at each node of the space-time grid. A monotonization procedure is constructed continuously and uniformly in all mesh cells so as to keep fourth-order accuracy in space and third-order accuracy in time in domains where the solution is smooth, while maintaining a high level of accuracy in domains of discontinuous solution. Due to its logical simplicity and uniformity, the algorithm is well suited for supercomputer simulation.
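The paper's bicompact/short-characteristic hybridization is specific, but the generic idea behind such monotonization (a monotone low-order flux plus a limited anti-diffusive correction that switches the scheme toward first order near discontinuities) can be sketched for scalar advection. This flux-limiter construction is illustrative only, not the authors' algorithm:

```python
def minmod(a, b):
    """Zero unless a and b share a sign; otherwise the smaller-magnitude one."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def hybrid_step(u, a, dx, dt):
    """Flux-limited advection step (assumes a > 0, periodic boundaries):
    first-order upwind flux plus a minmod-limited Lax-Wendroff correction."""
    n = len(u)
    nu = a * dt / dx
    flux = [0.0] * (n + 1)
    for i in range(n + 1):
        um, ul, ur = u[(i - 2) % n], u[(i - 1) % n], u[i % n]
        # The correction vanishes at extrema, falling back to the monotone
        # first-order scheme exactly where spurious oscillations would form.
        flux[i] = a * ul + 0.5 * a * (1 - nu) * minmod(ur - ul, ul - um)
    return [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

n = 60
u = [1.0 if 10 <= i < 30 else 0.0 for i in range(n)]
for _ in range(40):
    u = hybrid_step(u, 1.0, 1.0 / n, 0.5 / n)
```

As in the paper, the hybrid solution depends locally on both baseline schemes at each node: in smooth regions the limiter passes the high-order correction through, while at a discontinuity it reverts to the dissipative monotone flux.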

  3. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    NASA Astrophysics Data System (ADS)

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-10-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement.

  4. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement. PMID:27713496

  5. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    PubMed Central

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-01-01

    A percolation theory based on variation of conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails in properly analyzing the materials under the large deformation since the assumption may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson’s ratio, and strain-dependent percolation threshold. Digital image correlation (DIC) method was used to determine actual Poisson’s ratios at various strain levels, which were used to accurately estimate variation of conductive filler volume fraction under deformation. We also adopted strain-dependent percolation threshold caused by the filler re-location with deformation. When three key factors were considered, electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856
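The key bookkeeping step, how a measured strain-dependent Poisson's ratio changes the conductive filler fraction, can be illustrated with a simple uniaxial-stretch estimate. The formula below is a generic kinematic sketch, not the authors' full three-dimensional model:

```python
def filler_fraction(phi0, strain, poisson):
    """Conductive filler volume fraction after uniaxial stretch.

    Assumes the filler volume stays fixed while the composite volume scales
    as V = V0 * (1 + e) * (1 - nu*e)**2 (axial stretch, transverse
    contraction with an effective Poisson's ratio nu)."""
    volume_ratio = (1 + strain) * (1 - poisson * strain) ** 2
    return phi0 / volume_ratio

# A smaller measured Poisson's ratio means the composite gains volume under
# stretch, diluting the filler and pushing it toward the percolation threshold.
```

This is why the paper's DIC-measured, strain-dependent Poisson's ratios matter: with an assumed constant ratio the estimated dilution, and hence the predicted conductivity change, is systematically wrong at large strain.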

  6. High Performance Arcjet Engines

    NASA Technical Reports Server (NTRS)

    Kennel, Elliot B.; Ivanov, Alexey Nikolayevich; Nikolayev, Yuri Vyacheslavovich

    1994-01-01

This effort sought to exploit advanced single crystal tungsten-tantalum alloy material for fabrication of a high strength, high temperature arcjet anode. The use of this material is expected to result in improved strength, temperature resistance, and lifetime compared to state of the art polycrystalline alloys. In addition, the use of high electrical and thermal conductivity carbon-carbon composites was considered, and is believed to be a feasible approach. Highly conductive carbon-carbon composite anode capability represents enabling technology for rotating-arc designs derived from the Russian Scientific Research Institute of Thermal Processes (NIITP) because of high heat fluxes at the anode surface. However, for US designs the anode heat flux is much smaller, and thus the benefits are not as great as in the case of NIITP-derived designs. Still, it does appear that the tensile properties of carbon-carbon can be even better than those of single crystal tungsten alloys, especially when nearly-single-crystal fibers such as vapor grown carbon fiber (VGCF) are used. Composites fabricated from such materials must be coated with a refractory carbide coating in order to ensure compatibility with high temperature hydrogen. Fabrication of tungsten alloy single crystals in the sizes required for fabrication of an arcjet anode has been shown to be feasible. Test data indicate that the material can be expected to be at least the equal of W-Re-HfC polycrystalline alloy in terms of its tensile properties, and possibly superior. We are also informed by our colleagues at Scientific Production Association Luch (NPO Luch) that it is possible to use Russian technology to fabricate polycrystalline W-Re-HfC or other high strength alloys if desired. This is important because existing engines must rely on previously accumulated stocks of these materials, and a fabrication capability for future requirements is not assured.

  7. High performance cyclone development

    SciTech Connect

    Giles, W.B.

    1981-01-01

    The results of cold flow experiments at atmospheric conditions of an air-shielded 18 in-dia electrocyclone with a central cusped electrode are reported using fine test dusts of both flyash and nickel powder. These results are found to confirm expectations of enhanced performance, similar to earlier work on a 12 in-dia model. An analysis of the combined inertial-electrostatic force field is also presented which identifies general design goals and scaling laws. From this, it is found that electrostatic enhancement will be particularly beneficial for fine dusts in large cyclones. Recommendations for further improvement in cyclone collection efficiency are proposed.

  8. RTCR: a pipeline for complete and accurate recovery of T cell repertoires from high throughput sequencing data

    PubMed Central

    Gerritsen, Bram; Pandit, Aridaman; Andeweg, Arno C.; de Boer, Rob J.

    2016-01-01

Motivation: High Throughput Sequencing (HTS) has enabled researchers to probe the human T cell receptor (TCR) repertoire, which consists of many rare sequences. Distinguishing between true but rare TCR sequences and variants generated by polymerase chain reaction (PCR) and sequencing errors remains a formidable challenge. The conventional approach to handle errors is to remove low quality reads, and/or rare TCR sequences. Such filtering discards a large number of true and often rare TCR sequences. However, accurate identification and quantification of rare TCR sequences is essential for repertoire diversity estimation. Results: We devised a pipeline, called Recover TCR (RTCR), that accurately recovers TCR sequences, including rare TCR sequences, from HTS data (including barcoded data) even at low coverage. RTCR employs a data-driven statistical model to rectify PCR and sequencing errors in an adaptive manner. Using simulations, we demonstrate that RTCR can easily adapt to the error profiles of different types of sequencers and exhibits consistently high recall and high precision even at low coverages where other pipelines perform poorly. Using published real data, we show that RTCR accurately resolves sequencing errors and outperforms all other pipelines. Availability and Implementation: The RTCR pipeline is implemented in Python (v2.7) and C and is freely available at http://uubram.github.io/RTCR/ along with documentation and examples of typical usage. Contact: b.gerritsen@uu.nl PMID:27324198
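The error-correction problem RTCR solves can be made concrete with a toy abundance-based denoiser: fold each rare sequence into a much more abundant single-mismatch neighbor, and keep everything else as a genuine clone. This is a deliberately crude stand-in for RTCR's adaptive statistical model, using hypothetical helper names:

```python
from collections import Counter

def hamming1_neighbors(seq, alphabet="ACGT"):
    """All sequences exactly one substitution away from seq."""
    for i, ch in enumerate(seq):
        for sub in alphabet:
            if sub != ch:
                yield seq[:i] + sub + seq[i + 1:]

def denoise(reads, ratio=10):
    """Fold each sequence into a >= ratio-fold more abundant one-mismatch
    neighbor; otherwise keep it as a genuine (possibly rare) clone."""
    counts = Counter(reads)
    corrected = Counter()
    for seq, n in counts.items():
        parent = max(hamming1_neighbors(seq), key=lambda s: counts.get(s, 0))
        if counts.get(parent, 0) >= ratio * n:
            corrected[parent] += n   # treat as a PCR/sequencing error of parent
        else:
            corrected[seq] += n
    return corrected

reads = ["ACGT"] * 100 + ["ACGA"] * 3 + ["TTTT"] * 5
clean = denoise(reads)
```

The point of RTCR's data-driven model is precisely that a fixed abundance ratio like this one misclassifies reads when the error rate varies by sequencer and position; it learns the error profile instead.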

  9. High Voltage SPT Performance

    NASA Technical Reports Server (NTRS)

    Manzella, David; Jacobson, David; Jankovsky, Robert

    2001-01-01

    A 2.3 kW stationary plasma thruster designed to operate at high voltage was tested at discharge voltages between 300 and 1250 V. Discharge specific impulses between 1600 and 3700 sec were demonstrated with thrust between 40 and 145 mN. Test data indicated that discharge voltage can be optimized for maximum discharge efficiency. The optimum discharge voltage was between 500 and 700 V for the various anode mass flow rates considered. The effect of operating voltage on optimal magnet field strength was investigated. The effect of cathode flow rate on thruster efficiency was considered for an 800 V discharge.

  10. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-10-01

Over 30 years ago U.S. industry introduced the world's highest temperature (1200°F at 5000 psig) and most efficient power plant, the Eddystone coal-burning steam plant. The highest alloy material used in the plant was 316 stainless steel. Problems during the first few years of operation caused a reduction in operating temperature to 1100°F, which has generally become the highest temperature used in plants around the world. Leadership in high temperature steam has moved to Japan and Europe over the last 30 years.

  11. A highly accurate method for determination of dissolved oxygen: gravimetric Winkler method.

    PubMed

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-09-01

    A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration the combined standard uncertainties of the method are in the range of 0.012-0.018 mg dm(-3) corresponding to the k=2 expanded uncertainty in the range of 0.023-0.035 mg dm(-3) (0.27-0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
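The uncertainty figures quoted here follow the standard propagation rule: independent standard-uncertainty components combine in quadrature, and the coverage factor k = 2 gives the expanded uncertainty. A minimal sketch (the component values below are made up for illustration, not the paper's budget):

```python
import math

def expanded_uncertainty(components, k=2):
    """Combine independent standard-uncertainty components in quadrature
    and apply coverage factor k (k = 2 gives approx. 95 % coverage)."""
    u_c = math.sqrt(sum(u * u for u in components))
    return u_c, k * u_c

# Illustrative component budget in mg dm^-3 (volumetry, end point detection,
# iodine volatilization, reagent oxygen):
u_c, U = expanded_uncertainty([0.008, 0.006, 0.005, 0.004])
```

Minimizing the dominant components one by one, as the authors did with gravimetric measurement and amperometric end point detection, is what drives the combined value down to the 0.012-0.018 mg dm(-3) level reported.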

  12. Defining allowable physical property variations for high accurate measurements on polymer parts

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Sonne, M. R.; Madruga, D. G.; De Chiffre, L.; Hattel, J. H.

    2016-06-01

Measurement conditions and material properties have a significant impact on the dimensions of a part, especially for polymer parts. Temperature variation causes part deformations that increase the uncertainty of the measurement process. Current industrial tolerances of a few micrometres demand highly accurate measurements in a non-controlled ambient environment. Most polymer parts are manufactured by injection moulding and their inspection is carried out after stabilization, around 200 hours. The overall goal of this work is to reach ±5 μm measurement uncertainty on polymer products, which is a challenge in today's production and metrology environments. The residual deformations in polymer products at room temperature after injection molding are important when micrometer accuracy needs to be achieved. Numerical modelling can give a valuable insight into what is happening in the polymer during cooling down after injection molding. In order to obtain accurate simulations, accurate inputs to the model are crucial. In reality, however, the material and physical properties will have some variations. Although these variations may be small, they can act as a source of uncertainty for the measurement. In this paper, we investigated how large the variations in material and physical properties are allowed to be in order to reach the 5 μm target on the uncertainty.

  13. The Use of Accurate Mass Tags for High-Throughput Microbial Proteomics

    SciTech Connect

    Smith, Richard D. ); Anderson, Gordon A. ); Lipton, Mary S. ); Masselon, Christophe D. ); Pasa Tolic, Ljiljana ); Shen, Yufeng ); Udseth, Harold R. )

    2002-08-01

We describe and demonstrate a global strategy that extends the sensitivity, dynamic range, comprehensiveness, and throughput of proteomic measurements based upon the use of peptide accurate mass tags (AMTs) produced by global protein enzymatic digestion. The two-stage strategy exploits Fourier transform-ion cyclotron resonance (FT-ICR) mass spectrometry to validate peptide AMTs for a specific organism, tissue or cell type from potential mass tags identified using conventional tandem mass spectrometry (MS/MS) methods, providing greater confidence in identifications as well as the basis for subsequent measurements without the need for MS/MS, and thus with greater sensitivity and increased throughput. A single high resolution capillary liquid chromatography separation combined with high sensitivity, high resolution and accurate FT-ICR measurements has been shown capable of characterizing peptide mixtures of significantly more than 10^5 components with mass accuracies of ~1 ppm, sufficient for broad protein identification using AMTs. Other attractions of the approach include the broad and relatively unbiased proteome coverage, the capability for exploiting stable isotope labeling methods to realize high precision for relative protein abundance measurements, and the projected potential for study of mammalian proteomes when combined with additional sample fractionation. Using this strategy, in our first application we have been able to identify AMTs for 60% of the potentially expressed proteins in the organism Deinococcus radiodurans.
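The AMT matching criterion comes down to a parts-per-million mass tolerance check; a minimal sketch (function names are illustrative, not from the paper's software):

```python
def ppm_error(measured_mass, theoretical_mass):
    """Relative mass measurement error in parts per million."""
    return (measured_mass - theoretical_mass) / theoretical_mass * 1e6

def matches_amt(measured_mass, amt_mass, tol_ppm=1.0):
    """Does a measured mass match an accurate mass tag within tol_ppm?"""
    return abs(ppm_error(measured_mass, amt_mass)) <= tol_ppm

# A 1 ppm window at mass 1000 Da is only 0.001 Da wide, which is what lets a
# single accurate mass stand in for an MS/MS identification.
```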

  14. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  15. High Performance Astrophysics Computing

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Arca-Sedda, M.; Mastrobuono-Battisti, A.; Punzo, D.; Spera, M.

    2012-07-01

The application of high end computing to astrophysical problems, mainly in the galactic environment, has been under development for many years at the Dep. of Physics of Sapienza Univ. of Roma. The main scientific topic is the physics of self gravitating systems, whose specific subtopics are: i) celestial mechanics and interplanetary probe transfers in the solar system; ii) dynamics of globular clusters and of globular cluster systems in their parent galaxies; iii) nuclear clusters formation and evolution; iv) massive black hole formation and evolution; v) young star cluster early evolution. In this poster we describe the software and hardware computational resources available in our group and how we are developing both software and hardware to reach the scientific aims itemized above.

  16. High performance alloy electroforming

    NASA Technical Reports Server (NTRS)

    Malone, G. A.; Winkelman, D. M.

    1989-01-01

Electroformed copper and nickel are used in structural applications for advanced propellant combustion chambers. An improved process has been developed by Bell Aerospace Textron, Inc. wherein electroformed nickel-manganese alloy has demonstrated superior mechanical and thermal stability when compared to previously reported deposits from known nickel plating processes. Solution chemistry and parametric operating procedures are now established, and material property data have been obtained for deposition of thick, large complex shapes such as the Space Shuttle Main Engine. The critical operating variables are those governing the ratio of codeposited nickel and manganese. The deposition uniformity, which in turn affects the manganese concentration distribution, is affected by solution resistance and geometric effects as well as solution agitation. The manganese concentration in the deposit must be between 2000 and 3000 ppm for optimum physical properties to be realized. The study also includes data regarding deposition procedures for achieving excellent bond strength at an interface with copper, nickel-manganese or INCONEL 718. Applications for this electroformed material include fabrication of complex or re-entry shapes which would be difficult or impossible to form from high strength alloys such as INCONEL 718.

  17. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  18. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    NASA Astrophysics Data System (ADS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-08-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate with the precision of up to three decimal places for the whole range of the magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all formulae to obtain similar expressions of any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for the excited states with the principal quantum number of up to n=100.

  19. A new benchmark with high accurate solution for hot-cold fluids mixing

    NASA Astrophysics Data System (ADS)

    Younes, Anis; Fahs, Marwan; Zidane, Ali; Huggenberger, Peter; Zechner, Eric

    2015-09-01

A new benchmark is proposed for the verification of buoyancy-driven flow codes. The benchmark deals with mixing hot and cold fluids from the opposite boundaries of an open channel. A highly accurate solution is developed using the Fourier-Galerkin (FG) method and compared to the results of an advanced finite element (FE) model. An excellent agreement is observed between the FG and FE solutions for different Reynolds numbers, which demonstrates the viability of the solutions in benchmarking buoyancy-driven flow numerical codes.

  20. Highly accurate video coordinate generation for automatic 3-D trajectory calculation

    NASA Astrophysics Data System (ADS)

    Macleod, A.; Morris, Julian R. W.; Lyster, M.

    1990-08-01

Most TV-based motion analysis systems, including the original version of VICON, produce 3D coordinates by combining pre-tracked 2D trajectories from each camera. The latest version of the system, VICON-VX, uses totally automatic 3D trajectory calculation using the Geometric Self Identification (GSI) technique. This is achieved by matching unsorted 2D image coordinates from all cameras, looking for intersecting marker 'rays', and matching intersections into 3D trajectories. Effective GSI, with low false-positive intersection rates, is only possible with highly accurate 2D data, produced by stable, high-resolution coordinate generators, and incorporating appropriate compensation for lens distortions. Data capture software and hardware have been completely redesigned to achieve this accuracy, together with higher throughput rates and better resistance to errors. In addition, a new ADC facility has been incorporated to allow very high speed analog data acquisition, synchronised with video measurements.
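GSI's core geometric test, whether marker rays from two cameras intersect to within noise, reduces to the closest approach between two 3-D lines. A minimal sketch of that computation (not the system's actual implementation):

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p + t*d (3-vectors),
    plus the residual gap between them (large gap => not a real intersection)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [x + t1 * y for x, y in zip(p1, d1)]
    q2 = [x + t2 * y for x, y in zip(p2, d2)]
    midpoint = [(x + y) / 2 for x, y in zip(q1, q2)]
    gap = sum((x - y) ** 2 for x, y in zip(q1, q2)) ** 0.5
    return midpoint, gap

# Two camera rays that actually intersect at (1, 1, 1):
point, gap = triangulate([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1])
```

Thresholding the residual gap is what keeps false-positive intersections low, and it is why the 2D coordinates (and the lens-distortion compensation) must be so accurate: noisy rays inflate every gap.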

  1. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  2. Improved highly accurate localized motion imaging for monitoring high-intensity focused ultrasound therapy

    NASA Astrophysics Data System (ADS)

    Qu, Xiaolei; Azuma, Takashi; Sugiyama, Ryusuke; Kanazawa, Kengo; Seki, Mika; Sasaki, Akira; Takeuchi, Hideki; Fujiwara, Keisuke; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro; Matsumoto, Yoichiro

    2016-07-01

    Visualizing an area subjected to high-intensity focused ultrasound (HIFU) therapy is necessary for controlling the amount of HIFU exposure. One of the promising monitoring methods is localized motion imaging (LMI), which estimates coagulation length by detecting the change in stiffness. In this study, we improved the accuracy of our previous LMI by dynamic cross-correlation window (DCCW) and maximum vibration amount (MVA) methods. The DCCW method was used to increase the accuracy of estimating vibration amplitude, and the MVA method was employed to increase signal-noise ratio of the decrease ratio at the coagulated area. The qualitative comparison of results indicated that the two proposed methods could suppress the effect of noise. Regarding the results of the quantitative comparison, coagulation length was estimated with higher accuracy by the improved LMI method, and the root-mean-square error (RMSE) was reduced from 2.51 to 1.69 mm.
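The displacement estimation underlying LMI rests on cross-correlating echo windows before and after vibration; a toy integer-lag version conveys the idea (the paper's dynamic cross-correlation window method is considerably more refined):

```python
def best_lag(ref, cur, max_lag):
    """Integer displacement (in samples) maximizing the cross-correlation of a
    reference window against a shifted search window."""
    def score(lag):
        return sum(ref[i] * cur[i + lag] for i in range(len(ref))
                   if 0 <= i + lag < len(cur))
    return max(range(-max_lag, max_lag + 1), key=score)

# A window delayed by 3 samples should be recovered as lag = 3.
ref = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
cur = [0, 0, 0] + ref[:-3]
lag = best_lag(ref, cur, 5)
```

In the actual method the lag is converted to a vibration amplitude, and the drop in that amplitude over the heated region marks the stiffened (coagulated) tissue.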

  4. High- and low-pressure pneumotachometers measure respiration rates accurately in adverse environments

    NASA Technical Reports Server (NTRS)

    Fagot, R. J.; Mc Donald, R. T.; Roman, J. A.

    1968-01-01

    Respiration-rate transducers in the form of pneumotachometers measure the respiration rates of pilots operating high-performance research aircraft. In each low-pressure or high-pressure oxygen system, a sensor is placed in series with the pilot's oxygen supply line to detect gas flow accompanying respiration.

  5. Use of Monocrystalline Silicon as Tool Material for Highly Accurate Blanking of Thin Metal Foils

    SciTech Connect

    Hildering, Sven; Engel, Ulf; Merklein, Marion

    2011-05-04

    The trend towards miniaturisation of metallic mass-production components combined with increased component functionality continues unabated. Manufacturing these components by forming and blanking offers economic and ecological advantages combined with the required accuracy. The complexity of producing tools with geometries below 50 µm by conventional manufacturing methods becomes disproportionately higher. Expensive serial finishing operations are required to achieve an adequate surface roughness combined with accurate geometry details. A novel approach for producing such tools is the use of advanced etching technologies for monocrystalline silicon that are well established in microsystems technology. High-precision vertical geometries with a width down to 5 µm are possible. The present study shows a novel concept using this potential for the blanking of thin copper foils with monocrystalline silicon as a tool material. A self-contained machine tool with compact outer dimensions was designed to avoid tensile stresses in the brittle silicon punch through accurate, careful alignment of the punch, die and metal foil. A microscopic analysis of the monocrystalline silicon punch shows appropriate properties regarding flank angle, edge geometry and surface quality for the blanking process. Using a monocrystalline silicon punch with a width of 70 µm, blanking experiments on as-rolled copper foils with a thickness of 20 µm demonstrate the general applicability of this material for micro-production processes.

  6. Can Young Children Be More Accurate Predictors of Their Recall Performance?

    ERIC Educational Resources Information Center

    Lipko-Speed, Amanda R.

    2013-01-01

    Preschoolers persistently predict that they will perform better than they actually can perform on a picture recall task. The current investigation sought to explore a condition under which young children might be able to improve their predictive accuracy. Namely, children were asked to predict their recall twice for the same set of items.…

  7. Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events

    PubMed Central

    Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    With continuing improvements in the spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction that remains practical in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate the presence of scattered and random coincidences in the context of motion (similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion and, additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and results for a dynamic patient study are also presented. PMID:18672420

  8. Highly accurate thickness measurement of multi-layered automotive paints using terahertz technology

    NASA Astrophysics Data System (ADS)

    Krimi, Soufiene; Klier, Jens; Jonuscheit, Joachim; von Freymann, Georg; Urbansky, Ralph; Beigang, René

    2016-07-01

    In this contribution, we present a highly accurate approach for thickness measurements of multi-layered automotive paints using terahertz time domain spectroscopy in reflection geometry. The proposed method combines the benefits of a model-based material parameters extraction method to calibrate the paint coatings, a generalized Rouard's method to simulate the terahertz radiation behavior within arbitrary thin films, and the robustness of a powerful evolutionary optimization algorithm to increase the sensitivity of the minimum thickness measurement limit. Within the framework of this work, a self-calibration model is introduced, which takes into consideration the real industrial challenges such as the effect of wet-on-wet spray in the painting process.

  9. Geometrically invariant and high capacity image watermarking scheme using accurate radial transform

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Ranade, Sukhjeet K.

    2013-12-01

    The angular radial transform (ART) is a region-based descriptor that possesses many attractive features, such as rotation invariance, low computational complexity and resilience to noise, which make it more suitable for invariant image watermarking than many transform-domain image watermarking techniques. In this paper, we introduce ART for a fast and geometrically invariant image watermarking scheme with high embedding capacity. We also develop an accurate and fast framework for the computation of ART coefficients based on Gaussian quadrature numerical integration, 8-way symmetry/anti-symmetry properties and recursive relations for the calculation of sinusoidal kernel functions. The ART coefficients so computed are then used for embedding the binary watermark using dither modulation. Experimental studies reveal that the proposed watermarking scheme not only provides better robustness against geometric transformations and other signal processing distortions, but also has superior advantages over existing schemes in terms of embedding capacity, speed and visual imperceptibility.
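
    The dither-modulation embedding step mentioned above can be sketched with scalar quantization index modulation (QIM); the step size `delta` is illustrative and the functions are hypothetical, not the authors' code:

```python
def qim_embed(x, bit, delta=1.0):
    """Embed one bit in coefficient x via binary dither modulation:
    quantize x onto the lattice delta*Z shifted by 0 (bit 0) or by
    delta/2 (bit 1)."""
    d = 0.0 if bit == 0 else delta / 2
    return round((x - d) / delta) * delta + d

def qim_extract(y, delta=1.0):
    """Decode by choosing whichever shifted lattice is nearer to y."""
    e0 = abs(y - round(y / delta) * delta)
    e1 = abs(y - (round((y - delta / 2) / delta) * delta + delta / 2))
    return 0 if e0 <= e1 else 1
```

    Decoding survives any perturbation smaller than delta/4, which is the robustness/imperceptibility trade-off dither modulation offers when applied to transform coefficients such as ART magnitudes.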

  10. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  11. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  12. A new high-order accurate continuous Galerkin method for linear elastodynamics problems

    NASA Astrophysics Data System (ADS)

    Idesman, Alexander V.

    2007-07-01

    A new high-order accurate time-continuous Galerkin (TCG) method for elastodynamics is suggested. The accuracy of the new implicit TCG method is increased by a factor of two in comparison to that of the standard TCG method and is one order higher than the accuracy of the standard time-discontinuous Galerkin (TDG) method at the same number of degrees of freedom. The new method is unconditionally stable and has controllable numerical dissipation at high frequencies. An iterative predictor/multi-corrector solver that includes the factorization of the effective mass matrix of the same dimension as that of the mass matrix for the second-order methods is developed for the new TCG method. A new strategy combining numerical methods with small and large numerical dissipation is developed for elastodynamics. Simple numerical tests show a significant reduction in the computation time (by 5-25 times) for the new TCG method in comparison to that for second-order methods, and the suppression of spurious high-frequency oscillations.

  13. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
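
    The kind of single-diode panel model used for such MPPT simulations can be sketched as follows. The implicit I-V equation is solved here by bisection, which stands in for whatever solver the authors use, and the parameter values in the test are illustrative, not taken from the paper:

```python
import math

def diode_current(v, iph, i0, rs, rsh, n_vt, iters=200):
    """Current of the single-diode PV model at terminal voltage v,
    found by bisection on the implicit equation
        f(i) = Iph - I0*(exp((v + i*Rs)/nVt) - 1) - (v + i*Rs)/Rsh - i.
    f is strictly decreasing in i, so one sign change brackets the root."""
    def f(i):
        return (iph - i0 * (math.exp((v + i * rs) / n_vt) - 1)
                - (v + i * rs) / rsh - i)
    lo, hi = -2 * iph - 1, 2 * iph + 1   # wide physical bracket
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

    Sweeping v from zero (short circuit) past the open-circuit voltage traces the full I-V curve, from which an MPPT algorithm's operating point can be simulated under varying irradiance (via Iph) and temperature (via nVt).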

  16. High-precision topography measurement through accurate in-focus plane detection with hybrid digital holographic microscope and white light interferometer module.

    PubMed

    Liżewski, Kamil; Tomczewski, Sławomir; Kozacki, Tomasz; Kostencka, Julianna

    2014-04-10

    High-precision topography measurement of micro-objects using interferometric and holographic techniques can be realized provided that the in-focus plane of an imaging system is very accurately determined. Therefore, in this paper we propose an accurate technique for in-focus plane determination, which is based on coherent and incoherent light. The proposed method consists of two major steps. First, a calibration of the imaging system with an amplitude object is performed with a common autofocusing method using coherent illumination, which allows for accurate localization of the in-focus plane position. In the second step, the position of the detected in-focus plane with respect to the imaging system is measured with white light interferometry. The obtained distance is used to accurately adjust a sample with the precision required for the measurement. The experimental validation of the proposed method is given for measurement of high-numerical-aperture microlenses with subwavelength accuracy.
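
    Autofocusing procedures like the calibration step above rank candidate planes by a sharpness criterion. The variance-of-Laplacian metric below is a common generic choice, shown only as an illustration, not the authors' method:

```python
def sharpness(img):
    """Variance of a 4-neighbour Laplacian over the image interior;
    higher values indicate a sharper (better focused) image.  A common
    autofocus criterion, used here as a generic stand-in."""
    h, w = len(img), len(img[0])
    lap = [img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
           - 4 * img[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    m = sum(lap) / len(lap)
    return sum((v - m) ** 2 for v in lap) / len(lap)
```

    An autofocus sweep evaluates this metric at a series of axial positions and takes the maximum as the in-focus plane estimate.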

  17. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive, in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts sample age far more accurately than any previous report in the literature.
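
    A simplified stand-in for the paper's classification step, replacing LDA with a nearest-centroid rule but keeping the leave-one-out cross-validation structure (the feature vectors and labels in the test are toy values, not Raman data):

```python
def loo_nearest_centroid(samples, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier on
    feature vectors; a simplified stand-in for LDA + cross-validation."""
    n = len(samples)
    correct = 0
    for i in range(n):
        groups = {}
        for j in range(n):
            if j != i:                       # hold sample i out
                groups.setdefault(labels[j], []).append(samples[j])
        centroids = {c: [sum(col) / len(v) for col in zip(*v)]
                     for c, v in groups.items()}
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        pred = min(centroids, key=lambda c: dist(samples[i], centroids[c]))
        correct += pred == labels[i]
    return correct / n
```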

  18. Can medical students accurately predict their learning? A study comparing perceived and actual performance in neuroanatomy.

    PubMed

    Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott

    2016-10-01

    It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability has yet to be observed in neuroanatomy. Second-year medical students attending neuroanatomy revision sessions at the University of Southampton and the competitors of the National Undergraduate Neuroanatomy Competition (NUNC) were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten-item multiple choice question examination, and the latter group were compared to their performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly with performance in the respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, r_s = 0.68; MCQ, r_s = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.
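
    The two correlation statistics reported above (Pearson's r and Spearman's rank correlation r_s) can be computed from scratch as follows; this is a generic implementation, using average ranks for ties:

```python
def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks,
    averaging ranks over tied values."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1            # 1-based average rank
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    return pearson(ranks(xs), ranks(ys))
```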

  19. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
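
    A minimal sketch of the standard Bayesian Knowledge Tracing update that KT-style student models are built on; the guess/slip/learn parameters below are illustrative defaults, not fitted values from the paper:

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.3):
    """One Bayesian Knowledge Tracing step: condition P(known) on the
    observed response, then apply the learning transition."""
    if correct:
        obs = (p_know * (1 - slip) /
               (p_know * (1 - slip) + (1 - p_know) * guess))
    else:
        obs = (p_know * slip /
              (p_know * slip + (1 - p_know) * (1 - guess)))
    return obs + (1 - obs) * learn

# Track a hypothetical student's knowledge over a response sequence.
p = 0.4
for response in [True, True, False, True]:
    p = bkt_update(p, response)
```

    PFA, by contrast, fits a logistic regression on counts of prior successes and failures rather than maintaining a latent knowledge probability.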

  20. High-power CMOS current driver with accurate transconductance for electrical impedance tomography.

    PubMed

    Constantinou, Loucas; Triantis, Iasonas F; Bayford, Richard; Demosthenous, Andreas

    2014-08-01

    Current drivers are fundamental circuits in bioimpedance measurements, including electrical impedance tomography (EIT). In the case of EIT, the current driver is required to have a large output impedance to guarantee high current accuracy over a wide range of load impedance values. This paper presents an integrated current driver which meets these requirements and is capable of delivering large sinusoidal currents to the load. The current driver employs a differential architecture and negative feedback, the latter allowing the output current to be accurately set by the ratio of the input voltage to a resistor value. The circuit was fabricated in a 0.6-μm high-voltage CMOS process technology and its core occupies a silicon area of 0.64 mm². It operates from a ±9 V power supply and can deliver output currents up to 5 mA p-p. The accuracy of the maximum output current is within 0.41% up to 500 kHz, reducing to 0.47% at 1 MHz with a total harmonic distortion of 0.69%. The output impedance is 665 kΩ at 100 kHz and 372 kΩ at 500 kHz.

  1. High-order accurate physical-constraints-preserving finite difference WENO schemes for special relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Wu, Kailiang; Tang, Huazhong

    2015-10-01

    The paper develops high-order accurate physical-constraints-preserving finite difference WENO schemes for the special relativistic hydrodynamic (RHD) equations, built on the local Lax-Friedrichs splitting, the WENO reconstruction, the physical-constraints-preserving flux limiter, and high-order strong-stability-preserving time discretization. They are extensions of the positivity-preserving finite difference WENO schemes for the non-relativistic Euler equations [20]. However, developing physical-constraints-preserving methods for the RHD system is much more difficult than for the non-relativistic case because of the strong coupling between the RHD equations, the lack of explicit formulas for the primitive variables and the flux vectors in terms of the conservative vector, and one more physical constraint on the fluid velocity in addition to the positivity of the rest-mass density and the pressure. The key is to prove the convexity and other properties of the admissible state set and to discover a concave function with respect to the conservative vector instead of the pressure, which is an important ingredient in enforcing the positivity-preserving property in the non-relativistic case. Several one- and two-dimensional numerical examples are used to demonstrate the accuracy, robustness, and effectiveness of the proposed physical-constraints-preserving schemes in solving RHD problems with large Lorentz factors, strong discontinuities, or low rest-mass density or pressure.
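
    The positivity-preserving ingredient such limiters share can be sketched for a single scalar (density) in one cell; the full relativistic limiter also enforces positive pressure and a subluminal velocity, which this toy omits:

```python
def limit_positive(values, avg, eps=1e-13):
    """Scale reconstructed point values toward the (positive) cell
    average so every value stays >= eps while the average is preserved:
        v_i <- avg + theta*(v_i - avg),
        theta = min(1, (avg - eps)/(avg - min_i v_i)).
    This is the density part of a positivity-preserving limiter."""
    vmin = min(values)
    if vmin >= eps:
        return list(values)
    theta = (avg - eps) / (avg - vmin)
    return [avg + theta * (v - avg) for v in values]
```

    Because the scaling is linear about the cell average, conservation is untouched and the formal order of accuracy is retained wherever the unlimited values were already admissible.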

  2. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  3. Assessing temporal flux of plant hormones in stored processing potatoes using high definition accurate mass spectrometry

    PubMed Central

    Ordaz-Ortiz, José Juan; Foukaraki, Sofia; Terry, Leon Alexander

    2015-01-01

    Plant hormones are important molecules which at low concentration can regulate various physiological processes. Mass spectrometry has become a powerful technique for the quantification of multiple classes of plant hormones because of its high sensitivity and selectivity. We developed a new ultrahigh pressure liquid chromatography–full-scan high-definition accurate mass spectrometry method, for simultaneous determination of abscisic acid and four metabolites phaseic acid, dihydrophaseic acid, 7′-hydroxy-abscisic acid and abscisic acid glucose ester, cytokinins zeatin, zeatin riboside, gibberellins (GA1, GA3, GA4 and GA7) and indole-3-acetyl-L-aspartic acid. We measured the amount of plant hormones in the flesh and skin of two processing potato cvs. Sylvana and Russet Burbank stored for up to 30 weeks at 6 °C under ambient air conditions. Herein, we report for the first time that abscisic acid glucose ester seems to accumulate in the skin of potato tubers throughout storage time. The method achieved a lowest limit of detection of 0.22 ng g−1 of dry weight and a limit of quantification of 0.74 ng g−1 dry weight (zeatin riboside), and was able to recover, detect and quantify a total of 12 plant hormones spiked on flesh and skin of potato tubers. In addition, the mass accuracy for all compounds (<5 ppm) was evaluated. PMID:26504563

  4. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-06-01

    Feedrate fluctuation caused by the approximation errors of interpolation methods has a great effect on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion, considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
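
    To see why naive first-order parameter stepping produces the feedrate fluctuation that the quartic-equation method targets (this sketch is not the paper's algorithm), one can measure how far the realized chord speed deviates from the commanded feedrate on an analytic curve:

```python
import math

def feedrate_fluctuation(curve, dcurve, u0, feedrate, dt, steps):
    """First-order interpolation du = F*dt/|C'(u)| along curve C;
    returns the worst relative deviation of the realized chord speed
    |C(u+du)-C(u)|/dt from the commanded feedrate F."""
    u, worst = u0, 0.0
    for _ in range(steps):
        dx, dy = dcurve(u)
        du = feedrate * dt / (dx * dx + dy * dy) ** 0.5
        x0, y0 = curve(u)
        x1, y1 = curve(u + du)
        chord = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        worst = max(worst, abs(chord / dt - feedrate) / feedrate)
        u += du
    return worst

# Circle of radius 10 mm as an exact-arc stand-in for a NURBS curve,
# 50 mm/s commanded feedrate, 1 ms interpolation period.
circle = lambda u: (10 * math.cos(u), 10 * math.sin(u))
dcircle = lambda u: (-10 * math.sin(u), 10 * math.cos(u))
fluct = feedrate_fluctuation(circle, dcircle, 0.0, 50.0, 0.001, 200)
```

    The fluctuation stays small but nonzero for first-order stepping and grows with curvature; higher-order parameter-increment solutions push it toward zero.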

  5. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  6. High resolution DEM from Tandem-X interferometry: an accurate tool to characterize volcanic activity

    NASA Astrophysics Data System (ADS)

    Albino, Fabien; Kervyn, Francois

    2013-04-01

    The Tandem-X mission was launched by the German space agency (DLR) in June 2010. It is a new-generation high-resolution SAR sensor mainly dedicated to topographic applications. For the purposes of our research, focused on the volcano-tectonic activity of the Kivu Rift area, a set of Tandem-X bistatic radar images was used to produce a high-resolution InSAR DEM of the Virunga Volcanic Province (VVP). The VVP is part of the western branch of the East African Rift, situated at the boundary between D.R. Congo, Rwanda and Uganda. It has two highly active volcanoes, Nyiragongo and Nyamulagira. A first task concerns the quantitative assessment of the vertical accuracy that can be achieved with these new data. The new DEMs are compared to other spaceborne datasets (SRTM, ASTER) and also to field measurements from differential GPS. Multi-temporal radar acquisitions allow us to produce several DEMs of the same area, which proved very useful in an active volcanic setting where new geomorphological features (faults, fissures, volcanic cones and lava flows) appear continuously through time. For example, since 2000, the time of the SRTM acquisition, there has been one eruption at Nyiragongo (2002) and six at Nyamulagira (2001, 2002, 2004, 2006, 2010 and 2011), all of which induced large changes in the landscape through the emplacement of new lava fields and scoria cones. Our repeated Tandem-X DEM production gives us a tool to identify and quantify, in terms of size and volume, all the topographic changes related to this past volcanic activity. These parameters are high-value information for improving the understanding of the Virunga volcanoes; accurate estimation of erupted volumes and knowledge of the structural features associated with past eruptions are key to understanding the volcanic system, improving hazard assessment, and ultimately contributing to risk mitigation in a densely populated area.
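
    The volume-quantification step described above reduces, in its simplest form, to differencing co-registered DEMs and summing positive elevation changes; the 12 m cell size in the test is illustrative, and real processing also requires co-registration and noise thresholding:

```python
def erupted_volume(dem_before, dem_after, cell_area):
    """Volume of new deposits between two co-registered DEMs (2-D
    grids of elevations): sum of positive elevation differences
    multiplied by the grid-cell area."""
    total = 0.0
    for row0, row1 in zip(dem_before, dem_after):
        for z0, z1 in zip(row0, row1):
            dh = z1 - z0
            if dh > 0:                 # keep only deposition, not erosion
                total += dh * cell_area
    return total
```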

  7. Developing a second generation Laue lens prototype: high-reflectivity crystals and accurate assembly

    NASA Astrophysics Data System (ADS)

    Barrière, Nicolas M.; Tomsick, John A.; Boggs, Steven E.; Lowell, Alexander; von Ballmoos, Peter

    2011-09-01

    Laue lenses are an emerging technology that will enhance gamma-ray telescope sensitivity by one to two orders of magnitude in selected energy bands of the ~100 keV to ~1.5 MeV range. This optic would be particularly well adapted to the observation of faint gamma-ray lines, as required for the study of supernovae and Galactic positron annihilation. It could also prove very useful for the study of hard X-ray tails from a variety of compact objects, especially by providing sufficient sensitivity for polarization to be measured by the focal-plane detector. Our group has been addressing the two key issues relevant to improving performance with respect to the first generation of Laue lens prototypes: obtaining large numbers of efficient crystals and developing a method to fix them onto a substrate with accurate orientation and a dense packing factor. We present preliminary results of an on-going study aiming to obtain large numbers of crystals suitable for diffraction at energies above 500 keV. In addition, we show the first results for the Laue lens prototype assembled using our beamline at SSL/UC Berkeley, which demonstrates our ability to orient and glue crystals with an accuracy of a few arcseconds, as required for an efficient Laue lens telescope.

  8. High-Performance Ball Bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Roger W., Jr.; Haluck, David A.; Olinger, John B.; Owen, Samuel S.; Poole, William E.

    1995-01-01

    High-performance bearing features strong, lightweight, self-lubricating cage with self-lubricating liners in ball apertures. Designed to operate at high speed (tens of thousands of revolutions per minute) in cryogenic environment like liquid-oxygen or liquid-hydrogen turbopump. Includes inner race, outer race, and cage keeping bearing balls equally spaced.

  9. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  10. Ion chromatography as highly suitable method for rapid and accurate determination of antibiotic fosfomycin in pharmaceutical wastewater.

    PubMed

    Zeng, Ping; Xie, Xiaolin; Song, Yonghui; Liu, Ruixia; Zhu, Chaowei; Galarneau, Anne; Pic, Jean-Stéphane

    2014-01-01

    A rapid and accurate ion chromatography (IC) method (limit of detection as low as 0.06 mg L(-1)) for determining fosfomycin concentration in pharmaceutical industrial wastewater was developed. This method was compared with high performance liquid chromatography (which has a high detection limit of 96.0 mg L(-1)) and with ultraviolet spectrometry after reaction with alizarin (which is difficult to perform in colored solutions). The accuracy of the IC method was established in the linear range of 1.0-15.0 mg L(-1), with a correlation coefficient of 0.9998. The recoveries of fosfomycin from industrial pharmaceutical wastewater at spiking concentrations of 2.0, 5.0 and 8.0 mg L(-1) ranged from 81.91 to 94.74%, with a relative standard deviation (RSD) of 1-4%. The recoveries from the effluent of a sequencing batch reactor treating fosfomycin with activated sludge, at spiking concentrations of 5.0, 8.0 and 10.0 mg L(-1), ranged from 98.25 to 99.91%, with a RSD of 1-2%. The developed IC procedure provides a rapid, reliable and sensitive method for determining fosfomycin concentration in industrial pharmaceutical wastewater and in samples containing complex components.
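    The spike-recovery and RSD figures quoted above are computed from replicate measurements in the usual way; a sketch with hypothetical replicate values (not the paper's data):

```python
import statistics

def recovery_percent(measured, spiked, background=0.0):
    """Spike recovery: fraction of the spiked amount that is measured back."""
    return 100.0 * (measured - background) / spiked

def rsd_percent(values):
    """Relative standard deviation of replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate measurements (mg/L) for a 5.0 mg/L spike
replicates = [4.62, 4.71, 4.55]
recoveries = [recovery_percent(m, 5.0) for m in replicates]
print(round(statistics.mean(recoveries), 1))  # mean recovery, %
print(round(rsd_percent(replicates), 1))      # RSD, %
```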

  11. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the costs added during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints and complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. Solving engineering problems requires many skills and tools. Many physical processes are governed by nonlinear partial differential equations, which have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods instead solve the governing equations directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made, which increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have made computer simulation an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use, and only a few vendors build high performance computers that satisfy engineering needs. Software must be optimized for quick and accurate execution, and operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho
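    As a toy illustration of solving a governing equation directly on a computational grid (not WTP analysis code), consider an explicit finite-difference solution of the 1D heat equation:

```python
import numpy as np

# Minimal illustration: explicit finite differences for u_t = alpha * u_xx
# on a uniform grid, with fixed (zero) boundary values.
alpha, L, nx, nt = 1.0, 1.0, 51, 2000
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # satisfies stability limit dt <= dx^2/(2*alpha)
u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial heat pulse in the middle
for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
print(u.max())                    # pulse diffuses: the peak decays over time
```

    Real process models differ in dimensionality, nonlinearity, and solver sophistication, but the grid-based discretization step is the common core.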

  12. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds.

    PubMed

    Dean, David; Wallace, Jonathan; Siblani, Ali; Wang, Martha O; Kim, Kyobum; Mikos, Antonios G; Fisher, John P

    2012-03-01

    Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds effects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or less accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light processing (cDLP) which utilizes a DLP(®) (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory(®). To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer, poly(propylene fumarate) (PPF), titanium dioxide (TiO(2)) as a dye, Irgacure(®) 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity. PMID:23066427
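    The role of the dye in limiting polymerization depth is commonly captured by the Jacobs working-curve equation, Cd = Dp * ln(E/Ec). A sketch with hypothetical parameter values (not from this study):

```python
import math

def cure_depth(E, Dp, Ec):
    """Jacobs working-curve equation: cured depth (um) for exposure E
    (mJ/cm^2), resin penetration depth Dp (um), and critical exposure
    Ec (mJ/cm^2). No curing occurs below Ec."""
    return Dp * math.log(E / Ec) if E > Ec else 0.0

# Hypothetical dye-loaded resin parameters: adding dye lowers Dp, which
# tightens the cured-layer thickness toward a ~50 um accuracy target.
for Dp in (140.0, 70.0):
    print(Dp, round(cure_depth(E=200.0, Dp=Dp, Ec=60.0), 1))
```

    Halving Dp halves the cure depth at fixed exposure, which is the lever the dye loading provides.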

  13. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds

    PubMed Central

    Dean, David; Wallace, Jonathan; Siblani, Ali; Wang, Martha O.; Kim, Kyobum; Mikos, Antonios G.; Fisher, John P.

    2012-01-01

    Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds effects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or less accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light processing (cDLP) which utilizes a DLP® (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory®. To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer, poly(propylene fumarate) (PPF), titanium dioxide (TiO2) as a dye, Irgacure® 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity. PMID:23066427

  14. Development of New Accurate, High Resolution DEMs and Merged Topographic-Bathymetric Grids for Inundation Mapping in Seward Alaska

    NASA Astrophysics Data System (ADS)

    Marriott, D.; Suleimani, E.; Hansen, R.

    2004-05-01

    The Geophysical Institute of the University of Alaska Fairbanks and the Alaska Division of Geological and Geophysical Surveys continue to participate in the National Tsunami Hazard Mitigation Program by evaluating and mapping potential inundation of selected coastal communities in Alaska. Seward, the next Alaskan community to be mapped, has excellent bathymetric data but very poor topographic data available. Since one of the most significant sources of errors in tsunami inundation mapping is inaccuracy of topographic and bathymetric data, the Alaska Tsunami Modeling Team cooperated with the local USGS glaciology office to perform photogrammetry in the Seward area to produce a new DEM. Using ten air photos and the APEX photogrammetry and analysis software, along with several precisely located GPS points, we developed a new georeferenced and highly accurate DEM with a 5-meter grid spacing. A variety of techniques were used to remove the effects of buildings and trees to yield a bald earth model. Finally, we resampled the new DEM to match the finest resolution model grid, and combined it with all other data, using the most recent and accurate data in each region. The new dataset has contours that deviate by more than 100 meters in some places from the contours in the previous dataset, showing significant improvement in accuracy for the purpose of tsunami modeling.
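    The final merging step, using the most accurate data source in each region, can be sketched as a priority-ordered grid merge (illustrative values, not the Seward data):

```python
import numpy as np

def merge_grids(*grids):
    """Merge co-registered elevation grids listed from highest to lowest
    priority; NaN marks no-data cells, which are filled from the next
    lower-priority grid."""
    merged = np.full_like(grids[0], np.nan, dtype=float)
    for g in grids:
        mask = np.isnan(merged) & ~np.isnan(g)
        merged[mask] = g[mask]
    return merged

nan = np.nan
photogrammetry = np.array([[12.0, nan], [nan, 15.0]])   # new 5 m DEM (best)
legacy_topo    = np.array([[40.0, 30.0], [20.0, 10.0]]) # older, coarser data
merged = merge_grids(photogrammetry, legacy_topo)
print(merged)
```

    In practice the grids must first be resampled to a common resolution and datum, as the abstract describes, before a cell-wise merge like this is meaningful.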

  15. Honey bees can perform accurately directed waggle dances based solely on information from a homeward trip.

    PubMed

    Edrich, Wolfgang

    2015-10-01

    Honey bees were displaced several hundred metres from their hive to an unfamiliar site and provisioned with honey. After feeding, almost two-thirds of the bees flew home to their hive within a 50 min observation time. About half of these returning bees signalled the direction of the release site in waggle dances, thus demonstrating that the dance can be guided entirely by information gathered on a single homeward trip. The likely reason for the bees' enthusiastic dancing on their initial return from this new site was the highly rewarding honeycomb that they were given there. The attractive nature of the site is confirmed by many of these bees revisiting the site and continuing to forage there.

  16. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs found by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixed physicochemical and evolutionary-based feature extraction method for predicting PPIs, using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, and in employing a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can serve as a useful supplementary tool to traditional experimental methods in future proteomics research. PMID:27571061
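    The overall workflow (a feature vector per protein pair, then a trained classifier scored by prediction accuracy) can be sketched as below. The DVM itself is not reproduced here, so a simple nearest-centroid classifier stands in, and the feature vectors are synthetic rather than real evolutionary/physicochemical descriptors:

```python
import numpy as np

# Synthetic stand-in data: 20-dimensional "pair features" for interacting
# (label 1) and non-interacting (label 0) protein pairs.
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(100, 20))
X_neg = rng.normal(-1.0, 1.0, size=(100, 20))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

def fit_centroids(X, y):
    """Per-class mean feature vector (nearest-centroid 'training')."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each sample to the class with the closest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

centroids = fit_centroids(X, y)
accuracy = (predict(centroids, X) == y).mean()
print(accuracy)  # well-separated synthetic classes give high accuracy
```

    The paper's reported accuracies come from the DVM on real descriptors with cross-validation; this sketch only shows the shape of the pipeline.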

  17. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs found by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixed physicochemical and evolutionary-based feature extraction method for predicting PPIs, using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, and in employing a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can serve as a useful supplementary tool to traditional experimental methods in future proteomics research. PMID:27571061

  19. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity.

    PubMed

    Yee, Lydia T S; Hannula, Deborah E; Tranel, Daniel; Cohen, Neal J

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes, using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated, on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than with the length of the retention interval.

  20. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity

    PubMed Central

    Yee, Lydia T. S.; Hannula, Deborah E.; Tranel, Daniel; Cohen, Neal J.

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes, using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated, on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than with the length of the retention interval. PMID:24478681

  1. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  2. Accurate and highly efficient calculation of the highly excited pure OH stretching resonances of O(1D)HCl, using a combination of methods.

    PubMed

    Bian, Wensheng; Poirier, Bill

    2004-09-01

    Accurate calculation of the energies and widths of the resonances of HOCl, an important intermediate in the O(1D)HCl reactive system, poses a challenging benchmark for computational methods. The need for very large direct product basis sets, combined with an extremely high density of states, results in difficult convergence for iterative methods. A recent calculation of the highly excited OH stretch mode resonances using the filter diagonalization method, for example, required 462,000 basis functions and 180,000 iterations. In contrast, using a combination of new methods, we are able to compute the same resonance states to higher accuracy with a basis less than half the size, using only a few hundred iterations, although the CPU cost per iteration is substantially greater. Similar performance enhancements are observed for calculations of the high-lying bound states, as reported in a previous paper [J. Theo. Comput. Chem. 2, 583 (2003)].

  3. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
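    A minimal sketch of a Weibull-type saccharification curve, Y(t) = Ymax * (1 - exp(-(t/λ)^n)), with hypothetical parameters. At t = λ the yield is always 1 - 1/e ≈ 63.2% of Ymax, regardless of the shape parameter n, which is what makes λ a natural single characteristic time:

```python
import math

def weibull_saccharification(t, y_max, lam, n):
    """Weibull-type hydrolysis curve: yield at time t, with scale lam
    (characteristic time) and shape n. Parameter values below are
    hypothetical, not fitted to the paper's 96 datasets."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

y_max, lam, n = 90.0, 24.0, 0.8   # e.g. 90% maximum yield, lam = 24 h
# At t = lam the yield equals y_max * (1 - 1/e), independent of n.
print(round(weibull_saccharification(lam, y_max, lam, n), 2))
```

    Comparing λ values between enzyme/substrate systems then gives a direct ranking of overall hydrolysis speed, as the abstract proposes.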

  4. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186

  5. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  6. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    NASA Astrophysics Data System (ADS)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
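    The automated matching approach described above, minimizing residuals between a theoretical spectrum and a library of experimental spectra, can be sketched as follows (synthetic Gaussian bands on a shared wavelength grid, not real PAH spectra):

```python
import numpy as np

wl = np.linspace(350.0, 550.0, 401)  # shared wavelength grid, nm

def band(center, width=8.0):
    """Synthetic single-band spectrum: unit-height Gaussian at `center` nm."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical experimental library and one predicted (slightly offset) spectrum
library = {"PAH_A": band(400.0), "PAH_B": band(430.0), "PAH_C": band(470.0)}
predicted = band(428.0)  # e.g. a calculated peak offset by ~2 nm

def best_match(predicted, library):
    """Pick the library spectrum with the smallest sum-of-squares residual."""
    scores = {name: float(np.sum((predicted - s) ** 2)) for name, s in library.items()}
    return min(scores, key=scores.get)

print(best_match(predicted, library))
```

    A small systematic offset of the predicted spectrum (like the ~6 nm residual after empirical correction) still leaves the correct library entry as the residual minimum, which is the behavior the authors report.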

  7. In-Depth Glycoproteomic Characterization of γ-Conglutin by High-Resolution Accurate Mass Spectrometry

    PubMed Central

    Schiarea, Silvia; Arnoldi, Lolita; Fanelli, Roberto; De Combarieu, Eric; Chiabrando, Chiara

    2013-01-01

    The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography–multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl) (Fuc) GlcNAc2, Man3(Xyl) (Fuc) GlcNAc2, GlcNAcMan3(Xyl) (Fuc) GlcNAc2 and GlcNAc 2Man3(Xyl) (Fuc) GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, that were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit. PMID:24069245
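    Glycan compositions such as Man3(Xyl)(Fuc)GlcNAc2 can be checked against accurate-mass data by summing standard monoisotopic residue masses (residue = monosaccharide minus water) plus one terminal water; the helper below is illustrative, not the authors' software:

```python
# Standard monoisotopic glycan residue masses (Da)
RESIDUE = {
    "Hex": 162.05282,     # Man/Glc/Gal
    "HexNAc": 203.07937,  # GlcNAc
    "dHex": 146.05791,    # Fuc
    "Pent": 132.04226,    # Xyl
}
WATER = 18.01056

def glycan_mass(composition):
    """Theoretical neutral monoisotopic mass for a residue-count composition."""
    return sum(RESIDUE[k] * v for k, v in composition.items()) + WATER

# Man3(Xyl)(Fuc)GlcNAc2, one of the variants identified on gamma-conglutin
man3_xyl_fuc_glcnac2 = {"Hex": 3, "Pent": 1, "dHex": 1, "HexNAc": 2}
print(round(glycan_mass(man3_xyl_fuc_glcnac2), 3))
```

    Matching such theoretical masses against measured glycopeptide masses (within a few ppm) is what lets high-resolution accurate mass data distinguish the micro-heterogeneous variants.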

  8. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
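    The core RGLM recipe, bootstrap aggregation plus a random feature subspace per base model, can be sketched as follows; ordinary least squares stands in for the paper's forward-selected GLM, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))
# Labels depend mainly on the first two features, plus a little noise
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

def fit_bagged(X, y, n_models=50, subspace=4):
    """Fit many simple linear models, each on a bootstrap sample and a
    random subset of the features (the random subspace method)."""
    models = []
    for _ in range(n_models):
        rows = rng.integers(0, len(X), len(X))          # bootstrap sample
        cols = rng.choice(X.shape[1], subspace, False)  # random subspace
        A = np.column_stack([np.ones(len(rows)), X[rows][:, cols]])
        beta, *_ = np.linalg.lstsq(A, y[rows], rcond=None)
        models.append((cols, beta))
    return models

def predict(models, X):
    """Average the base-model outputs, then threshold at 0.5."""
    votes = [np.column_stack([np.ones(len(X)), X[:, c]]) @ b for c, b in models]
    return (np.mean(votes, axis=0) > 0.5).astype(float)

models = fit_bagged(X, y)
acc = (predict(models, X) == y).mean()
print(acc)
```

    The actual RGLM adds forward variable selection inside each base GLM and derives feature-importance and out-of-bag accuracy measures; the randomGLM R package implements the full method.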

  9. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.

  10. High performance storable propellant resistojet

    NASA Astrophysics Data System (ADS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985 resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to a 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985, fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base, NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state-of-the-art. Higher performance thrusters promised to increase the market demand for resistojets and enable space missions requiring higher performance. During the program three resistojets were fabricated and tested. High temperature wire and coupon materials tests were completed. A life test was conducted on an advanced gas generator.

  11. Accurate human microsatellite genotypes from high-throughput resequencing data using informed error profiles

    PubMed Central

    Highnam, Gareth; Franck, Christopher; Martin, Andy; Stephens, Calvin; Puthige, Ashwin; Mittelman, David

    2013-01-01

    Repetitive sequences are biologically and clinically important because they can influence traits and disease, but repeats are challenging to analyse using short-read sequencing technology. We present a tool for genotyping microsatellite repeats called RepeatSeq, which uses Bayesian model selection guided by an empirically derived error model that incorporates sequence and read properties. Next, we apply RepeatSeq to high-coverage genomes from the 1000 Genomes Project to evaluate performance and accuracy. The software uses common formats, such as VCF, for compatibility with existing genome analysis pipelines. Source code and binaries are available at http://github.com/adaptivegenome/repeatseq. PMID:23090981

  12. Navier-Stokes simulations of blade-vortex interaction using high-order accurate upwind schemes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    1987-01-01

    Conventional, spatially second-order-accurate, finite-difference schemes are much too dissipative for calculations involving vortices that travel large distances (relative to some measure of the size of the vortex). This study presents a fifth-order-accurate upwind-biased scheme that preserves vortex structure for much longer times than existing second-order-accurate central and upwind difference schemes. Vortex calculations demonstrating this aspect of the fifth-order scheme are also presented. The method is then applied to the blade-vortex interaction problem. Results for strong interactions wherein the vortex impinges directly on the airfoil or a shock associated with the airfoil are presented. None of these calculations required any modeling of the shape, size, and trajectory of the interacting vortex.
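
The abstract does not give the exact discretization used; one standard fifth-order upwind-biased stencil for the first derivative (three upwind points, two downwind, appropriate for positive advection speed) is sketched below as an illustration of the class of scheme being discussed:

```python
def dudx_upwind5(u, i, dx):
    """Fifth-order upwind-biased approximation of du/dx at node i,
    biased toward lower indices (flow moving in the +x direction)."""
    return (-2*u[i-3] + 15*u[i-2] - 60*u[i-1]
            + 20*u[i] + 30*u[i+1] - 3*u[i+2]) / (60.0 * dx)

# A fifth-order scheme is exact for polynomials up to degree 5
# (its truncation error is proportional to dx^5 times the sixth
# derivative), which we can check on u(x) = x^5.
dx = 0.1
xs = [j * dx for j in range(12)]
u = [x**5 for x in xs]
i = 6
approx = dudx_upwind5(u, i, dx)
exact = 5 * xs[i]**4
```

The low dissipation of such high-order upwind-biased stencils is what lets a convected vortex survive long-distance transport where a second-order scheme would smear it out.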

  13. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. This first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available for utilizing these tools and technologies in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environment.

  14. Factors affecting the accurate determination of cerebrovascular blood flow using high-speed droplet imaging

    NASA Astrophysics Data System (ADS)

    Rudin, Stephen; Divani, Afshin; Wakhloo, Ajay K.; Lieber, Baruch B.; Granger, William; Bednarek, Daniel R.; Yang, Chang-Ying J.

    1998-07-01

    Detailed cerebrovascular blood flow can be more accurately determined radiographically from the new droplet tracking method previously introduced by the authors than from standard soluble contrast techniques. For example, arteriovenous malformation (AVM) transit times, which are crucial for proper glue embolization treatments, were shown to be about half when using droplets compared to those measured using soluble contrast techniques. In this work, factors such as x-ray pulse duration, frame rate, system spatial resolution (focal spot size), droplet size, droplet and system contrast parameters, and system noise are considered in relation to their effect on the accurate determination of droplet location and velocity.

  15. SINA: Accurate high-throughput multiple sequence alignment of ribosomal RNA genes

    PubMed Central

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-01-01

    Motivation: In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. Results: In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3, 97.6 and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9 and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Availability: Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license. Contact: epruesse@mpi-bremen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22556368
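
The k-mer search stage mentioned above can be illustrated with a toy pre-filter that picks the reference sequence sharing the most k-mers with a query. This is a hedged sketch of the general technique, not SINA's implementation, and the sequences and names are invented:

```python
def kmers(seq, k=8):
    """Set of all overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_reference(query, references, k=8):
    """Return the name of the reference sharing the most k-mers
    with the query -- a cheap screen that narrows the candidate
    set before any expensive alignment (e.g. POA) is attempted."""
    q = kmers(query, k)
    return max(references, key=lambda name: len(q & kmers(references[name], k)))

references = {
    "refA": "AAAACCCCGGGGTTTTACGTACGT",
    "refB": "TGCATGCATGCATGCATGCATGCA",
}
query = "AAAACCCCGGGGTTTT"  # fragment resembling refA
```

Because set intersection is near linear in sequence length, this screen scales to the millions of rRNA sequences the abstract mentions far better than all-vs-all alignment would.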

  16. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness towards microfluid mixing. As the key component of the magnetic photoresist, Fe(3)O(4) nanoparticles were carefully screened for homogeneous doping. In this work, oleic acid stabilized Fe(3)O(4) nanoparticles synthesized via high-temperature induced organic phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO(3)-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped in acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  17. Accurate inference of shoot biomass from high-throughput images of cereal plants

    PubMed Central

    2011-01-01

    With the establishment of advanced technology facilities for high throughput plant phenotyping, the problem of estimating the biomass of individual plants from their two-dimensional images is becoming increasingly important. The approach predominantly cited in the literature is to estimate the biomass of a plant as a linear function of the projected shoot area of plants in the images. However, the estimation error from this model, which is solely a function of projected shoot area, is large, prohibiting accurate estimation of the biomass of plants, particularly for the salt-stressed plants. In this paper, we propose a method based on plant specific weight for improving the accuracy of the linear model and reducing the estimation bias (the difference between actual shoot dry weight and the value of the shoot dry weight estimated with a predictive model). For the proposed method in this study, we modeled the plant shoot dry weight as a function of plant area and plant age. The data used for developing our model and comparing the results with the linear model were collected from a completely randomized block design experiment. A total of 320 plants from two bread wheat varieties were grown in a supported hydroponics system in a greenhouse. The plants were exposed to two levels of hydroponic salt treatments (NaCl at 0 and 100 mM) for 6 weeks. Five harvests were carried out. Each time 64 randomly selected plants were imaged and then harvested to measure the shoot fresh weight and shoot dry weight. The results of statistical analysis showed that with our proposed method, most of the observed variance can be explained, and moreover only a small difference between actual and estimated shoot dry weight was obtained. The low estimation bias indicates that our proposed method can be used to estimate biomass of individual plants regardless of what variety the plant is and what salt treatment has been applied. We validated this model on an independent set of barley data. The
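
The modelling step described above (shoot dry weight as a function of projected area and age) can be sketched as an ordinary least-squares fit. The data below are synthetic stand-ins, not the wheat measurements from the study, and the coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: projected shoot area (cm^2) and age (days)
# for 64 imaged plants, with dry weight generated from a known rule.
area = rng.uniform(50, 400, size=64)
age = rng.uniform(7, 42, size=64)
dry_weight = 0.004 * area + 0.01 * age + 0.05 * rng.normal(size=64)

# Design matrix with intercept: W ~ b0 + b1*area + b2*age
X = np.column_stack([np.ones_like(area), area, age])
coef, *_ = np.linalg.lstsq(X, dry_weight, rcond=None)
```

Adding age as a second predictor is what distinguishes this from the area-only linear model the abstract criticizes; the fit recovers both slopes from the images-plus-metadata alone.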

  18. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  19. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  20. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  1. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    NASA Astrophysics Data System (ADS)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and expansive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, moreover offering a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flooding, requires modelling the behavior of different objects in the scene in order to associate them to flood or no flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps
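
The data-fusion idea described above can be illustrated with a drastically simplified, naive-Bayes-style per-pixel combination of evidence sources; this is not the authors' Bayesian Network, and every prior and conditional probability below is invented for illustration:

```python
# Per-pixel evidence sources: SAR backscatter level, terrain
# elevation, distance from the river. All probabilities invented.
prior = {"flood": 0.3, "dry": 0.7}
likelihood = {
    "low_backscatter": {"flood": 0.8, "dry": 0.2},
    "low_elevation":   {"flood": 0.7, "dry": 0.4},
    "near_river":      {"flood": 0.6, "dry": 0.3},
}

def posterior(evidence):
    """Posterior over {flood, dry} assuming conditionally
    independent evidence (the naive-Bayes simplification)."""
    score = {state: prior[state] for state in prior}
    for e in evidence:
        for state in score:
            score[state] *= likelihood[e][state]
    z = sum(score.values())
    return {state: v / z for state, v in score.items()}

post = posterior(["low_backscatter", "low_elevation", "near_river"])
# post["flood"] is about 0.857 for these illustrative numbers
```

A real BN would additionally model dependencies between the sources (e.g. backscatter signatures differing by land cover class), which is exactly the complexity the abstract argues requires a principled fusion framework.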

  2. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  3. An accurate single-electron pump based on a highly tunable silicon quantum dot.

    PubMed

    Rossi, Alessandro; Tanttu, Tuomo; Tan, Kuan Yen; Iisakka, Ilkka; Zhao, Ruichen; Chan, Kok Wai; Tettamanzi, Giuseppe C; Rogge, Sven; Dzurak, Andrew S; Möttönen, Mikko

    2014-06-11

    Nanoscale single-electron pumps can be used to generate accurate currents, and can potentially serve to realize a new standard of electrical current based on elementary charge. Here, we use a silicon-based quantum dot with tunable tunnel barriers as an accurate source of quantized current. The charge transfer accuracy of our pump can be dramatically enhanced by controlling the electrostatic confinement of the dot using purposely engineered gate electrodes. Improvements in the operational robustness, as well as suppression of nonadiabatic transitions that reduce pumping accuracy, are achieved via small adjustments of the gate voltages. We can produce an output current in excess of 80 pA with experimentally determined relative uncertainty below 50 parts per million.
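
In a quantized-current pump transferring n electrons per cycle, the output current is I = n·e·f. As a back-of-the-envelope check (not a figure taken from the paper), the roughly 80 pA output quoted above corresponds, for one electron per cycle, to a drive frequency near 500 MHz:

```python
e = 1.602176634e-19   # elementary charge in coulombs (exact SI value)
I = 80e-12            # target output current, A
n = 1                 # electrons transferred per pump cycle

f = I / (n * e)       # required drive frequency, Hz
# f is roughly 5.0e8 Hz, i.e. about 500 MHz
```

The same relation is why such pumps are candidates for a current standard: once n is fixed by charge quantization, the current is traceable to a frequency, which can be measured extremely accurately.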

  4. Accurate determination of specific heat at high temperatures using the flash diffusivity method

    NASA Technical Reports Server (NTRS)

    Vandersande, J. W.; Zoltan, A.; Wood, C.

    1989-01-01

    The flash diffusivity method of Parker et al. (1961) was used to measure accurately the specific heat of test samples simultaneously with thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was + or - 3 percent. It is shown that the method is applicable up to at least 1300 K.
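
The three quantities are linked by k = α·ρ·c_p (thermal conductivity = diffusivity × density × specific heat), which is why measuring α and c_p on the same sample yields k directly. A minimal sketch with invented illustrative values, not the paper's measurements:

```python
alpha = 9.0e-6   # thermal diffusivity, m^2/s (hypothetical value)
rho = 3000.0     # density, kg/m^3 (hypothetical value)
cp = 700.0       # specific heat, J/(kg K) (hypothetical value)

# Thermal conductivity follows directly from the definition
# alpha = k / (rho * cp).
k = alpha * rho * cp   # W/(m K)
# k = 18.9 W/(m K) for these inputs
```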

  5. Highly sensitive and accurate screening of 40 dyes in soft drinks by liquid chromatography-electrospray tandem mass spectrometry.

    PubMed

    Feng, Feng; Zhao, Yansheng; Yong, Wei; Sun, Li; Jiang, Guibin; Chu, Xiaogang

    2011-06-15

    A method combining solid phase extraction with high performance liquid chromatography-electrospray ionization tandem mass spectrometry was developed for the highly sensitive and accurate screening of 40 dyes, most of which are banned in foods. Electrospray ionization tandem mass spectrometry was used to identify and quantify a large number of dyes for the first time, and demonstrated greater accuracy and sensitivity than the conventional liquid chromatography-ultraviolet/visible methods. The limits of detection at a signal-to-noise ratio of 3 for the dyes are 0.0001-0.01 mg/L except for Tartrazine, Amaranth, New Red and Ponceau 4R, with detection limits of 0.5, 0.25, 0.125 and 0.125 mg/L, respectively. When this method was applied to screening of dyes in soft drinks, the recoveries ranged from 91.1 to 105%. This method has been successfully applied to screening of illegal dyes in commercial soft drink samples, and it is valuable to ensure the safety of food.

  6. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html .

  7. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  8. Performance Patterns of High, Medium, and Low Performers during and following a Reward versus Non-Reward Contingency Phase

    ERIC Educational Resources Information Center

    Oliver, Renee; Williams, Robert L.

    2006-01-01

    Three contingency conditions were applied to the math performance of 4th and 5th graders: bonus credit for accurately solving math problems, bonus credit for completing math problems, and no bonus credit for accurately answering or completing math problems. Mixed ANOVAs were used in tracking the performance of high, medium, and low performers…

  9. High-Frequency CTD Measurements for Accurate GPS/acoustic Sea-floor Crustal Deformation Measurement System

    NASA Astrophysics Data System (ADS)

    Tadokoro, K.; Yasuda, K.; Taniguchi, S.; Uemura, Y.; Matsuhiro, K.

    2015-12-01

    The GPS/acoustic sea-floor crustal deformation measurement system has developed as a useful tool to observe tectonic deformation especially at subduction zones. One of the factors preventing accurate GPS/acoustic sea-floor crustal deformation measurement is horizontal heterogeneity of sound speed in the ocean. It is therefore necessary to measure the gradient directly from sound speed structure. We report results of high-frequency CTD measurements using Underway CTD (UCTD) in the Kuroshio region. We perform the UCTD measurements on May 2nd, 2015 at two stations (TCA and TOA) above the sea-floor benchmarks installed across the Nankai Trough, off the south-east of Kii Peninsula, middle Japan. The number of measurement points is six at each station along circles with a diameter of 1.8 nautical miles around the sea-floor benchmark. The stations TCA and TOA are located on the edge and the interior of the Kuroshio current, respectively, judging from differences in sea water density measured at the two stations, as well as a satellite image of sea-surface temperature distribution. We detect a sound speed gradient of high speeds in the southern part and low speeds in the northern part at the two stations. At the TCA station, the gradient is noticeable down to 300 m in depth; the maximum difference in sound speed is +/- 5 m/s. The sound speed difference is as small as +/- 1.3 m/s at depths below 300 m, which causes seafloor benchmark positioning error as large as 1 m. At the TOA station, the gradient is extremely small down to 100 m in depth. The maximum difference in sound speed is less than +/- 0.3 m/s, which is negligibly small for seafloor benchmark positioning. A clear gradient of high speed is observed at greater depths; the maximum difference in sound speed is +/- 0.8-0.9 m/s, causing seafloor benchmark positioning error of several tens of centimeters. The UCTD measurement is an effective tool to detect sound speed gradients. We establish a method for accurate sea

  10. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  11. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need of a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. Data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
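
The abstract names an "iterative z algorithm" without giving details; the plain, non-iterative z-score outlier flag below is the building block such detectors refine, offered only as an illustration of the general idea (the threshold and event sizes are invented):

```python
import statistics

def z_outliers(values, threshold=2.5):
    """Flag values whose absolute z-score exceeds the threshold.
    With small samples, a single large outlier inflates the standard
    deviation (masking), which is one motivation for iteratively
    re-estimating the statistics after removing flagged points."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

event_sizes = [10, 12, 11, 9, 10, 13, 11, 10, 12, 95]  # one suspicious spike
flagged = z_outliers(event_sizes)
```

Because each value is scored independently against shared statistics, the scoring loop parallelizes trivially across events, which is the property an HPC forensic pipeline would exploit.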

  12. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of DC brushless motors.

  13. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
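    The five performance characteristics reported above all follow from a 2×2 confusion matrix. A minimal sketch of the arithmetic (the function and the counts are hypothetical illustrations, not the study's raw data, though they land near the immunoassay figures quoted above):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic characteristics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Illustrative counts only:
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=9, fp=3, fn=0, tn=55)
```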

  14. Transrectal high-intensity focused ultrasound ablation of prostate cancer: effective treatment requiring accurate imaging.

    PubMed

    Rouvière, Olivier; Souchon, Rémi; Salomir, Rarès; Gelet, Albert; Chapelon, Jean-Yves; Lyonnet, Denis

    2007-09-01

    Transrectal HIFU ablation has become a reasonable option for the treatment of localized prostate cancer in non-surgical patients, with 5-year disease-free survival similar to that of radiation therapy. It is also a promising salvage therapy for local recurrence after radiation therapy. These favourable results are partly due to recent improvements in prostate cancer imaging. However, further improvements are needed in patient selection, pre-operative localization of the tumor foci, assessment of the volume treated, and early detection of recurrence. Better knowledge of the factors influencing HIFU-induced tissue destruction, and better pre-operative assessment of these factors by imaging, should improve treatment outcome. Whereas prostate HIFU ablation is currently performed under transrectal ultrasound guidance, MR guidance with real-time operative monitoring of temperature will be available in the near future. If this technique gives better targeting and more uniform tissue destruction, its cost-effectiveness will have to be carefully evaluated. Finally, a recently reported synergistic effect between HIFU ablation and chemotherapy opens possibilities for treatment of high-risk or clinically advanced tumors.

  15. High Resolution Melting Analysis: A Rapid and Accurate Method to Detect CALR Mutations

    PubMed Central

    Moreno, Melania; Torres, Laura; Santana-Lopez, Gonzalo; Rodriguez-Medina, Carlos; Perera, María; Bellosillo, Beatriz; de la Iglesia, Silvia; Molero, Teresa; Gomez-Casares, Maria Teresa

    2014-01-01

    Background The recent discovery of CALR mutations in essential thrombocythemia (ET) and primary myelofibrosis (PMF) patients without JAK2/MPL mutations has emerged as a relevant finding for the molecular diagnosis of these myeloproliferative neoplasms (MPN). We tested the feasibility of high-resolution melting (HRM) as a screening method for rapid detection of CALR mutations. Methods CALR was studied in wild-type JAK2/MPL patients, including 34 ET, 21 with persistent thrombocytosis suggestive of MPN, and 98 with suspected secondary thrombocytosis. CALR mutation analysis was performed by HRM and Sanger sequencing. We compared clinical features of CALR-mutated versus 45 JAK2/MPL-mutated subjects in ET. Results Nineteen samples showed HRM patterns distinct from wild-type. Of them, 18 were mutations and one was a polymorphism, as confirmed by direct sequencing. CALR mutations were present in 44% of ET (15/34), 14% of persistent thrombocytosis suggestive of MPN (3/21) and none of the secondary thrombocytosis cases (0/98). Of the 18 mutants, 9 were 52 bp deletions, 8 were 5 bp insertions, and the other was a complex insertion/deletion mutation. No mutations were found after sequencing analysis of 45 samples displaying wild-type HRM curves. The HRM technique was reproducible, no false positives or negatives were detected, and the limit of detection was 3%. Conclusions This study establishes a sensitive, reliable and rapid HRM method to screen for the presence of CALR mutations. PMID:25068507

  16. Accurate High-Temperature Reaction Networks for Alternative Fuels: Butanol Isomers

    SciTech Connect

    Van Geem, K. M.; Pyl, S. P.; Marin, G. B.; Harper, M. R.; Green, W. H.

    2010-11-03

    Oxygenated hydrocarbons, particularly alcohol compounds, are being studied extensively as alternatives and additives to conventional fuels due to their propensity of decreasing soot formation and improving the octane number of gasoline. However, oxygenated fuels also increase the production of toxic byproducts, such as formaldehyde. To gain a better understanding of the oxygenated functional group’s influence on combustion properties—e.g., ignition delay at temperatures above the negative temperature coefficient regime, and the rate of benzene production, which is the common precursor to soot formation—a detailed pressure-dependent reaction network for n-butanol, sec-butanol, and tert-butanol consisting of 281 species and 3608 reactions is presented. The reaction network is validated against shock tube ignition delays and doped methane flame concentration profiles reported previously in the literature, in addition to newly acquired pyrolysis data. Good agreement between simulated and experimental data is achieved in all cases. Flux and sensitivity analyses for each set of experiments have been performed, and high-pressure-limit reaction rate coefficients for important pathways, e.g., the dehydration reactions of the butanol isomers, have been computed using statistical mechanics and quantum chemistry. The different alcohol decomposition pathways, i.e., the pathways from primary, secondary, and tertiary alcohols, are discussed. Furthermore, comparisons between ethanol and n-butanol, two primary alcohols, are presented, as they relate to ignition delay.
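    High-pressure-limit rate coefficients like those mentioned above are conventionally reported in modified Arrhenius form, k(T) = A·T^n·exp(−Ea/RT). A small sketch of evaluating such a coefficient; the parameter values are invented for illustration and are not taken from the butanol network:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea/(R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

# Hypothetical parameters for a unimolecular dehydration-like channel:
k_1200 = arrhenius(A=1.0e13, n=0.0, Ea=2.8e5, T=1200.0)  # 1/s at 1200 K
k_1400 = arrhenius(A=1.0e13, n=0.0, Ea=2.8e5, T=1400.0)  # 1/s at 1400 K
```

    The strong growth of k with temperature is what makes these channels dominate above the negative-temperature-coefficient regime.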

  17. Internal Mammary Sentinel Lymph Node Biopsy With Modified Injection Technique: High Visualization Rate and Accurate Staging.

    PubMed

    Qiu, Peng-Fei; Cong, Bin-Bin; Zhao, Rong-Rong; Yang, Guo-Ren; Liu, Yan-Bing; Chen, Peng; Wang, Yong-Sheng

    2015-10-01

    Although the 2009 American Joint Committee on Cancer incorporated the internal mammary sentinel lymph node biopsy (IM-SLNB) concept, there has been little change in surgical practice patterns because of the low visualization rate of internal mammary sentinel lymph nodes (IMSLN) with the traditional radiotracer injection technique. In this study, various injection techniques were evaluated in terms of the IMSLN visualization rate, and the impact of IM-SLNB on diagnostic and prognostic value was analyzed. Clinically axillary lymph node (ALN)-negative patients (n = 407) were divided into group A (traditional peritumoral intraparenchymal injection) and group B (modified periareolar intraparenchymal injection). Group B was then separated into group B1 (low volume) and group B2 (high volume) according to the injection volume. Clinically ALN-positive patients (n = 63) were managed as group B2. Internal mammary sentinel lymph node biopsy was performed for patients with IMSLN visualized. The IMSLN visualization rate was significantly higher in group B than in group A (71.1% versus 15.5%, P < 0.001), whereas the axillary sentinel lymph nodes were reliably identified in both groups (98.9% versus 98.3%, P = 0.712). With the high injection volume, group B2 had a higher IMSLN visualization rate than group B1 (75.1% versus 45.8%, P < 0.001). The IMSLN metastasis rate was only 8.1% (12/149) in clinically ALN-negative patients with successful IM-SLNB, and adjuvant treatment was altered in a small proportion. The IMSLN visualization rate was 69.8% (44/63) in clinically ALN-positive patients, with an IMSLN metastasis rate of up to 20.5% (9/44), and individual radiotherapy strategy could be guided by the IM-SLNB results. The modified injection technique (periareolar intraparenchymal, high volume, and ultrasound guidance) significantly improved the IMSLN visualization rate, making routine IM-SLNB possible in daily practice. Internal mammary

  19. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence but not accuracy. We used eye movements to separately measure the influences of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  20. Development of an unmanned aerial vehicle-based spray system for highly accurate site-specific application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Application of crop production and protection materials is a crucial component in the high productivity of American agriculture. Agricultural chemical application is frequently needed at a specific time and location for accurate site-specific management of crop pests. Piloted aircraft that carry ...

  1. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    NASA Astrophysics Data System (ADS)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations in geodetic and geophysical issues. Therefore, absolute gravity measurements were performed in these countries for the first time in history. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and in Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving the geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015. All 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some absolute gravity stations makes them usable for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 microGal at all stations (1 microGal = 10^-8 m/s^2). In Montenegro, the large gravity difference of 200 mGal between the Zabljak and Podgorica stations can even be used for the calibration of relative gravimeters.

  2. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a significant number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error of the inclined plane method to less than a certain value, even for different types of liquids.
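    As a rough sketch of the ellipse-fitting step described above (not the authors' code), one can fit a general conic to profile points by linear least squares and read the contact angle off the implicit tangent at the baseline y = 0. Here the profile is a synthetic circular cap (a special ellipse) with a known 60° contact angle:

```python
# Synthetic circular-cap drop profile with a known 60-degree contact angle.
import numpy as np

theta_true = 60.0
h = -np.cos(np.deg2rad(theta_true))          # circle center height (radius 1)
phi = np.linspace(np.deg2rad(90.0 - theta_true),
                  np.deg2rad(90.0 + theta_true), 200)
x, y = np.cos(phi), h + np.sin(phi)          # profile points above y = 0

# Least-squares conic fit: A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1
M = np.column_stack([x * x, x * y, y * y, x, y])
A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]

# Left contact point: the smaller root of A*x^2 + D*x - 1 = 0 on the baseline
xl = (-D - np.sqrt(D * D + 4.0 * A)) / (2.0 * A)

# Tangent slope by implicit differentiation, then the left contact angle
slope = -(2.0 * A * xl + D) / (B * xl + E)
contact_angle = np.degrees(np.arctan(slope)) % 180.0
```

    On noisy experimental profiles the same fit is applied to detected edge pixels; the paper's contribution is quantifying when this ellipse approximation starts to break down on inclined surfaces.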

  3. High Performance Field Reversed Configurations

    NASA Astrophysics Data System (ADS)

    Binderbauer, Michl

    2014-10-01

    The field-reversed configuration (FRC) is a prolate compact toroid with poloidal magnetic fields. FRCs could lead to economic fusion reactors with high power density, simple geometry, natural divertor, ease of translation, and possibly capable of burning aneutronic fuels. However, as in other high-beta plasmas, there are stability and confinement concerns. These concerns can be addressed by introducing and maintaining a significant fast ion population in the system. This is the approach adopted by TAE and implemented for the first time in the C-2 device. Studying the physics of FRCs driven by Neutral Beam (NB) injection, significant improvements were made in confinement and stability. Early C-2 discharges had relatively good confinement, but global power losses exceeded the available NB input power. The addition of axially streaming plasma guns and magnetic end plugs, as well as advanced surface conditioning, led to dramatic reductions in turbulence-driven losses and greatly improved stability. As a result, fast ion confinement improved significantly and allowed the build-up of a dominant fast particle population. Under such conditions we achieved highly reproducible, long-lived, macroscopically stable FRCs with record lifetimes. This demonstrated many beneficial effects of large-orbit particles and their performance impact on FRCs. Together these achievements point to the prospect of beam-driven FRCs as a path toward fusion reactors. This presentation will review and expand on key results and present context for their interpretation.

  4. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate and commercialize technology for the storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  5. TROP-2 immunohistochemistry: a highly accurate method in the differential diagnosis of papillary thyroid carcinoma.

    PubMed

    Bychkov, Andrey; Sampatanukul, Pichet; Shuangshoti, Shanop; Keelawat, Somboon

    2016-08-01

    We aimed to evaluate the diagnostic utility of the novel immunohistochemical marker TROP-2 on thyroid specimens (226 tumours and 207 controls). Whole slide immunohistochemistry was performed and scored by automated digital image analysis. Non-neoplastic thyroid, follicular adenomas, follicular carcinomas, and medullary carcinomas were negative for TROP-2 immunostaining. The majority of papillary thyroid carcinoma (PTC) specimens (94/114, 82.5%) were positive for TROP-2; however, the pattern of staining differed significantly between the histopathological variants. All papillary microcarcinomas (mPTC), PTC classic variant (PTC cv), and tall cell variant (PTC tcv) were TROP-2 positive, with mainly diffuse staining. In contrast, less than half of the PTC follicular variant specimens were positive for TROP-2, with only focal immunoreactivity. TROP-2 could identify PTC cv with 98.1% sensitivity and 97.5% specificity. ROC curve analysis found that the presence of >10% of TROP-2 positive cells in a tumour supported a diagnosis of PTC. The study of intratumoural heterogeneity showed that low-volume cytological samples of PTC cv could be adequately assessed by TROP-2 immunostaining. The TROP-2 H-score (intensity multiplied by proportion) was significantly associated with PTC variant and capsular invasion in encapsulated PTC follicular variant (p<0.001). None of the baseline (age, gender) and clinical (tumour size, nodal disease, stage) parameters were correlated with TROP-2 expression. In conclusion, TROP-2 membranous staining is a very sensitive and specific marker for PTC cv, PTC tcv, and mPTC, with high overall specificity for PTC. PMID:27311870

  6. MyriMatch: highly accurate tandem mass spectral peptide identification by multivariate hypergeometric analysis

    PubMed Central

    Tabb, David L.; Fernando, Christopher G.; Chambers, Matthew C.

    2008-01-01

    Shotgun proteomics experiments are dependent upon database search engines to identify peptides from tandem mass spectra. Many of these algorithms score potential identifications by evaluating the number of fragment ions matched between each peptide sequence and an observed spectrum. These systems, however, generally do not distinguish between matching an intense peak and matching a minor peak. We have developed a statistical model to score peptide matches that is based upon the multivariate hypergeometric distribution. This scorer, part of the “MyriMatch” database search engine, places greater emphasis on matching intense peaks. The probability that the best match for each spectrum has occurred by random chance can be employed to separate correct matches from random ones. We evaluated this software on data sets from three different laboratories employing three different ion trap instruments. Employing a novel system for testing discrimination, we demonstrate that stratifying peaks into multiple intensity classes improves the discrimination of scoring. We compare MyriMatch results to those of Sequest and X!Tandem, revealing that it is capable of higher discrimination than either of these algorithms. When minimal peak filtering is employed, performance plummets for a scoring model that does not stratify matched peaks by intensity. On the other hand, we find that MyriMatch discrimination improves as more peaks are retained in each spectrum. MyriMatch also scales well to tandem mass spectra from high-resolution mass analyzers. These findings may indicate limitations for existing database search scorers that count matched peaks without differentiating them by intensity. This software and source code is available under Mozilla Public License at this URL: http://www.mc.vanderbilt.edu/msrc/bioinformatics/. PMID:17269722
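    A toy version of the scoring idea (not MyriMatch's implementation): spectrum peaks are stratified into intensity classes, and the chance probability of a given match pattern is computed from a multivariate hypergeometric distribution, so matches to the small class of intense peaks weigh more than matches to minor peaks. All counts below are invented:

```python
# Toy multivariate hypergeometric match score (not MyriMatch's code).
from math import comb, log

def mvh_score(class_sizes, matched, empty_bins, n_predicted):
    """-log probability that a random placement of n_predicted fragment ions
    matches `matched[i]` peaks in each intensity class (plus empty m/z bins)."""
    ways = 1
    for n, k in zip(class_sizes, matched):
        ways *= comb(n, k)                    # choices within each peak class
    misses = n_predicted - sum(matched)
    ways *= comb(empty_bins, misses)          # fragments landing on no peak
    total = comb(sum(class_sizes) + empty_bins, n_predicted)
    return -log(ways / total)

# Same total number of matched peaks, but one pattern hits the small
# intense class (10 peaks) and the other only the large weak class (40):
s_intense = mvh_score([10, 20, 40], [5, 4, 3], empty_bins=930, n_predicted=20)
s_weak    = mvh_score([10, 20, 40], [0, 4, 8], empty_bins=930, n_predicted=20)
```

    With equal numbers of matched peaks, shifting matches from the weak class to the intense class raises the score, which is the behaviour the multivariate hypergeometric model is designed to capture.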

  7. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts.

    PubMed

    Stephanou, Pavlos S; Mavrantzas, Vlasis G

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data. PMID:24908037

  8. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts

    NASA Astrophysics Data System (ADS)

    Stephanou, Pavlos S.; Mavrantzas, Vlasis G.

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.
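    The quantities predicted above, η0 and the spectra G'(ω) and G''(ω), are related in the crudest single-mode Maxwell picture by η0 = G·τ and the standard frequency dependences. A sketch with invented parameter values, far simpler than the paper's tube model but useful for checking limits:

```python
# Single-mode Maxwell relations for G'(w), G''(w) and eta0 (invented values).
G_N = 1.0e6   # modulus, Pa (hypothetical)
tau = 0.01    # relaxation time, s (hypothetical)

def storage_modulus(omega):
    wt = omega * tau
    return G_N * wt * wt / (1.0 + wt * wt)   # G'(w), elastic part

def loss_modulus(omega):
    wt = omega * tau
    return G_N * wt / (1.0 + wt * wt)        # G''(w), viscous part

eta0 = G_N * tau  # zero-shear-rate viscosity of a single Maxwell mode
```

    The two moduli cross at ωτ = 1, and at low frequency G''(ω) ≈ η0·ω, which is how η0 is read off experimental loss-modulus data.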

  9. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
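    The linking hypothesis, a weighted sum of firing rates learned by least squares, can be sketched on synthetic data (everything below is simulated; no real recordings or behavioral data are used):

```python
# Synthetic demonstration: learn a linear readout of "firing rates" by least
# squares and test how well it predicts held-out "performance" scores.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons = 200, 50
rates = rng.poisson(lam=5.0, size=(n_images, n_neurons)).astype(float)

# Generate synthetic behavioral scores from a hidden readout plus noise.
w_true = rng.normal(size=n_neurons)
scores = rates @ w_true + rng.normal(scale=0.1, size=n_images)

# Fit weights on half the images, evaluate correlation on the held-out half.
w_hat, *_ = np.linalg.lstsq(rates[:100], scores[:100], rcond=None)
r = np.corrcoef(rates[100:] @ w_hat, scores[100:])[0, 1]
```

    The substance of the paper is that real IT population rates admit such a readout of real human performance; the sketch only shows the fitting and held-out evaluation machinery.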

  10. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  12. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  13. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  14. High power ion thruster performance

    NASA Technical Reports Server (NTRS)

    Rawlin, Vincent K.; Patterson, Michael J.

    1987-01-01

    The ion thruster is one of several forms of space electric propulsion being considered for use on future SP-100-based missions. One possible major mission ground rule is the use of a single Space Shuttle launch. Thus, the mass in orbit at the reactor activation altitude would be limited by the Shuttle mass constraints. When the spacecraft subsystem masses are subtracted from this available mass limit, a maximum propellant mass may be calculated. Knowing the characteristics of each type of electric thruster allows maximum values of total impulse, mission velocity increment, and thrusting time to be calculated. Because ion thrusters easily operate at high values of efficiency (60 to 70%) and specific impulse (3000 to 5000 sec), they can impart large values of total impulse to a spacecraft. They also can be operated with separate control of the propellant flow rate and exhaust velocity. This paper presents values of demonstrated and projected performance of high power ion thrusters used in an analysis of electric propulsion for an SP-100 based mission.
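    The sizing arithmetic described above, from a propellant mass budget and specific impulse to total impulse and mission velocity increment, can be sketched with the rocket equation. All masses here are hypothetical; only the Isp lies in the 3000-5000 s range quoted in the abstract:

```python
# Sizing arithmetic sketch: propellant budget -> total impulse and delta-v.
import math

g0 = 9.80665      # standard gravity, m/s^2
m0 = 5000.0       # initial spacecraft mass, kg (hypothetical)
m_prop = 1500.0   # propellant mass allowed by the launch constraint, kg
isp = 4000.0      # specific impulse, s (within the 3000-5000 s range above)

total_impulse = m_prop * g0 * isp                    # N*s
delta_v = g0 * isp * math.log(m0 / (m0 - m_prop))    # m/s (rocket equation)
```

    The high Isp is what lets a fixed propellant mass deliver a large total impulse; thrusting time then follows from dividing total impulse by the thrust the available power supports.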

  15. A highly accurate method for the determination of mass and center of mass of a spacecraft

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Trubert, M. R.; Egwuatu, A.

    1978-01-01

    An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
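
    The core of the measurement is a force-weighted average of the load-cell positions; the cyclic-rotation scheme that cancels alignment and calibration errors is omitted here. A minimal sketch with hypothetical readings:

```python
import math

def lateral_cm(forces, radius):
    """Lateral center of mass (x, y) from three load cells spaced 120 deg
    apart on a circle of the given radius. The CM is the force-weighted
    average of the load-cell positions; total force gives the weight."""
    angles = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
    total = sum(forces)
    x = sum(f * radius * math.cos(a) for f, a in zip(forces, angles)) / total
    y = sum(f * radius * math.sin(a) for f, a in zip(forces, angles)) / total
    return x, y, total

# Hypothetical readings (N) for a ~750 kg spacecraft on a 1 m radius table.
cm_x, cm_y, weight = lateral_cm([2470.0, 2452.0, 2433.5], 1.0)
mass = weight / 9.80665  # ~750 kg; cm_x is a few millimeters off-axis
```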

  16. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and manual review introduces inconsistency. Defects located in don't-care regions, such as dark areas, do not hurt yield and need not be reviewed, whereas defects in critical regions, such as clear areas, can impact yield dramatically and demand close attention. As integrated circuit dimensions shrink, thousands of mask defects or more are routinely detected during a single inspection, so traditional manual or simple classification approaches cannot meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution that uses the image output of Lasertec inspection equipment together with anchor-pattern-centric image processing technology. The system handles large numbers of defects with fast and accurate classification. Our experiments cover Die-to-Die and Single-Die modes, with classification accuracies of 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases; the rates of missed classifications were 0.25% in Die-to-Die mode and 0.24% in Single-Die mode, which is encouraging and acceptable for a production line. Results can be exported and reloaded into the inspection machine for further review, helping users validate uncertain defects with clear, magnified images when the captured images alone do not provide enough information for a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including scoring functions, to guide wafer-level defect inspection.

  17. Ultra-performance liquid chromatography/tandem mass spectrometry for accurate quantification of global DNA methylation in human sperms.

    PubMed

    Wang, Xiaoli; Suo, Yongshan; Yin, Ruichuan; Shen, Heqing; Wang, Hailin

    2011-06-01

    Aberrant DNA methylation in human sperm has been proposed as a possible mechanism associated with male infertility. We developed an ultra-performance liquid chromatography/tandem mass spectrometry (UPLC-MS/MS) method for rapid, sensitive, and specific detection of the global DNA methylation level in human sperm. Multiple-reaction monitoring (MRM) mode was used in MS/MS detection for accurate quantification of DNA methylation. The intra-day and inter-day precision values of this method were within 1.50-5.70%. Using 2'-deoxyguanosine as an internal standard, the UPLC-MS/MS method was applied to detect global DNA methylation levels in three cultured cell lines. The DNA methyltransferase inhibitor 5-aza-2'-deoxycytidine significantly reduced global DNA methylation levels in the treated cell lines, demonstrating the reliability of our method. We further examined global DNA methylation levels in human sperm and found that global methylation values varied from 3.79% to 4.65%. The average global DNA methylation level of sperm samples washed only with PBS (4.03%) was somewhat lower than that of sperm samples from which abnormal and dead sperm cells had been removed by density gradient centrifugation (4.25%), suggesting aberrant DNA methylation levels in abnormal sperm cells. Clinical application of the UPLC-MS/MS method to global DNA methylation detection in human sperm will be useful for sperm quality evaluation and for studying epigenetic mechanisms underlying male infertility.
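
    Global methylation from such nucleoside measurements is commonly reported as the fraction of methylated cytosine among total cytosines. A minimal sketch of that arithmetic, with hypothetical peak areas and response factors (not values from the study):

```python
def global_methylation_percent(area_5mdC, area_dC, rf_5mdC=1.0, rf_dC=1.0):
    """Global methylation as 5-methyl-2'-deoxycytidine over total cytosine
    nucleosides, in percent. Peak areas are scaled by response factors,
    here assumed known from calibration against standards."""
    m = area_5mdC / rf_5mdC
    c = area_dC / rf_dC
    return 100.0 * m / (m + c)

# Hypothetical MRM peak areas giving a methylation level near the ~4%
# range reported for sperm samples.
pct = global_methylation_percent(4.2e4, 1.0e6)
```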

  18. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860
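
    The point about gap treatment can be made concrete: the same aligned pair of sequences yields different divergence values depending on whether gap columns are ignored, counted as mismatches, or collapsed into single indel events. A sketch of the three treatments (not the mothur/UCLUST/UPARSE implementations):

```python
def divergence(a, b, gap_mode="ignore"):
    """Pairwise divergence of two aligned sequences under three gap
    treatments: 'ignore' drops gap columns, 'mismatch' counts each gap
    column as a difference, 'one_event' counts a contiguous gap run as a
    single difference (relevant for length-variable markers)."""
    assert len(a) == len(b)
    diffs = compared = 0
    in_gap = False
    for x, y in zip(a, b):
        gap = (x == "-") != (y == "-")  # column where exactly one has a gap
        if gap:
            if gap_mode == "ignore":
                continue
            compared += 1
            if gap_mode == "mismatch":
                diffs += 1
            elif gap_mode == "one_event" and not in_gap:
                diffs += 1
            in_gap = True
        else:
            in_gap = False
            if x == "-" and y == "-":
                continue
            compared += 1
            if x != y:
                diffs += 1
    return diffs / compared

a = "ACGT--ACGT"
b = "ACGTTTACGA"
# divergence spans 0.125 ('ignore') to 0.3 ('mismatch') for this pair
```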

  19. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  20. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5" within 10 ms or +/-5" within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
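
    A digital PID position loop of the kind described can be sketched as follows; the gains, loop rate, and unit-inertia plant below are illustrative assumptions, not the ALMA servo's parameters.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not ALMA's)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: unit inertia driven by the controller output, 10 kHz loop.
dt = 1e-4
pid = PID(kp=400.0, ki=50.0, kd=30.0, dt=dt)
pos, vel = 0.0, 0.0
for _ in range(20000):             # 2 s of simulated time
    torque = pid.update(1.5, pos)  # command a 1.5-unit throw
    vel += torque * dt
    pos += vel * dt
# pos settles near the 1.5 command
```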

  1. A polymer visualization system with accurate heating and cooling control and high-speed imaging.

    PubMed

    Wong, Anson; Guo, Yanting; Park, Chul B; Zhou, Nan Q

    2015-04-23

    A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system's capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals' boundaries due to CO₂ exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals.

  2. A Polymer Visualization System with Accurate Heating and Cooling Control and High-Speed Imaging

    PubMed Central

    Wong, Anson; Guo, Yanting; Park, Chul B.; Zhou, Nan Q.

    2015-01-01

    A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system’s capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals’ boundaries due to CO2 exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals. PMID:25915031

  3. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including single sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles. PMID:26517180

  4. Workplace Learning of High Performance Sports Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Clifford J.; Tinning, Richard

    2010-01-01

    The Australian coaching workplace (to be referred to as the State Institute of Sport; SIS) under consideration in this study employs significant numbers of full-time performance sport coaches and can be accurately characterized as a genuine workplace. Through a consideration of the interaction between what the workplace (SIS) affords the…

  5. High-throughput Accurate-wavelength Lens-based Visible Spectrometer

    SciTech Connect

    Bell, Ronald E.; Scotti, Filippo

    2010-06-04

    A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm-1 grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arcsec, corresponding to a wavelength error ≤ 0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
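
    The quoted wavelength error follows from propagating the encoder accuracy through the grating equation. A sketch assuming Littrow geometry and a grating angle near 45 degrees (both assumptions, not stated in the abstract):

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def wavelength_error(groove_density_mm, order, theta_deg, dtheta_arcsec):
    """Wavelength uncertainty from a grating-angle uncertainty, using the
    Littrow form of the grating equation m*lambda = 2*d*sin(theta), so
    d(lambda) = (2*d/m)*cos(theta)*d(theta). Result in Angstroms."""
    d_ang = 1.0e7 / groove_density_mm  # groove spacing in Angstroms
    theta = math.radians(theta_deg)
    return (2.0 * d_ang / order) * math.cos(theta) * dtheta_arcsec * ARCSEC

# 2160 mm^-1 grating, first order, ~45 deg grating angle, 0.075 arcsec:
# the result lands comfortably below the quoted 0.005 A error bound.
err = wavelength_error(2160.0, 1, 45.0, 0.075)
```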

  6. Liquid-crystal-modulated correlated color temperature tunable light-emitting diode with highly accurate regulation.

    PubMed

    Huang, Chiu-Chang; Kuo, Yu-Yi; Chen, Szu-Hua; Chen, Wei-Ting; Chao, Chih-Yu

    2015-02-01

    A precise correlated color temperature (CCT) tuning method for light-emitting diodes (LEDs) has been developed and is demonstrated in this article. By combining LEDs and a liquid crystal (LC) cell, a light source with continuous CCT variation along a straight track on the chromaticity diagram is achieved. Moreover, the manner of CCT variation can be modulated by choosing appropriate LEDs and phosphors to yield a variation going from 3800 K to 6100 K with the track near the black-body locus. By adapting various developed LC technologies for diverse demands, the performance and applications of LEDs can be greatly improved.

  7. Implementing an Inexpensive and Accurate Introductory Gas Density Activity with High School Students

    ERIC Educational Resources Information Center

    Cunningham, W. Patrick; Joseph, Christopher; Morey, Samantha; Santos Romo, Ana; Shope, Cullen; Strang, Jonathan; Yang, Kevin

    2015-01-01

    A simplified activity examined gas density while employing cost-efficient syringes in place of traditional glass bulbs. The exercise measured the density of methane, with very good accuracy and precision, in both first-year high school and AP chemistry settings. The participating students were tasked with finding the density of a gas. The…

  8. Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    NASA Technical Reports Server (NTRS)

    Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.

    1991-01-01

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.

  9. Investigating the Capability of High Resolution ALSM to Provide Accurate Watershed Delineation and Stream Network Data

    NASA Astrophysics Data System (ADS)

    Sedighi, A.; Slatton, K. C.; Hatfield, K.

    2007-05-01

    The development of geographic information systems (GIS) and digital elevation models (DEMs) has provided an opportunity to describe the pathways of water movement in a watershed. Adequate DEM resolution is of high importance in stream network detection. Local, state, and federal agencies have relied on US Geological Survey 1:24,000 scale topographic maps for information on stream networks for planning, management, and regulatory programs related to streams. DEM creation techniques that avoid map contours as the source of digital heights can improve watershed delineation and stream network data quality. Airborne Laser Swath Mapping (ALSM) technology (also referred to as LIDAR) provides DEMs of fine resolution and high accuracy. However, there are shortcomings in using both low resolution and high resolution DEMs. The focus of this work is on the unique aspects of using ALSM data in watershed delineation and stream network mapping, in comparison to other sources of DEM. In particular, the reliability of both the input data and the resulting stream networks at different resolutions is evaluated. In this study, stream locations derived from high-resolution ALSM and low-resolution NED are compared to ground-truth locations of the stream in Hogtown Creek Watershed, located in Gainesville, Florida. This study shows that ALSM-derived models are more successful at delineating streams and at locating them in their topographically correct position as compared to lower resolution DEMs. However, high resolution ALSM data produce artifacts that can affect the flow of water as predicted by stream network algorithms. Methods for overcoming the challenges with regard to ALSM data in stream network detection are presented.
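
    Stream networks are typically extracted from a DEM by steepest-descent (D8) flow routing followed by flow accumulation, with streams taken as cells whose accumulation exceeds a threshold. A toy sketch of that idea (not the authors' processing chain):

```python
import math

# Toy D8 flow routing on a tiny DEM: each cell drains to its steepest
# downhill neighbor; accumulation counts how many cells drain through
# each location.
DEM = [
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 5.0],
    [7.0, 4.0, 2.0],
]

def d8_downhill(dem, r, c):
    """Return the neighbor with the steepest downhill slope, or None."""
    rows, cols = len(dem), len(dem[0])
    best, best_slope = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                slope = (dem[r][c] - dem[rr][cc]) / math.hypot(dr, dc)
                if slope > best_slope:
                    best, best_slope = (rr, cc), slope
    return best

def accumulation(dem):
    rows, cols = len(dem), len(dem[0])
    acc = [[1] * cols for _ in range(rows)]
    # Process cells from highest to lowest so donors are counted first.
    order = sorted(((dem[r][c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)
    for _, r, c in order:
        target = d8_downhill(dem, r, c)
        if target is not None:
            acc[target[0]][target[1]] += acc[r][c]
    return acc

acc = accumulation(DEM)  # the lowest cell (the outlet) accumulates all 9 cells
```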

  10. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
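
    The solver strategy described, an iterative Krylov method wrapped around a preconditioner, can be sketched with preconditioned conjugate gradients. A simple Jacobi (diagonal) preconditioner stands in below for the paper's AMG V-cycle, and a tiny dense matrix stands in for the finite-element system:

```python
def pcg(A, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD system A x = b.
    A is a dense list-of-lists; precond(r) applies M^{-1} to a residual.
    An AMG cycle would replace the simple preconditioner used below."""
    n = len(b)
    x = [0.0] * n
    r = list(b)
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test matrix; the Jacobi preconditioner divides by the diagonal.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
jacobi = lambda r: [r[i] / A[i][i] for i in range(len(r))]
x = pcg(A, [1.0, 2.0, 3.0], jacobi)  # solves A x = b
```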

  11. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  12. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    NASA Astrophysics Data System (ADS)

    Bell, Ronald E.

    2014-11-01

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  13. Development and Operation of High-throughput Accurate-wavelength Lens-based Spectrometer

    SciTech Connect

    Bell, Ronald E

    2014-07-01

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f /1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy < 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  14. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    DOE PAGES

    Bell, Ronald E.

    2014-07-11

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f /1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤ 0.075 arc seconds. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  15. Design and operation of a highly sensitive and accurate laser calorimeter for low-absorption materials

    NASA Astrophysics Data System (ADS)

    Kawate, Etsuo; Hanssen, Leonard M.; Kaplan, Simon G.; Datla, Raju V.

    1998-10-01

    This work surveys techniques to measure the absorption coefficient of low-absorption materials. A laser calorimeter is being developed with a sensitivity goal of (1 ± 0.2) × 10⁻⁵ cm⁻¹ at one watt of laser power, using a CO2 laser (9 μm to 11 μm), a CO laser (5 μm to 8 μm), a He-Ne laser (3.39 μm), and a pumped OPO tunable laser (2 μm to 4 μm) in the infrared region. Much attention has been given to the requirements for high sensitivity and to sources of systematic error, including stray light. Our laser calorimeter is capable of absolute electrical calibration. Preliminary results for the absorption coefficient of highly transparent potassium chloride (KCl) samples are reported.

  16. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    SciTech Connect

    Bell, Ronald E.

    2014-11-15

    A high-throughput spectrometer for the 400–820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm{sup −1} grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.

  17. Kashima RAy-Tracing Service (KARATS) for highly accurate GNSS positioning

    NASA Astrophysics Data System (ADS)

    Ichikawa, R.; Hobiger, T.; Hasegawa, S.; Tsutsumi, M.; Koyama, Y.; Kondo, T.

    2010-12-01

    Radio signal delays associated with the neutral atmosphere are one of the major error sources in space geodesy techniques such as GPS, GLONASS, GALILEO, VLBI, and In-SAR measurements. We have developed a state-of-the-art tool to estimate atmospheric path delays by ray-tracing through JMA meso-scale analysis (MANAL) data. The tools, named 'KAshima RAytracing Tools (KARAT)', can calculate total slant delays and ray-bending angles considering real atmospheric phenomena. Numerical weather models such as MANAL data have undergone significant improvements in accuracy and spatial resolution, which makes it feasible to utilize them for correcting atmospheric excess path delays. In previous studies evaluating KARAT performance, KARAT solutions were slightly better than solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Based on these results, we started the web-based online service 'KAshima RAytracing Service (KARATS)', which provides atmospheric delay corrections of RINEX files, on January 27th, 2010. KARATS receives users' RINEX data via a web site (http://vps.nict.go.jp/karats/index.html) and processes the data files using KARAT to reduce atmospheric slant delays. The reduced RINEX files are archived in a user-specific directory on the KARATS server, and once processing is finished the user is notified of the archive by private email. Users who wish to process a large number of data files can instead host the files on their own server; KARATS retrieves them using GNU wget and performs the ray-traced corrections. We will present a brief status of KARATS and summarize the first experiences gained after this service went operational in December 2009. In addition, we will also demonstrate the newest KARAT performance based on the 5 km MANAL data, which has been operational since April 7th, 2009, and an outlook on

  18. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of the technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and the maintenance demands on LRFs rise with it. Following the guiding idea of simulating spatial distance with time delay in tests of pulsed range finders, the precision of the simulated distance hinges on the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber delays and circuit delays, we propose a method that improves circuit-delay accuracy without increasing the circuit's counting frequency. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit-delay accuracy. The accuracy of the proposed circuit delay was measured with a high-sampling-rate oscilloscope. The measurements show that the accuracy of the distance simulated by the circuit delay improves from +/-0.75 m to +/-0.15 m, a substantial improvement for simulated testing of high-precision pulsed range finders.
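
    The "time simulates distance" idea rests on the round-trip relation d = c·t/2, so delay accuracy maps directly onto range accuracy. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def delay_for_distance(d_m):
    """Round-trip delay (s) that makes a range finder read d_m meters."""
    return 2.0 * d_m / C

def distance_error(dt_s):
    """Range error (m) produced by a delay error of dt_s seconds."""
    return C * dt_s / 2.0

# A 1 ns delay error corresponds to ~0.15 m of simulated range, which is
# why sub-nanosecond delay control matters for the accuracy quoted above.
err_m = distance_error(1e-9)
```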

  19. An experimental device for accurate ultrasounds measurements in liquid foods at high pressure

    NASA Astrophysics Data System (ADS)

    Hidalgo-Baltasar, E.; Taravillo, M.; Baonza, V. G.; Sanz, P. D.; Guignon, B.

    2012-12-01

    The use of high hydrostatic pressure to ensure safe and high-quality products has markedly increased in the food industry during the last decade. Ultrasonic sensors can be employed to control such processes in the same way as they are currently used in processes carried out at ambient pressure. However, their installation, calibration and use are particularly challenging in a high-pressure environment. Moreover, data on the acoustic properties of food under pressure, and even of water, are quite scarce in the pressure range of interest for food treatment (namely, above 200 MPa). The objective of this work was to establish a methodology to determine the speed of sound in foods under pressure. An ultrasonic sensor using the multiple-reflections method was adapted to lab-scale HHP equipment to determine the speed of sound in water between 253.15 and 348.15 K, and at pressures up to 700 MPa. The experimental speed-of-sound data were compared to the data calculated from the equation of state of water (IAPWS-95 formulation). From this analysis, the procedure for calibrating the cell path length was validated. After this calibration procedure, the speed of sound could be determined in liquid foods by using this sensor with a relative uncertainty between (0.22 and 0.32) % at a confidence level of 95 % over the whole pressure domain.
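The core relation of the multiple-reflections method is that consecutive echoes are separated by the round-trip time over the cell path. A minimal sketch under that assumption (function names and the example values are illustrative; the actual procedure calibrates against IAPWS-95 water data as described above):

```python
def speed_of_sound(path_length_m, echo_spacing_s):
    """Multiple-reflections method: consecutive echoes are separated by the
    round-trip time over the cell path, so c = 2 * L / dt."""
    return 2.0 * path_length_m / echo_spacing_s

def calibrate_path_length(c_reference_m_s, echo_spacing_s):
    """Invert the same relation using a reference speed (e.g. water from the
    IAPWS-95 equation of state) to obtain the effective cell path length."""
    return c_reference_m_s * echo_spacing_s / 2.0
```

Calibrating the path length once in water and then reusing it for liquid foods mirrors the validation procedure described in the abstract.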

  20. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test using a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were predicted to be solvent-accessible, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observations strongly support a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  1. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) for hundreds of millions of sources. Analyzing these large, high-dimensional data sets will require efficient algorithms for data analysis. An example is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high-dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++, we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
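A minimal sketch of hash-table binning in the spirit of the BASH-table idea (pure Python rather than the authors' C++ implementation; function names are illustrative). Only occupied bins are stored, so memory scales with the number of points, not exponentially with dimension:

```python
from collections import defaultdict

def bash_density(points, bin_width):
    """Binned density estimate stored in a hash table: only occupied bins are
    kept, so memory scales with the data rather than with bins**dimensions."""
    counts = defaultdict(int)
    for p in points:
        counts[tuple(int(x // bin_width) for x in p)] += 1
    n, vol = len(points), bin_width ** len(points[0])
    # Normalize counts to a piecewise-constant density (integrates to 1).
    return {k: c / (n * vol) for k, c in counts.items()}

def density_at(table, point, bin_width):
    """Query the estimate; bins absent from the table have zero density."""
    return table.get(tuple(int(x // bin_width) for x in point), 0.0)
```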

  2. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Britt, Darrell Steven, Jr.

    Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is

  3. Rapid, high-order accurate calculation of flows due to free source or vortex distributions

    NASA Technical Reports Server (NTRS)

    Halsey, D.

    1981-01-01

    Fast Fourier transform (FFT) techniques are applied to the problem of finding the flow due to source or vortex distributions in the field outside an airfoil or other two-dimensional body. Either the complex potential or the complex velocity may be obtained to a high order of accuracy, with computational effort similar to that required by second-order fast Poisson solvers. These techniques are applicable to general flow problems with compressibility and rotation. An example is given of their use for inviscid compressible flow.

  4. A High-Accurate and Efficient Obrechkoff Five-Step Method for Undamped Duffing's Equation

    NASA Astrophysics Data System (ADS)

    Zhao, Deyin; Wang, Zhongcheng; Dai, Yongming; Wang, Yuan

    In this paper, we present a five-step Obrechkoff method that improves on the previous two-step one for second-order initial-value problems with oscillatory solutions. We use a special structure to construct the iterative formula, in which the higher-even-order derivatives are placed at the four central nodes, and show the existence of periodic solutions with a remarkably wide interval of periodicity, H0^2 ≈ 16.28. By using a proper first-order derivative (FOD) formula, this five-step method gains two advantages: (a) very high accuracy, since the local truncation error (LTE) of both the main structure and the FOD formula is O(h^14); and (b) high efficiency, because it avoids solving a degree-nine polynomial equation by Picard iteration. Applying the new method to the well-known nonlinear undamped Duffing's equation, we show that our numerical solution is four to five orders of magnitude more accurate than that of the previous two-step Obrechkoff method while taking only 25% of the CPU time required by the previous method for the same task. Using the new method, a better "exact" solution is found by fitting, with an error tolerance below 5×10^-15, compared with the one widely used in the literature, whose error tolerance is below 10^-11.
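As a point of reference only, the undamped Duffing equation y'' + y + y^3 = 0 can be integrated with a plain two-step central-difference (Stoermer-Verlet) scheme. This sketch is not the Obrechkoff method of the abstract, just the kind of low-order O(h^2) baseline that the O(h^14) method outperforms by many orders of magnitude:

```python
def duffing_verlet(y0, v0, h, steps):
    """Integrate y'' + y + y**3 = 0 with the two-step Stoermer-Verlet scheme.
    Only O(h^2) accurate; shown as a baseline, not the paper's method."""
    f = lambda y: -(y + y ** 3)  # restoring force of the undamped Duffing oscillator
    # Start-up step from the initial position and velocity (Taylor expansion).
    ys = [y0, y0 + h * v0 + 0.5 * h * h * f(y0)]
    for _ in range(steps - 1):
        ys.append(2 * ys[-1] - ys[-2] + h * h * f(ys[-1]))
    return ys

# Starting at a turning point (v0 = 0), the amplitude should stay near |y0|.
trajectory = duffing_verlet(0.2, 0.0, 0.01, 1000)
```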

  5. The S-model: A highly accurate MOST model for CAD

    NASA Astrophysics Data System (ADS)

    Satter, J. H.

    1986-09-01

    A new MOST model is presented that combines simplicity and a logical structure with high accuracy (errors of only 0.5-4.5%). The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described, and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression, suitable for only one value of the substrate bias and containing only three parameters, is derived while maintaining the accuracy. The way in which the parameters are determined is readily suited for automatic measurement. A simple linear regression procedure, programmed into the computer that controls the measurements, produces the parameter values.

  6. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near-degeneracies, two simulations were employed, requiring in total a few hours on 600 processors. This method has applications across many areas, including obtaining MHD spectra from time-domain simulations.
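The Tikhonov-regularized SVD step mentioned above can be sketched generically. This is not the VORPAL/Werner-Cary implementation, just the standard regularized least-squares filter it refers to (function name is illustrative; assumes NumPy):

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Least squares with Tikhonov regularization via the SVD:
    x = V diag(s_i / (s_i**2 + lam**2)) U^T b.
    The damping suppresses the contribution of small singular values,
    stabilizing the inversion of ill-conditioned systems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)  # damped inverse singular values
    return Vt.T @ (filt * (U.T @ b))
```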

  7. Highly accurate servo control of reference beam angle in holographic memory with polarized servo beam

    NASA Astrophysics Data System (ADS)

    Hosaka, Makoto; Ogata, Takeshi; Yamada, Kenichiro; Yamazaki, Kazuyoshi; Shimada, Kenichi

    2016-09-01

    We propose a new servo technique for controlling the reference beam angle in angular-multiplexing holographic memory to attain higher-capacity and higher-speed data archiving. An orthogonally polarized beam with an incident angle slightly different from that of the reference beam is newly introduced into the optics. The control signal for the servo is generated as the difference between the intensities of these two beams diffracted from a hologram. The difference in incident angle between the two beams at the medium was optimized so that sufficient control-signal properties were obtained. High accuracy of the control signal, with an angle error below 1.5 mdeg, was successfully confirmed in simulations and experiments.

  8. Accurate sampling of PCDD/F in high temperature flue-gas using cooled sampling probes.

    PubMed

    Phan, Duong Ngoc Chau; Weidemann, Eva; Lundin, Lisa; Marklund, Stellan; Jansson, Stina

    2012-08-01

    In a laboratory-scale combustion reactor, flue-gas samples were collected at two temperatures in the post-combustion zone, 700°C and 400°C, using two different water-cooled sampling probes. The probes were the cooled probe described in the European Standard method EN-1948:1, referred to as the original probe, and a modified probe that contained a salt/ice mixture to assist the cooling, referred to as the sub-zero probe. To determine the efficiency of the cooling probes, internal temperature measurements were recorded at 5 cm intervals inside the probes. Flue-gas samples were analyzed for polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs). Samples collected at 700°C using the original cooling probe showed higher concentrations of PCDD/Fs compared to samples collected using the sub-zero probe. No significant differences were observed between samples collected at 400°C. The results indicated that artifact formation of PCDD/Fs readily occurs during flue-gas sampling at high temperatures if the cooling within the probe is insufficient, as found for the original probe at 700°C. It was also shown that this problem could be alleviated by using probes with an enhanced cooling capacity, such as the sub-zero probe. Although this may not affect samples collected for regulatory purposes in exit gases, it is of great importance for research conducted in the high-temperature region of the post-combustion zone.

  9. High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells

    PubMed Central

    Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C

    2015-01-01

    Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1–2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1–2+ MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared effector functions with cytolytic effector CD8+ T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8+ subsets, we demonstrated that high expression of CD26 on CD8+ TRAV1–2+ cells identified bacteria-reactive MR1-restricted T cells from human blood with high specificity and sensitivity. CD161hi was also specific for, but lacked sensitivity in identifying, bacteria-reactive MR1-restricted T cells, some of which were CD161dim. Using cell surface expression of CD8, TRAV1–2, and CD26hi in the absence of stimulation, we confirm that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis. PMID:25752900

  10. The Berlin Brain-Computer Interface: accurate performance from first-session in BCI-naïve subjects.

    PubMed

    Blankertz, Benjamin; Losch, Florian; Krauledat, Matthias; Dornhege, Guido; Curio, Gabriel; Müller, Klaus-Robert

    2008-10-01

    The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are: 1) the use of well-established motor competences as control paradigms; 2) high-dimensional features from multichannel EEG; and 3) advanced machine-learning techniques. Spatio-spectral changes of sensorimotor rhythms are used to discriminate imagined movements (left hand, right hand, and foot). A previous feedback study [M. Krauledat, K.-R. Müller, and G. Curio. (2007) The non-invasive Berlin Brain-Computer Interface: Fast acquisition of effective performance in untrained subjects. NeuroImage. [Online]. 37(2), pp. 539-550. Available: http://dx.doi.org/10.1016/j.neuroimage.2007.01.051] with ten subjects provided preliminary evidence that the BBCI system can be operated at high accuracy by subjects with fewer than five prior BCI exposures. Here, we demonstrate in a group of 14 fully BCI-naïve subjects that 8 out of 14 BCI novices can perform at >84% accuracy in their very first BCI session, and a further four subjects at >70%. Thus, 12 out of 14 BCI novices had significant above-chance performances without any subject training even in the first session, based on an optimized EEG analysis by advanced machine-learning algorithms. PMID:18838371
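Whether a session's accuracy is "above chance" can be checked with a one-sided binomial tail. A minimal sketch (the trial counts are illustrative, not those of the study; two-class chance level assumed at 0.5):

```python
from math import comb

def p_above_chance(correct, trials, chance=0.5):
    """One-sided binomial tail: probability of getting at least `correct`
    hits out of `trials` by guessing alone (two-class task -> chance = 0.5)."""
    return sum(comb(trials, k) * chance ** k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))

# e.g. 84% correct over 100 hypothetical trials is essentially impossible by
# chance, while a near-chance score over a handful of trials is not.
```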

  11. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
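The Q10 coefficient used above for metabolism has a simple closed form, Q10 = (R2/R1)^(10/(T2-T1)); a minimal sketch (function name is illustrative):

```python
def q10(rate1, rate2, t1, t2):
    """Temperature coefficient of a metabolic rate measured at two
    temperatures: Q10 = (R2 / R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# A rate that doubles over a 10-degree rise has Q10 = 2.
```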

  12. High-Performance Phylogeny Reconstruction

    SciTech Connect

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics, one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially sized tree space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it describes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
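The "over 13 billion possible trees for 13 organisms" figure follows from the standard count of unrooted binary tree topologies, (2n - 5)!!; a short check:

```python
def num_unrooted_trees(n_taxa):
    """Number of distinct unrooted binary tree topologies on n labeled taxa:
    (2n - 5)!! = 3 * 5 * ... * (2n - 5)."""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):  # odd factors 3, 5, ..., 2n - 5
        count *= k
    return count

# num_unrooted_trees(13) == 13_749_310_575, i.e. "over 13 billion".
```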

  13. A new direct absorption measurement for high precision and accurate measurement of water vapor in the UT/LS

    NASA Astrophysics Data System (ADS)

    Sargent, M. R.; Sayres, D. S.; Smith, J. B.; Anderson, J.

    2011-12-01

    Highly accurate and precise water vapor measurements in the upper troposphere and lower stratosphere are critical to understanding the climate feedbacks of water vapor and clouds in that region. However, the continued disagreements among water vapor measurements (~1-2 ppmv) are too large to constrain the roles of the different hydration and dehydration mechanisms operating in the UT/LS, and model validation depends on which dataset is chosen. In response to these issues, we present a new instrument for measurement of water vapor in the UT/LS that was flown during the April 2011 MACPEX mission out of Houston, TX. The dual-axis instrument combines the heritage and validated accuracy of the Harvard Lyman-alpha instrument with a newly designed direct IR absorption instrument, the Harvard Herriott Hygrometer (HHH). The Lyman-alpha detection axis has flown aboard NASA's WB-57 and ER-2 aircraft since 1994, and provides a requisite link between the new HHH instrument and the long history of Harvard water vapor measurements. The instrument utilizes the highly sensitive Lyman-alpha photo-fragment fluorescence detection method; its accuracy has been demonstrated through rigorous laboratory calibrations and in situ diagnostic procedures. The Harvard Herriott Hygrometer employs a fiber-coupled near-IR laser with state-of-the-art electronics to measure water vapor via direct absorption in a spherical Herriott cell of 10 cm length. The instrument demonstrated in-flight precision of 0.1 ppmv (1-sec, 1-sigma) at mixing ratios as low as 5 ppmv, with accuracies of 10% based on careful laboratory calibrations and in-flight performance. We present a description of the measurement technique along with our methodology for calibration and details of the measurement uncertainties.
The simultaneous utilization of radically different measurement techniques in a single duct in the new Harvard Water Vapor (HWV) instrument allows for the constraint of systematic errors inherent in each technique
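Direct absorption retrievals ultimately rest on the Beer-Lambert law. A schematic inversion (symbol choices and values are illustrative; the actual HHH analysis fits line shapes and instrument effects and is far more involved):

```python
from math import log

def number_density(i0, i, cross_section_cm2, path_cm):
    """Beer-Lambert inversion: I = I0 * exp(-sigma * N * L), so the absorber
    number density is N = ln(I0 / I) / (sigma * L)."""
    return log(i0 / i) / (cross_section_cm2 * path_cm)
```

A multipass Herriott cell increases the effective path length L, which is what makes small absorbances, and hence low mixing ratios, measurable.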

  14. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    NASA Astrophysics Data System (ADS)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 cm^-1), corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 cm^-1, whereas the agreement is on the order of 0.01-0.1 cm^-1 for higher levels, closely rivaling the uncertainties of the experimental data. As method development for calculating these various corrections progresses, it becomes necessary to improve the uncertainties of the experimental data in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which arose from low-lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but, through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson, Phys. Rev. A (2013), 88, 032506; O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár, Phil. Trans. R. Soc. A (2012), 370, 5014; J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall, J. Chem. Phys. (2013), 139, 164201; A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall, J. Molec. Spectrosc. (2015), 317, 71-73.

  15. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical algorithm, QAA, and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations by high-pressure liquid chromatography (HPLC) (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary, representing an additive error of up to 24.3%. Using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m^2 mg^-1), the median relative bias of our chl estimates, derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
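The final chl step described above is just a division by the assumed specific absorption cross section; a minimal sketch using the SeaDAS default value quoted in the abstract (function name is illustrative):

```python
def chl_from_aph(aph_443, aph_star=0.056):
    """chl = aph(443) / a*ph(443), with the SeaDAS default specific
    absorption cross section a*ph(443) = 0.056 m^2 mg^-1 quoted above.
    Any bias in the assumed a*ph(443) propagates directly into chl."""
    return aph_443 / aph_star
```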

  16. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data.

    PubMed

    White, Amelia G; Cipriani, Patricia G; Kao, Huey-Ling; Lees, Brandon; Geiger, Davi; Sontag, Eduardo; Gunsalus, Kristin C; Piano, Fabio

    2010-08-01

    We present a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The object recognition machine consists of four hierarchical layers, each composed of units upon which evaluation functions output a label score, followed by a grouping mechanism that resolves ambiguities in the score by imposing local consistency constraints. Each layer then outputs groups of units, from which the units of the next layer are derived. Using this hierarchical principle, the machine builds up successively more sophisticated representations of the objects to be classified. The algorithm segments large and small objects, decomposes objects into parts, extracts features from these parts, and classifies them by SVM. We are using this system to analyze phenotypic data from C. elegans high-throughput genetic screens, and our system overcomes a previous bottleneck in image analysis by achieving near real-time scoring of image data. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users.

  17. HyRec: A Fast and Highly Accurate Primordial Hydrogen and Helium Recombination Code

    NASA Astrophysics Data System (ADS)

    Ali-Haïmoud, Yacine; Hirata, Christopher M.

    2010-11-01

    We present a state-of-the-art primordial recombination code, HyRec, including all the physical effects that have been shown to significantly affect recombination. The computation of helium recombination includes simple analytic treatments of hydrogen continuum opacity in the He I 2^1P-1^1S line and the He I] 2^3P-1^1S line, and treats feedback between these lines within the on-the-spot approximation. Hydrogen recombination is computed using the effective multilevel atom method, virtually accounting for an infinite number of excited states. We account for two-photon transitions from 2s and higher levels as well as frequency diffusion in Lyman-alpha with a full radiative transfer calculation. We present a new method to evolve the radiation field simultaneously with the level populations and the free electron fraction. These computations are sped up by taking advantage of the particular sparseness pattern of the equations describing the radiative transfer. The computation time for a full recombination history is ~2 seconds. This makes our code well suited for inclusion in Monte Carlo Markov chains for cosmological parameter estimation from upcoming high-precision cosmic microwave background anisotropy measurements.

  18. Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1991-01-01

    A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.

  19. High Performance Torso Cooling Garment

    NASA Technical Reports Server (NTRS)

    Conger, Bruce; Makinen, Janice

    2016-01-01

    The concept proposed in this paper is to improve thermal efficiencies of the liquid cooling and ventilation garment (LCVG) in the torso area, which could facilitate removal of LCVG tubing from the arms and legs, thereby increasing suited crew member mobility. EVA space suit mobility in micro-gravity is challenging, and it becomes even more challenging in the gravity of Mars. By using shaped water tubes that greatly increase the contact area with the skin in the torso region of the body, the heat transfer efficiency can be increased. This increase in efficiency could provide the required liquid cooling via torso tubing only; no arm or leg LCVG tubing would be required. Benefits of this approach include increased crewmember mobility, enhanced evaporation cooling, increased comfort during Mars EVA tasks, and easing of the overly dry condition in the helmet associated with the Advanced Extravehicular Mobility Unit (EMU) ventilation loop currently under development. This report describes analysis and test activities performed to evaluate the potential improvements to the thermal performance of the LCVG. Analyses evaluated potential tube shapes for improving the thermal performance of the LCVG. The analysis results fed into the selection of flat flow strips to improve thermal contact with the skin of the suited test subject. Testing of small segments was performed to compare thermal performance of the tubing approach of the current LCVG to the flat flow strips proposed as the new concept. Results of the testing is presented along with recommendations for future development of this new concept.

  20. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
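
The paper derives site-specific buffers from empirical discrepancy distributions; a minimal sketch of the idea, assuming Gaussian model-minus-machine errors (the function name and all numbers below are illustrative, not the study's data):

```python
from statistics import NormalDist, mean, stdev

def safety_buffer_cm(modeled_cm, measured_cm, collision_prob=0.001):
    """Buffer distance such that the probability the virtual model
    overestimates the true clearance by more than the buffer is at
    most collision_prob, under a Gaussian error assumption."""
    err = [mo - me for mo, me in zip(modeled_cm, measured_cm)]
    return mean(err) + NormalDist().inv_cdf(1 - collision_prob) * stdev(err)
```

A smaller allowed collision probability pushes the quantile further into the tail and therefore yields a larger buffer.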

  1. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  2. Accurate glass forming for high-temperature solar applications. Final report

    SciTech Connect

    1980-10-01

Development work was undertaken to thermally form glass for solar concentrators. Sagging and pressing glass to parabolic shapes was investigated with the goal of achieving slope errors less than 2.0 mr RMS and costs of $1.25/ft². In addition, a laminating process was investigated to overcome the problem of silvering a curved surface and to reduce corrosion of the silver. Thermal sagging is a process in which glass is shaped by heating it until it is sufficiently soft to deform under its own weight and conform to a mold. For cylindrical parabolic shapes, a method for producing low-cost, high-accuracy molds was developed using castable ceramics and a grinder. Thermal conditions were established for a commercial glass bending furnace to obtain good replication of the mold. The accuracy and cost goals were met for glass sizes up to 30 x 30 x 0.125 inches and for low-iron and regular-iron float and sheet glasses. Lamination of two curved pieces of glass using automotive technology was investigated. A silver film was placed between two layers of polyvinyl butyral (PVB), and this was used to bond two sheets of glass. Economically and technically, the process appears feasible. However, the non-uniform thickness of the PVB causes distortion in the reflected image. More work is needed to assess the accuracy of curved laminated composites. Thermal pressing of glass is accomplished by heating the glass until it is soft and mechanically stamping the shape. Equipment was built and operated to determine the important parameters in pressing. Control of thermal stresses in the glass is critical to preventing cracks. No glass pieces were produced without cracks.

  3. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine was designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.
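
For a sense of scale, 49% of Carnot can be turned into an overall thermal efficiency once reservoir temperatures are fixed. The temperatures below are hypothetical, chosen only for illustration; the abstract does not quote them:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    # maximum possible efficiency of any heat engine operating
    # between two reservoirs (second law of thermodynamics)
    return 1.0 - t_cold_k / t_hot_k

# illustrative temperatures only
eta_carnot = carnot_efficiency(600.0, 300.0)   # 0.5
eta_engine = 0.49 * eta_carnot                 # 0.245 overall
```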

  4. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  5. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  6. High-Performance Composite Chocolate

    ERIC Educational Resources Information Center

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  7. High performance, high density hydrocarbon fuels

    NASA Technical Reports Server (NTRS)

    Frankenfeld, J. W.; Hastings, T. W.; Lieberman, M.; Taylor, W. F.

    1978-01-01

The fuels were selected from 77 original candidates on the basis of estimated merit index and cost effectiveness. The ten candidates consisted of 3 pure compounds, 4 chemical plant streams, and 3 refinery streams. Critical physical and chemical properties of the candidate fuels were measured, including heat of combustion, density and viscosity as a function of temperature, freezing point, vapor pressure, boiling point, and thermal stability. The best all-around candidate was found to be a chemical plant olefin stream rich in dicyclopentadiene. This material has a high merit index and is available at low cost. Possible problem areas were identified as low-temperature flow properties and thermal stability. An economic analysis was carried out to determine the production costs of the top candidates. The chemical plant and refinery streams were all less than 44 cents/kg, while the pure compounds were greater than 44 cents/kg. A literature survey was conducted on the state of the art of advanced hydrocarbon fuel technology as applied to high-energy propellants. Several areas for additional research were identified.

  8. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  9. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require much space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, and is limited in the number of channels, often with channel crosstalk, and intermittent as the shaft rotates. Non-real-time digital communication is very popular, but complex, with long development time, and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, times the bit resolution, times the update rate, gives a data rate higher than available by older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provides instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to
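
The abstract's data-rate claim (channels times bit resolution times update rate) is simple arithmetic. A sketch using the abstract's 40-sensor channel count with a hypothetical sample width and update rate:

```python
def telemetry_rate_bps(channels, bits_per_sample, updates_per_s):
    # raw payload data rate; framing and sync overhead are
    # ignored in this sketch
    return channels * bits_per_sample * updates_per_s

# 40 sensors (from the abstract); 16-bit samples at 10 kHz are
# assumed values for illustration
rate = telemetry_rate_bps(40, 16, 10_000)   # 6,400,000 bit/s
```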

  10. High-Performance Miniature Hygrometer

    NASA Technical Reports Server (NTRS)

    Van Zandt, Thomas R.; Kaiser, William J.; Kenny, Thomas W.; Crisp, David

    1994-01-01

Relatively inexpensive hygrometer that occupies a volume of less than 4 cubic inches measures dewpoints as much as 100 degrees C below ambient temperature, with an accuracy of 0.1 degrees C. Field tests indicate accuracy and repeatability identical to those of state-of-the-art larger dewpoint hygrometers. Operates up to 100 times as fast as older hygrometers, and offers the simplicity and small size needed to meet the cost and performance requirements of many applications.

  11. CONDENSED MATTER: STRUCTURE, MECHANICAL AND THERMAL PROPERTIES: An Accurate Image Simulation Method for High-Order Laue Zone Effects

    NASA Astrophysics Data System (ADS)

    Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin

    2008-05-01

A completely different formulation for the simulation of high-order Laue zone (HOLZ) diffractions is derived, referred to here as the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example and calculate the exit wavefunction by both the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thickness. Moreover, the pure first-order Laue zone wavefunction obtained by the TS method can reflect the major potential distribution of the first reciprocal plane.

  12. Accurate and robust registration of high-speed railway viaduct point clouds using closing conditions and external geometric constraints

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Song, Mengxiao; Guan, Haiyan; Yu, Yongtao

    2015-08-01

    This paper proposes an automatic method for registering multiple laser scans without a control network. The proposed registration method first uses artificial targets to pair-wise register adjacent scans for initial transformation estimates; the proposed registration method then employs combined adjustments with closing conditions and external triangle constraints to globally register all scans along a long-range, high-speed railway corridor. The proposed registration method uses (1) closing conditions to eliminate registration errors that gradually accumulate as the length of a corridor (the number of scan stations) increases, and (2) external geometric constraints to ensure the shape correctness of an elongated high-speed railway. A 640-m high-speed railway viaduct with twenty-one piers is used to conduct experiments using our proposed registration method. A group of comparative experiments is undertaken to evaluate the robustness and efficiency of the proposed registration method to accurately register long-range corridors.
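
The closing-condition idea, forcing the chained pairwise transforms to satisfy an independent constraint and spreading the residual over the chain, can be sketched for pure translations. This is a deliberate simplification of the paper's combined adjustment, which also handles rotations and external geometric constraints:

```python
def distribute_closure_error(pairwise_shifts):
    """pairwise_shifts: list of (dx, dy) translation estimates
    around a closed loop of scan stations.  A perfect chain sums
    to zero; here the measured closure residual is spread evenly
    over all links so the adjusted loop closes exactly."""
    n = len(pairwise_shifts)
    cx = sum(dx for dx, _ in pairwise_shifts) / n
    cy = sum(dy for _, dy in pairwise_shifts) / n
    return [(dx - cx, dy - cy) for dx, dy in pairwise_shifts]
```

Evenly distributing the residual prevents registration error from accumulating toward one end of a long corridor.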

  13. High-performance solar collector

    NASA Technical Reports Server (NTRS)

    Beekley, D. C.; Mather, G. R., Jr.

    1979-01-01

    Evacuated all-glass concentric tube collector using air or liquid transfer mediums is very efficient at high temperatures. Collector can directly drive existing heating systems that are presently driven by fossil fuel with relative ease of conversion and less expense than installation of complete solar heating systems.

  14. Accurate calculation and assignment of highly excited vibrational levels of floppy triatomic molecules in a basis of adiabatic vibrational eigenstates

    NASA Astrophysics Data System (ADS)

    Bačić, Z.

    1991-09-01

    We show that the triatomic adiabatic vibrational eigenstates (AVES) provide a convenient basis for accurate discrete variable representation (DVR) calculation and automatic assignment of highly excited, large amplitude motion vibrational states of floppy triatomic molecules. The DVR-AVES states are eigenvectors of the diagonal (in the stretch states) blocks of the adiabatically rearranged triatomic DVR-ray eigenvector (DVR-REV) Hamiltonian [J. C. Light and Z. Bačić, J. Chem. Phys. 87, 4008 (1987)]. The transformation of the full triatomic vibrational Hamiltonian from the DVR-REV basis to the new DVR-AVES basis is simple, and does not involve calculation of any new matrix elements. No dynamical approximation is made in the energy level calculation by the DVR-AVES approach; its accuracy and efficiency are identical to those of the DVR-REV method. The DVR-AVES states, as the adiabatic approximation to the vibrational states of a triatomic molecule, are labeled by three vibrational quantum numbers. Consequently, accurate large amplitude motion vibrational levels obtained by diagonalizing the full vibrational Hamiltonian transformed to the DVR-AVES basis, can be assigned automatically by the code, with the three quantum numbers of the dominant DVR-AVES state associated with the largest (by modulus) eigenvector element in the DVR-AVES basis. The DVR-AVES approach is used to calculate accurate highly excited localized and delocalized vibrational levels of HCN/HNC and LiCN/LiNC. A significant fraction of localized states of both systems, below and above the isomerization barrier, is assigned automatically, without inspection of wave function plots or separate approximate calculations.

  15. Novel high performance multispectral photodetector and its performance

    NASA Astrophysics Data System (ADS)

    Mizuno, Genki; Dutta, Jaydeep; Oduor, Patrick; Dutta, Achyut K.; Dhar, Nibir K.

    2016-05-01

Banpil Photonics has developed a novel high-performance multispectral photodetector array for Short-Wave Infrared (SWIR) imaging. The InGaAs-based device uses a unique micro-nano pillar structure that eliminates surface reflection to significantly increase sensitivity and broaden the absorption spectrum compared with its macro-scaled thin-film pixel (non-pillar) counterpart. We discuss the device structure and highlight the fabrication of this novel high-performance multispectral image sensor. We also present device characterization results showing low dark current, suitable for the most demanding security, defense, and machine vision imaging applications.

  16. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
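
The headline statistic, the area under the ROC curve, equals the probability that a randomly chosen case outscores a randomly chosen non-case (the Mann-Whitney interpretation). A minimal stdlib-only sketch with toy scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a random
    positive (AKI case) scores above a random negative (non-case);
    ties count as half."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.84, as reported, means an 84% chance that a patient who went on to develop AKI had a higher biomarker value than one who did not.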

  17. High performance rotational vibration isolator.

    PubMed

    Sunderland, Andrew; Blair, David G; Ju, Li; Golden, Howard; Torres, Francis; Chen, Xu; Lockwood, Ray; Wolfgram, Peter

    2013-10-01

    We present a new rotational vibration isolator with an extremely low resonant frequency of 0.055 ± 0.002 Hz. The isolator consists of two concentric spheres separated by a layer of water and joined by very soft silicone springs. The isolator reduces rotation noise at all frequencies above its resonance which is very important for airborne mineral detection. We show that more than 40 dB of isolation is achieved in a helicopter survey for rotations at frequencies between 2 Hz and 20 Hz. Issues affecting performance such as translation to rotation coupling and temperature are discussed. The isolator contains almost no metal, making it particularly suitable for electromagnetic sensors.

  18. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda.

    PubMed

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne Mq; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

Accurate, sensitive, rapid, and easy operative diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip with 20,944 microchambers (105 μm width and 50 μm depth) was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2-nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039-2.3438% by linear regression analysis (R² = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125
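
The agreement statistic quoted, R² from a linear regression of chip readings against microscopy, reduces to the squared Pearson correlation. A stdlib-only sketch with toy parasitemia values, not the study's measurements:

```python
def r_squared(x, y):
    """Coefficient of determination for simple linear regression,
    computed as the squared Pearson correlation of x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

An R² of 0.9945 means the chip readings explain 99.45% of the variance in the microscopy counts over the tested range.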

  19. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda

    PubMed Central

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I.; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne MQ; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy operative diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip with 20,944 microchambers (105 μm width and 50 μm depth) was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2-nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039–2.3438% by linear regression analysis (R2 = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125

  20. High performance electromagnetic simulation tools

    NASA Astrophysics Data System (ADS)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study, and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution is obtained in toto. This powerful simulation tool has enabled research on the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  1. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  2. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect

    SUGAMA,T.

    2007-01-01

Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed to deal with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale and impairing the plant components' function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective, hydrothermally stable, anti-corrosion, anti-oxidation, and anti-fouling materials, this would improve the power plant's economics by engendering a considerable reduction in capital investment and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  3. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases

    PubMed Central

    Rota, Jennifer S.; Hickman, Carole J.; Mercader, Sara; Redd, Susan; McNall, Rebecca J.; Williams, Nobia; McGrew, Marcia; Walls, M. Laura; Rota, Paul A.; Bellini, William J.

    2016-01-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386
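
Applying the reported ≥40,000 mIU/ml neutralizing-antibody cutoff is a one-line classification rule. A sketch with hypothetical titers (the lists below are illustrative, not the study's data):

```python
def sens_spec(titers_rics, titers_noncases, cutoff=40_000):
    """Sensitivity (fraction of reinfection cases at or above the
    cutoff) and specificity (fraction of noncases below it), with
    titers in mIU/ml."""
    tp = sum(t >= cutoff for t in titers_rics)
    tn = sum(t < cutoff for t in titers_noncases)
    return tp / len(titers_rics), tn / len(titers_noncases)
```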

  4. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases.

    PubMed

    Sowers, Sun B; Rota, Jennifer S; Hickman, Carole J; Mercader, Sara; Redd, Susan; McNall, Rebecca J; Williams, Nobia; McGrew, Marcia; Walls, M Laura; Rota, Paul A; Bellini, William J

    2016-08-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386
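
The ≥40,000 mIU/ml cutoff reported above is a simple threshold classifier; a minimal sketch of computing its sensitivity and specificity, using entirely hypothetical titer data (not the study's sera):

```python
def sens_spec(titers, is_ric, threshold=40000):
    """Sensitivity/specificity of a neutralizing-antibody cutoff for calling
    reinfection cases (RICs) among high-avidity sera.
    titers: measles neutralizing antibody concentrations in mIU/ml.
    is_ric: True for confirmed RICs, False for noncases."""
    tp = sum(1 for t, r in zip(titers, is_ric) if r and t >= threshold)
    fn = sum(1 for t, r in zip(titers, is_ric) if r and t < threshold)
    tn = sum(1 for t, r in zip(titers, is_ric) if not r and t < threshold)
    fp = sum(1 for t, r in zip(titers, is_ric) if not r and t >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical example: 4 confirmed RICs, 4 noncases
titers = [120000, 85000, 41000, 15000, 9000, 2500, 38000, 60000]
is_ric = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec(titers, is_ric)
```

On the real data this cutoff gave 90% sensitivity and 100% specificity; the toy numbers here only exercise the arithmetic.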

  5. Accurate measurement of dispersion data through short and narrow tubes used in very high-pressure liquid chromatography.

    PubMed

    Gritti, Fabrice; McDonald, Thomas; Gilar, Martin

    2015-09-01

An original method is proposed for the accurate and reproducible measurement of the time-based dispersion properties of short (L < 50 cm) and narrow (rc < 50 μm) tubes at mobile phase flow rates typically used in very high-pressure liquid chromatography (vHPLC). Such tubes are used to minimize sample dispersion in vHPLC; however, their dispersion characteristics cannot be accurately measured at such flow rates due to the system dispersion contributions of the vHPLC injector and detector. It is shown that using longer and wider tubes (>10 μL) enables a reliable measurement of the dispersion data. We confirmed that the dimensionless plot of the reduced dispersion coefficient versus the reduced linear velocity (Peclet number) depends on the aspect ratio, L/rc, of the tube, and unexpectedly also on the diffusion coefficient of the analyte. This dimensionless plot could be easily obtained for a large-volume tube which has the same aspect ratio as that of the short and narrow tube, and for the same diffusion coefficient. The dispersion data for the small-volume tube are then directly extrapolated from this plot. For instance, it is found that the maximum volume variances of 75 μm × 30.5 cm and 100 μm × 30.5 cm prototype finger-tightened connecting tubes are 0.10 and 0.30 μL(2), respectively, with an accuracy of a few percent and a precision smaller than seven percent.
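
For orientation, the quantities in this abstract follow standard open-tube definitions; a small sketch (our assumptions, not the authors' code) of the reduced linear velocity and the time-to-volume variance conversion:

```python
import math

def reduced_velocity(flow_m3_s, rc_m, dm_m2_s):
    """Reduced linear velocity (Peclet number) nu = u * rc / Dm,
    with u the mean linear velocity F / (pi * rc^2)."""
    u = flow_m3_s / (math.pi * rc_m ** 2)
    return u * rc_m / dm_m2_s

def volume_variance(time_variance_s2, flow_m3_s):
    """Convert a time-based peak variance to a volume variance:
    sigma_V^2 = F^2 * sigma_t^2."""
    return flow_m3_s ** 2 * time_variance_s2

# 100 um i.d. tube (rc = 50 um) at 0.5 mL/min; small-molecule Dm ~ 1e-9 m^2/s
flow = 0.5e-6 / 60.0  # m^3/s
pe = reduced_velocity(flow, 50e-6, 1e-9)  # ~5.3e4, well into the vHPLC regime
```

The flow rate and diffusion coefficient here are illustrative values, not the paper's measurement conditions.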

  6. Multi-stencils fast marching methods: a highly accurate solution to the eikonal equation on cartesian domains.

    PubMed

    Hassouna, M Sabry; Farag, A A

    2007-09-01

A wide range of computer vision applications require an accurate solution of a particular Hamilton-Jacobi (HJ) equation, known as the Eikonal equation. In this paper, we propose an improved version of the fast marching method (FMM) that is highly accurate for both 2D and 3D Cartesian domains. The new method is called multi-stencils fast marching (MSFM), which computes the solution at each grid point by solving the Eikonal equation along several stencils and then picking the solution that satisfies the upwind condition. The stencils are centered at each grid point and cover all of its nearest neighbors. In 2D space, 2 stencils cover the 8-neighbors of the point, while in 3D space, 6 stencils cover its 26-neighbors. For those stencils that are not aligned with the natural coordinate system, the Eikonal equation is derived using directional derivatives and then solved using higher order finite difference schemes. The accuracy of the proposed method over state-of-the-art FMM-based techniques has been demonstrated through comprehensive numerical experiments.
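
For context, the baseline single-stencil FMM that MSFM improves on can be sketched as a Dijkstra-like sweep with a first-order upwind quadratic update on the 4-neighbor stencil (a simplified illustration, not the authors' multi-stencil code):

```python
import heapq
import numpy as np

def fast_marching_2d(speed, src):
    """Classic single-stencil fast marching for |grad T| = 1/speed
    on a unit-spacing 2D grid, from a single source point."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True  # accept the smallest tentative value
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < ny and 0 <= nj < nx) or frozen[ni, nj]:
                continue
            # upwind (smallest) neighbor value along each axis
            tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                     T[ni, nj + 1] if nj < nx - 1 else np.inf)
            ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                     T[ni + 1, nj] if ni < ny - 1 else np.inf)
            f = 1.0 / speed[ni, nj]
            a, b = sorted((tx, ty))
            # solve (T-a)^2 + (T-b)^2 = f^2, falling back to the 1D update
            if b - a >= f:
                t_new = a + f
            else:
                t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (b - a) ** 2))
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T

# unit speed, point source at a corner: T approximates Euclidean distance,
# with the diagonal error this single-stencil scheme is known for
T = fast_marching_2d(np.ones((8, 8)), (0, 0))
```

MSFM's contribution is to repeat the update along rotated stencils (covering the 8-neighbors in 2D) and keep the upwind-consistent solution, which shrinks exactly the diagonal error visible in this baseline.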

  7. Highly accurate quantification of hydroxyproline-containing peptides in blood using a protease digest of stable isotope-labeled collagen.

    PubMed

    Taga, Yuki; Kusubata, Masashi; Ogawa-Goto, Kiyoko; Hattori, Shunji

    2014-12-17

    Collagen-derived hydroxyproline (Hyp)-containing dipeptides and tripeptides, which are known to possess physiological functions, appear in blood at high concentrations after oral ingestion of gelatin hydrolysate. However, highly accurate and sensitive quantification of the Hyp-containing peptides in blood has been challenging because of the analytical interference from numerous other blood components. We recently developed a stable isotope-labeled collagen named "SI-collagen" that can be used as an internal standard in various types of collagen analyses employing liquid chromatography-mass spectrometry (LC-MS). Here we prepared stable isotope-labeled Hyp-containing peptides from SI-collagen using trypsin/chymotrypsin and plasma proteases by mimicking the protein degradation pathways in the body. With the protease digest of SI-collagen used as an internal standard mixture, we achieved highly accurate simultaneous quantification of Hyp and 13 Hyp-containing peptides in human blood by LC-MS. The area under the plasma concentration-time curve of Hyp-containing peptides ranged from 0.663 ± 0.022 nmol/mL·h for Pro-Hyp-Gly to 163 ± 1 nmol/mL·h for Pro-Hyp after oral ingestion of 25 g of fish gelatin hydrolysate, and the coefficient of variation of three separate measurements was <7% for each peptide except for Glu-Hyp-Gly, which was near the detection limit. Our method is useful for absorption/metabolism studies of the Hyp-containing peptides and development of functionally characterized gelatin hydrolysate.

  8. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must. In this setting, users and developers depend on an entire distribution, possibly involving multiple compilers and special instructions for the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  9. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

The intermediate term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.

  10. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  11. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  12. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

An argon cluster ion sputtering source has been demonstrated to perform better than traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously with argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  13. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.

  14. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827
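
The study's deformation-error metric (magnitude of the vector difference between known and predicted deformation fields) is straightforward to compute; a minimal numpy sketch over hypothetical 4×4×4 displacement fields:

```python
import numpy as np

def deformation_error(known, predicted):
    """Per-voxel deformation error: magnitude of the vector difference
    between a known (applied) and a predicted deformation field.
    Fields have shape (..., 3) holding (dx, dy, dz) per voxel."""
    return np.linalg.norm(known - predicted, axis=-1)

# hypothetical fields: predicted differs from known by a uniform 1 mm x-offset
known = np.zeros((4, 4, 4, 3))
predicted = known.copy()
predicted[..., 0] += 1.0
err = deformation_error(known, predicted)  # every voxel off by exactly 1 mm
```

The mean of `err` over the volume is the quantity whose stabilization at three soft tissue levels the study reports.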

  15. Accurate High-Resolution Measurements of 3-D Tissue Dynamics With Registration-Enhanced Displacement Encoded MRI

    PubMed Central

    Merchant, Samer S.; Hsu, Edward W.

    2014-01-01

    Displacement fields are important to analyze deformation, which is associated with functional and material tissue properties often used as indicators of health. Magnetic resonance imaging (MRI) techniques like DENSE and image registration methods like Hyperelastic Warping have been used to produce pixel-level deformation fields that are desirable in high-resolution analysis. However, DENSE can be complicated by challenges associated with image phase unwrapping, in particular offset determination. On the other hand, Hyperelastic Warping can be hampered by low local image contrast. The current work proposes a novel approach for measuring tissue displacement with both DENSE and Hyperelastic Warping, incorporating physically accurate displacements obtained by the latter to improve phase characterization in DENSE. The validity of the proposed technique is demonstrated using numerical and physical phantoms, and in vivo small animal cardiac MRI. PMID:24771572

  16. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  17. Accurate Transmittance Measurements of Thick, High-Index, High- Dispersion, IR Windows, Using a Fourier Transform IR Spectrometer

    NASA Astrophysics Data System (ADS)

    Kupferberg, Lenn C.

    1996-03-01

Fourier transform IR [FT-IR] spectrometers have virtually replaced scanned grating IR spectrometers in the commercial market. While FT-IR spectrometers have been a boon for the chemist, they present problems for the measurement of transmittance of thick, high-index, high-dispersion IR windows. Reflection and refraction of light by the windows introduce measurement errors. The principles of the FT-IR spectrometer will be briefly reviewed. The origins of the measurement errors will be discussed. Simple modifications to the operation of commercially available instruments will be presented. These include using strategically placed apertures and the use of collimated vs. focused beams at the sample position. They are essential for removing the effects of reflected light entering the interferometer and limiting the divergence angle of light in the interferometer. The latter minimizes refractive effects and ensures consistent underfilling of the detector. Data will be shown from FT-IR spectrometers made by four manufacturers and compared to measurements from a dispersive spectrometer.

  18. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  19. Highly accurate quartic force fields, vibrational frequencies, and spectroscopic constants for cyclic and linear C3H3(+).

    PubMed

    Huang, Xinchuan; Taylor, Peter R; Lee, Timothy J

    2011-05-19

    High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C(3)H(3)(+) molecular cation, referred to as c-C(3)H(3)(+) and l-C(3)H(3)(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants by use of both vibrational second-order perturbation theory and variational methods to solve the nuclear Schrödinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C(3)H(3)(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C(3)H(3)(+) and l-C(3)H(3)(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations. PMID:21510653

  20. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  1. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  2. Properties Of High-Performance Thermoplastics

    NASA Technical Reports Server (NTRS)

    Johnston, Norman J.; Hergenrother, Paul M.

    1992-01-01

    Report presents review of principal thermoplastics (TP's) used to fabricate high-performance composites. Sixteen principal TP's considered as candidates for fabrication of high-performance composites presented along with names of suppliers, Tg, Tm (for semicrystalline polymers), and approximate maximum processing temperatures.

  3. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  4. LiF TLD-100 as a Dosimeter in High Energy Proton Beam Therapy-Can It Yield Accurate Results?

    SciTech Connect

Zullo, John R.; Kudchadker, Rajat J.; Zhu, X. Ronald; Sahoo, Narayan; Gillin, Michael T.

    2010-04-01

In the region of high-dose gradients at the end of the proton range, the stopping power ratio of the protons undergoes significant changes, allowing for a broad spectrum of proton energies to be deposited within a relatively small volume. Because of the potential linear energy transfer dependence of LiF TLD-100 (thermoluminescent dosimeter), dose measurements made in the distal fall-off region of a proton beam may be less accurate than those made in regions of low-dose gradients. The purpose of this study is to determine the accuracy and precision of dose measured using TLD-100 for a pristine Bragg peak, particularly in the distal fall-off region. All measurements were made along the central axis of an unmodulated 200-MeV proton beam from a Probeat passive beam-scattering proton accelerator (Hitachi, Ltd., Tokyo, Japan) at varying depths along the Bragg peak. Measurements were made using TLD-100 powder flat packs placed in a virtual water slab phantom. The measurements were repeated using a parallel plate ionization chamber. The dose measurements using TLD-100 in a proton beam were accurate to within ±5.0% of the expected dose, as previously seen in our past photon and electron measurements. The ionization chamber and the TLD relative dose measurements agreed well with each other. Absolute dose measurements using TLD agreed with ionization chamber measurements to within ±3.0 cGy for an exposure of 100 cGy. In our study, the differences in the dose measured by the ionization chamber and those measured by TLD-100 were minimal, indicating that the accuracy and precision of measurements made in the distal fall-off region of a pristine Bragg peak are within the expected range. Thus, the rapid change in stopping power ratios at the end of the range should not affect such measurements, and TLD-100 may be used with confidence as an in vivo dosimeter for proton beam therapy.

  5. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    NASA Astrophysics Data System (ADS)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
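
The abstract does not spell out the adaptive algorithm, so the following is only a generic closed-loop sketch of gate-voltage-adjusted programming against a toy proportional device model; every name, constant, and the feedback gain are hypothetical:

```python
def program_cell(read_conductance, apply_pulse, target_g, tol=0.005, max_pulses=100):
    """Generic closed-loop sketch: apply a pulse, read back the conductance,
    and adjust the gate voltage until within `tol` (relative) of target_g."""
    v_gate = 0.8  # arbitrary starting gate voltage
    for pulse in range(1, max_pulses + 1):
        apply_pulse(v_gate)
        g = read_conductance()
        err = (g - target_g) / target_g
        if abs(err) <= tol:
            return pulse, g
        # more SET current if under target, less if over (gain of 0.5 is arbitrary)
        v_gate *= 1.0 - 0.5 * err
    return max_pulses, read_conductance()

# toy device model, purely illustrative: SET conductance tracks gate voltage
state = {"g": 0.0}
def apply_pulse(v_gate):
    state["g"] = 100e-6 * v_gate  # siemens
def read_conductance():
    return state["g"]

pulses, g = program_cell(read_conductance, apply_pulse, target_g=50e-6)
```

On a real 1T1R array the read-back and pulse steps would go through instrument drivers, and the update rule would be tuned to the device's switching dynamics rather than this proportional toy.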

  6. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

The advent of high spatial resolution satellite imagery like QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm that has been implemented in a high performance computing environment to extract urban regions from high resolution images using texture and spectral features.

  7. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  8. Accurate and High-Coverage Immune Repertoire Sequencing Reveals Characteristics of Antibody Repertoire Diversification in Young Children with Malaria

    NASA Astrophysics Data System (ADS)

    Jiang, Ning

Accurately measuring the immune repertoire sequence composition, diversity, and abundance is important in studying repertoire response in infections, vaccinations, and cancer immunology. Using molecular identifiers (MIDs) to tag mRNA molecules is an effective method of improving the accuracy of immune repertoire sequencing (IR-seq). However, it is still difficult to use IR-seq on small amounts of clinical samples to achieve high coverage of the repertoire diversities. This is especially challenging in studying infections and vaccinations, where B cell subpopulations with fewer cells, such as memory B cells or plasmablasts, are often of great interest for studying somatic mutation patterns and diversity changes. Here, we describe an approach to IR-seq based on the use of MIDs in combination with a clustering method that can reveal more than 80% of the antibody diversity in a sample and can be applied to as few as 1,000 B cells. We applied this to study the antibody repertoires of young children before and during an acute malaria infection. We discovered unexpectedly high levels of somatic hypermutation (SHM) in infants and revealed characteristics of antibody repertoire development in young children that would have a profound impact on immunization in children.

  9. Highly Accurate Semi-Empirical IR Line Lists of Asymmetric SO2 Isotopologues: SO18O and SO17O

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2015-12-01

Atmosphere models and simulations of Venus, Mars, and exoplanets will greatly benefit from complete and accurate infrared spectral data for important molecules such as SO2 and CO2. Currently, high resolution spectral data for SO2 are very limited at 296 K and mainly cover the primary isotopologue, 626. This cannot effectively support the analysis of observed data or simulations. Recently we published a semi-empirically refined potential energy surface, denoted Ames-1, and Ames-296K IR line lists for SO2 626 and a few symmetric isotopologues including 646, 636, 666 and 828. The accuracy of line positions is around 0.01-0.03 cm-1 for most transitions. For intensities, most deviations are less than 5-15%. Now we have carried out new potential energy surface refinements by including the latest experimental data, including those of the isotopologues. On the newly fitted surface, we have for the first time computed 296 K line lists for the two most abundant asymmetric isotopologues, SO2 628 and SO2 627. We will present the spectral simulations of SO2 628 and SO2 627 and compare them with the latest high resolution experimental spectroscopy of SO2 628. A composite "natural" line list at 296 K with terrestrial abundances is also available. These line lists will be available to download at http://huang.seti.org.

  10. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

    Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumors is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in precisely delineating the full tumor extent. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator-dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add information on tumor grade, extent, and prognosis, based on the premise that biochemical changes precede anatomic changes. Combined PET/MRS is a technique that uses information from PET to guide the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy but, when repeated with PET guidance, demonstrated an elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.

  11. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated with the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate on continually improving the energy efficiency and durability of new houses.

  12. SEMICONDUCTOR INTEGRATED CIRCUITS: Accurate metamodels of device parameters and their applications in performance modeling and optimization of analog integrated circuits

    NASA Astrophysics Data System (ADS)

    Tao, Liang; Xinzhang, Jia; Junfeng, Chen

    2009-11-01

    Techniques for constructing metamodels of device parameters at BSIM3v3 level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on an analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the metamodel construction algorithm, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well-tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical performance expressions composed of the OPD metamodels show clear improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nested-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier.

  13. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High-temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity and low-activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of a high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advancements in plasma control and scrape-off-layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical need is for 14 MeV neutron irradiation facilities to generate the necessary engineering design data and to predict FW/blanket component lifetime and availability.

  14. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations (IRDs), optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, and reproducibility, as well as the clinical sensitivity, of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that, based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400
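    The sensitivity and specificity figures quoted in this record follow from the standard confusion-matrix definitions; a minimal sketch (the variant counts below are hypothetical, for illustration only, not the study's data):

```python
def sensitivity(true_pos, false_neg):
    # Fraction of real variants the test detects (true positive rate).
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of non-variant sites correctly called negative (true negative rate).
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 979 of 1,000 known variants detected, no false positives.
print(sensitivity(979, 21))   # 0.979
print(specificity(5000, 0))   # 1.0
```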

  15. FTS Studies of the 17O Enriched Isotopologues of CO_2 Toward Creating a Complete and Highly Accurate Reference Standard

    NASA Astrophysics Data System (ADS)

    Elliott, Ben; Sung, Keeyoon; Brown, Linda; Miller, Charles

    2014-06-01

    The proliferation and increased capabilities of remote sensing missions for monitoring planetary atmospheric gas species have spurred the need for complete and accurate spectroscopic reference standards. As part of our ongoing effort toward creating a global carbon dioxide (CO2) frequency reference standard, we report new FTS measurements of the 17O enriched isotopologues of CO2. The first measurements, taken in the ν3 region (2200 - 2450 cm-1, 65 - 75 THz), have absolute calibration accuracies of 100 kHz (3E-6 cm-1), comparable to the uncertainties of typical sub-millimeter/THz spectroscopy. Such high absolute calibration accuracy has become routine for linear molecules such as CO2 and CO in FTS measurements at JPL, and enables us to produce measured transition frequencies for entire bands with accuracies that rival those of early heterodyne measurements of individual beat notes. Additionally, by acquiring spectra of multiple carbon dioxide isotopologues simultaneously, we have begun to construct a self-consistent frequency grid based on CO2 that extends from 20 - 200 THz. These new spectroscopic reference standards are a significant step toward minimizing CO2 retrieval errors in remote sensing applications, especially those involving targets with predominantly CO2 atmospheres such as Mars, Venus and candidate terrestrial exoplanets, where minor isotopologues make significant contributions to the radiance signals.
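    The calibration figures quoted above can be cross-checked with the standard wavenumber-to-frequency conversion (1 cm-1 corresponds to c ≈ 29.9792458 GHz); a quick sketch of that unit conversion:

```python
# Speed of light gives the conversion factor: 1 cm^-1 = 29.9792458 GHz.
HZ_PER_WAVENUMBER = 29.9792458e9

def wavenumber_to_hz(nu_cm):
    """Convert a wavenumber in cm^-1 to a frequency in Hz."""
    return nu_cm * HZ_PER_WAVENUMBER

# 3e-6 cm^-1 is roughly the 100 kHz accuracy quoted in the abstract.
print(wavenumber_to_hz(3e-6))         # ~9.0e4 Hz
# The nu3 band edges, 2200 and 2450 cm^-1, fall in the quoted 65-75 THz range.
print(wavenumber_to_hz(2200) / 1e12)  # ~66 THz
```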

  16. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  17. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background Expression of programmed death-ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may offer a less invasive way to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs among the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in the epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The proportion of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33-100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells, at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria.

  18. System analysis of high performance MHD systems

    SciTech Connect

    Chang, S.L.; Berry, G.F.; Hu, N.

    1988-01-01

    This paper presents the results of an investigation into the upper ranges of performance that an MHD power plant using advanced technology assumptions might achieve, together with a parametric study of the key variables affecting this high performance. To simulate a high performance MHD power plant and conduct the parametric study, the Systems Analysis Language Translator (SALT) code developed at Argonne National Laboratory was used. The parametric study results indicate that the overall efficiency of an MHD power plant can be further increased through improvement of key variables such as the MHD generator inverter efficiency, channel electrical loading factor, magnetic field strength, preheated air temperature, and combustor heat loss. In an optimization calculation, the simulated high performance MHD power plant using advanced technology assumptions can attain an ultra-high overall efficiency exceeding 62%. 12 refs., 5 figs., 4 tabs.

  19. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter (six microns) was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. The performance of this fiber as a reinforcement for composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of applications in space structures, industrial fields, sporting goods and civil infrastructure.

  20. High-resolution accurate mass measurements of biomolecules using a new electrospray ionization ion cyclotron resonance mass spectrometer.

    PubMed

    Winger, B E; Hofstadler, S A; Bruce, J E; Udseth, H R; Smith, R D

    1993-07-01

    A novel electrospray ionization/Fourier transform ion cyclotron resonance mass spectrometer based on a 7-T superconducting magnet was developed for high-resolution accurate mass measurements of large biomolecules. Ions formed at atmospheric pressure using electrospray ionization (ESI) were transmitted (through six differential pumping stages) to the trapped ion cell maintained below 10(-9) torr. The increased pumping speed attainable with cryopumping (> 10(5) L/s) allowed brief pressure excursions to above 10(-4) torr, with greatly enhanced trapping efficiencies and subsequently short pumpdown times, facilitating high-resolution mass measurements. A set of electromechanical shutters was also used to minimize the effect of the directed molecular beam produced by the ESI source; the shutters were open only during ion injection. Coupled with the use of the pulsed-valve gas inlet, the trapped ion cell was generally filled to the space charge limit within 100 ms. The use of 10-25 ms ion injection times allowed mass spectra to be obtained from 4 fmol of bovine insulin (Mr 5734) and ubiquitin (Mr 8565), with resolution sufficient to easily resolve the isotopic envelopes and determine the charge states. The microheterogeneity of the glycoprotein ribonuclease B was examined, giving a measured mass of 14,898.74 Da for the most abundant peak in the isotopic envelope of the normally glycosylated protein (i.e., with five mannose and two N-acetylglucosamine residues), an error of approximately 2 ppm, and an average error of approximately 1 ppm for the more highly glycosylated and various H3PO4-adducted forms of the protein. Time-domain signals lasting in excess of 80 s were obtained for smaller proteins, producing, for example, a mass resolution of more than 700,000 for the 4(+) charge state (m/z 1434) of insulin. PMID:24227643
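    The ~1-2 ppm errors cited above are relative mass errors; a minimal sketch of that calculation (the masses below are illustrative round numbers, not values from the study):

```python
def ppm_error(measured_da, reference_da):
    # Relative mass error in parts per million.
    return (measured_da - reference_da) / reference_da * 1e6

# Illustrative: a 0.001 Da error on a 500 Da species is a 2 ppm error.
print(ppm_error(500.001, 500.000))  # ~2.0
```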

  1. A standardized framework for accurate, high-throughput genotyping of recombinant and non-recombinant viral sequences.

    PubMed

    Alcantara, Luiz Carlos Junior; Cassol, Sharon; Libin, Pieter; Deforche, Koen; Pybus, Oliver G; Van Ranst, Marc; Galvão-Castro, Bernardo; Vandamme, Anne-Mieke; de Oliveira, Tulio

    2009-07-01

    Human immunodeficiency virus type-1 (HIV-1), hepatitis B and C and other rapidly evolving viruses are characterized by extremely high levels of genetic diversity. To facilitate diagnosis and the development of prevention and treatment strategies that efficiently target the diversity of these viruses, and other pathogens such as human T-lymphotropic virus type-1 (HTLV-1), human herpes virus type-8 (HHV8) and human papillomavirus (HPV), we developed a rapid, high-throughput genotyping system. The method involves the alignment of a query sequence with a carefully selected set of pre-defined reference strains, followed by phylogenetic analysis of multiple overlapping segments of the alignment using a sliding window. Each segment of the query sequence is assigned the genotype and sub-genotype of the reference strain with the highest bootstrap (>70%) and bootscanning (>90%) scores. Results from all windows are combined and displayed graphically using color-coded genotypes. The new Virus-Genotyping Tools provide accurate classification of recombinant and non-recombinant viruses and are currently being assessed for their diagnostic utility. They have been incorporated into several HIV drug resistance algorithms, including the Stanford (http://hivdb.stanford.edu) and two European databases (http://www.umcutrecht.nl/subsite/spread-programme/ and http://www.hivrdb.org.uk/), and have been successfully used to genotype a large number of sequences in these and other databases. The tools are a PHP/JAVA web application and are freely accessible on a number of servers including: http://bioafrica.mrc.ac.za/rega-genotype/html/, http://lasp.cpqgm.fiocruz.br/virus-genotype/html/, http://jose.med.kuleuven.be/genotypetool/html/.
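    The sliding-window assignment described above can be sketched as follows. This is not the tools' implementation: `score_window` stands in for the phylogenetic bootstrap/bootscanning step, and all names and parameters are hypothetical; only the window/threshold logic follows the abstract.

```python
def genotype_query(query_len, window, step, score_window,
                   boot_thresh=70, bootscan_thresh=90):
    """Assign a genotype to each sliding window of a query sequence.

    score_window(start, end) is assumed to return the best-matching
    reference genotype plus its bootstrap and bootscanning scores.
    """
    calls = []
    for start in range(0, query_len - window + 1, step):
        genotype, bootstrap, bootscan = score_window(start, start + window)
        if bootstrap > boot_thresh and bootscan > bootscan_thresh:
            calls.append((start, genotype))
        else:
            calls.append((start, "unassigned"))  # below support thresholds
    return calls

# Stub scorer for illustration: every window supports subtype "B" with
# high support, so all five windows of a 300-nt query are assigned "B".
calls = genotype_query(300, 100, 50, lambda s, e: ("B", 95, 99))
print(calls)
```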

  2. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  3. Highly accurate isotope composition measurements by a miniature laser ablation mass spectrometer designed for in situ investigations on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Riedo, A.; Meyer, S.; Heredia, B.; Neuland, M. B.; Bieler, A.; Tulej, M.; Leya, I.; Iakovleva, M.; Mezger, K.; Wurz, P.

    2013-10-01

    An experimental procedure for precise and accurate measurements of isotope abundances with a miniature laser ablation mass spectrometer for space research is described. The measurements were conducted on untreated NIST standards and galena samples by applying pulsed UV laser radiation (266 nm, 3 ns, 20 Hz) for ablation, atomisation, and ionisation of the sample material. Mass spectra of the released ions are measured by a reflectron-type time-of-flight mass analyser. A computer-controlled performance optimiser was used to operate the system at maximum ion transmission and mass resolution. Under optimal experimental conditions, the best relative accuracy and precision achieved for Pb isotope compositions are at the per mill level, obtained within a range of applied laser irradiances and a defined number of accumulated spectra. A similar relative accuracy and precision was achieved in the study of Pb isotope compositions in terrestrial galena samples; the results for the galena samples are similar to those obtained with a thermal ionisation mass spectrometer (TIMS). Studies of the isotope composition of other elements also yielded relative accuracy and precision at the per mill level, with characteristic instrument parameters for each element. The relative accuracy and precision of the measurements degrade with decreasing element/isotope concentration in a sample; for elements with abundances below 100 ppm these values drop to the percent level. Depending on the isotopic abundances of Pb in minerals, 207Pb/206Pb ages with accuracies in the range of tens of millions of years can be achieved.

  4. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm is proposed in this study for improving the performance of the local sine-wave modeling (SinMod) method, a good motion estimation method for tagged cardiac magnetic resonance (MR) images. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, thanks to two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod.
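    The mean-shift step at the heart of RACE can be illustrated on a one-dimensional magnitude spectrum. This sketch (flat-kernel mean shift on synthetic data) is not the authors' implementation; it only shows how a rough frequency guess is iteratively shifted toward a local spectral peak.

```python
import math

def mean_shift_peak(freqs, mags, start, bandwidth, iters=100, tol=1e-9):
    """Shift an initial frequency estimate toward the local spectral peak."""
    x = start
    for _ in range(iters):
        num = den = 0.0
        for f, m in zip(freqs, mags):
            if abs(f - x) <= bandwidth:  # flat (uniform) window kernel
                num += m * f
                den += m
        if den == 0.0:
            break  # no spectral mass inside the window
        x_next = num / den  # magnitude-weighted mean inside the window
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

# Synthetic spectrum with a single peak at frequency 5.0 (arbitrary units);
# starting from a rough guess of 4.0, the estimate converges near the peak.
freqs = [i * 0.1 for i in range(101)]
mags = [math.exp(-(f - 5.0) ** 2) for f in freqs]
print(mean_shift_peak(freqs, mags, start=4.0, bandwidth=2.0))
```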

  5. Evaluation of high-definition television for remote task performance

    SciTech Connect

    Draper, J.V.; Fujita, Y.; Herndon, J.N.

    1987-04-01

    High-definition television (HDTV) transmits a video image with more than twice the number of horizontal scan lines of standard-resolution TV (1,125 for HDTV vs. 525 for standard TV). The improvement in picture quality that the extra scan lines provide is impressive: objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate reproduction of shading and color patterns than those in the standard-resolution TV picture. Because the TV viewing system is a key component of teleoperator performance, an improvement in TV picture quality could mean an improvement in the speed and accuracy with which teleoperators perform tasks. This report describes three experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. HDTV was compared to standard-resolution monochromatic TV and standard-resolution stereoscopic monochromatic TV in the context of judging depth in a televised scene, visually inspecting an object, and performing a typical remote handling task. The results of the three experiments show that in some areas HDTV can improve teleoperator performance: observers inspecting a small object for a flaw were more accurate with HDTV than with either of the standard-resolution systems. High resolution is critical for detecting small-scale flaws of the type used in the experiment (a scratch on a glass bottle). These experiments provided an evaluation of HDTV for tasks that must be routinely performed to remotely maintain a nuclear fuel reprocessing facility. 5 refs., 7 figs., 9 tabs.

  6. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  7. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  8. Appraisal of Artificial Screening Techniques of Tomato to Accurately Reflect Field Performance of the Late Blight Resistance

    PubMed Central

    Nowakowska, Marzena; Nowicki, Marcin; Kłosińska, Urszula; Maciorowski, Robert; Kozik, Elżbieta U.

    2014-01-01

    Late blight (LB), caused by the oomycete Phytophthora infestans, continues to thwart global tomato production, while only a few resistant cultivars have been introduced locally. In order to benefit from the released tomato germplasm with LB resistance, we compared the 5-year field performance of LB resistance in several tomato cultigens with the results of controlled-conditions testing (i.e., detached leaflet/leaf, whole plant). For these artificial screening techniques, the effects of plant age and inoculum concentration were additionally considered. In the field trials, LA 1033, L 3707, and L 3708 displayed the highest LB resistance and could be used for cultivar development under Polish conditions. Of the three methods using controlled conditions, the detached leaf and the whole plant tests had the highest correlation with the field experiments. The plant age effect on LB resistance in tomato reported here, irrespective of the cultigen tested or inoculum concentration used, makes it important to standardize test parameters when screening for resistance. Our results help explain why other reports disagree on LB resistance in tomato. PMID:25279467

  9. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
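    The direct SRP term discussed above is often written, in its simplest "cannonball" form, as a = P⊙ · C_R · (A/m) · (1 AU/r)². The paper's detailed satellite macro model goes well beyond this, but the cannonball form gives the order of magnitude; the spacecraft values below are illustrative, not TerraSAR-X's actual properties.

```python
# Solar radiation pressure at 1 AU, derived from the solar constant / c.
P_SUN = 4.56e-6  # N/m^2

def srp_accel(c_r, area_m2, mass_kg, r_au=1.0):
    """Cannonball-model SRP acceleration magnitude in m/s^2."""
    return P_SUN * c_r * (area_m2 / mass_kg) / r_au ** 2

# Illustrative values: C_R = 1.3, 10 m^2 cross-section, 1000 kg spacecraft.
print(srp_accel(1.3, 10.0, 1000.0))  # ~5.9e-8 m/s^2
```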

  10. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ˜5 × 1010 erg cm-2 s-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ˜5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  11. Development and clinical evaluation of a highly accurate dengue NS1 rapid test: from the preparation of a soluble NS1 antigen to the construction of an RDT.

    PubMed

    Lee, Jihoo; Kim, Hak-Yong; Chong, Chom-Kyu; Song, Hyun-Ok

    2015-06-01

Early diagnosis of dengue virus (DENV) is important. There are numerous products on the market claiming to detect DENV NS1, but these are not always reliable. In this study, a highly sensitive and accurate rapid diagnostic test (RDT) was developed using anti-dengue NS1 monoclonal antibodies. A recombinant NS1 protein was produced with high antigenicity and purity. Monoclonal antibodies were raised against this purified NS1 antigen. The RDT was constructed using a capturing antibody (4A6A10, Kd = (7.512 ± 0.419) × 10^-9) and a conjugating antibody (3E12E6, Kd = (7.032 ± 0.322) × 10^-9). The diagnostic performance was evaluated with NS1-positive clinical samples collected from various dengue endemic countries and compared to the SD BioLine Dengue NS1 Ag kit. The constructed RDT exhibited higher sensitivity (92.9%) and clearer diagnostic performance than the commercial kit (83.3%); its specificity was 100%. The constructed RDT could offer a reliable point-of-care testing tool for the early detection of dengue infections in remote areas and contribute to the control of dengue-related diseases. PMID:25824725
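The sensitivity and specificity figures quoted above follow the standard definitions over a confusion matrix. The counts below are hypothetical, chosen only to reproduce a 92.9% sensitivity, and are not the study's actual data:

```python
def sensitivity(tp, fn):
    """Fraction of true positives among all actually positive samples."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives among all actually negative samples."""
    return tn / (tn + fp)

# Hypothetical counts: 92.9% sensitivity corresponds to e.g.
# 78 true positives out of 84 confirmed-positive samples.
print(round(100 * sensitivity(tp=78, fn=6), 1))   # 92.9
print(round(100 * specificity(tn=50, fp=0), 1))   # 100.0
```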

  13. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-01

Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
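The closed-loop idea behind such adaptive programming can be sketched as: pulse, read the conductance, and ramp the gate voltage until the cell lands within tolerance of the target. This is a minimal sketch of the general pattern, not the paper's algorithm; the toy cell model and all constants are assumptions:

```python
def program_cell(read, apply_pulse, target, tol=0.005, max_pulses=100):
    """Closed-loop multi-level programming sketch (assumed logic):
    apply a SET pulse, read conductance, and step the transistor gate
    voltage until within `tol` (relative) of the target conductance."""
    v_gate = 0.5                           # starting gate voltage, arbitrary
    for n in range(1, max_pulses + 1):
        apply_pulse(v_gate)
        g = read()
        if abs(g - target) / target <= tol:
            return n, g                    # pulses used, final conductance
        v_gate += 0.05 if g < target else -0.05
    return max_pulses, read()

class ToyCell:
    """Idealized cell: each SET pulse drives conductance to v_gate * 50 uS.
    A real memristor is stochastic and history-dependent."""
    def __init__(self):
        self.g = 0.0
    def pulse(self, v_gate):
        self.g = v_gate * 50e-6
    def read(self):
        return self.g

cell = ToyCell()
pulses, g = program_cell(cell.read, cell.pulse, target=40e-6)
print(pulses, g)   # converges in a handful of pulses
```

A real controller would use finer, variability-aware voltage steps; the loop structure is the point here.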

  15. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    NASA Astrophysics Data System (ADS)

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the transmission line be powered off, which complicates operation and causes power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without powering off the line. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and the clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves an accuracy of up to the 0.05 class.
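The 0.05-class figure above refers to an accuracy class for instrument transformers. As a simplified illustration, the ratio error can be computed from primary current, secondary current, and rated ratio, and checked against the class limit; the real standard also bounds phase error and specifies limits at several load points, which are omitted here:

```python
def ratio_error_percent(i_primary, i_secondary, rated_ratio):
    """Ratio error of a current transformer in percent (standard definition):
    100 * (K_n * I_s - I_p) / I_p."""
    return 100.0 * (rated_ratio * i_secondary - i_primary) / i_primary

def within_class(err_percent, class_limit=0.05):
    """Simplified class check: ratio error within +/- class_limit percent."""
    return abs(err_percent) <= class_limit

# Hypothetical reading: 1000 A primary, 1.0002 A secondary, 1000:1 ratio.
err = ratio_error_percent(i_primary=1000.0, i_secondary=1.0002,
                          rated_ratio=1000.0)
print(round(err, 3), within_class(err))   # 0.02 True
```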

  18. Simulating the Cranfield geological carbon sequestration project with high-resolution static models and an accurate equation of state

    DOE PAGES

    Soltanian, Mohamad Reza; Amooie, Mohammad Amin; Cole, David R.; Graham, David E.; Hosseini, Seyyed Abolfazl; Hovorka, Susan; Pfiffner, Susan M.; Phelps, Tommy Joe; Moortgat, Joachim

    2016-10-11

In this study, a field-scale carbon dioxide (CO2) injection pilot project was conducted as part of the Southeast Regional Sequestration Partnership (SECARB) at Cranfield, Mississippi. We present higher-order finite element simulations of the compositional two-phase CO2-brine flow and transport during the experiment. High-resolution static models of the formation geology in the Detailed Area Study (DAS), located below the oil-water contact (brine saturated), are used to capture the impact of connected flow paths on breakthrough times in two observation wells. Phase behavior is described by the cubic-plus-association (CPA) equation of state, which takes into account the polar nature of water molecules. Parameter studies are performed to investigate the importance of Fickian diffusion, permeability heterogeneity, relative permeabilities, and capillarity. Simulation results for the pressure response in the injection well and the CO2 breakthrough times at the observation wells show good agreement with the field data. For the high injection rates and short duration of the experiment, diffusion is relatively unimportant (high Péclet numbers), while relative permeabilities have a profound impact on the pressure response. High-permeability pathways, created by fluvial deposits, strongly affect the CO2 transport and highlight the importance of properly characterizing the formation heterogeneity in future carbon sequestration projects.
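The Péclet-number argument above is a simple ratio of advective to diffusive transport, Pe = uL/D. The values below are illustrative assumptions, not the Cranfield field data, but they show how high injection rates push Pe far above 1:

```python
def peclet(velocity_m_s, length_m, diffusion_m2_s):
    """Peclet number Pe = u * L / D: advective over diffusive transport."""
    return velocity_m_s * length_m / diffusion_m2_s

# Assumed scales: Darcy velocity ~1e-5 m/s, 10 m characteristic length,
# aqueous CO2 diffusion coefficient ~2e-9 m2/s.
pe = peclet(1e-5, 10.0, 2e-9)
print(f"Pe ~ {pe:.0e}")   # ~5e4: strongly advection-dominated
```

With Pe on the order of 10^4, neglecting Fickian diffusion over the duration of the experiment is a reasonable approximation.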

  19. Side-illuminating LED luminaires with accurate projection in high uniformity and high optical utilization factor for large-area field illumination.

    PubMed

    Lo, Yi-Chien; Cai, Jhih-You; Tasi, Ming-Shiou; Tasi, Zheng-Yu; Sun, Ching-Cherng

    2014-03-10

    A novel light luminaire is proposed and experimentally analyzed, which accurately projects light into a large rectangular area to achieve uniform illumination and a high optical utilization factor at the target. Side-illuminating luminaires for large-scale illuminated area are typically set with an elevated tilt angle to enlarge the illuminated area. However, the light pattern is bent thereby reducing the uniformity and optical utilization factor at the target. In this paper, we propose an efficient and useful approach with a rotationally symmetric projection lens that is trimmed to adjust the bending effect and to form a rectangular illumination light pattern on the ground. The design concept is demonstrated and verified. Several potential applications such as highly uniform illumination with fitting shapes for sport courts are analyzed and discussed. PMID:24922246

  1. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problem of high hardware costs, one may build less expensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming the HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) into HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  2. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
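The fusion idea above (low-frequency color from the lens-based image, high-frequency detail from the hologram) can be sketched with a crude low-pass filter in place of the paper's wavelet transform. This is a much-simplified stand-in, not the DCFM method; the block averaging, array sizes, and [0, 1] intensity range are all assumptions:

```python
import numpy as np

def box_blur(img, k=4):
    """Cheap low-pass filter via local block averaging, standing in for a
    wavelet low-frequency band. Works on 2D (gray) or 3D (color) arrays."""
    h, w = img.shape[:2]
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y - k):y + k + 1,
                            max(0, x - k):x + k + 1].mean(axis=(0, 1))
    return out

def fuse(holo_gray, color_img, k=4):
    """Keep the low-frequency color content of `color_img` and add the
    high-frequency detail of the grayscale holographic image."""
    detail = holo_gray - box_blur(holo_gray, k)   # high-pass of the hologram
    base = box_blur(color_img, k)                 # low-pass of the color image
    return np.clip(base + detail[..., None], 0.0, 1.0)

# Tiny synthetic example: random stand-ins for the two input images.
rng = np.random.default_rng(0)
holo = rng.random((16, 16))          # high-resolution grayscale hologram
color = rng.random((16, 16, 3))      # color-calibrated low-magnification image
fused = fuse(holo, color)
print(fused.shape)                   # (16, 16, 3)
```

The actual method uses a wavelet decomposition, which separates detail by scale and orientation rather than with a single blur; the structure of the fusion is the same.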

  6. Performance variability of highly parallel architectures

    SciTech Connect

    Kramer, William T.C.; Ryan, Clint

    2003-05-01

    The design and evaluation of high performance computers has concentrated on increasing computational speed for applications. This performance is often measured on a well configured dedicated system to show the best case. In the real environment, resources are not always dedicated to a single task, and systems run tasks that may influence each other, so run times vary, sometimes to an unreasonably large extent. This paper explores the amount of variation seen across four large distributed memory systems in a systematic manner. It then analyzes the causes for the variations seen and discusses what can be done to decrease the variation without impacting performance.
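The kind of run-time variation studied above is commonly summarized as the coefficient of variation (stdev / mean) of repeated wall-clock timings. A minimal sketch, with a toy workload standing in for a real application:

```python
import statistics
import time

def run_time_variability(task, trials=10):
    """Time `task` repeatedly; return (coefficient of variation, mean time).
    CV = stdev / mean of the wall-clock times across trials."""
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        task()
        times.append(time.perf_counter() - t0)
    mean = statistics.mean(times)
    return statistics.stdev(times) / mean, mean

def toy_task():
    # Stand-in workload; a real study would run application benchmarks.
    sum(i * i for i in range(50_000))

cv, mean = run_time_variability(toy_task)
print(f"mean {mean * 1e3:.2f} ms, CV {100 * cv:.1f}%")
```

On a shared system, CV measured this way grows with interference from co-scheduled tasks, which is exactly the effect the paper quantifies across four large machines.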

  7. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

Recently, metal halide perovskite based solar cells, with their low raw-materials cost, great potential for simple processing and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and the moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers PCEs of 15-20%; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  8. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, A.

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
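The effect described above (more references to each item while it resides in the cache means fewer memory requests) can be demonstrated with a toy fully-associative LRU cache model. The cache geometry and access patterns below are illustrative assumptions, not the DLX configuration from the thesis:

```python
from collections import OrderedDict

def hit_rate(addresses, lines=16, line_size=8):
    """Hit rate of a tiny fully-associative LRU cache (illustrative model)."""
    cache, hits = OrderedDict(), 0
    for a in addresses:
        line = a // line_size
        if line in cache:
            hits += 1
            cache.move_to_end(line)       # mark as most recently used
        else:
            cache[line] = True
            if len(cache) > lines:
                cache.popitem(last=False) # evict least recently used line
    return hits / len(addresses)

N = 64  # N x N matrix, element address = row * N + col
row_major = [i * N + j for i in range(N) for j in range(N)]
col_major = [i * N + j for j in range(N) for i in range(N)]
print(hit_rate(row_major))   # 0.875: each fetched line is reused 8 times
print(hit_rate(col_major))   # 0.0: every access touches an evicted line
```

Row-order traversal makes 8 references per cached line and so issues one memory request per 8 accesses; column-order traversal makes one reference per line and turns every access into a memory request.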

  9. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  10. High performance stationary phases for planar chromatography.

    PubMed

    Poole, Salwa K; Poole, Colin F

    2011-05-13

    The kinetic performance of stabilized particle layers, particle membranes, and thin films for thin-layer chromatography is reviewed with a focus on how layer characteristics and experimental conditions affect the observed plate height. Forced flow and pressurized planar electrochromatography are identified as the best candidates to overcome the limited performance achieved by capillary flow for stabilized particle layers. For conventional and high performance plates band broadening is dominated by molecular diffusion at low mobile phase velocities typical of capillary flow systems and by mass transfer with a significant contribution from flow anisotropy at higher flow rates typical of forced flow systems. There are few possible changes to the structure of stabilized particle layers that would significantly improve their performance for capillary flow systems while for forced flow a number of avenues for further study are identified. New media for ultra thin-layer chromatography shows encouraging possibilities for miniaturized high performance systems but the realization of their true performance requires improvements in instrumentation for sample application and detection.
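The velocity dependence described above (molecular diffusion dominating at low mobile phase velocity, mass transfer at high velocity) is classically captured by the van Deemter equation, H = A + B/u + C·u, whose minimum fixes the optimal velocity. The coefficients below are illustrative, not measured values for any particular layer:

```python
import math

def plate_height(u, A, B, C):
    """van Deemter equation: H = A + B/u + C*u.
    A: flow anisotropy (eddy) term, B: molecular diffusion, C: mass transfer."""
    return A + B / u + C * u

def optimum(A, B, C):
    """Velocity minimizing H: u_opt = sqrt(B/C), with H_min = A + 2*sqrt(B*C)."""
    u_opt = math.sqrt(B / C)
    return u_opt, plate_height(u_opt, A, B, C)

# Illustrative coefficients (consistent units assumed).
A, B, C = 5e-6, 2e-9, 1e-3
u_opt, h_min = optimum(A, B, C)
print(u_opt, h_min)
```

Capillary flow typically runs below u_opt (the B/u term dominates), while forced flow can hold the velocity near or above the optimum, which is why it outperforms capillary development on the same layer.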

  11. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems.

  12. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  13. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  14. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  15. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
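The grouping step described in this patent abstract, merging threads whose call stacks (lists of calling-instruction addresses) are identical so that a debugger can display a few groups instead of thousands of threads, can be sketched as follows. The thread ids and addresses are invented for illustration:

```python
from collections import defaultdict

def group_threads(stacks):
    """Map each distinct call stack (tuple of addresses) to the thread ids sharing it."""
    groups = defaultdict(list)
    for tid, addrs in stacks.items():
        groups[tuple(addrs)].append(tid)
    return dict(groups)

# Hypothetical gathered data: thread id -> addresses of calling instructions.
stacks = {
    0: [0x4005a0, 0x400b10],   # healthy threads share this stack
    1: [0x4005a0, 0x400b10],
    2: [0x4005a0, 0x400c44],   # outlier stack: likely the defective thread
}

for stack, tids in group_threads(stacks).items():
    print([hex(a) for a in stack], "->", tids)
```

A small singleton group, like thread 2 here, is what stands out as a candidate defective thread.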

  16. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  17. Co-design for High Performance Computing

    NASA Astrophysics Data System (ADS)

    Rodrigues, Arun; Dosanjh, Sudip; Hemmert, Scott

    2010-09-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  18. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  19. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  20. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  1. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their own score to within 2% of the actual score, and another 1% extra credit for estimating the class mean score to within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.
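The extra-credit rule described in this abstract reduces to a simple tolerance check on each estimate. A minimal sketch; the student scores below are invented, not data from the study:

```python
def within_tolerance(estimate, actual, tol=2.0):
    """True if the estimate is within tol points of the actual score (on a 100-point exam)."""
    return abs(estimate - actual) <= tol

# Hypothetical (estimated, actual) score pairs for three students.
students = [(85, 83), (60, 72), (91, 90)]

for est, act in students:
    credit = 1 if within_tolerance(est, act) else 0
    error = est - act   # positive = overestimation
    print(f"estimated {est}, scored {act}: error {error:+d}, extra credit {credit}%")
```

The signed error computed per exam is the quantity whose trend over the semester (and relation to the actual score) the study examines.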

  2. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  3. High Performance Woven Mesh Heat Exchangers

    NASA Astrophysics Data System (ADS)

    Wirtz, Richard A.; Li, Chen; Park, Ji-Wook; Xu, Jun

    2002-07-01

    Simple-to-fabricate woven mesh structures, consisting of bonded laminates of two-dimensional plain-weave conductive screens, or three-dimensional orthogonal weaves are described. Geometric equations show that these porous matrices can be fabricated to have a wide range of porosity and a highly anisotropic thermal conductivity vector. A mathematical model of the thermal performance of such a mesh, deployed as a heat exchange surface, is developed. Measurements of pressure drop and overall heat transfer rate are reported and used with the performance model to develop correlation equations of mesh friction factor and Colburn j-factor as a function of coolant properties, mesh characteristics and flow rate through the mesh. A heat exchanger performance analysis delineates conditions where the two mesh technologies offer superior performance.
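The Colburn j-factor used in the correlations above is conventionally defined as j = St·Pr^(2/3), with Stanton number St = Nu/(Re·Pr). A sketch of that definition with illustrative dimensionless values; the actual mesh correlations developed in the paper are not reproduced here:

```python
def colburn_j(nu, re, pr):
    """Colburn j-factor from Nusselt (nu), Reynolds (re) and Prandtl (pr) numbers."""
    st = nu / (re * pr)          # Stanton number
    return st * pr ** (2.0 / 3.0)

# Illustrative values only (e.g. air-like Prandtl number).
j = colburn_j(nu=40.0, re=2000.0, pr=0.7)
print(f"Colburn j-factor = {j:.4f}")
```

Plotting j and the friction factor against Re for each mesh, as the correlations allow, is what supports the heat exchanger performance comparison mentioned at the end of the abstract.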

  4. Liquid Hybridization and Solid Phase Detection: A Highly Sensitive and Accurate Strategy for MicroRNA Detection in Plants and Animals.

    PubMed

    Li, Fosheng; Mei, Lanju; Zhan, Cheng; Mao, Qiang; Yao, Min; Wang, Shenghua; Tang, Lin; Chen, Fang

    2016-01-01

    MicroRNAs (miRNAs) play important roles in nearly every aspect of biology, including physiological, biochemical, developmental and pathological processes. Therefore, a highly sensitive and accurate method of detection of miRNAs has great potential in research on theory and application, such as the clinical approach to medicine, animal and plant production, as well as stress response. Here, we report a strategic method to detect miRNAs from multicellular organisms, which mainly includes liquid hybridization and solid phase detection (LHSPD); it has been verified in various species and is much more sensitive than traditional biotin-labeled Northern blots. By using this strategy and chemiluminescent detection with digoxigenin (DIG)-labeled or biotin-labeled oligonucleotide probes, as low as 0.01-0.25 fmol [for DIG-CDP Star (disodium2-chloro-5-(4-methoxyspiro{1,2-dioxetane-3,2'-(5'-chloro)tricyclo[3.3.1.13,7]decan}-4-yl)phenyl phosphate) system], 0.005-0.1 fmol (for biotin-CDP Star system), or 0.05-0.5 fmol (for biotin-luminol system) of miRNA can be detected and one-base difference can be distinguished between miRNA sequences. Moreover, LHSPD performed very well in the quantitative analysis of miRNAs, and the whole process can be completed within about 9 h. The strategy of LHSPD provides an effective solution for rapid, accurate, and sensitive detection and quantitative analysis of miRNAs in plants and animals. PMID:27598139

  5. Retrospective screening of relevant pesticide metabolites in food using liquid chromatography high resolution mass spectrometry and accurate-mass databases of parent molecules and diagnostic fragment ions.

    PubMed

    Polgár, László; García-Reyes, Juan F; Fodor, Péter; Gyepes, Attila; Dernovics, Mihály; Abrankó, László; Gilbert-López, Bienvenida; Molina-Díaz, Antonio

    2012-08-01

    In recent years, the detection and characterization of relevant pesticide metabolites in food is an important task in order to evaluate their formation, kinetics, stability, and toxicity. In this article, a methodology for the systematic screening of pesticides and their main metabolites in fruit and vegetable samples is described, using LC-HRMS and accurate-mass database search of parent compounds and their diagnostic fragment ions. The approach is based on (i) search for parent pesticide molecules; (ii) search for their metabolites in the positive samples, assuming common fragmentation pathways between the metabolites and parent pesticide molecules; and (iii) search for pesticide conjugates using the data from both parent species and diagnostic fragment ions. An accurate-mass database was constructed consisting of 1396 compounds (850 parent compounds, 447 fragment ions and 99 metabolites). The screening process was performed by the software in an automated fashion. The proposed methodology was evaluated with 29 incurred samples and the output obtained was compared to standard pesticide testing methods (targeted LC-MS/MS). Examples on the application of the proposed approach are shown, including the detection of several pesticide glycosides derivatives, which were found with significantly relevant intensities. Glucose-conjugated forms of parent compounds (e.g., fenhexamid-O-glucoside) and those of metabolites (e.g., despropyl-iprodione-N-glycoside) were detected. Facing the lack of standards for glycosylated pesticides, the study was completed with the synthesis of fenhexamid-O-glucoside for quantification purposes. In some cases the pesticide derivatives were found in a relatively high ratio, drawing the attention to these kinds of metabolites and showing that they should not be neglected in multi-residue methods. The global coverage obtained on the 29 analyzed samples showed the usefulness and benefits of the proposed approach and highlights the practical
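The core database-search step described in this abstract is a match of measured accurate masses against theoretical ones within a ppm tolerance. A minimal sketch; the compound names and m/z values below are examples for illustration, not entries from the 1396-compound database:

```python
def ppm_error(measured, theoretical):
    """Mass error of a measurement in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match(measured_mz, database, tol_ppm=5.0):
    """Return (name, ppm error) pairs whose theoretical m/z lies within tol_ppm."""
    return [(name, ppm_error(measured_mz, mz))
            for name, mz in database.items()
            if abs(ppm_error(measured_mz, mz)) <= tol_ppm]

# Hypothetical accurate-mass entries for protonated molecules.
db = {
    "carbendazim [M+H]+": 192.0768,
    "thiabendazole [M+H]+": 202.0433,
}
print(match(202.0436, db))
```

In the full workflow the same matching is then repeated against fragment-ion and metabolite masses for each positive parent hit.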

  6. Liquid Hybridization and Solid Phase Detection: A Highly Sensitive and Accurate Strategy for MicroRNA Detection in Plants and Animals

    PubMed Central

    Li, Fosheng; Mei, Lanju; Zhan, Cheng; Mao, Qiang; Yao, Min; Wang, Shenghua; Tang, Lin; Chen, Fang

    2016-01-01

    MicroRNAs (miRNAs) play important roles in nearly every aspect of biology, including physiological, biochemical, developmental and pathological processes. Therefore, a highly sensitive and accurate method of detection of miRNAs has great potential in research on theory and application, such as the clinical approach to medicine, animal and plant production, as well as stress response. Here, we report a strategic method to detect miRNAs from multicellular organisms, which mainly includes liquid hybridization and solid phase detection (LHSPD); it has been verified in various species and is much more sensitive than traditional biotin-labeled Northern blots. By using this strategy and chemiluminescent detection with digoxigenin (DIG)-labeled or biotin-labeled oligonucleotide probes, as low as 0.01–0.25 fmol [for DIG-CDP Star (disodium2-chloro-5-(4-methoxyspiro{1,2-dioxetane-3,2′-(5′-chloro)tricyclo[3.3.1.13,7]decan}-4-yl)phenyl phosphate) system], 0.005–0.1 fmol (for biotin-CDP Star system), or 0.05–0.5 fmol (for biotin-luminol system) of miRNA can be detected and one-base difference can be distinguished between miRNA sequences. Moreover, LHSPD performed very well in the quantitative analysis of miRNAs, and the whole process can be completed within about 9 h. The strategy of LHSPD provides an effective solution for rapid, accurate, and sensitive detection and quantitative analysis of miRNAs in plants and animals. PMID:27598139

  7. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others, in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  8. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
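The order-of-accuracy claims above can be checked numerically by halving the grid spacing and watching the error ratio. A small sketch using a standard 4th-order central difference (not the paper's own algorithms), for which the error should shrink by roughly 2^4 = 16 per halving:

```python
import math

def d1_central4(f, x, h):
    """4th-order central difference approximation to f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = 1.0
err_h  = abs(d1_central4(math.sin, x, 0.10) - math.cos(x))
err_h2 = abs(d1_central4(math.sin, x, 0.05) - math.cos(x))

# For a 4th-order method the ratio should be close to 16.
print(f"error ratio on halving h: {err_h / err_h2:.1f}")
```

The same refinement test, applied over many wave periods, is how long-time propagation accuracy of such schemes is typically demonstrated.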

  9. A Low-Cost and High-Performance Conductivity Meter.

    ERIC Educational Resources Information Center

    da Rocha, Rogerio T.; And Others

    1997-01-01

    Describes an apparatus that is stable and accurate enough for quantitative conductivity experiments but maintains the simplicity of construction and use as well as low cost. Discusses principles and implementation and the performance of the assembled apparatus. (JRH)

  10. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  11. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  12. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  13. Assessment of a sponge layer as a non-reflective boundary treatment with highly accurate gust–airfoil interaction results

    NASA Astrophysics Data System (ADS)

    Crivellini, A.

    2016-02-01

    This paper deals with the numerical performance of a sponge layer as a non-reflective boundary condition. This technique is well known and widely adopted, but only recently have the reasons for a sponge failure been recognised, in analysis by Mani. For multidimensional problems, the ineffectiveness of the method is due to the self-reflections of the sponge occurring when it interacts with an oblique acoustic wave. Based on his theoretical investigations, Mani gives some useful guidelines for implementing effective sponge layers. However, in our opinion, some practical indications are still missing from the current literature. Here, an extensive numerical study of the performance of this technique is presented. Moreover, we analyse a reduced sponge implementation characterised by undamped partial differential equations for the velocity components. The main aim of this paper is the determination of the minimal width of the layer, as well as of the corresponding strength, required to obtain a reflection error of no more than a few per cent of that observed when solving the same problem on the same grid, but without employing the sponge layer term. For this purpose, a test case of computational aeroacoustics, the single airfoil gust response problem, has been addressed in several configurations. As a direct consequence of our investigation, we present a well documented and highly validated reference solution for the far-field acoustic intensity, a result that is not well established in the literature. Lastly, the proof of the accuracy of an algorithm for coupling sub-domains solved by the linear and non-linear Euler governing equations is given. This result is here exploited to adopt a linear-based sponge layer even in a non-linear computation.
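The basic sponge-layer mechanism discussed in this abstract, a damping term -sigma(x)·u added in a strip near the boundary to absorb outgoing waves, can be illustrated with a toy 1-D advection problem. The ramp shape and strength below are arbitrary choices for illustration, not the values studied in the paper:

```python
import math

N, L, c = 200, 1.0, 1.0
dx = L / N
dt = 0.4 * dx / c            # CFL = 0.4
sponge_width = 0.2           # fraction of the domain used as sponge

def sigma(x):
    """Damping coefficient: zero in the interior, smooth quadratic ramp in the layer."""
    start = L * (1 - sponge_width)
    if x <= start:
        return 0.0
    s = (x - start) / (L - start)
    return 50.0 * s ** 2

# Gaussian pulse initially centred at x = 0.3, advecting toward the sponge.
u = [math.exp(-((i * dx - 0.3) / 0.05) ** 2) for i in range(N)]
for _ in range(600):
    un = u[:]
    for i in range(1, N):
        # 1st-order upwind advection plus the sponge damping term.
        u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1]) - dt * sigma(i * dx) * un[i]
    u[0] = 0.0               # quiescent inflow

print(f"max |u| after the pulse enters the sponge: {max(abs(v) for v in u):.2e}")
```

In 1-D all waves hit the layer head-on, so the sponge absorbs them cleanly; the self-reflection failure analysed by Mani appears only for oblique incidence in multiple dimensions.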

  14. Assessment of a sponge layer as a non-reflective boundary treatment with highly accurate gust-airfoil interaction results

    NASA Astrophysics Data System (ADS)

    Crivellini, A.

    2016-02-01

    This paper deals with the numerical performance of a sponge layer as a non-reflective boundary condition. This technique is well known and widely adopted, but only recently have the reasons for a sponge failure been recognised, in analysis by Mani. For multidimensional problems, the ineffectiveness of the method is due to the self-reflections of the sponge occurring when it interacts with an oblique acoustic wave. Based on his theoretical investigations, Mani gives some useful guidelines for implementing effective sponge layers. However, in our opinion, some practical indications are still missing from the current literature. Here, an extensive numerical study of the performance of this technique is presented. Moreover, we analyse a reduced sponge implementation characterised by undamped partial differential equations for the velocity components. The main aim of this paper is the determination of the minimal width of the layer, as well as of the corresponding strength, required to obtain a reflection error of no more than a few per cent of that observed when solving the same problem on the same grid, but without employing the sponge layer term. For this purpose, a test case of computational aeroacoustics, the single airfoil gust response problem, has been addressed in several configurations. As a direct consequence of our investigation, we present a well documented and highly validated reference solution for the far-field acoustic intensity, a result that is not well established in the literature. Lastly, the proof of the accuracy of an algorithm for coupling sub-domains solved by the linear and non-linear Euler governing equations is given. This result is here exploited to adopt a linear-based sponge layer even in a non-linear computation.

  15. High thermoelectric performance of the distorted bismuth(110) layer.

    PubMed

    Cheng, L; Liu, H J; Zhang, J; Wei, J; Liang, J H; Jiang, P H; Fan, D D; Sun, L; Shi, J

    2016-07-14

    The thermoelectric properties of the distorted bismuth(110) layer are investigated using first-principles calculations combined with the Boltzmann transport equation for both electrons and phonons. To accurately predict the electronic and transport properties, the quasiparticle corrections with the GW approximation of many-body effects have been explicitly included. It is found that a maximum ZT value of 6.4 can be achieved for n-type systems, which essentially stems from the weak scattering of electrons. Moreover, we demonstrate that the distorted Bi layer retains high ZT values in relatively broad regions of both temperature and carrier concentration. Our theoretical work emphasizes that the deformation potential constant characterizing the electron-phonon scattering strength is an important paradigm for searching high thermoelectric performance materials. PMID:27302907
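The figure of merit ZT quoted above is conventionally defined as ZT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and total thermal conductivity κ. A sketch of the definition; the input values are generic illustrative numbers, not results for the Bi(110) layer:

```python
def figure_of_merit(S, sigma, kappa, T):
    """Thermoelectric ZT from S (V/K), sigma (S/m), kappa (W/m/K), T (K)."""
    return S**2 * sigma * T / kappa

# Illustrative values loosely typical of a good bulk thermoelectric.
zt = figure_of_merit(S=240e-6, sigma=1.0e5, kappa=1.5, T=300.0)
print(f"ZT = {zt:.3f}")
```

The formula makes the paper's point concrete: weak electron scattering raises σ without a proportional rise in κ, which is how a large ZT arises.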

  16. Efficient and accurate local single reference correlation methods for high-spin open-shell molecules using pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank

    2011-12-01

    A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.

  17. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  18. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    The design of high performance piezo composite actuators is developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface that exists in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties varied through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk shape actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D-elasticity are compared with a model based on Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input in the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer in the FGM system is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios while the CLT solutions gave poor results compared to the 2D elasticity solutions for

  19. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
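The script-driven relational storage described above can be sketched in a few lines. The snippet below parses a Ganglia-style XML snapshot into a SQL table, using Python's sqlite3 as a stand-in for MySQL; the XML element names follow gmond's output format, but the schema and helper function are illustrative, not the paper's actual script:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Toy snapshot in the shape of gmond's XML output (values are illustrative).
SNAPSHOT = """
<GANGLIA_XML>
  <CLUSTER NAME="hpc">
    <HOST NAME="node01" REPORTED="1134604800">
      <METRIC NAME="load_one" VAL="0.42" TYPE="float"/>
      <METRIC NAME="mem_free" VAL="1048576" TYPE="uint32"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

def store_metrics(xml_text, conn):
    """Parse a Ganglia-style XML snapshot and persist each metric row."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metrics "
        "(host TEXT, reported INTEGER, name TEXT, value REAL)"
    )
    root = ET.fromstring(xml_text)
    for host in root.iter("HOST"):
        reported = int(host.get("REPORTED"))
        for metric in host.iter("METRIC"):
            conn.execute(
                "INSERT INTO metrics VALUES (?, ?, ?, ?)",
                (host.get("NAME"), reported, metric.get("NAME"),
                 float(metric.get("VAL"))),
            )
    conn.commit()

conn = sqlite3.connect(":memory:")
store_metrics(SNAPSHOT, conn)
rows = conn.execute("SELECT host, name, value FROM metrics").fetchall()
print(rows)  # two rows for node01
```

Unlike RRD, which ages out old samples, rows accumulate here indefinitely, which is the data-integrity trade-off the paper is after.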

  20. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  1. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  2. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal for this task is to develop an Extensible File System (ELFS). ELFS attacks three problems: (1) providing high-bandwidth performance architectures; (2) reducing the cognitive burden faced by application programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.

  3. Tough, High-Performance, Thermoplastic Addition Polymers

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Proctor, K. Mason; Gleason, John; Morgan, Cassandra; Partos, Richard

    1991-01-01

    A series of addition-type thermoplastics (ATT's) exhibits useful properties. Because of their addition curing and linear structure, ATT polymers are as tough as thermoplastics and as easily processed as thermosets. Work was undertaken to develop a chemical reaction forming stable aromatic rings in the backbone of the ATT polymer, combining high-temperature performance and thermo-oxidative stability with toughness and easy processability, and minimizing or eliminating the tradeoffs among properties often observed in conventional polymer syntheses.

  4. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a database on the materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. 1-D and 2-D spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady state operation of the furnace. Finally, the fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for each of these objectives are presented.
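The radiation models above are spectral and multidimensional; as a much simpler back-of-the-envelope check, a gray-body Stefan-Boltzmann exchange gives the order of magnitude of the wall fluxes involved (the temperatures and emissivity below are assumed, not values from the study):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_flux(t_hot, t_cold, emissivity=1.0):
    """Net gray-body radiative flux (W/m^2) between two surfaces."""
    return emissivity * SIGMA * (t_hot**4 - t_cold**4)

# Assumed: zirconia element at 2000 K radiating to a crucible wall at 1500 K
q = net_radiative_flux(2000.0, 1500.0, emissivity=0.9)
print(round(q), "W/m^2")
```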

  5. Parameterization of an interfacial force field for accurate representation of peptide adsorption free energy on high-density polyethylene.

    PubMed

    Abramyan, Tigran M; Snyder, James A; Yancey, Jeremy A; Thyparambil, Aby A; Wei, Yang; Stuart, Steven J; Latour, Robert A

    2015-01-01

    Interfacial force field (IFF) parameters for use with the CHARMM force field have been developed for interactions between peptides and high-density polyethylene (HDPE). Parameterization of the IFF was performed to achieve agreement between experimental and calculated adsorption free energies of small TGTG-X-GTGT host-guest peptides (T = threonine, G = glycine, and X = variable amino-acid residue) on HDPE, with ±0.5 kcal/mol agreement. This IFF parameter set consists of tuned nonbonded parameters (i.e., partial charges and Lennard-Jones parameters) for use with an in-house-modified CHARMM molecular dynamics program that enables the use of an independent set of force field parameters to control molecular behavior at a solid-liquid interface. The R correlation coefficient between the simulated and experimental peptide adsorption free energies increased from 0.00 for the standard CHARMM force field parameters to 0.88 for the tuned IFF parameters. Subsequent studies are planned to apply the tuned IFF parameter set for the simulation of protein adsorption behavior on an HDPE surface for comparison with experimental values of adsorbed protein orientation and conformation. PMID:25818122
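The R coefficient quoted above is the standard Pearson correlation between simulated and experimental adsorption free energies; a minimal sketch, using made-up illustrative energies rather than the paper's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (not the paper's) adsorption free energies, kcal/mol
experimental = [-2.1, -1.4, -0.6, -3.0, -1.9]
simulated = [-2.4, -1.2, -0.9, -2.7, -1.6]
r = pearson_r(experimental, simulated)
print(round(r, 2))
```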

  6. A simple method for the accurate determination of the Henry's law constant for highly sorptive, semivolatile organic compounds.

    PubMed

    Kim, Yong-Hyun; Kim, Ki-Hyun

    2016-01-01

    A novel technique is developed to determine the Henry's law constants (HLCs) of seven volatile fatty acids (VFAs) with significantly high solubility using a combined application of thermal desorber/gas chromatography/mass spectrometry (TD/GC/MS). In light of the strong sorptive properties of these semi-volatile organic compounds (SVOCs), their HLCs were determined by properly evaluating the fraction lost on the surface of the materials used to induce equilibrium (vial, gas-tight syringe, and sorption tube). To this end, a total of nine repeated experiments were conducted in a closed (static) system at three different gas/liquid volume ratios. The best estimates for HLCs (M/atm) were thus 7,200 (propionic acid), 4,700 (i-butyric acid), 4,400 (n-butyric acid), 2,700 (i-valeric acid), 2,400 (n-valeric acid), 1,000 (hexanoic acid), and 1,500 (heptanoic acid). The differences in the HLC values between this study and previous studies, if assessed in terms of the percent difference, ranged from 9.2% (n-valeric acid) to 55.7% (i-valeric acid). We overcame the main cause of errors encountered in previous studies by performing the proper correction of the sorptive losses of the SVOCs that inevitably took place, particularly on the walls of the equilibration systems (mainly the headspace vial and/or the gas-tight syringe). PMID:26577086
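The closed-system determination at several gas/liquid volume ratios rests on a simple vial mass balance, M = C_g·V_g + K·C_g·V_l with K = C_l/C_g the dimensionless liquid/gas partition coefficient; a sketch with synthetic data (all volumes, amounts, and the target K are illustrative, and the sorptive-loss correction is omitted):

```python
def henry_partition(total_moles, v_gas, v_liq, c_gas):
    """Dimensionless liquid/gas partition coefficient K = C_l / C_g
    from the closed-vial mass balance M = C_g*V_g + K*C_g*V_l."""
    return (total_moles / c_gas - v_gas) / v_liq

# Synthetic equilibria at three gas/liquid ratios, all generated with
# K = 5000 (volumes in L, amounts in mol; numbers are illustrative only)
experiments = [
    (1.0e-6, 0.010, 0.010, 1.0e-6 / (0.010 + 5000 * 0.010)),
    (1.0e-6, 0.015, 0.005, 1.0e-6 / (0.015 + 5000 * 0.005)),
    (1.0e-6, 0.018, 0.002, 1.0e-6 / (0.018 + 5000 * 0.002)),
]
estimates = [henry_partition(*e) for e in experiments]
k_mean = sum(estimates) / len(estimates)
print(round(k_mean))  # recovers 5000
```

Agreement of the estimates across volume ratios is what signals that losses have been accounted for; a systematic drift with V_g/V_l would indicate uncorrected wall sorption.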

  7. Parameterization of an interfacial force field for accurate representation of peptide adsorption free energy on high-density polyethylene

    PubMed Central

    Abramyan, Tigran M.; Snyder, James A.; Yancey, Jeremy A.; Thyparambil, Aby A.; Wei, Yang; Stuart, Steven J.; Latour, Robert A.

    2015-01-01

    Interfacial force field (IFF) parameters for use with the CHARMM force field have been developed for interactions between peptides and high-density polyethylene (HDPE). Parameterization of the IFF was performed to achieve agreement between experimental and calculated adsorption free energies of small TGTG–X–GTGT host–guest peptides (T = threonine, G = glycine, and X = variable amino-acid residue) on HDPE, with ±0.5 kcal/mol agreement. This IFF parameter set consists of tuned nonbonded parameters (i.e., partial charges and Lennard–Jones parameters) for use with an in-house-modified CHARMM molecular dynamic program that enables the use of an independent set of force field parameters to control molecular behavior at a solid–liquid interface. The R correlation coefficient between the simulated and experimental peptide adsorption free energies increased from 0.00 for the standard CHARMM force field parameters to 0.88 for the tuned IFF parameters. Subsequent studies are planned to apply the tuned IFF parameter set for the simulation of protein adsorption behavior on an HDPE surface for comparison with experimental values of adsorbed protein orientation and conformation. PMID:25818122

  8. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, achieved well ahead of schedule and under budget, have exceeded even the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational, and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies that will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  9. Optimizing the design of very high power, high performance converters

    SciTech Connect

    Edwards, R J; Tiagha, E A; Ganetis, G; Nawrocky, R J

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met.

  10. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to design scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures as the cost for data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  11. High Performance High-Tc Superconducting Wires

    SciTech Connect

    Kang, Sukill; Goyal, Amit; Li, Jing; Gapud, Albert Agcaoili; Martin, Patrick M; Heatherly Jr, Lee; Thompson, James R; Christen, David K; List III, Frederick Alyious; Paranthaman, Mariappan Parans; Lee, Dominic F

    2006-01-01

    We demonstrated short segments of a superconducting wire that meets or exceeds performance requirements for many large-scale applications of high-temperature superconducting materials, especially those requiring a high supercurrent and/or a high engineering critical current density in applied magnetic fields. The performance requirements for these varied applications were met in 3-micrometer-thick YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} films epitaxially grown via pulsed laser ablation on rolling assisted biaxially textured substrates. Enhancements of the critical current in self-field as well as excellent retention of this current in high applied magnetic fields were achieved in the thick films via incorporation of a periodic array of extended columnar defects, composed of self-aligned nanodots of nonsuperconducting material extending through the entire thickness of the film. These columnar defects are highly effective in pinning the superconducting vortices or flux lines, thereby resulting in the substantially enhanced performance of this wire.

  12. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to their low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials, including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering, and nanostructuring.
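The figure of merit discussed above is ZT = S²σT/κ; a one-line evaluation with assumed oxide-like transport values (illustrative, not taken from the review) shows how a ZT in the 0.1-0.4 range arises:

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck**2 * sigma * temperature / kappa

# Assumed oxide-like values: S = 200 uV/K, sigma = 1e4 S/m,
# kappa = 2 W/(m K), T = 1000 K
zt = figure_of_merit(200e-6, 1.0e4, 2.0, 1000.0)
print(round(zt, 3))  # → 0.2
```

The formula also makes the correlation problem concrete: raising σ by doping typically lowers S and raises the electronic part of κ, so the three factors cannot be tuned independently.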

  13. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new/next-generation type of CT examinations, the so-called Interior Computed Tomography (ICT), which may presumably lead to dose reduction to the patient outside the target region-of-interest (ROI), in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing a target of diagnosis from the examined structure, leading to imaging benefits such as decreasing scatters and system cost as well as reducing imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, still preserving sharp edges in the images.
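Few-view interior reconstruction is typically solved iteratively; as a minimal stand-in for the paper's CS solver, the classic Kaczmarz (ART) iteration below recovers a 2x2 toy phantom from five ray sums (the phantom and projection geometry are illustrative, and no sparsity penalty is included):

```python
# Kaczmarz (ART) iteration on a toy 4-pixel image with few "projections".
# This is a stand-in for the paper's CS framework, not its algorithm.
def kaczmarz(matrix_rows, b, x, sweeps=2000):
    """Cyclically project x onto the hyperplane of each equation a.x = b."""
    for _ in range(sweeps):
        for a, bi in zip(matrix_rows, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            scale = (bi - dot) / norm
            x = [xi + scale * ai for ai, xi in zip(a, x)]
    return x

true_image = [1.0, 0.0, 0.0, 2.0]   # 2x2 phantom, row-major
matrix_rows = [                     # row, column and diagonal ray sums
    [1, 1, 0, 0], [0, 0, 1, 1],
    [1, 0, 1, 0], [0, 1, 0, 1],
    [1, 0, 0, 1],
]
b = [sum(a * t for a, t in zip(row, true_image)) for row in matrix_rows]
recon = kaczmarz(matrix_rows, b, [0.0] * 4)
print([round(v, 3) for v in recon])
```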

  14. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
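The fine-grain scheduling of vectors of particles can be caricatured as basket filling: group particles by the geometry volume they currently occupy, then take one batched transport step per basket instead of stepping particle by particle. The record layout and volume names below are illustrative, not Geant-V code:

```python
from collections import defaultdict

# Toy "basket" scheduler: particles sharing a geometry volume are grouped
# so each batch can be stepped together (amenable to vectorization).
particles = [
    {"id": 0, "volume": "world"}, {"id": 1, "volume": "calorimeter"},
    {"id": 2, "volume": "tracker"}, {"id": 3, "volume": "calorimeter"},
    {"id": 4, "volume": "tracker"}, {"id": 5, "volume": "tracker"},
]

def fill_baskets(parts):
    """Bucket particle ids by the volume they occupy."""
    baskets = defaultdict(list)
    for p in parts:
        baskets[p["volume"]].append(p["id"])
    return dict(baskets)

def process(baskets):
    """One batched 'transport step' per basket; here just report batch sizes."""
    return {vol: len(ids) for vol, ids in baskets.items()}

baskets = fill_baskets(particles)
print(process(baskets))  # {'world': 1, 'calorimeter': 2, 'tracker': 3}
```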

  15. [High-performance society and doping].

    PubMed

    Gallien, C L

    2002-09-01

    Doping is not limited to high-level athletes, nor is it limited to the field of sports activities. The doping phenomenon observed in sports actually reveals an underlying question concerning the notion of sports itself and, more widely, society's conception of sports. In a high-performance society, which is also a high-risk society, doping behavior is observed in a large number of persons who may or may not participate in sports activities. The motivation is the search for individual success or profit. The fight against doping must therefore focus on individual responsibility and prevention in order to preserve athletes' health and maintain the ethical and educational value of sports activities.

  16. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  17. Heavily Doped PBSE with High Thermoelectric Performance

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor); Wang, Heng (Inventor); Pei, Yanzhong (Inventor)

    2015-01-01

    The present invention discloses heavily doped PbSe with high thermoelectric performance. Thermoelectric property measurements disclosed herein indicate that PbSe is a high-zT material for mid-to-high temperature thermoelectric applications. At 850 K a peak zT greater than 1.3 was observed when the Hall carrier concentration n_H ≈ 1.0 × 10^20 cm^-3. The present invention also discloses that a number of strategies used to improve the zT of PbTe, such as alloying with other elements, nanostructuring, and band modification, may also be used to further improve zT in PbSe.

  18. Performance of annular high frequency thermoacoustic engines

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ivan A.

    This thesis presents studies of the behavior of miniature annular thermoacoustic prime movers and the imaging, using PIV, of the complex sound fields inside the small acoustic waveguides when driven by a temperature gradient. Thermoacoustic engines operating in the standing-wave mode are limited in their acoustic efficiency by a high degree of irreversibility that is inherent in how they work. Better performance can be achieved by using traveling waves in thermoacoustic devices. This has led to the development of an annular high frequency thermoacoustic prime mover consisting of a regenerator, a random stack in between a hot and a cold heat exchanger, inside an annular waveguide. Miniature devices were developed and studied with operating frequencies in the range of 2-4 kHz. This corresponds to an average ring circumference of 11 cm for the 3 kHz device, the resonator bore being 6 mm. A similar device of 11 mm bore and 18 cm length was also investigated; its resonant frequency was 2 kHz. Sound intensities as high as 166.8 dB were generated with limited heat input. Sound power was extracted from the annular structure by an impedance-matching side arm. The nature of the acoustic wave generated by heat was investigated using a high speed PIV instrument. Although the acoustic device appears symmetric, its performance is characterized by a broken symmetry and by perturbations that exist in its structure. Effects of these are observed in the PIV imaging; images show axial and radial components. Moreover, PIV studies show effects of streaming and instabilities which affect the devices' acoustic efficiency. The acoustic efficiency is high, reaching 40% of Carnot. This type of device shows much promise as a high efficiency energy converter; it can be reduced in size for microcircuit applications.
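The "40% of Carnot" figure is relative to the ideal efficiency set by the hot and cold heat-exchanger temperatures; with assumed (not measured) temperatures the arithmetic looks like this:

```python
def carnot_efficiency(t_hot, t_cold):
    """Ideal heat-engine efficiency between two reservoirs (Kelvin)."""
    return 1.0 - t_cold / t_hot

# Assumed temperatures for a small prime mover, chosen only to illustrate
t_hot, t_cold = 600.0, 300.0          # K
eta_carnot = carnot_efficiency(t_hot, t_cold)
eta_device = 0.40 * eta_carnot        # "40% of Carnot" from the abstract
print(eta_carnot, round(eta_device, 2))  # → 0.5 0.2
```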

  19. Accurate dipole moment curve and non-adiabatic effects on the high resolution spectroscopic properties of the LiH molecule

    NASA Astrophysics Data System (ADS)

    Diniz, Leonardo G.; Kirnosov, Nikita; Alijah, Alexander; Mohallem, José R.; Adamowicz, Ludwik

    2016-04-01

    A very accurate dipole moment curve (DMC) for the ground X1Σ+ electronic state of the 7LiH molecule is reported. It is calculated with the use of all-particle explicitly correlated Gaussian functions with shifted centers. The DMC, to our knowledge the most accurate available, and the corresponding highly accurate potential energy curve are used to calculate the transition energies, the transition dipole moments, and the Einstein coefficients for the rovibrational transitions with ΔJ = -1 and Δv ⩽ 5. The importance of the non-adiabatic effects in determining these properties is evaluated using the model of a vibrational R-dependent effective reduced mass in the rovibrational calculations introduced earlier (Diniz et al., 2015). The results of the present calculations are used to assess the quality of the two complete linelists of 7LiH available in the literature.
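For a given transition, the Einstein A coefficient follows from the transition frequency and transition dipole moment as A21 = 16π³ν³|μ|²/(3ε0hc³) in SI units; the frequency and dipole below are rough illustrative values in the LiH fundamental range, not results from the paper:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
DEBYE = 3.33564e-30       # 1 debye in C m

def einstein_a(nu_hz, mu_cm):
    """Spontaneous emission rate A21 = 16*pi^3*nu^3*mu^2 / (3*eps0*h*c^3)."""
    return 16 * math.pi**3 * nu_hz**3 * mu_cm**2 / (3 * EPS0 * H * C**3)

# Assumed: nu ~ 4.1e13 Hz (~1370 cm^-1), transition dipole ~ 0.2 D
a21 = einstein_a(4.1e13, 0.2 * DEBYE)
print(round(a21, 1), "s^-1")
```

The resulting rate of tens of inverse seconds is typical for strong infrared vibrational bands.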

  20. High capacity heat pipe performance demonstration

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A high capacity heat pipe which will operate in one-g and in zero-g is investigated. An artery configuration which is self-priming in one-g was emphasized. Two artery modifications were evolved as candidates to achieve one-g priming while providing very high performance: the four-artery and the eight-artery configurations. These were each evaluated analytically for performance and priming capability. The eight-artery configuration was found to be inadequate from a performance standpoint. The four-artery configuration showed promise of working. A five-inch-long priming element test article was fabricated using the four-artery design. Plexiglas viewing windows were installed on each end of the heat pipe to permit viewing of the priming activity. The five-inch priming element would not successfully prime in one-g. Difficulties in priming in one-g raised questions about zero-g priming. Therefore a small test element heat pipe for verifying that the proposed configuration will self-prime in zero-g was fabricated and delivered.

  1. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the framework of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
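Fault mitigation for COTS parts commonly relies on redundant execution and voting so that a single radiation-induced upset is outvoted; the majority-vote sketch below illustrates that general idea only and is not the SmartIO design:

```python
from collections import Counter

def vote(replicas):
    """Return the strict-majority value among replica outputs, else None."""
    value, count = Counter(replicas).most_common(1)[0]
    return value if count > len(replicas) / 2 else None

# Three redundant computations; one replica suffers an upset
print(vote([42, 42, 7]))  # → 42, the single upset is outvoted
```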

  2. Limited rotational and rovibrational line lists computed with highly accurate quartic force fields and ab initio dipole surfaces.

    PubMed

    Fortenberry, Ryan C; Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2014-02-01

    In this work, computational procedures are employed to compute the rotational and rovibrational spectra and line lists for H2O, CO2, and SO2. Building on the established use of quartic force fields, MP2 and CCSD(T) Dipole Moment Surfaces (DMSs) are computed for each system of study in order to produce line intensities as well as the transition energies. The computed results exhibit a clear correlation to reference data available in the HITRAN database. Additionally, even though CCSD(T) DMSs produce more accurate intensities as compared to experiment, the use of MP2 DMSs results in reliable line lists that are still comparable to experiment. The use of the less computationally costly MP2 method is beneficial in the study of larger systems where use of CCSD(T) would be more costly. PMID:23692860

  3. Performance of the CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Perrotta, Andrea

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved tracking and vertexing algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing energy reconstruction.

  4. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  5. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

Bulk heterojunction polymer solar cells that can be fabricated by solution processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of flexible and cost-effective alternatives to silicon-based electronics. Despite the envisioned advantages and recent technological advances, the performance of polymer solar cells so far remains inferior to that of their inorganic counterparts in terms of efficiency and stability. Many factors limit the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most important determinants of overall performance. In this presentation, I will describe how we approached high-performance polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.

  6. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
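The phase-level energy tuning described above can be illustrated with a toy model: for one communication phase, pick the CPU frequency that minimizes energy while runtime stays within a performance-loss bound. The power model (static power plus an f^3 dynamic term) and all constants are illustrative assumptions, not measured values from this work.

```python
# Toy model: choose a frequency for one communication phase so that
# energy is minimized while runtime stays within a performance-loss bound.
# Power constants and the f**3 dynamic-power term are assumptions for
# illustration, not measured values from this work.

def best_frequency(freqs, f_max, t_cpu, t_net, loss_bound=0.05,
                   p_static=40.0, k=1.5e-27):
    """Return (frequency_Hz, energy_J) minimizing energy for the phase."""
    t_base = t_cpu + t_net                       # runtime at f_max
    best = None
    for f in freqs:
        t = t_cpu * f_max / f + t_net            # only the CPU part slows down
        if t > (1.0 + loss_bound) * t_base:
            continue                             # violates the loss bound
        energy = (p_static + k * f ** 3) * t     # P(f) * time
        if best is None or energy < best[1]:
            best = (f, energy)
    return best

# A communication-dominated phase (1 s compute, 9 s network) favors a
# reduced frequency, which is the intuition behind scaling during stalls.
choice = best_frequency([1.2e9, 1.6e9, 2.0e9, 2.4e9], 2.4e9,
                        t_cpu=1.0, t_net=9.0)
```

Because the network-bound portion of the phase is insensitive to CPU frequency, the model selects a frequency below the maximum, trading a small, bounded slowdown for lower power.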

  7. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  8. High Power MPD Thruster Performance Measurements

    NASA Technical Reports Server (NTRS)

    LaPointe, Michael R.; Strzempkowski, Eugene; Pencil, Eric

    2004-01-01

    High power magnetoplasmadynamic (MPD) thrusters are being developed as cost effective propulsion systems for cargo transport to lunar and Mars bases, crewed missions to Mars and the outer planets, and robotic deep space exploration missions. Electromagnetic MPD thrusters have demonstrated, at the laboratory level, the ability to process megawatts of electrical power while providing significantly higher thrust densities than electrostatic electric propulsion systems. The ability to generate higher thrust densities permits a reduction in the number of thrusters required to perform a given mission, and alleviates the system complexity associated with multiple thruster arrays. The specific impulse of an MPD thruster can be optimized to meet given mission requirements, from a few thousand seconds with heavier gas propellants up to 10,000 seconds with hydrogen propellant. In support of programs envisioned by the NASA Office of Exploration Systems, Glenn Research Center is developing and testing quasi-steady MW-class MPD thrusters as a prelude to steady state high power thruster tests. This paper provides an overview of the GRC high power pulsed thruster test facility, and presents preliminary performance data for a quasi-steady baseline MPD thruster geometry.

  9. Arteriopathy in the high-performance athlete.

    PubMed

    Takach, Thomas J; Kane, Peter N; Madjarov, Jeko M; Holleman, Jeremiah H; Nussbaum, Tzvi; Robicsek, Francis; Roush, Timothy S

    2006-01-01

    Pain occurs frequently in high-performance athletes and is most often due to musculoskeletal injury or strain. However, athletes who participate in sports that require highly frequent, repetitive limb motion can also experience pain from an underlying arteriopathy, which causes exercise-induced ischemia. We reviewed the clinical records and follow-up care of 3 high-performance athletes (mean age, 29.3 yr; range, 16-47 yr) who were admitted consecutively to our institution from January 2002 through May 2003, each with a diagnosis of limb ischemia due to arteriopathy. The study group comprised 3 males: 2 active in competitive baseball (ages, 16 and 19 yr) and a cyclist (age, 47 yr). Provocative testing and radiologic evaluation established the diagnoses. Treatment goals included targeted resection of compressive structures, arterial reconstruction to eliminate stenosis and possible emboli, and improvement of distal perfusion. Our successful reconstructive techniques included thoracic outlet decompression and interpositional bypass of the subclavian artery in the 16-year-old patient, pectoralis muscle and tendon decompression to relieve compression of the axillary artery in the 19-year-old, and patch angioplasty for endofibrosis affecting the external iliac artery in the 47-year-old. Each patient was asymptomatic on follow-up and had resumed participation in competitive athletics. The recognition and anatomic definition of an arteriopathy that produces exercise-induced ischemia enables the application of precise therapy that can produce a symptom-free outcome and the ability to resume competitive athletics.

  10. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  11. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel, and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to maintain the key advantage of high performance fibers: their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid made from PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that an LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2. Protection here is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  12. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  13. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  14. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  15. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

High performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and crosslinked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  16. Initial performance of the High Speed Photometer

    NASA Technical Reports Server (NTRS)

    Richards, Evan; Percival, Jeff; Nelson, Matt; Hatter, ED; Fitch, John; White, Rick

    1991-01-01

    The Hubble Space Telescope High Speed Photometer has four image dissector tubes, two with UV sensitive photocathodes, two sensitive to the near UV and to visual light, and a single red sensitive photomultiplier tube. The HSP is capable of photometric measurements from 1200 to 7500 A with time resolution of 11 microseconds and has no moving parts. An initial analysis of the on-orbit engineering performance of the HSP is presented with changes in operating procedures resulting from the primary mirror spherical aberration and experience gained during the verification period.

  17. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  18. Are skinfold-based models accurate and suitable for assessing changes in body composition in highly trained athletes?

    PubMed

    Silva, Analiza M; Fields, David A; Quitério, Ana L; Sardinha, Luís B

    2009-09-01

This study was designed to assess the usefulness of skinfold (SKF) equations developed by Jackson and Pollock (JP) and by Evans (Ev) in tracking body composition changes (relative fat mass [%FM], absolute fat mass [FM], and fat-free mass [FFM]) of elite male judo athletes before a competition, using a 4-compartment (4C) model as the reference method. A total of 18 male, top-level (age: 22.6 +/- 2.9 yr) athletes were evaluated at baseline (weight: 73.4 +/- 7.9 kg; %FM4C: 7.0 +/- 3.3%; FM4C: 5.1 +/- 2.6 kg; and FFM4C: 68.3 +/- 7.3 kg) and before a competition (weight: 72.7 +/- 7.5 kg; %FM4C: 6.5 +/- 3.4%; FM4C: 4.8 +/- 2.6 kg; and FFM4C: 67.9 +/- 7.1 kg). Measures of body density assessed by air displacement plethysmography, bone mineral content by dual energy X-ray absorptiometry, and total-body water by bioelectrical impedance spectroscopy were used to estimate 4C model %FM, FM, and FFM. Seven-site SKF models using both JP and Ev were used to estimate %FM, FM, and FFM, along with the simplified 3-site Ev SKF model. Changes in %FM, FM, and FFM were not significantly different from the 4C model. The regression model for the SKF in question and the reference method did not differ from the line of identity in estimating changes in %FM, FM, and FFM. The limits of agreement were similar, ranging from -3.4 to 3.6 for %FM, -2.7 to 2.5 kg for FM, and -2.5 to 2.7 kg for FFM. Considering the similar performance of both the 7-site and 3-site SKF-based equations compared with the criterion method, these data indicate that neither the 7-site nor the 3-site SKF models are valid for detecting %FM, FM, and FFM changes in highly trained athletes. These results highlight the inaccuracy of anthropometric models in tracking desired changes in body composition of elite male judo athletes before a competition.
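The limits of agreement quoted above are conventionally computed Bland-Altman style: the mean difference between the two methods plus or minus 1.96 sample standard deviations of the differences. A minimal sketch with invented numbers (not the study's data):

```python
# Bland-Altman 95% limits of agreement between two body-composition
# methods: mean difference +/- 1.96 * SD of the differences.
# The numbers below are invented for illustration, not the study's data.
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)    # sample SD of the differences
    return bias - half_width, bias + half_width

# Hypothetical %FM estimates from a skinfold equation vs a 4C model:
skf = [10.0, 9.0, 12.0, 8.0]
four_c = [9.0, 10.0, 10.0, 10.0]
lo, hi = limits_of_agreement(skf, four_c)
```

Narrow, roughly symmetric limits around zero indicate good group-level agreement; limits as wide as those reported above are what makes the SKF models unsuitable for tracking individual changes.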

  19. High-temperature testing of high performance fiber reinforced concrete

    NASA Astrophysics Data System (ADS)

    Fořt, Jan; Vejmelková, Eva; Pavlíková, Milena; Trník, Anton; Čítek, David; Kolísko, Jiří; Černý, Robert; Pavlík, Zbyšek

    2016-06-01

The effect of high-temperature exposure on the properties of High Performance Fiber Reinforced Concrete (HPFRC) is investigated in this paper. At first, reference measurements are made on HPFRC samples without high-temperature loading. Then, the HPFRC samples are exposed to temperatures of 200, 400, 600, 800, and 1000 °C. For the temperature-loaded samples, residual mechanical and basic physical properties are measured. The linear thermal expansion coefficient as a function of temperature is assessed on the basis of measured thermal strain data. Additionally, simultaneous differential scanning calorimetry (DSC) and thermogravimetry (TG) analysis is performed in order to observe and explain material changes at elevated temperature. It is found that the applied high-temperature loading significantly increases material porosity, due to physical, chemical, and combined damage of the material's inner structure, and also negatively affects the mechanical strength. The linear thermal expansion coefficient exhibits a significant dependence on temperature and on changes of the material structure. The obtained data will find use as input material parameters for modelling the damage of HPFRC structures exposed to fire and high-temperature action.
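The linear thermal expansion coefficient mentioned above is, in essence, the derivative of thermal strain with respect to temperature; a central-difference estimate over tabulated strain data can be sketched as follows (the temperatures and strains are invented for illustration, not the paper's measurements):

```python
# Central-difference estimate of the linear thermal expansion coefficient
# alpha(T) = d(strain)/dT from tabulated thermal-strain measurements.
# Temperatures and strains below are invented for illustration.

def expansion_coefficient(temps, strains):
    alphas = []
    for i in range(1, len(temps) - 1):
        d_t = temps[i + 1] - temps[i - 1]
        alphas.append((strains[i + 1] - strains[i - 1]) / d_t)
    return alphas                     # one alpha per interior temperature

temps = [200.0, 400.0, 600.0, 800.0, 1000.0]        # deg C
strains = [2.0e-3, 4.0e-3, 6.0e-3, 8.0e-3, 1.0e-2]  # dimensionless
alphas = expansion_coefficient(temps, strains)      # ~1e-5 per deg C here
```

A temperature-dependent coefficient, as reported for HPFRC, would show up as alphas that vary across the interior temperatures instead of staying constant as in this linear toy data.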

  20. High performance computing for domestic petroleum reservoir simulation

    SciTech Connect

    Zyvoloski, G.; Auer, L.; Dendy, J.

    1996-06-01

This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory. High-performance computing offers the prospect of greatly increasing the resolution at which petroleum reservoirs can be represented in simulation models. The increases in resolution can be achieved through large increases in computational speed and memory, if machine architecture and numerical methods for solution of the multiphase flow equations can be used to advantage. Perhaps more importantly, the increased speed and size of today's computers make it possible to add physical processes to simulation codes that heretofore were too expensive in terms of computer time and memory to be practical. These factors combine to allow the development of new, more accurate methods for optimizing petroleum reservoir production.

  1. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, computing platforms constantly evolve as technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. Rapidly increasing compute resources have given many application developers the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, changes in the political climate may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of
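The core of such a job-scheduling simulator can be quite small. The sketch below is a deliberately minimal event-driven first-come-first-served (FCFS) simulation, not the generic simulator developed in this work; the job tuples and node count are hypothetical.

```python
# A deliberately minimal event-driven FCFS job-scheduling simulation,
# not the generic simulator developed in this work. Each job is a
# (arrival_time, nodes_needed, runtime) tuple; all numbers hypothetical.
import heapq

def simulate_fcfs(jobs, n_nodes):
    """Return the average wait time under strict first-come-first-served."""
    jobs = sorted(jobs)                   # FCFS queue order by arrival
    running = []                          # min-heap of (finish_time, nodes)
    free = n_nodes
    clock = 0.0
    total_wait = 0.0
    for arrival, nodes, runtime in jobs:
        clock = max(clock, arrival)
        while free < nodes:               # wait for enough nodes to free up
            finish, released = heapq.heappop(running)
            clock = max(clock, finish)
            free += released
        total_wait += clock - arrival
        heapq.heappush(running, (clock + runtime, nodes))
        free -= nodes
    return total_wait / len(jobs)

# Three 2-node jobs arriving together on a 4-node machine: the third
# job must wait for the first to finish.
avg_wait = simulate_fcfs([(0.0, 2, 10.0), (0.0, 2, 10.0), (0.0, 2, 10.0)], 4)
```

Evaluating a different policy (backfilling, priorities, fair share) amounts to swapping the queue discipline in this loop, which is exactly the kind of experimentation a generic simulator is meant to make cheap.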

  2. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1988-01-01

    Analytical, numerical and experimental studies were performed on two classes of high temperature materials processing furnaces. The research concentrates on a commercially available high temperature furnace using zirconia as the heating element and an arc furnace based on a ST International tube welder. The zirconia furnace was delivered and work is progressing on schedule. The work on the arc furnace was initially stalled due to the unavailability of the NASA prototype, which is actively being tested aboard the KC-135 experimental aircraft. A proposal was written and funded to purchase an additional arc welder to alleviate this problem. The ST International weld head and power supply were received and testing will begin in early November. The first 6 months of the grant are covered.

  3. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
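The smooth/restrict/solve/prolong pattern that AMG parallelizes can be illustrated on its structured-grid ancestor: a geometric two-grid cycle for the 1-D Poisson equation with weighted-Jacobi smoothing. This is a didactic sketch, not code from the chapter; AMG differs in building the coarse level from the matrix rather than from the mesh.

```python
# Geometric two-grid cycle for -u'' = f on (0,1) with zero Dirichlet
# boundaries: pre-smooth, restrict the residual, solve the coarse
# system exactly, prolong the correction, post-smooth. Didactic sketch.

def jacobi(u, f, h2, sweeps=2, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for the tridiagonal Poisson operator."""
    n = len(u)
    for _ in range(sweeps):
        nxt = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            nxt[i] = (1 - w) * u[i] + w * 0.5 * (left + right + h2 * f[i])
        u = nxt
    return u

def residual(u, f, h2):
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out.append(f[i] - (2.0 * u[i] - left - right) / h2)
    return out

def solve_tridiag(rc, H2):
    """Direct Thomas solve of the coarse Poisson system."""
    m = len(rc)
    diag, off = 2.0 / H2, -1.0 / H2
    a, b = [0.0] * m, rc[:]
    a[0] = diag
    for j in range(1, m):
        factor = off / a[j - 1]
        a[j] = diag - factor * off
        b[j] -= factor * b[j - 1]
    x = [0.0] * m
    x[-1] = b[-1] / a[-1]
    for j in range(m - 2, -1, -1):
        x[j] = (b[j] - off * x[j + 1]) / a[j]
    return x

def two_grid(u, f, h):
    h2 = h * h
    u = jacobi(u, f, h2)                          # pre-smooth
    r = residual(u, f, h2)
    m = (len(u) - 1) // 2                         # coarse interior points
    rc = [0.25 * (r[2 * j] + 2.0 * r[2 * j + 1] + r[2 * j + 2])
          for j in range(m)]                      # full-weighting restriction
    ec = solve_tridiag(rc, (2.0 * h) ** 2)        # exact coarse-grid solve
    e = [0.0] * len(u)                            # linear interpolation
    for j in range(m):
        e[2 * j + 1] = ec[j]
    for j in range(0, len(u), 2):
        lo = ec[j // 2 - 1] if j // 2 - 1 >= 0 else 0.0
        hi = ec[j // 2] if j // 2 < m else 0.0
        e[j] = 0.5 * (lo + hi)
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h2)                       # post-smooth
```

The smoother kills high-frequency error and the coarse solve handles the smooth components, which is why each cycle reduces the residual by a large, grid-independent factor; AMG reproduces this behavior on unstructured problems, and the parallelization challenges the chapter surveys live in the coarsening and smoothing steps.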

  4. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  5. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  6. High-performance deployable structures for the support of high-concentration ratio solar array modules

    NASA Technical Reports Server (NTRS)

    Mobrem, M.

    1985-01-01

A study conducted on high-performance deployable structures for the support of high-concentration ratio solar array modules is discussed. Serious consideration is being given to the use of high-concentration ratio solar array modules for applications such as space stations. These concentrator solar array designs offer the potential of reduced cost, reduced electrical complexity, higher power per unit area, and improved survivability. Arrays of concentrators, such as the miniaturized Cassegrainian concentrator modules, present a serious challenge to the structural design because their mass per unit area (5.7 kg/square meter) is higher than that of flexible solar array blankets, and the requirement for accurate orientation towards the Sun (plus or minus 0.5 degree) requires structures with improved accuracy potentials. In addition, use on a space station requires relatively high structural natural frequencies to avoid deleterious interactions with control systems and other large structural components. The objective here is to identify and evaluate conceptual designs of structures suitable for deploying and accurately supporting high-concentration ratio solar array modules.

  7. Process Performance of Optima XEx Single Wafer High Energy Implanter

    SciTech Connect

    Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.; David, J.; Rubin, L. M.; Jang, I. S.; Cha, J. C.; Joo, Y. H.; Lee, A. B.; Jin, S. W.

    2011-01-07

    To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.

  8. Highly Accurate Quantum-Chemical Calculations for the Interstellar Molecules C_3 and l-C_3H^+

    NASA Astrophysics Data System (ADS)

    Botschwina, Peter; Schröder, Benjamin; Stein, Christopher; Sebald, Peter; Oswald, Rainer

    2014-06-01

Composite potential energy surfaces with coupled-cluster contributions up to CCSDTQP were constructed for C_3 and l-C_3H^+ and used in the calculation of spectroscopic properties. The use of very large AO basis sets and the consideration of higher-order correlation beyond CCSD(T) are of utmost importance for C_3 in order to arrive at quantitative spectroscopic data. The first detection of l-C_3H^+ in the interstellar medium was reported by Pety et al., who attributed 9 radio lines observed in the Horsehead photodissociation region to that species. That assignment was questioned by the recent theoretical work of Huang et al. However, our more accurate calculations are well in support of the original assignment. The calculated ground-state rotational constant is B_0 = 11248 MHz, only 0.03% off from the radio astronomical value of 11244.9512±0.0015 MHz. The ratio of centrifugal distortion constants D_0(exp.)/D_e(theor.) of 1.8 is quite large, but reasonable in comparison with C_3O and C_3. J. Pety, P. Gratier, V. Guzmán, E. Roueff, M. Gerin et al., Astron. Astrophys. 2012, A68, 1-8. X. Huang, R. C. Fortenberry, T. J. Lee, Astrophys. J. Lett. 2013, 768:L25, 1-5. P. Botschwina, R. Oswald, J. Chem. Phys. 2008, 129, 044305
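The quoted 0.03% agreement is simple arithmetic on the two rotational constants:

```python
# Fractional deviation of the calculated rotational constant from the
# radio-astronomical value, using the two numbers quoted in the abstract.
b_theory = 11248.0       # MHz, calculated ground-state B_0
b_obs = 11244.9512       # MHz, radio-astronomical value
deviation = abs(b_theory - b_obs) / b_obs
print(round(100 * deviation, 3))   # 0.027 (percent), i.e. the ~0.03% quoted
```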

  9. Full house of fears: evidence that people high in attachment anxiety are more accurate in detecting deceit.

    PubMed

    Ein-Dor, Tsachi; Perry, Adi

    2014-04-01

Lying is deep-rooted in our nature, as over 90% of all people lie. Laypeople, however, do only slightly better than chance when detecting lies and deceptions. Recently, attachment anxiety was linked with people's hypervigilance toward threat-related cues. Accordingly, we tested whether attachment anxiety predicts people's ability to detect deceit and to play poker, a game that is based on players' ability to detect cheating. In Study 1, 202 participants watched a series of interpersonal interactions that comprised subtle clues to the honesty or dishonesty of the speakers. In Study 2, 58 participants watched clips in which such cues were absent. Participants were asked to decide whether the main characters were honest or dishonest. In Study 3, we asked 35 semiprofessional poker players to participate in a poker tournament, and then we predicted the amount of money won during the game. Results indicated that attachment anxiety, but not other types of anxiety, predicted more accurate detection of deceitful statements (Studies 1-2) and a greater amount of money won during a game of poker (Study 3). Results are discussed in relation to the possible adaptive functions of certain personality characteristics, such as attachment anxiety, often viewed as undesirable. PMID:23437786

  10. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243
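    The bias-derivation step can be illustrated with a toy model. The sketch below is an assumption about the general form of such an iterative alignment, not the authors' actual algorithm: it recovers per-detector time biases from pairwise coincidence time differences dt[(i, j)] ≈ bias[i] − bias[j], and omits the TDC nonlinearity calibration described in the abstract.

    ```python
    def estimate_biases(pair_dt, n_detectors, iters=50):
        """Iteratively recover per-detector time biases (ps) from measured
        coincidence time differences pair_dt[(i, j)] ~ bias[i] - bias[j].
        Toy sketch only; TDC nonlinearity calibration is omitted."""
        bias = [0.0] * n_detectors
        for _ in range(iters):
            resid = [0.0] * n_detectors  # accumulated residual per detector
            count = [0] * n_detectors
            for (i, j), dt in pair_dt.items():
                r = dt - (bias[i] - bias[j])
                resid[i] += r
                count[i] += 1
                resid[j] -= r
                count[j] += 1
            for k in range(n_detectors):
                if count[k]:
                    bias[k] += 0.5 * resid[k] / count[k]  # damped update
        # biases are only defined up to a global offset; pin detector 0 at zero
        return [b - bias[0] for b in bias]

    # synthetic check: recover known biases (ps) from noise-free pair data
    true = [0.0, 40.0, -25.0, 10.0]
    pairs = {(i, j): true[i] - true[j] for i in range(4) for j in range(i + 1, 4)}
    est = estimate_biases(pairs, 4)
    print([round(b, 1) for b in est])  # recovers [0.0, 40.0, -25.0, 10.0]
    ```

    In practice the measured time differences are noisy and the detector-pair graph is sparse (only geometrically valid coincidence pairs are used), but the iteration converges the same way to a least-squares set of biases.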

  11. High performance amorphous selenium lateral photodetector

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Shiva; Allec, Nicholas; Karim, Karim S.

    2012-03-01

    Lateral amorphous selenium (a-Se) detectors based on the metal-semiconductor-metal (MSM) device structure have been studied for indirect-detection medical imaging applications. These detectors have attracted interest due to their simple structure, ease of fabrication, high speed, low dark current, low capacitance per unit area, and better light utilization. The lateral device structure has the benefit that the electrode spacing may be easily controlled to reduce the required bias for a given desired electric field. In indirect-conversion x-ray imaging, the scintillator is coupled to the top of the a-Se MSM photodetector, which itself is integrated on top of the thin-film-transistor (TFT) array. The carriers generated at the top surface of the a-Se layer experience a field that is parallel to the surface and does not initially sweep them away from the surface. These carriers may therefore recombine or become trapped in surface states and change the field at the surface, which may degrade the performance of the photodetector. In addition, due to the finite width of the electrodes, the fill factor of the device is less than unity. In this study we examine the effect of lateral carrier drift and of the fill factor on the photodetector performance. The impact of field magnitude on the performance is also investigated.

  12. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to its importance to California's economy, energy demand in this market segment is large and growing (estimated at 9,400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations, primarily safety driven, that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. Cleanrooms are operated in California by many industries, including semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  13. High-performance vertical organic transistors.

    PubMed

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing remarkable transistor operation with behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. PMID:23637074

  14. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.

  15. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  16. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases
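    The landmark residual distance error (RDE) quoted above is, as reported, a mean ± standard deviation over matched landmark pairs; a minimal sketch of that metric (the function name and toy coordinates are illustrative, not from the paper):

    ```python
    import math

    def residual_distance_error(matched, reference):
        """Landmark RDE: mean and standard deviation of the point-to-point
        distances between matched and reference landmark positions (mm)."""
        d = [math.dist(p, q) for p, q in zip(matched, reference)]
        mean = sum(d) / len(d)
        std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
        return mean, std

    # toy example with three 3D landmarks (coordinates in mm)
    matched = [(0.0, 0.0, 0.5), (10.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
    reference = [(0.0, 0.0, 0.0), (10.5, 0.0, 0.0), (0.0, 5.5, 0.0)]
    mean, std = residual_distance_error(matched, reference)
    print(f"RDE = {mean:.2f} ± {std:.2f} mm")
    ```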

  17. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling designed for such computers (SPECFEM3D, SES3D) have matured and become widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  18. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 107) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881

  19. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  20. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  1. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report on our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with λ/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational and awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations, including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experiences for Undergraduates Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  2. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm(3) imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  3. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  4. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  5. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulations in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse-quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
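    The abstract does not state SpheroidSizer's exact volume formula, but a common convention for spheroids measured by major and minor axial lengths models them as ellipsoids of revolution about the major axis, V = (π/6)·L·w² for full-length axes L and w. A minimal sketch under that assumption:

    ```python
    import math

    def spheroid_volume(major_axis, minor_axis):
        """Volume of a tumor spheroid modeled as an ellipsoid of revolution
        about its major axis; axes are full lengths, not semi-axes.
        Assumed convention, not necessarily SpheroidSizer's formula."""
        return math.pi / 6.0 * major_axis * minor_axis ** 2

    # e.g. a spheroid measured at 600 µm (major) by 500 µm (minor)
    v = spheroid_volume(600.0, 500.0)
    print(f"volume ≈ {v:.3e} µm^3")
    ```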

  6. An isotopic-independent highly accurate potential energy surface for CO2 isotopologues and an initial (12)C(16)O2 infrared line list.

    PubMed

    Huang, Xinchuan; Schwenke, David W; Tashkun, Sergey A; Lee, Timothy J

    2012-03-28

    An isotopic-independent, highly accurate potential energy surface (PES) has been determined for CO(2) by refining a purely ab initio PES with selected, purely experimentally determined rovibrational energy levels. The purely ab initio PES is denoted Ames-0, while the refined PES is denoted Ames-1. Detailed tests are performed to demonstrate the spectroscopic accuracy of the Ames-1 PES. It is shown that Ames-1 yields σ(rms) (root-mean-squares error) = 0.0156 cm(-1) for 6873 J = 0-117 (12)C(16)O(2) experimental energy levels, even though less than 500 (12)C(16)O(2) energy levels were included in the refinement procedure. It is also demonstrated that, without any additional refinement, Ames-1 yields very good agreement for isotopologues. Specifically, for the (12)C(16)O(2) and (13)C(16)O(2) isotopologues, spectroscopic constants G(v) computed from Ames-1 are within ±0.01 and 0.02 cm(-1) of reliable experimentally derived values, while for the (16)O(12)C(18)O, (16)O(12)C(17)O, (16)O(13)C(18)O, (16)O(13)C(17)O, (12)C(18)O(2), (17)O(12)C(18)O, (12)C(17)O(2), (13)C(18)O(2), (13)C(17)O(2), (17)O(13)C(18)O, and (14)C(16)O(2) isotopologues, the differences are between ±0.10 and 0.15 cm(-1). To our knowledge, this is the first time a polyatomic PES has been refined using such high J values, and this has led to new challenges in the refinement procedure. An initial high quality, purely ab initio dipole moment surface (DMS) is constructed and used to generate a 296 K line list. For most bands, experimental IR intensities are well reproduced for (12)C(16)O(2) using Ames-1 and the DMS. For more than 80% of the bands, the experimental intensities are reproduced with σ(rms)(ΔI) < 20% or σ(rms)(ΔI∕δ(obs)) < 5. A few exceptions are analyzed and discussed. Directions for future improvements are discussed, though it is concluded that the current Ames-1 and the DMS should be useful in analyzing and assigning high-resolution laboratory or astronomical spectra. PMID:22462861
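    The σ(rms) figure of merit used above is a plain root-mean-square error over energy levels; a minimal sketch (the three levels below are illustrative toy values, not data from the paper):

    ```python
    import math

    def sigma_rms(computed, observed):
        """Root-mean-square error (cm^-1) between computed and experimentally
        derived rovibrational energy levels, as used to assess the PES."""
        n = len(computed)
        return math.sqrt(sum((c - o) ** 2 for c, o in zip(computed, observed)) / n)

    # toy illustration with three levels (cm^-1)
    calc = [1285.41, 1388.18, 2349.14]
    obs = [1285.40, 1388.19, 2349.16]
    print(f"sigma_rms = {sigma_rms(calc, obs):.4f} cm^-1")
    ```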

  7. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be. PMID:20127598

  8. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  9. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1M phosphate buffer at pH 2.5 in a 50 μm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and, thus, brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.

  10. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  11. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters show better performance in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  12. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an I(sub sp) goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec I(sub sp) goal.

  13. High performance zinc air fuel cell stack

    NASA Astrophysics Data System (ADS)

    Pei, Pucheng; Ma, Ze; Wang, Keliang; Wang, Xizhong; Song, Mancun; Xu, Huachi

    2014-03-01

    A zinc air fuel cell (ZAFC) stack with inexpensive manganese dioxide (MnO2) as the catalyst is designed, in which the circulation flowing potassium hydroxide (KOH) electrolyte carries the reaction product away and acts as a coolant. Experiments are carried out to investigate the characteristics of polarization, constant current discharge and dynamic response, as well as the factors affecting the performance and uniformity of individual cells in the stack. The results reveal that the peak power density can be as high as 435 mW cm-2 according to the area of the air cathode sheet, and the influence factors on cell performance and uniformity are cell locations, filled state of zinc pellets, contact resistance, flow rates of electrolyte and air. It is also shown that the time needed for voltages to reach steady state and that for current step-up or current step-down are both in milliseconds, indicating the ZAFC can be excellently applied to vehicles with rapid dynamic response demands.

  14. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance the fault tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
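    As a rough illustration only (the paper's actual implementation is not reproduced here, and all names below are hypothetical), two of the policies mentioned above, round-robin rail allocation and message striping, can be sketched in a few lines:

    ```python
    from itertools import cycle

    def make_round_robin_allocator(num_rails):
        """Round-robin rail allocation: each outgoing message is assigned
        to the next rail in cyclic order, spreading load evenly."""
        rail_cycle = cycle(range(num_rails))
        return lambda: next(rail_cycle)

    def stripe_message(payload, num_rails):
        """Striping: split one large message into num_rails chunks so that
        every rail carries part of it, reducing serialization latency."""
        chunk = (len(payload) + num_rails - 1) // num_rails
        return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

    alloc = make_round_robin_allocator(4)
    rails = [alloc() for _ in range(8)]        # rails 0,1,2,3 then 0,1,2,3 again
    chunks = stripe_message(b"x" * 1000, 4)    # four 250-byte chunks
    ```

    The dynamic and reservation-based schemes compared in the paper additionally track per-rail load and end-point availability, which this sketch omits.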

  15. High definition television: Evaluation for remote task performance

    NASA Astrophysics Data System (ADS)

    Draper, J. V.; Handel, S. J.; Herndon, J. N.

    High definition television (HDTV) transmits a video image with more than twice the number of horizontal scan lines that standard resolution television provides (1125 for HDTV to 525 for standard resolution television), with impressive picture quality improvement. These experimental activities are part of a joint collaboration between the U.S. Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan in the field of the Nuclear Fuel Cycle: Reprocessing Technology. Objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate shading and color pattern reproduction. Because television is a key component for teleoperator performance, picture quality improvement could improve speed and accuracy. This paper describes three experiments which evaluated the impact of HDTV on remote task performance. HDTV was compared to standard resolution, monochromatic television and standard resolution, stereoscopic, monochromatic television. Tasks included judgement of depth in a televised scene, visual inspection, and a remote maintenance task. The experiments show that HDTV can improve performance. HDTV is superior to monoscopic, monochromatic, standard resolution television and to stereoscopic television for remote inspection tasks; it is less proficient than stereo television for distance matching. HDTV leads to lower error rate during tasks but does not reduce time required to complete tasks.

  16. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced, showing the heat fluxes generated by the plastic dissipation in the stir zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  17. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to easily and accurately evaluate energy-efficient measures for K-5 schools, which would contribute to the…

  18. Ion-pair high-performance liquid chromatographic analysis of aspartame and related products.

    PubMed

    Verzella, G; Bagnasco, G; Mangia, A

    1985-12-01

    A simple and accurate quantitative determination of aspartame (L-alpha-aspartyl-L-phenylalanine methyl ester), a new artificial sweetener, is described. The method, which is based on ion-pair high-performance liquid chromatography, allows the determination of aspartame in finished bulk and dosage forms, and the detection of a few related products at levels down to 0.1%.

  19. The distribution of highly stable millimeter-wave signals over different optical fiber links with accurate phase-correction

    NASA Astrophysics Data System (ADS)

    Liu, Zhangweiyi; Wang, Xiaocheng; Sun, Dongning; Dong, Yi; Hu, Weisheng

    2015-08-01

    We have demonstrated an optically generated, highly stable millimeter-wave signal distribution system, which transfers a 300 GHz signal to two remote ends over different optical fiber links for signal stability comparison. The transmission delay variations of each fiber link caused by temperature and mechanical perturbations are compensated by a high-precision phase-correction system. The residual phase noise between the two remote-end signals is detected by dual-heterodyne phase error transfer and reaches -46 dBc/Hz at 1 Hz frequency offset from the carrier. The relative instability is 8×10(-17) at 1000 s averaging time.

  20. High Power Flex-Propellant Arcjet Performance

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  1. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations on a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  2. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPU are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
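    The LARES implementation itself is GPU-based and not reproduced here; as a hedged, CPU-only sketch of the matching step the abstract describes, nearest-neighbour descriptor matching with a ratio test might look like the following (function name and threshold are illustrative, not from the paper):

    ```python
    import numpy as np

    def match_features(desc_a, desc_b, ratio=0.8):
        """Nearest-neighbour matching of feature descriptors with a ratio
        test: accept a match only if the best candidate is clearly closer
        than the second best, which suppresses ambiguous correspondences."""
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            order = np.argsort(dists)
            best, second = order[0], order[1]
            if dists[best] < ratio * dists[second]:
                matches.append((i, int(best)))
        return matches

    # Toy descriptors: two features in image A, three candidates in image B.
    desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
    desc_b = np.array([[0.1, 0.0], [10.0, 10.1], [50.0, 50.0]])
    pairs = match_features(desc_a, desc_b)
    ```

    The matched pairs would then feed a robust estimator of the geometric transformation that aligns the images, the step the abstract refers to.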

  3. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  4. New high performance Si for optical devices

    NASA Astrophysics Data System (ADS)

    Tenma, T.; Matsuzaka, M.; Sako, R.; Takase, K.; Chiba, K.

    2016-05-01

    Against the backdrop of a growing demand in the areas of smart buildings, security, vehicle installation, and other applications, the market for far infrared cameras is expected to grow significantly in the future. However, since germanium (Ge) and chalcogenide glass, which have been used as the lens materials of far infrared cameras, are very expensive or highly toxic, there are problems in supporting the growing demand. We have therefore focused attention on silicon, which is inexpensive and less toxic. Although silicon has been used as a lens material for far infrared cameras, some problems remain to be solved: Cz silicon is inexpensive but delivers low transmittance, and Fz silicon delivers sufficient transmittance but is expensive. We have developed New Cz silicon, which delivers transmittance as high as Fz silicon while being as inexpensive as conventional Cz silicon. We have already started sample work with companies both in Japan and overseas and have obtained excellent performance results. Mass production is scheduled to start in this fiscal year.

  5. High performance BGMI circuit for VLWIR FPAs

    NASA Astrophysics Data System (ADS)

    Hao, Li-chao; Chen, Hong-lei; Huang, Ai-bo; Zhang, Jun-ling; Ding, Rui-jun

    2013-09-01

    An improved CMOS readout integrated circuit (ROIC) for N-on-P very long wavelength infrared (VLWIR) detectors is designed, which has the ability to operate with simple background suppression. It increases the integration time and the signal-to-noise ratio (SNR) of the image data. A buffered gate modulation input (BGMI) cell as the input circuit provides a low input resistance, high injection efficiency, and precise biasing voltage to the photodiode. By theoretically analyzing the characteristic parameters of MOS devices at low temperature, a high-gain feedback amplifier is devised which uses a differential stage to provide the inverting gain, improving linearity and providing tight control of the detector bias. The final chip is fabricated with HHNEC 0.35 μm 1P4M process technology. The measurement results of the fabricated readout chip at 50 K have successfully verified both the readout function and the performance improvement. With a 5.0 V power supply, the ROIC provides an output dynamic range over 2.5 V. At the same time, the total power dissipation is less than 200 mW, and the maximum readout speed is more than 2.5 MHz.

  6. High Performance Circularly Polarized Microstrip Antenna

    NASA Technical Reports Server (NTRS)

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  7. Low cost, high performance far infrared microbolometer

    NASA Astrophysics Data System (ADS)

    Roer, Audun; Lapadatu, Adriana; Elfving, Anders; Kittilsland, Gjermund; Hohler, Erling

    2010-04-01

    Far infrared (FIR) is becoming more widely accepted within the automotive industry as a powerful sensor to detect Vulnerable Road Users like pedestrians and bicyclists as well as animals. The main focus of FIR system development lies in reducing the cost of its components, and this will involve optimizing all aspects of the system. Decreased pixel size, improved 3D process integration technologies and improved manufacturing yields will produce the necessary cost reduction on the sensor to enable high market penetration. The improved 3D process integration allows a higher fill factor and improved transmission/absorption properties. Together with the high Thermal Coefficient of Resistance (TCR) and low 1/f noise properties provided by the monocrystalline silicon germanium (SiGe) thermistor material, these lead to bolometer performances beyond those of existing devices. The thermistor material is deposited and optimized on an IR wafer separate from the read-out integrated circuit (ROIC) wafer. The IR wafer is transferred to the ROIC using CMOS compatible processes and materials, utilizing a low temperature wafer bonding process. Long term vacuum sealing obtained by wafer scale packaging enables further cost reductions and improved quality. The approach allows independent optimization of ROIC and thermistor material processing and is compatible with existing MEMS foundries, allowing fast time to market.

  8. Accurate high-throughput identification of parallel G-quadruplex topology by a new tetraaryl-substituted imidazole.

    PubMed

    Hu, Ming-Hao; Chen, Shuo-Bin; Wang, Yu-Qing; Zeng, You-Mei; Ou, Tian-Miao; Li, Ding; Gu, Lian-Quan; Huang, Zhi-Shu; Tan, Jia-Heng

    2016-09-15

    G-quadruplex nucleic acids are four-stranded DNA or RNA secondary structures that are formed in guanine-rich sequences. These structures exhibit extensive structural polymorphism and play a pivotal role in the control of a variety of cellular processes. To date, diverse approaches for high-throughput identification of G-quadruplex structures have been successfully developed, but high-throughput methods for further characterization of their topologies are still lacking. In this study, we report a new tetra-arylimidazole probe psIZCM-1, which was found to display significant and distinctive changes in both the absorption and the fluorescence spectra in the presence of parallel G-quadruplexes but show insignificant changes upon interactions with anti-parallel G-quadruplexes or other non-quadruplex oligonucleotides. In view of this dual-output feature, we used psIZCM-1 to identify the parallel G-quadruplexes from a large set of 314 oligonucleotides (including 300 G-quadruplex-forming oligonucleotides and 14 non-quadruplex oligonucleotides) via a microplate reader and accordingly established a high-throughput method for the characterization of parallel G-quadruplex topologies. The accuracy of this method was greater than 95%, which was much higher than that of the commercial probe NMM. To make the approach more practical, we further combined psIZCM-1 with another G-quadruplex probe IZCM-7 to realize the high-throughput classification of parallel, anti-parallel G-quadruplexes and non-quadruplex structures.

  9. High performance surface inspection method for thin-film sensors

    NASA Astrophysics Data System (ADS)

    Wieser, Volkmar; Larndorfer, Stefan; Moser, Bernhard

    2007-02-01

    Thin-film sensors for use in automotive or aeronautic applications must conform to very high quality standards. Due to defects that cannot be addressed by conventional electronic measurements, an accurate optical inspection is imperative to ensure the long-term quality of the produced thin-film sensor. In this particular case, resolutions of 1 μm per pixel are necessary to meet the required high quality standards. Furthermore, it has to be guaranteed that defects are detected robustly with high reliability. In this paper, a new method is proposed that solves the problem of handling local deformations due to production variabilities without having to use computationally intensive local image registration operations. The main idea of this method is based on a combination of efficient morphological preprocessing and a multi-step comparison strategy based on logical implication. The main advantage of this approach is that the neighborhood operations that provide the robustness of the image comparison can be computed in advance and stored in a modified reference image. By virtue of this approach, no further neighborhood operations have to be carried out on the acquired test image during inspection time. A systematic experimental study shows that this method is superior to existing approaches concerning reliability, robustness, and computational efficiency. As a result, the requirements of high-resolution inspection and high-performance throughput while accounting for local deformations are met very well by the implemented inspection system. The work is substantiated with theoretical arguments and a comprehensive analysis of the obtained performance and practical usability in the above-mentioned, challenging industrial environment.
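    The combination of morphological precomputation and implication-based comparison can be illustrated, in a much simplified, assumed form that is not the authors' actual code, on binary images: the reference is dilated once in advance, and at inspection time a test pixel is flagged only if it is set while no reference pixel within the tolerance neighbourhood is.

    ```python
    import numpy as np

    def dilate(binary, k=1):
        """Binary dilation with a (2k+1) x (2k+1) square structuring element.
        This is the neighbourhood operation that can be precomputed once and
        stored as a modified reference image."""
        h, w = binary.shape
        padded = np.pad(binary, k)
        out = np.zeros_like(binary)
        for dy in range(2 * k + 1):
            for dx in range(2 * k + 1):
                out |= padded[dy:dy + h, dx:dx + w]
        return out

    def defect_map(test, reference, k=1):
        """Comparison by logical implication: 'test pixel set' must imply
        'some reference pixel within distance k is set'.  Violations are
        reported as defects, so small local deformations are tolerated
        without any image registration at inspection time."""
        return test & ~dilate(reference, k)

    reference = np.zeros((5, 5), dtype=bool)
    reference[2, 2] = True
    test = reference.copy()
    test[2, 3] = True    # shifted by one pixel: tolerated, not a defect
    test[0, 0] = True    # far from any reference structure: a defect
    defects = defect_map(test, reference, k=1)
    ```

    Because the dilation depends only on the reference, it can be done offline, which matches the paper's point that no neighbourhood operations remain on the acquired test image during inspection.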

  10. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
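    For intuition only, the ACC transformation of a PSSM can be sketched as follows; this is a generic reading of the technique, and the exact normalization and lag range may differ from the paper's:

    ```python
    import numpy as np

    def acc_features(pssm, max_lag=2):
        """Auto cross covariance (ACC) transform: turns a variable-length
        L x 20 position-specific score matrix into a fixed-length vector of
        lagged covariances between the 20 score columns."""
        L, d = pssm.shape
        centered = pssm - pssm.mean(axis=0)
        feats = []
        for g in range(1, max_lag + 1):
            for j1 in range(d):
                for j2 in range(d):
                    # j1 == j2 gives an auto covariance term,
                    # j1 != j2 a cross covariance term
                    feats.append(np.dot(centered[:-g, j1], centered[g:, j2]) / (L - g))
        return np.array(feats)

    rng = np.random.default_rng(0)
    toy_pssm = rng.normal(size=(50, 20))   # stand-in for a 50-residue PSSM
    vec = acc_features(toy_pssm, max_lag=2)
    # 20 x 20 column pairs per lag, 2 lags -> 800 features regardless of L
    ```

    The fixed-length vector would then feed SVM-RFE for feature ranking and an SVM classifier, as the abstract describes.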

  10. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    SciTech Connect

    Bell, Ronald E.

    2014-07-11

    A high-throughput spectrometer for the 400-820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm-1 grating is matched with fast f /1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy of ≤ 0.075 arc seconds. A high quantum efficiency, low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount behind the entrance slit. The computer-controlled hardware allows automated control of wavelength, timing, and f-number, as well as automated data collection and wavelength calibration.

  12. Accurate control of multishelled ZnO hollow microspheres for dye-sensitized solar cells with high efficiency.

    PubMed

    Dong, Zhenghong; Lai, Xiaoyong; Halpert, Jonathan E; Yang, Nailiang; Yi, Luoxin; Zhai, Jin; Wang, Dan; Tang, Zhiyong; Jiang, Lei

    2012-02-21

    A series of multishelled ZnO hollow microspheres with controlled shell number and inter-shell spacing has been successfully prepared by a simple carbonaceous-microsphere templating method. Their large surface area and complex multishelled hollow structure enable them to load sufficient dye and to multi-reflect light, enhancing light harvesting and yielding a high conversion efficiency of up to 5.6% when used in dye-sensitized solar cells. PMID:22266874

  13. Assignment of resonances in dissociative recombination of HD⁺ ions: High-resolution measurements compared with accurate computations

    SciTech Connect

    Waffeu Tamo, F. O.; Buhr, H.; Schwalm, D.; Motapon, O.; Altevogt, S.; Andrianarijaona, V. M.; Grieser, M.; Lammich, L.; Lestinsky, M.; Motsch, M.; Novotny, S.; Orlov, D. A.; Pedersen, H. B.; Sprenger, F.; Weigel, U.; Wolf, A.; Nevo, I.; Urbain, X.; Schneider, I. F.

    2011-08-15

    The collision-energy resolved rate coefficient for dissociative recombination of HD⁺ ions in the vibrational ground state is measured using the photocathode electron target at the heavy-ion storage ring TSR. Rydberg resonances associated with rovibrational excitation of the HD⁺ core are scanned as a function of the electron collision energy with an instrumental broadening below 1 meV in the low-energy limit. The measurement is compared to calculations using multichannel quantum defect theory, accounting for rotational structure and interactions and considering the six lowest rotational energy levels as initial ionic states. Using thermal-equilibrium level populations at 300 K to approximate the experimental conditions, close correspondence between calculated and measured structures is found up to the first vibrational excitation threshold of the cations near 0.24 eV. Detailed assignments, including naturally broadened and overlapping Rydberg resonances, are performed for all structures up to 0.024 eV. Resonances from purely rotational excitation of the ion core are found to have strengths similar to those involving vibrational excitation. A dominant low-energy resonance is assigned to contributions from excited rotational states only. The results indicate strong modifications in the energy dependence of the dissociative recombination rate coefficient through the rotational excitation of the parent ions, and underline the need for studies with rotationally cold species to obtain results reflecting low-temperature ionized media.

  14. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    NASA Astrophysics Data System (ADS)

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows around a helicopter rotor in forward flight. In order to enforce periodic rotation and pitching of the rotor and relative motion between rotor blades, the moving overset grid technique is extended, where a special judgement criterion is introduced near the surface of the blade grid during the search for donor cells with the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to exchange flow information between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time-stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a non-variable-pitch rotor and a periodically variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and quickly reach periodic solutions.
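    The core fifth-order WENO-Z reconstruction the abstract refers to can be sketched for a single scalar stencil. The smoothness indicators, ideal weights, and the global indicator τ₅ below are the standard ones from the WENO-Z literature; this is a minimal scalar sketch, not the authors' rotor solver.

```python
import numpy as np

def weno_z_left(f, eps=1e-40):
    """Fifth-order WENO-Z reconstruction of the left state at x_{i+1/2}
    from the 5-point stencil f = (f_{i-2}, ..., f_{i+2})."""
    f0, f1, f2, f3, f4 = f
    # Jiang-Shu smoothness indicators of the three sub-stencils.
    b0 = 13/12*(f0 - 2*f1 + f2)**2 + 1/4*(f0 - 4*f1 + 3*f2)**2
    b1 = 13/12*(f1 - 2*f2 + f3)**2 + 1/4*(f1 - f3)**2
    b2 = 13/12*(f2 - 2*f3 + f4)**2 + 1/4*(3*f2 - 4*f3 + f4)**2
    # WENO-Z: the global indicator tau5 sharpens the nonlinear weights.
    tau5 = abs(b0 - b2)
    d = np.array([0.1, 0.6, 0.3])            # ideal linear weights
    alpha = d * (1 + tau5 / (np.array([b0, b1, b2]) + eps))
    w = alpha / alpha.sum()
    # Third-order candidate reconstructions on each sub-stencil.
    q = np.array([(2*f0 - 7*f1 + 11*f2) / 6,
                  ( -f1 + 5*f2 +  2*f3) / 6,
                  (2*f2 + 5*f3 -   f4) / 6])
    return w @ q

print(weno_z_left(np.array([0.0, 1.0, 2.0, 3.0, 4.0])))  # smooth linear data -> 2.5
```

    On smooth data the weights collapse to the ideal ones and the reconstruction is exact for polynomials up to the design order; across a discontinuity the weight of the non-smooth sub-stencil is suppressed, which is what keeps the scheme essentially non-oscillatory.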

  15. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    PubMed

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed: participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed stress-group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed.
Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and

  16. High performance constructed wetlands for cold climates.

    PubMed

    Jenssen, Petter D; Maehlum, Trond; Krogstad, Tore; Vråle, Lasse

    2005-01-01

    In 1991, the first subsurface-flow constructed wetland for the treatment of domestic wastewater was built in Norway. Today, this method is rapidly becoming popular for wastewater treatment in rural Norway, owing to its excellent performance, even during winter, and its low maintenance requirements. The systems can be constructed regardless of site conditions. The Norwegian concept for small constructed wetlands is based on a septic tank followed by an aerobic vertical down-flow biofilter and then a subsurface horizontal-flow constructed wetland. The aerobic biofilter, prior to the subsurface flow stage, is essential to remove BOD and achieve nitrification in a climate where the plants are dormant during the cold season. When designed according to present guidelines, a consistent P-removal of > 90% can be expected for 15 years using natural iron- or calcium-rich sand or a new manufactured lightweight aggregate with a P-sorption capacity that exceeds most natural media. When the media is saturated with P, it can be used as a soil conditioner and P-fertilizer. Nitrogen removal in the range of 40-60% is achieved. Removal of indicator bacteria is high, and < 1000 thermotolerant coliforms/100 ml is normally achieved.

  17. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

    We report our investigations on a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double-resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm³ external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in Ramsey-DR mode Ramsey signals with higher contrast up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10⁻¹³ τ⁻¹/² and 2.4×10⁻¹³ τ⁻¹/² are measured for CW-DR and Ramsey-DR schemes, respectively. In the Ramsey-DR operation, thanks to the separation of light and microwave interactions in time, the light-shift effect has been suppressed, which allows improving the long-term clock stability as compared to CW-DR operation. Implementations in miniature atomic clocks are considered.

  18. Compact high performance spectrometers using computational imaging

    NASA Astrophysics Data System (ADS)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.
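    The coded-aperture compressed-sensing model mentioned above amounts to a sparse linear inverse problem: the mask patterns form a measurement matrix A, and the spectrum is recovered from fewer detector pixels than spectral bins by l1-regularized least squares. A minimal sketch using ISTA (iterative shrinkage-thresholding); the sizes and the random-mask matrix are illustrative, not the hardware described in the paper.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    # Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L       # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, m = 128, 48                              # spectral bins, detector pixels
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, 0.7, 0.5]      # sparse "line spectrum"
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random masks as measurement matrix
y = A @ x_true                              # compressed measurement
x_hat = ista(A, y)
print(np.round(x_hat[[10, 40, 90]], 2))     # recovered line amplitudes
```

    The reconstruction succeeds here because the measurement is static between shots; the abstract's point is precisely that this assumption fails for noisy, fluctuating signals, motivating their alternative super-resolution approach.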

  19. An integrated high performance Fastbus slave interface

    SciTech Connect

    Christiansen, J.; Ljuslin, C.

    1993-08-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip.

  20. High performance composites with active stiffness control.

    PubMed

    Tridech, Charnwit; Maples, Henry A; Robinson, Paul; Bismarck, Alexander

    2013-09-25

    High performance carbon fiber reinforced composites with controllable stiffness could revolutionize the use of composite materials in structural applications. Here we describe a structural material whose stiffness can be actively controlled on demand. Such a material could have applications in morphing wings or deployable structures. A carbon fiber reinforced epoxy composite is described that can undergo an 88% reduction in flexural stiffness at elevated temperatures and fully recover when cooled, with no discernible damage or loss in properties. Once the stiffness has been reduced, the required deformations can be achieved at much lower actuation forces. For this proof-of-concept study, a thin polyacrylamide (PAAm) layer was electrocoated onto carbon fibers that were then embedded into an epoxy matrix via resin infusion. Heating the PAAm coating above its glass transition temperature caused it to soften and allowed the fibers to slide within the matrix. To produce the stiffness change, the carbon fibers were used as resistance heating elements by passing a current through them. When the PAAm coating had softened, the ability of the interphase to transfer load to the fibers was significantly reduced, greatly lowering the flexural stiffness of the composite. By changing the moisture content in the PAAm fiber coating, the temperature at which the PAAm softens, and at which the composite therefore undergoes a reduction in stiffness, can be tuned. PMID:23978266

  1. Can optical diagnosis of small colon polyps be accurate? Comparing standard scope without narrow banding to high definition scope with narrow banding

    PubMed Central

    Ashktorab, Hassan; Etaati, Firoozeh; Rezaeean, Farahnaz; Nouraie, Mehdi; Paydar, Mansour; Namin, Hassan Hassanzadeh; Sanderson, Andrew; Begum, Rehana; Alkhalloufi, Kawtar; Brim, Hassan; Laiyemo, Adeyinka O

    2016-01-01

    AIM: To study the accuracy of using a high definition (HD) scope with narrow band imaging (NBI) vs a standard white light colonoscope without NBI (ST) to predict the histology of colon polyps, particularly those < 1 cm. METHODS: A total of 147 African American patients who were referred to Howard University Hospital for screening, diagnostic, or follow-up colonoscopy during a 12-mo period in 2012 were prospectively recruited. Some patients had multiple polyps, and the total number of polyps was 179. Their colonoscopies were performed by 3 experienced endoscopists who determined the size and stated whether the polyps being removed were hyperplastic or adenomatous, using standard colonoscopes or high definition colonoscopes with NBI. The histopathologic diagnosis was reported by pathologists as part of routine care. RESULTS: Of the participants in the study, 55 (37%) were male, and the median (interquartile range) age was 56 (19-80). Demographic and clinical characteristics, past medical history of patients, and the data obtained by the two instruments were not significantly different, and the two methods detected similar numbers of polyps. With the ST scope 89% of polyps were < 1 cm vs 87% with the HD scope (P = 0.7). The ST scope had a positive predictive value (PPV) and positive likelihood ratio (PLR) of 86% and 4.0 for adenoma compared to 74% and 2.6 for the HD scope. There was a trend of higher sensitivity for the HD scope (68%) compared to the ST scope (53%), with almost the same specificity. The ST scope had a PPV and PLR of 38% and 1.8 for hyperplastic polyp (HPP) compared to 42% and 2.2 for the HD scope. The sensitivity and specificity of the two instruments for HPP diagnosis were similar. CONCLUSION: Our results indicated that the HD scope was more sensitive in the diagnosis of adenoma than the ST scope. Clinical diagnosis of HPP with either scope is less accurate than that of adenoma. Colonoscopy diagnosis is not yet fully matched with the pathologic diagnosis of colon polyps. However with the advancement of both
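    The reported PPV and positive likelihood ratio follow directly from confusion-matrix counts of optical calls against histopathology. A small sketch with hypothetical counts (not the study's actual data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and positive likelihood ratio
    from confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    plr = sens / (1 - spec)          # how much a positive call shifts the odds
    return sens, spec, ppv, plr

# Hypothetical counts for an optical adenoma call vs. histopathology:
sens, spec, ppv, plr = diagnostic_stats(tp=34, fp=6, fn=16, tn=44)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} PLR={plr:.1f}")
```

    Note that PPV, unlike sensitivity and specificity, depends on the prevalence of adenomas in the sample, which is one reason the two scopes can differ on PPV while showing similar specificity.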

  3. Three-dimensional digital holographic aperture synthesis for rapid and highly-accurate large-volume metrology

    NASA Astrophysics Data System (ADS)

    Crouch, Stephen; Kaylor, Brant M.; Barber, Zeb W.; Reibel, Randy R.

    2015-09-01

    Currently, large-volume, high-accuracy three-dimensional (3D) metrology is dominated by laser trackers, which typically utilize a laser scanner and a cooperative reflector to estimate points on a given surface. The dependence on the placement of cooperative targets dramatically limits the speed at which metrology can be conducted. To increase speed, laser scanners or structured illumination systems can be used directly on the surface of interest. Both approaches are restricted in their axial and lateral resolution at longer stand-off distances due to the diffraction limit of the optics used. Holographic aperture ladar (HAL) and synthetic aperture ladar (SAL) can enhance the lateral resolution of an imaging system by synthesizing much larger apertures, digitally combining measurements from multiple smaller apertures. Both of these approaches produce only two-dimensional imagery and are therefore not suitable for large-volume 3D metrology. We combined the SAL and HAL approaches to create a swept-frequency digital holographic 3D imaging system that provides rapid measurement speed for surface coverage with unprecedented axial and lateral resolution at longer standoff ranges. The technique yields a "data cube" of Fourier-domain data, which can be processed with a 3D Fourier transform to reveal a 3D estimate of the surface. In this paper, we provide the theoretical background for the technique and show experimental results based on an ultra-wideband frequency modulated continuous wave (FMCW) chirped heterodyne ranging system showing ~100 micron lateral and axial precisions at >2 m standoff distances.
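    The "3D Fourier transform of the data cube" step can be illustrated with an idealized single scatterer: its return is a complex exponential across the two synthetic-aperture axes and the swept-frequency axis, and one 3D (inverse) FFT concentrates it into a bright voxel at the scatterer's position. The dimensions and scatterer below are illustrative, not the paper's data.

```python
import numpy as np

nx, ny, nf = 32, 32, 64                     # aperture x, aperture y, frequency samples
kx = np.fft.fftfreq(nx)[:, None, None]
ky = np.fft.fftfreq(ny)[None, :, None]
kz = np.fft.fftfreq(nf)[None, None, :]
x0, y0, z0 = 5, 12, 20                      # scatterer position (in voxels)

# Ideal Fourier-domain "data cube" of a single point scatterer.
cube = np.exp(-2j * np.pi * (kx * x0 + ky * y0 + kz * z0))

image = np.fft.ifftn(cube)                  # 3D transform -> spatial image
peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print(peak)                                 # -> (5, 12, 20)
```

    A real surface is a superposition of such returns, so the transform yields a full 3D surface estimate in one shot rather than point-by-point tracking.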

  4. Accurate high level ab initio-based global potential energy surface and dynamics calculations for ground state of CH2(+).

    PubMed

    Li, Y Q; Zhang, P Y; Han, K L

    2015-03-28

    A global many-body expansion potential energy surface is reported for the electronic ground state of CH2(+) by fitting high-level ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pV6Z basis set. The topographical features of the new global potential energy surface are examined in detail and found to be in good agreement with those calculated directly from the raw ab initio energies, as well as with previous calculations available in the literature. In turn, to validate the potential energy surface, a test theoretical study of the reaction CH(+)(X(1)Σ(+))+H((2)S)→C(+)((2)P)+H2(X(1)Σg(+)) has been carried out with the time-dependent wavepacket method on the title potential energy surface. The total integral cross sections and rate coefficients have been calculated; the results indicate that the new potential energy surface can be recommended both for dynamics studies of any type and as a building block for constructing the potential energy surfaces of larger C(+)/H containing systems.

  5. High-performance surface-micromachined inchworm actuator.

    SciTech Connect

    Walraven, Jeremy Allen; Redmond, James Michael; Luck, David L.; Ashurst, William Robert; de Boer, Maarten Pieter; Maboudian, Roya; Corwin, Alex David

    2003-07-01

    This work demonstrates a polycrystalline silicon surface-micromachined inchworm actuator that exhibits high-performance characteristics such as large force (±0.5 millinewtons), large velocity range (0 to ±4.4 mm/sec), large displacement range (±100 microns), small step size (±10, ±40 or ±100 nanometers), low power consumption (nanojoules per cycle), continuous bidirectional operation and relatively small area (600 x 200 μm²). An in situ load spring calibrated on a logarithmic scale from micronewtons to millinewtons, optical microscopy and Michelson interferometry are used to characterize its performance. The actuator consists of a force-amplifying plate that spans two voltage-controlled clamps, and walking is achieved by appropriately sequencing signals to these three components. In the clamps, normal force is borne by equipotential rubbing counterfaces, enabling friction to be measured against load. Using different monolayer coatings, we show that the static coefficient of friction can be changed from 0.14 to 1.04, and that it is load-independent over a broad range. We further find that the static coefficient of friction does not accurately predict the force generated by the actuator and attribute this to nanometer-scale presliding tangential deflections.

  6. High-performance simulations for atmospheric pressure plasma reactor

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav

    Plasma-assisted processing and deposition of materials is an important component of modern industrial applications, with plasma reactors involved in 30% to 40% of manufacturing steps in microelectronics production. The development of new flexible electronics increases the demand for efficient high-throughput deposition methods and roll-to-roll processing of materials. The current work represents an attempt at the practical design and numerical modeling of a plasma enhanced chemical vapor deposition system. The system utilizes plasma at standard pressure and temperature to activate a chemical precursor for protective coatings. A specially designed linear plasma head, consisting of two parallel plates with electrodes placed in a parallel arrangement, is used to resolve the clogging issues of currently available commercial plasma heads, as well as to increase the flow rate of the processed chemicals and to enhance the uniformity of the deposition. A test system is built and discussed in this work. In order to improve the operating conditions of the setup and the quality of the deposited material, we perform numerical modeling of the plasma system. The theoretical and numerical models presented in this work comprehensively describe plasma generation, recombination, and advection in a channel of arbitrary geometry. The number density of plasma species, their energy content, the electric field, and rate parameters are accurately calculated and analyzed in this work. Some interesting engineering outcomes are discussed in connection with the proposed setup. The numerical model is implemented with high-performance parallel techniques and evaluated on a cluster for parallel calculations. The typical performance increase, calculation speed-up, parallel fraction of the code, and overall efficiency of the parallel implementation are discussed in detail.

  7. Topology representing network enables highly accurate classification of protein images taken by cryo electron-microscope without masking.

    PubMed

    Ogura, Toshihiko; Iwasaki, Kenji; Sato, Chikara

    2003-09-01

    In single-particle analysis, a three-dimensional (3-D) structure of a protein is reconstructed from electron microscopy (EM) images. As these images are generally very noisy, the primary step of this 3-D reconstruction is the classification of images according to their Euler angles; the images in each classified group are then averaged to reduce the noise level. In our newly developed classification strategy, we introduce a topology representing network (TRN) method, a modified version of the growing neural gas (GNG) network. In this system, a network structure is automatically determined in response to the input images through a growing process. After learning without a masking procedure, the GNG creates clear averages of the inputs as unit coordinates in multi-dimensional space, which are then utilized for classification. In the process, connections are automatically created between highly related units, and their positions are shifted to where the inputs are distributed in multi-dimensional space. Consequently, several separated groups of connected units are formed. Although the interrelationships of units in this space are not easily understood, we solved this problem by converting the unit positions into two-dimensional (2-D) space and further optimizing the unit positions with the simulated annealing (SA) method. In the optimized 2-D map, visualization of the unit connections provides rich information about clustering. As demonstrated here, this method is clearly superior to both multivariate statistical analysis (MSA) and the self-organizing map (SOM) as a classification method, and it provides the first reliable classification method that can be used without masking for very noisy images. PMID:14572474

  8. Arapan-S: a fast and highly accurate whole-genome assembly software for viruses and small genomes

    PubMed Central

    2012-01-01

    Background Genome assembly is considered to be a challenging problem in computational biology, and has been studied extensively by many researchers. It is extremely difficult to build a general assembler that is able to reconstruct the original sequence instead of many contigs. However, we believe that creating specific assemblers, for solving specific cases, will be much more fruitful than creating general assemblers. Findings In this paper, we present Arapan-S, a whole-genome assembly program dedicated to handling small genomes. It provides only one contig (along with the reverse complement of this contig) in many cases. Although genomes consist of a number of segments, the implemented algorithm can detect all the segments, as we demonstrate for Influenza Virus A. The Arapan-S program is based on the de Bruijn graph. We have implemented a very sophisticated and fast method to reconstruct the original sequence and neglect erroneous k-mers. The method explores the graph by using neither the shortest nor the longest path, but rather a specific and reliable path based on coverage level or k-mer lengths. Arapan-S uses short reads, and it was tested on raw data downloaded from the NCBI Trace Archive. Conclusions Our findings show that the accuracy of the assembly was very high; the result was checked against the European Bioinformatics Institute (EBI) database using the NCBI BLAST Sequence Similarity Search. The identity and the genome coverage were more than 99%. We also compared the efficiency of Arapan-S with other well-known assemblers. In dealing with small genomes, the accuracy of Arapan-S is significantly higher than the accuracy of other assemblers. The assembly process is very fast and requires only a few seconds. Arapan-S is available for free to the public. The binary files for Arapan-S are available through http://sourceforge.net/projects/dnascissor/files/. PMID:22591859
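    The coverage-guided path idea can be illustrated with a toy de Bruijn assembler: count k-mers from the reads, build edges between (k-1)-mers, and greedily follow the highest-coverage outgoing edge from a source node. This is a stand-in for the general idea only, not Arapan-S's actual algorithm (which also handles errors, segments, and reverse complements).

```python
from collections import defaultdict

def assemble(reads, k=4):
    """Toy de Bruijn assembly: walk the highest-coverage outgoing edge."""
    # Count k-mers across all reads (coverage).
    counts = defaultdict(int)
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    # Edges between (k-1)-mer nodes, weighted by k-mer coverage.
    graph = defaultdict(list)
    for kmer, c in counts.items():
        graph[kmer[:-1]].append((kmer[1:], c))
    # Start from a node with no incoming edge (start of the sequence).
    targets = {t for outs in graph.values() for t, _ in outs}
    node = next(n for n in graph if n not in targets)
    contig, seen = node, {node}
    while graph[node]:
        nxt = max(graph[node], key=lambda e: e[1])[0]   # best-covered edge
        if nxt in seen:
            break                                       # avoid cycles
        contig += nxt[-1]
        node = nxt
        seen.add(nxt)
    return contig

genome = "ATGGCGTGCAACT"
reads = [genome[i:i + 7] for i in range(len(genome) - 6)]
print(assemble(reads))                                  # -> ATGGCGTGCAACT
```

    On this error-free toy input the greedy walk recovers the full sequence as a single contig; real data requires the error filtering and path-selection heuristics the paper describes.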

  9. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

    Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of these hospitals. The basic structure of the PACS and the teleradiology being used is outlined. The way the PACS are being used in the hospitals is instructive. The three heaviest users of radiology in the hospital are the wards, including the ICU wards, the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians. For radiology, the same staff handles a 30 to 50 percent greater workload. For the clinicians, 10 to 20 percent of their time is saved in dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital within two seconds, even during the busiest hour of the day. The examination management function restricts the attention of any one user to the examinations that are of interest. It organizes the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. An examination can be acquired, read, and the report dictated in much less than ten minutes. The transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  10. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  11. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used in: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  12. Aeroelastic Calculations of Quiet High- Speed Fan Performed

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Mehmed, Oral; Min, James B.

    2002-01-01

    An advanced high-speed fan was recently designed under a cooperative effort between the NASA Glenn Research Center and Honeywell Engines & Systems. The principal design goals were to improve performance and to reduce fan noise at takeoff. Scale models of the Quiet High-Speed Fan were tested for operability, performance, and acoustics. During testing, the fan showed significantly improved noise characteristics, but a self-excited aeroelastic vibration known as flutter was encountered in the operating range. Flutter calculations were carried out for the Quiet High-Speed Fan using a three-dimensional, unsteady aerodynamic, Reynolds-averaged Navier-Stokes turbomachinery code named "TURBO." The TURBO code can accurately model the viscous flow effects that can play an important role in various aeroelastic problems such as flutter with flow separation, flutter at high loading conditions near the stall line (stall flutter), and flutter in the presence of shock and boundary-layer interaction. Initially, calculations were performed with no blade vibrations. These calculations were at a constant rotational speed and a varying mass flow rate. The mass flow rate was varied by changing the backpressure at the exit boundary of the computational domain. These initial steady calculations were followed by aeroelastic calculations in which the blades were prescribed to vibrate harmonically in a natural mode, at a natural frequency, and with a fixed interblade phase angle between adjacent blades. The AE-prep preprocessor was used to interpolate the in-vacuum mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh and to smoothly propagate the grid deformations from the blade surface to the interior points of the grid. The aeroelastic calculations provided the unsteady aerodynamic forces on the blade surface due to blade vibrations. These forces were vector multiplied with the structural dynamic mode shape to calculate the work done on the blade during
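The "vector multiplied" step reduces to a dot product of the unsteady nodal forces with the mode-shape displacements, summed over the blade-surface mesh. A minimal sketch with a hypothetical data layout (not the TURBO/AE-prep code):

```python
def modal_work(forces, mode_shape):
    """Aerodynamic work done on the blade over the surface mesh:
    each nodal unsteady force (fx, fy, fz) is dotted with the
    mode-shape displacement (mx, my, mz) at that node and summed.
    Positive net work feeds the vibration (flutter); negative net
    work damps it."""
    return sum(fx * mx + fy * my + fz * mz
               for (fx, fy, fz), (mx, my, mz) in zip(forces, mode_shape))
```

The sign of the summed work over a vibration cycle is what distinguishes a fluttering operating point from a damped one.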

  13. Accurate CpG and non-CpG cytosine methylation analysis by high-throughput locus-specific pyrosequencing in plants.

    PubMed

    How-Kit, Alexandre; Daunay, Antoine; Mazaleyrat, Nicolas; Busato, Florence; Daviaud, Christian; Teyssier, Emeline; Deleuze, Jean-François; Gallusci, Philippe; Tost, Jörg

    2015-07-01

    Pyrosequencing permits accurate quantification of DNA methylation of specific regions, where the proportion of the C/T polymorphism induced by sodium bisulfite treatment of DNA reflects the DNA methylation level. The commercially available high-throughput locus-specific pyrosequencing instruments allow for the simultaneous analysis of 96 samples, but restrict the DNA methylation analysis to CpG dinucleotide sites, which can be limiting in many biological systems. In contrast to mammals, where DNA methylation occurs nearly exclusively on CpG dinucleotides, plant genomes harbor DNA methylation also in other sequence contexts, including CHG and CHH motifs, which cannot be evaluated by these pyrosequencing instruments due to software limitations. Here, we present a complete pipeline for accurate CpG and non-CpG cytosine methylation analysis at single-base resolution using high-throughput locus-specific pyrosequencing. The devised approach includes the design and validation of PCR amplification on bisulfite-treated DNA and of pyrosequencing assays, as well as the quantification of the methylation level at every cytosine from the raw peak intensities of the Pyrograms by two newly developed Visual Basic Applications. Our method gives accurate and reproducible results, as exemplified by the cytosine methylation analysis of the promoter regions of two tomato genes (NOR and CNR) encoding transcription regulators of fruit ripening during different stages of fruit development. Our results confirmed a significant and temporally coordinated loss of DNA methylation on specific cytosines during the early stages of fruit development in both promoters, as previously shown by WGBS. This manuscript thus describes the first high-throughput locus-specific DNA methylation analysis in plants using pyrosequencing. PMID:26072424
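For illustration only (this is not the authors' Visual Basic application), the core quantification step, converting the C/T peak intensities at each interrogated cytosine into a methylation level, can be sketched as:

```python
def methylation_levels(peaks):
    """Estimate per-cytosine methylation from bisulfite pyrosequencing
    peak intensities. After bisulfite conversion, methylated cytosines
    read as C and unmethylated ones as T, so the methylation level at
    each interrogated position is C / (C + T).

    `peaks` is a list of (c_intensity, t_intensity) pairs, one per
    cytosine; this input format is a hypothetical simplification."""
    levels = []
    for c, t in peaks:
        total = c + t
        levels.append(c / total if total > 0 else float("nan"))
    return levels
```

On the reverse strand the same ratio is read from G/A peaks; dispensation order and assay quality checks are omitted here.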

  15. High performance MEMS micro-gyroscope

    NASA Technical Reports Server (NTRS)

    Bae, S. Y.; Hayworth, K. J.; Yee, K. Y.; Shcheglov, K.; Challoner, A. D.; Wiberg, D. V.

    2002-01-01

    This paper reports on JPL's ongoing research into MEMS gyroscopes. It describes the gyroscope's fabrication methods, a new 8-electrode layout developed to improve performance, and performance statistics for a batch of six gyroscopes recently rate tested.

  16. Maintaining safety and high performance on shiftwork

    NASA Technical Reports Server (NTRS)

    Monk, T. H.; Folkard, S.; Wedderburn, A. I.

    1996-01-01

    This review of the shiftwork area focuses on aspects of safety and productivity. It discusses the situations in which shiftworker performance is critical, the types of problems that can develop, and the reasons why shiftworker performance can be impaired. The review ends with a discussion of the various advantages and disadvantages of several shift rotation systems, and of other possible solutions to the problem.

  17. Accurate calculation of chemical shifts in highly dynamic H2@C60 through an integrated quantum mechanics/molecular dynamics scheme.

    PubMed

    Jiménez-Osés, Gonzalo; García, José I; Corzana, Francisco; Elguero, José

    2011-05-20

    A new protocol combining classical MD simulations and DFT calculations is presented to accurately estimate the (1)H NMR chemical shifts of highly mobile guest-host systems and their thermal dependence. This strategy has been successfully applied to the hydrogen molecule trapped in C(60) fullerene, an unresolved and challenging prototypical case for which experimental values had never been reproduced. The dependence of the final values on the theoretical method, and its implications for avoiding overinterpretation of the obtained results, are carefully described.
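The abstract does not specify how the per-snapshot DFT shifts are combined. One common scheme, assumed here purely for illustration, is a Boltzmann-weighted average over snapshot energies, which also yields the thermal dependence of the averaged shift:

```python
import math

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def boltzmann_averaged_shift(shifts, energies, temperature):
    """Boltzmann-weighted average of per-snapshot chemical shifts (ppm).
    `shifts` and `energies` (kcal/mol) come from DFT calculations on MD
    snapshots; subtracting the minimum energy keeps the exponentials
    numerically stable."""
    e0 = min(energies)
    weights = [math.exp(-(e - e0) / (KB_KCAL * temperature))
               for e in energies]
    return sum(s * w for s, w in zip(shifts, weights)) / sum(weights)
```

Evaluating the same snapshot set at several temperatures traces out the thermal dependence of the averaged shift; with equal snapshot energies the expression collapses to a plain arithmetic mean.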

  18. Mapping of Settlements in High Resolution Satellite Imagery using High Performance Computing

    SciTech Connect

    Cheriydat, Anil; Bright, Eddie A; Bhaduri, Budhendra L; Potere, David T

    2007-01-01

    Classifying urban land cover from high-resolution satellite imagery is challenging, and those challenges are compounded when the imagery databases are very large. Accurate land cover data is a crucial component of the population distribution modeling efforts of the Oak Ridge National Laboratory's (ORNL) LandScan Program. Currently, LandScan Program imagery analysts manually interpret high-resolution (1-5 meter) imagery to augment existing satellite-derived medium (30m) and coarse (1km) resolution land cover datasets. At LandScan, the high-resolution image archives that require interpretation are on the order of terabytes. The goal of this research is to automate urban land cover mapping utilizing ORNL's high performance computing capabilities. Our algorithm employs gray-level and local edge-pattern co-occurrence matrices to generate texture and edge patterns. Areas of urban land cover correlate with statistical features derived from these texture and edge patterns. We have parallelized our algorithms for implementation on a 64-node system using a single instruction multiple data programming model (SIMD) with Message Passing Interface (MPI) as the communication mode. Our parallel-configured classifier performs 30-40 times faster than stand-alone alternatives. When compared with manually interpreted IKONOS imagery, the classifier achieves a 91% overall accuracy. These early results are promising, pointing towards future large-scale classification of urban areas.
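A gray-level co-occurrence matrix of the kind the classifier uses can be computed directly. This minimal serial Python sketch (the production system is parallelized with MPI over image tiles) derives one texture statistic from it; the function names are illustrative:

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).
    `image` is a 2-D list of integer gray levels in [0, levels); entry
    m[i][j] counts how often level i is followed by level j at the
    given offset."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Contrast statistic: sum of (i - j)^2 * p(i, j) over the
    normalized matrix. Areas with strong edges (typical of urban
    cover) tend to score higher than smooth natural cover."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total
```

In practice such statistics are computed per image tile over several offsets and gray-level quantizations, and the resulting feature vectors are fed to the classifier.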

  19. Accurate flexible fitting of high-resolution protein structures to small-angle x-ray scattering data using a coarse-grained model with implicit hydration shell.

    PubMed

    Zheng, Wenjun; Tekpinar, Mustafa

    2011-12-21

    Small-angle x-ray scattering (SAXS) is a powerful technique widely used to explore conformational states and transitions of biomolecular assemblies in solution. For accurate model reconstruction from SAXS data, one promising approach is to flexibly fit a known high-resolution protein structure to low-resolution SAXS data by computer simulations. This is a highly challenging task due to low information content in SAXS data. To meet this challenge, we have developed what we believe to be a novel method based on a coarse-grained (one-bead-per-residue) protein representation and a modified form of the elastic network model that allows large-scale conformational changes while maintaining pseudobonds and secondary structures. Our method optimizes a pseudoenergy that combines the modified elastic-network model energy with a SAXS-fitting score and a collision energy that penalizes steric collisions. Our method uses what we consider a new implicit hydration shell model that accounts for the contribution of hydration shell to SAXS data accurately without explicitly adding waters to the system. We have rigorously validated our method using five test cases with simulated SAXS data and three test cases with experimental SAXS data. Our method has successfully generated high-quality structural models with root mean-squared deviation of 1 ∼ 3 Å from the target structures.
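The pseudoenergy described above can be sketched schematically; the weights and the reduced chi-square form of the SAXS-fitting score are assumptions for illustration, not the authors' parameterization:

```python
def saxs_chi2(i_calc, i_exp, sigma):
    """Reduced chi-square between computed and experimental SAXS
    intensities over the measured q-points (the SAXS-fitting score)."""
    n = len(i_exp)
    return sum(((c - e) / s) ** 2
               for c, e, s in zip(i_calc, i_exp, sigma)) / n

def pseudo_energy(enm_energy, chi2, n_collisions,
                  w_saxs=1.0, w_collision=10.0):
    """Combined objective: modified elastic-network energy (keeps
    pseudobonds and secondary structure) plus a weighted SAXS-fitting
    score plus a penalty per steric collision. Weight values here are
    placeholders."""
    return enm_energy + w_saxs * chi2 + w_collision * n_collisions
```

Minimizing this objective over coarse-grained coordinates trades off agreement with the SAXS profile against structural plausibility, which is why the low-information SAXS term alone cannot drive the fit.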

  20. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idle to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low-pressure, air-atomized burner derived from burner designs used commonly in gas turbine combustors. This paper reviews the design and test results of the BNL diesel fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air-blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, and such pressure can be easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages and much lower fuel pressure requirements. At very low firing rates, the small passage sizes in pressure-swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low-input oil burners. The objective of this work is the development of burners that can achieve the benefits of air atomization with air pressures practically achievable with a simple burner fan.