Sample records for accurate high performance

  1. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  2. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    PubMed

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
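The screening protocol described above can be sketched in a few lines: rank a large library by a classifier score, flag only the top 10% for expensive simulation, and measure how many of the truly best materials survive the cut. The data below are simulated stand-ins (gamma-distributed "uptake" plus noise), not the MOF descriptors or models from the record.

```python
import numpy as np

# Hypothetical sketch of ML-prioritized screening: a noisy classifier score
# ranks the library; only the top-scoring 10% goes on to costly simulation.
rng = np.random.default_rng(0)
n = 292_050                                   # library size from the abstract
true_uptake = rng.gamma(2.0, 1.0, n)          # stand-in for CO2 uptake (mmol/g)
score = true_uptake + rng.normal(0, 0.3, n)   # imperfect ML score (assumed noise)

top1000 = set(np.argsort(true_uptake)[-1000:])   # actual best materials
flagged = set(np.argsort(score)[-n // 10:])      # 10% flagged for full screening
recovery = len(top1000 & flagged) / 1000
print(f"recovered {recovery:.0%} of the top 1000 with 10% of the compute")
```

Even a moderately noisy score recovers nearly all top candidates, which is the source of the order-of-magnitude compute saving the abstract reports.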

  3. Accurate analysis of parabens in human urine using isotope-dilution ultrahigh-performance liquid chromatography-high resolution mass spectrometry.

    PubMed

    Zhou, Hui-Ting; Chen, Hsin-Chang; Ding, Wang-Hsien

    2018-02-20

    An analytical method that utilizes isotope-dilution ultrahigh-performance liquid chromatography coupled with hybrid quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS, also called UHPLC-HRMS) was developed and validated to be highly precise and accurate for the detection of nine parabens (methyl-, ethyl-, propyl-, isopropyl-, butyl-, isobutyl-, pentyl-, hexyl-, and benzyl-parabens) in human urine samples. After sample preparation by ultrasound-assisted emulsification microextraction (USAEME), the extract was directly injected into the UHPLC-HRMS system. By using negative electrospray ionization in the multiple reaction monitoring (MRM) mode and measuring the peak-area ratios of the natural and labeled analogues in the samples and calibration standards, the target analytes could be accurately identified and quantified. The labeled analogues also served to correct for systematic errors associated with the analysis, such as the matrix effect and other variations. The limits of quantitation (LOQs) ranged from 0.3 to 0.6 ng/mL. High precision was obtained for both repeatability and reproducibility, ranging from 1 to 8%. High trueness (mean extraction recovery, also called accuracy) ranged from 93 to 107% at two concentration levels. According to preliminary results, the total concentrations of the four most frequently detected parabens (methyl-, ethyl-, propyl- and butyl-) ranged from 0.5 to 79.1 ng/mL in male urine samples, and from 17 to 237 ng/mL in female urine samples. Interestingly, the infrequently detected pentyl- and hexyl-parabens were found in one of the male samples in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
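The core of isotope-dilution quantitation is the peak-area ratio: a calibration curve relates the natural/labeled area ratio to the concentration ratio, and the same labeled analogue spiked into the sample cancels matrix effects. A minimal sketch with entirely hypothetical numbers:

```python
import numpy as np

# Hypothetical calibration: area ratio (natural/labeled) vs concentration
# ratio (analyte/internal standard). All values are illustrative.
cal_conc_ratio = np.array([0.1, 0.5, 1.0, 2.0, 5.0])     # c_analyte / c_IS
cal_area_ratio = np.array([0.12, 0.53, 1.02, 2.05, 4.98])
m, b = np.polyfit(cal_conc_ratio, cal_area_ratio, 1)      # linear fit

c_is = 10.0                          # ng/mL labeled analogue spiked (assumed)
sample_area_ratio = 0.75             # measured in the urine extract (assumed)
c_analyte = (sample_area_ratio - b) / m * c_is
print(f"paraben concentration ≈ {c_analyte:.2f} ng/mL")
```

Because analyte and labeled standard co-elute and ionize nearly identically, the ratio is insensitive to signal suppression, which is why the method tolerates a direct injection of the microextraction extract.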

  4. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
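The initial-rates idea above reduces to simple arithmetic once LC/MS can quantify the degradant directly at the storage temperature: fit the early formation data with a zero-order line and extrapolate to the specification limit, with no Arrhenius step. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical degradant formation data at the intended storage condition,
# measured by a sensitive LC/MS assay (% of label claim vs days).
days = np.array([0, 7, 14, 28, 56])
degradant_pct = np.array([0.000, 0.004, 0.009, 0.017, 0.035])

rate, intercept = np.polyfit(days, degradant_pct, 1)   # zero-order rate, %/day
spec_limit = 0.2                                       # assumed % limit
shelf_life_days = (spec_limit - intercept) / rate
print(f"predicted shelf life ≈ {shelf_life_days:.0f} days")
```

The precision of the fitted rate, not an extrapolated rate constant, then drives the confidence interval on the shelf life, which is the advantage the study reports over accelerated methods.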

  5. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  6. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  7. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concept, which, combined with simple image-processing operations, identifies silkworm gender with high accuracy. Inside the system, there are electronic and mechanical parts that assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae, with an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  8. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
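Link correlation can be made concrete with packet traces: for two receivers of the same broadcasts, compare one link's unconditional packet reception ratio (PRR) with its reception ratio conditioned on the other link having received. This is only an illustrative estimator on simulated traces, not the LACE algorithm itself.

```python
import numpy as np

# Simulated broadcast traces: a shared channel state ("shadow") induces
# correlated losses on two links A and B. All probabilities are assumed.
rng = np.random.default_rng(1)
shadow = rng.random(1000) < 0.8              # shared good/bad channel state
rx_a = shadow & (rng.random(1000) < 0.9)     # link A receptions
rx_b = shadow & (rng.random(1000) < 0.9)     # link B receptions

prr_b = rx_b.mean()                 # unconditional reception ratio of B
prr_b_given_a = rx_b[rx_a].mean()   # conditioned on A having received
print(f"PRR(B) = {prr_b:.2f}, PRR(B|A) = {prr_b_given_a:.2f}")
```

A conditional PRR well above the unconditional one indicates positive link correlation, the quantity that flooding and routing protocols exploit when choosing forwarders.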

  9. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
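Evaluating a diagnostic score at a cutoff, as done for the RAZ score above, amounts to counting true/false positives and negatives. The sketch below uses simulated scores for 66 patients and 66 controls; the distributions and cutoff are illustrative assumptions, not the study's data.

```python
import numpy as np

# Simulated diagnostic scores (hypothetical distributions, not real RAZ data).
rng = np.random.default_rng(2)
patients = rng.normal(60, 15, 66)   # scores in the cardiomyopathy group
controls = rng.normal(25, 12, 66)   # scores in the healthy group

cutoff = 40.0
tp = (patients >= cutoff).sum(); fn = (patients < cutoff).sum()
tn = (controls < cutoff).sum();  fp = (controls >= cutoff).sum()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens {sensitivity:.0%}, spec {specificity:.0%}, acc {accuracy:.0%}")
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the abstract reports.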

  10. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
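The TVD property can be demonstrated with a minimal 1D example: a flux-limited upwind/Lax-Wendroff hybrid with a minmod limiter for linear advection of a step. This is an illustrative sketch of the scheme class, not the Navier-Stokes code from the record.

```python
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
nu = a * dt / dx                                   # Courant number
u = np.where(np.arange(nx) * dx < 0.3, 1.0, 0.0)   # periodic step profile

for _ in range(100):
    du = np.roll(u, -1) - u                        # u_{i+1} - u_i
    denom = np.where(du == 0.0, 1.0, du)           # guard against 0/0
    r = np.where(du == 0.0, 0.0, (u - np.roll(u, 1)) / denom)
    phi = np.maximum(0.0, np.minimum(1.0, r))      # minmod flux limiter
    # face flux F_{i+1/2}: upwind plus limited anti-diffusive correction
    flux = a * u + 0.5 * a * (1.0 - nu) * phi * du
    u = u - dt / dx * (flux - np.roll(flux, 1))    # conservative update

tv = np.abs(u - np.roll(u, 1)).sum()
print(f"total variation after 100 steps: {tv:.6f}")
```

The total variation of the periodic step starts at 2.0 and never grows, and the conservative flux form preserves the integral of u exactly, which is why such schemes capture shocks without spurious oscillations.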

  11. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min⁻¹) and [Formula: see text] (l min⁻¹) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' (10 W either side of average power) in the negative feedback condition (fastest) than in the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.
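The '20 watt zone' metric above is simply the fraction of the trial spent within 10 W of the rider's own average power. A sketch on a simulated power trace (the trace and its statistics are assumptions, not the study's data):

```python
import numpy as np

# Hypothetical 1 Hz power trace for a ~25 min time trial (values assumed).
rng = np.random.default_rng(3)
power = 260 + rng.normal(0, 15, 1500)      # watts

avg = power.mean()
in_zone = np.abs(power - avg) <= 10.0      # within the '20 watt zone'
print(f"time in ±10 W zone: {in_zone.mean():.1%}")
```

Comparing this fraction across feedback conditions is how the study quantifies pacing steadiness in a practically meaningful unit.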

  12. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    A high-order accurate shock-capturing scheme is developed for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  13. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  14. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  15. Performance Patterns of High, Medium, and Low Performers during and following a Reward versus Non-Reward Contingency Phase

    ERIC Educational Resources Information Center

    Oliver, Renee; Williams, Robert L.

    2006-01-01

    Three contingency conditions were applied to the math performance of 4th and 5th graders: bonus credit for accurately solving math problems, bonus credit for completing math problems, and no bonus credit for accurately answering or completing math problems. Mixed ANOVAs were used in tracking the performance of high, medium, and low performers…

  16. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750±0.0049 amu and 270.0786±0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098±0.0061 amu and 314.1153±0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
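Why accurate mass alone cannot separate the two ESAs: structural isomers share an empirical formula and therefore an identical exact mass. The formula and monoisotopic masses below are illustrative assumptions for the two isomers, not values taken from the record.

```python
# Monoisotopic masses of the most abundant isotopes (standard values).
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}

def monoisotopic_mass(formula: dict) -> float:
    """Sum of isotope masses over the element counts of a formula."""
    return sum(MONO[el] * n for el, n in formula.items())

# Assumed shared empirical formula of the two isomeric ESAs (illustrative).
acetochlor_esa = {"C": 14, "H": 21, "N": 1, "O": 5, "S": 1}
alachlor_esa = {"C": 14, "H": 21, "N": 1, "O": 5, "S": 1}   # isomer: same formula

m1 = monoisotopic_mass(acetochlor_esa)
m2 = monoisotopic_mass(alachlor_esa)
print(f"neutral monoisotopic masses: {m1:.4f} vs {m2:.4f} -> indistinguishable")
```

Identical masses force the distinction onto fragmentation behaviour, which is why positive-ion electrospray (with its different fragment spectra) succeeds where negative-ion accurate mass does not.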

  17. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, Imma; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750±0.0049 amu and 270.0786±0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098±0.0061 amu and 314.1153±0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  18. Countercurrent chromatography separation of saponins by skeleton type from Ampelozizyphus amazonicus for off-line ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry analysis and characterisation.

    PubMed

    de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda

    2017-01-20

    Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub, native to the Amazonian region, with jujubogenin glycoside saponins as main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry) and MSn. Group 1 presented mainly oleane type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane type triterpene saponins and saponins with a C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed which includes a GPS receiver, a 1 MC frequency source and a self-made clock system. The GPS one-second signal is used to synchronize the clock system, and the information can be collected automatically by a computer. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  20. High-accurate optical fiber liquid level sensor

    NASA Astrophysics Data System (ADS)

    Sun, Dexing; Chen, Shouliu; Pan, Chao; Jin, Henghuan

    1991-08-01

    A highly accurate optical fiber liquid level sensor is presented. A single-chip microcomputer is used to process and control the signal. This kind of sensor is self-securing and explosion-proof, so it can be applied in any liquid-level detecting application, especially in the oil and chemical industries. The theory and experiments on how to improve the measurement accuracy are described. Over a measurement range of 10 m, the relative error is within 0.01%.

  1. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.
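The software correlation idea can be sketched with FFT-based normalized cross-correlation on 64 × 64 images: the enrolled reference with the highest correlation peak is declared the match. Random arrays stand in for face images here; this is an illustration of the correlation scheme, not the FARCO filter design.

```python
import numpy as np

rng = np.random.default_rng(4)
refs = [rng.random((64, 64)) for _ in range(3)]     # enrolled references (toy data)
probe = refs[1] + rng.normal(0, 0.1, (64, 64))      # noisy view of person 1

def corr_peak(a, b):
    """Peak of the circular normalized cross-correlation, computed via FFT."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return c.max() / a.size

scores = [corr_peak(probe, r) for r in refs]
best = int(np.argmax(scores))
print(f"best match: reference {best}")
```

Thresholding the winning peak value is what separates acceptance from rejection, trading the false acceptance rate against the false rejection rate reported above.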

  2. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 mPa·s. The obtainable drop sizes range from 5 picoliters to a few nanoliters with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be excellently suited to the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.
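The quoted figures imply the throughput directly: at the smallest drop size, accumulated drops build up microliter volumes quickly. A back-of-envelope check using the numbers from the abstract:

```python
# Throughput arithmetic from the figures quoted above.
drop_volume_pl = 5.0          # smallest drop size, picoliters
drops_per_second = 10_000     # maximum drop rate

rate_nl_per_s = drop_volume_pl * drops_per_second / 1000   # pL/s -> nL/s
drops_for_1_ul = 1e6 / drop_volume_pl                      # 1 µL = 1e6 pL
print(f"{rate_nl_per_s:.0f} nL/s; {drops_for_1_ul:.0f} drops per microliter")
```

Counting drops rather than metering a piston stroke is what gives the technique its volume resolution at these speeds.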

  3. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  4. Performance of a Micro-Strip Gas Chamber for event wise, high rate thermal neutron detection with accurate 2D position determination

    NASA Astrophysics Data System (ADS)

    Mindur, B.; Alimov, S.; Fiutowski, T.; Schulz, C.; Wilpert, T.

    2014-12-01

    A two-dimensional (2D) position sensitive detector for neutron scattering applications based on low-pressure gas amplification and micro-strip technology was built and tested with an innovative readout electronics and data acquisition system. This detector contains a thin solid neutron converter and was developed for time- and thus wavelength-resolved neutron detection in single-event counting mode, which improves the image contrast in comparison with integrating detectors. The prototype detector of a Micro-Strip Gas Chamber (MSGC) was built with a solid natGd/CsI thermal neutron converter for spatial resolutions of about 100 μm and counting rates up to 10^7 neutrons/s. For attaining very high spatial resolutions and counting rates via micro-strip readout with centre-of-gravity evaluation of the signal amplitude distributions, a fast, channel-wise, self-triggering ASIC was developed. The front-end chips (MSGCROCs), which are the very first signal-processing components, are read out into powerful ADC-FPGA boards for on-line data processing and thereafter via a Gigabit Ethernet link into the data-receiving PC. The workstation PC is controlled by a modular, high-performance dedicated software suite. Such a fast and accurate system is crucial for efficient radiography/tomography, diffraction or imaging applications based on a high-flux thermal neutron beam. In this paper a brief description of the detector concept with its operation principles, readout electronics requirements and design, together with the signal-processing stages performed in hardware and software, is presented. In more detail, the neutron test beam conditions and measurement results are reported. The focus of this paper is on the system integration, two-dimensional spatial resolution, the time resolution of the readout system and the imaging capabilities of the overall setup. The detection efficiency of the detector prototype is estimated as well.
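Centre-of-gravity evaluation, the key to sub-pitch resolution mentioned above, is the amplitude-weighted mean of the strip positions under a hit. A minimal sketch with hypothetical strip pitch and charge values:

```python
import numpy as np

# Hypothetical micro-strip hit: charge shared across five neighbouring strips.
pitch_um = 300.0
strips = np.arange(5) * pitch_um                     # strip centre positions (µm)
amplitudes = np.array([0.1, 0.8, 1.0, 0.5, 0.05])    # measured charge per strip

# Centre-of-gravity: amplitude-weighted mean position.
position = (strips * amplitudes).sum() / amplitudes.sum()
print(f"reconstructed hit position: {position:.1f} µm")
```

Because the weighting interpolates between strips, the achievable resolution (here ~100 µm in the prototype) can be much finer than the physical strip pitch.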

  5. An optimized method for neurotransmitters and their metabolites analysis in mouse hypothalamus by high performance liquid chromatography-Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry.

    PubMed

    Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying

    2016-02-15

    Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, there are many difficulties in the detection of NTs together with their metabolites in biological samples. A new method for detecting NTs and their metabolites by high performance liquid chromatography coupled with Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. This method considerably extends the application of Q Exactive MS to quantitative analysis. It enables rapid quantification of ten compounds within 18 min. Good linearity was obtained, with correlation coefficients above 0.99. The limits of detection (LOD) and limits of quantitation (LOQ) ranged from 0.0008 to 0.05 nmol/mL and from 0.002 to 25.0 nmol/mL, respectively. Precision (relative standard deviation, RSD) was 0.36-12.70%. Recoveries ranged between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were measured with this method by Q Exactive LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
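The linearity criterion above (correlation coefficient above 0.99) is a standard calibration-curve check: fit peak area against concentration and inspect r and the slope. The calibration points below are hypothetical, chosen only to span an LOQ-to-top range like the one reported.

```python
import numpy as np

# Hypothetical calibration data: concentration (nmol/mL) vs peak area (a.u.).
conc = np.array([0.002, 0.01, 0.1, 1.0, 5.0, 25.0])
area = np.array([0.021, 0.098, 1.05, 10.2, 49.8, 251.0])

r = np.corrcoef(conc, area)[0, 1]          # linearity check
slope, intercept = np.polyfit(conc, area, 1)
print(f"r = {r:.4f}, slope = {slope:.2f}")
```

In practice each of the ten analytes gets its own curve, and unknowns are read off as (area - intercept) / slope within the validated range.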

  6. Beta-band activity and connectivity in sensorimotor and parietal cortex are important for accurate motor performance.

    PubMed

    Chung, Jae W; Ofori, Edward; Misra, Gaurav; Hess, Christopher W; Vaillancourt, David E

    2017-01-01

    Accurate motor performance may depend on the scaling of distinct oscillatory activity within the motor cortex and effective neural communication between the motor cortex and other brain areas. Oscillatory activity within the beta-band (13-30 Hz) has been suggested to provide distinct functional roles for attention and sensorimotor control, yet it remains unclear how beta-band and other oscillatory activity within and between cortical regions is coordinated to enhance motor performance. We explore this open issue by simultaneously measuring high-density cortical activity and elbow flexor and extensor neuromuscular activity during ballistic movements, and manipulating error using high and low visual gain across three target distances. Compared with low visual gain, high visual gain decreased movement errors at each distance. Group analyses in 3D source-space revealed increased theta-, alpha-, and beta-band desynchronization of the contralateral motor cortex and medial parietal cortex in high visual gain conditions, and this corresponded to reduced movement error. Dynamic causal modeling was used to compute connectivity between motor cortex and parietal cortex. Analyses revealed that gain affected the directionally-specific connectivity across broadband frequencies from parietal to sensorimotor cortex but not from sensorimotor cortex to parietal cortex. These new findings support the interpretation that broadband oscillations in theta, alpha, and beta frequency bands within sensorimotor and parietal cortex coordinate to facilitate accurate upper limb movement. Our findings establish a link between sensorimotor oscillations and online motor performance in common source space across subjects. Specifically, the extent and distinct role of medial parietal cortex to sensorimotor beta connectivity and local broadband activity combine in a time- and frequency-specific manner to assist ballistic movements.
These findings can serve as a model to examine

  7. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  8. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high order schemes have been extended to generalized coordinates, including spherical grids, and also to the non-uniform AMR grids including dynamic regridding. We present numerical tests that verify the preservation of free-stream solution and high-order accuracy as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with much less computational resources. This is especially important for space weather prediction that requires faster than real time code execution.
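
    The verification of high-order accuracy mentioned above is typically done with a grid-refinement study: the observed order equals the base-2 logarithm of the error ratio when the grid spacing is halved. A small illustrative sketch (the error values are made up, not from BATS-R-US runs):

```python
import math

# Observed order of accuracy from errors on two grids: halving the spacing
# should shrink the error by a factor of 2^p for a p-th order scheme.
def observed_order(err_coarse, err_fine, refinement=2.0):
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical L1 errors at N and 2N cells:
print(observed_order(4.0e-3, 1.0e-3))   # error model of a 2nd-order scheme
print(observed_order(3.2e-4, 1.0e-5))   # error model of a 5th-order scheme
```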

  9. Highly Accurate Quantitative Analysis of Enantiomeric Mixtures from Spatially Frequency Encoded 1H NMR Spectra.

    PubMed

    Plainchont, Bertrand; Pitoux, Daisy; Cyrille, Mathieu; Giraud, Nicolas

    2018-02-06

    We propose an original concept to accurately measure enantiomeric excesses from proton NMR spectra, which combines high-resolution techniques based on a spatial encoding of the sample with the use of optically active, weakly orienting solvents. We show that it is possible to accurately simulate dipolar edited spectra of enantiomers dissolved in a chiral liquid crystalline phase, and to use these simulations to calibrate integrations that can be measured on experimental data, in order to perform a quantitative chiral analysis. This approach is demonstrated on a chemical intermediate for which optical purity is an essential criterion. We find that there is a very good correlation between the experimental and calculated integration ratios extracted from G-SERF spectra, which paves the way to a general method for determining enantiomeric excesses based on the observation of 1H nuclei.

  10. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
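
    The averaging techniques compared above differ only in the weighting applied to the rake measurements. A hedged sketch of area- and mass-averaging (function names and values are illustrative, not the facility's actual data-reduction code):

```python
# Area- vs mass-averaging of rake measurements (hypothetical probe data).
def area_average(values, areas):
    """Weight each probe reading by the area of the annulus it samples."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

def mass_average(values, mass_fluxes, areas):
    """Weight each probe reading by local mass flux times annulus area."""
    weights = [m * a for m, a in zip(mass_fluxes, areas)]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Five radial probe locations: total pressure in kPa, equal annulus areas,
# and a local mass-flux profile peaking at midspan.
p_total = [310.0, 318.0, 322.0, 320.0, 312.0]
areas = [1.0] * 5
mdot = [0.8, 1.0, 1.1, 1.0, 0.7]   # kg/(s*m^2)

print(area_average(p_total, areas))
print(mass_average(p_total, mdot, areas))
```

    Mass-averaging weights the high-flux core more heavily, which is why the two averages differ slightly even for the same rake data.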

  11. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
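
    The coordinate-processing step described in the claim amounts to converting an encoder reading into a joint angle and expressing the probe tip in cylindrical coordinates. The sketch below is a hypothetical single-joint case for intuition, not the patented mechanism itself:

```python
import math

# Hypothetical coordinate processing for one revolute joint: an encoder
# count becomes a rotation angle, and the probe tip (at the end of an arm
# of fixed length) is reported in cylindrical coordinates (r, theta, z).
def probe_tip_cylindrical(counts, counts_per_rev, arm_length, z_offset):
    theta = 2 * math.pi * counts / counts_per_rev   # joint rotation, rad
    return arm_length, theta, z_offset              # (r, theta, z)

r, theta, z = probe_tip_cylindrical(counts=4096, counts_per_rev=16384,
                                    arm_length=250.0, z_offset=30.0)
print(r, theta, z)   # a quarter revolution gives theta = pi/2
```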

  12. High resolution as a key feature to perform accurate ELISPOT measurements using Zeiss KS ELISPOT readers.

    PubMed

    Malkusch, Wolf

    2005-01-01

    The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used for the determination of the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins (IL)-4, IL-5. ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes, either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity problems and separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. Number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round, but because of floating edges with the background, they have a nonsmooth borderline. Resolution is a key feature for precise detection of ELISPOT. In standard applications, shape and edge steepness are essential parameters in addition to size and color for accurate spot recognition. These parameters need a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera with 750 x 560 pixels will result in a resolution much too low to capture all of the spots in a specimen. IFN-gamma spots may have diameters of only 25 microm, and TNF-alpha spots just 15 microm. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 microm, resulting in only 1 or 2 pixels per spot. Using a precise microscope optic in combination with a high resolution (1300 x 1030 pixel) integrating digital color camera, and at least 2 x 2 images per well, will result in a pixel size of 2.5 microm and, as a minimum, a 6-pixel diameter per spot. New approaches try to detect two cytokines per cell at the same time (i.e., IFN-gamma and IL-5). Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots
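
    The pixel-size arithmetic in this abstract can be checked directly. The field-of-view values below are assumptions chosen to reproduce the stated 12 microm and 2.5 microm figures (a 750-pixel image at 12 microm/pixel implies a field of about 9 mm, somewhat larger than the 6-mm well):

```python
# Back-of-envelope check of the resolution figures in the abstract
# (field-of-view values are assumptions, not stated in the text).
def pixel_size_um(field_of_view_mm, pixels):
    return field_of_view_mm * 1000 / pixels

def spot_diameter_px(spot_um, pixel_um):
    return spot_um / pixel_um

# One image per well with a standard camera, 750 px across a ~9 mm field:
low_res = pixel_size_um(9.0, 750)       # 12 um per pixel
# 2x2 tiled images with a 1300-px camera, ~3.25 mm field per tile:
high_res = pixel_size_um(3.25, 1300)    # 2.5 um per pixel

print(spot_diameter_px(15, low_res))    # a 15-um TNF-alpha spot: ~1 px
print(spot_diameter_px(15, high_res))   # the same spot: 6 px
```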

  13. Accurate Emission Line Diagnostics at High Redshift

    NASA Astrophysics Data System (ADS)

    Jones, Tucker

    2017-08-01

    How do the physical conditions of high redshift galaxies differ from those seen locally? Spectroscopic surveys have invested hundreds of nights of 8- and 10-meter telescope time as well as hundreds of Hubble orbits to study evolution in the galaxy population at redshifts z ~ 0.5-4 using rest-frame optical strong emission line diagnostics. These surveys reveal evolution in the gas excitation with redshift, but the physical cause is not yet understood. Consequently there are large systematic errors in derived quantities such as metallicity. We have used direct measurements of gas density, temperature, and metallicity in a unique sample at z=0.8 to determine reliable diagnostics for high redshift galaxies. Our measurements suggest that offsets in emission line ratios at high redshift are primarily caused by high N/O abundance ratios. However, our ground-based data cannot rule out other interpretations. Spatially resolved Hubble grism spectra are needed to distinguish between the remaining plausible causes such as active nuclei, shocks, diffuse ionized gas emission, and HII regions with escaping ionizing flux. Identifying the physical origin of evolving excitation will allow us to build the necessary foundation for accurate measurements of metallicity and other properties of high redshift galaxies. Only then can we exploit the wealth of data from current surveys and near-future JWST spectroscopy to understand how galaxies evolve over time.

  14. Method of measuring thermal conductivity of high performance insulation

    NASA Technical Reports Server (NTRS)

    Hyde, E. H.; Russell, L. D.

    1968-01-01

    Method accurately measures the thermal conductivity of high-performance sheet insulation as a discrete function of temperature. It permits measurements to be made at temperature drops of approximately 10 degrees F across the insulation and ensures measurement accuracy by minimizing longitudinal heat losses in the system.

  15. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  16. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  17. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to the third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in the curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
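
    The kind of high-order upwind interpolation referred to above can be illustrated with the generic kappa-scheme on a uniform mesh, where kappa = 1/3 yields third-order accuracy. This is a textbook sketch for intuition, not the paper's exact nonuniform-mesh formula:

```python
# Upwind-biased kappa-scheme interpolation to the i+1/2 cell interface
# (kappa = 1/3 is the third-order choice on uniform meshes).
def interface_value(u_m1, u_0, u_p1, kappa=1.0 / 3.0):
    """Left-biased interface value from cell averages at i-1, i, i+1."""
    return u_0 + 0.25 * ((1 - kappa) * (u_0 - u_m1)
                         + (1 + kappa) * (u_p1 - u_0))

# Cell averages of u(x) = x^2 over [-1,0], [0,1], [1,2] are 1/3, 1/3, 7/3;
# with kappa = 1/3 the scheme reproduces the exact interface value u(1) = 1.
print(interface_value(1/3, 1/3, 7/3))
```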

  18. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  19. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  20. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches, and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system is presented.

  1. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    NASA Astrophysics Data System (ADS)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the dependence of the electric field on excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating high simulation accuracy.
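
    The abstract does not reproduce the derived after-pulsing expression, but the ingredients it names (carrier trapping, exponential de-trapping, avalanche triggering probability) combine as in the commonly used model sketched below; this is a generic illustration with invented parameter values, not necessarily the authors' exact formula:

```python
import math

# A common after-pulsing model: carriers trapped during an avalanche are
# released exponentially (time constant tau); carriers released after the
# dead time can each retrigger an avalanche with probability p_trig.
def afterpulse_probability(n_trapped, tau_s, p_trig, t_dead_s):
    """Probability of at least one afterpulse once the dead time ends."""
    still_trapped = n_trapped * math.exp(-t_dead_s / tau_s)
    mean_retriggers = still_trapped * p_trig
    return 1.0 - math.exp(-mean_retriggers)

p_ap = afterpulse_probability(n_trapped=10, tau_s=100e-9,
                              p_trig=0.05, t_dead_s=200e-9)
print(p_ap)
```

    Lengthening the dead time relative to the de-trapping time constant suppresses after-pulsing exponentially, which is the usual design trade-off against maximum count rate.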

  2. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems arise as the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy has historically been challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method has relied on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages.
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  3. Accurate measurement of junctional conductance between electrically coupled cells with dual whole-cell voltage-clamp under conditions of high series resistance.

    PubMed

    Hartveit, Espen; Veruki, Margaret Lin

    2010-03-15

    Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s). Copyright (c) 2009 Elsevier B.V. All rights reserved.
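
    To see why high R(s) biases G(j) estimates and how a mathematical correction recovers the true value, consider a steady-state two-cell equivalent circuit (pipette series resistance R(s), membrane resistance R(m), junctional resistance R(j)). The sketch below is illustrative, with invented parameter values; it is not the paper's exact model or protocol:

```python
# Two coupled cells, each voltage-clamped through a series resistance Rs.
# Units: resistances in MOhm, voltages in mV, currents in nA (G in uS).
def clamp_steady_state(Vc1, Vc2, Rs1, Rs2, Rm1, Rm2, Rj):
    """Solve the two steady-state node equations for membrane potentials
    V1, V2; return the measured pipette currents and V1, V2."""
    a11 = 1/Rs1 + 1/Rm1 + 1/Rj
    a22 = 1/Rs2 + 1/Rm2 + 1/Rj
    a12 = -1/Rj
    det = a11 * a22 - a12 * a12
    b1, b2 = Vc1/Rs1, Vc2/Rs2
    V1 = (b1 * a22 - a12 * b2) / det
    V2 = (a11 * b2 - a12 * b1) / det
    return (Vc1 - V1)/Rs1, (Vc2 - V2)/Rs2, V1, V2

Rs, Rm, Rj = 20.0, 1000.0, 100.0   # true Gj = 1/Rj = 0.01 uS (10 nS)
I1, I2, V1, V2 = clamp_steady_state(10.0, 0.0, Rs, Rs, Rm, Rm, Rj)

gj_naive = -I2 / 10.0              # ignores Rs and Rm: underestimates Gj
i_junc = V2/Rm - I2                # junctional current into cell 2
gj_corrected = i_junc / (V1 - V2)  # correction using known Rs and Rm
print(round(gj_naive * 1000, 2), round(gj_corrected * 1000, 2))  # nS
```

    With R(s) = 20 MOhm the naive estimate is noticeably below the true 10 nS, while the corrected estimate recovers it; this mirrors the abstract's point that mathematical correction depends on accurate knowledge of R(s) and R(m).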

  4. Nonlinear stability and control study of highly maneuverable high performance aircraft

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.

    1993-01-01

    This project is intended to research and develop new nonlinear methodologies for the control and stability analysis of high-performance, high angle-of-attack aircraft such as HARV (F18). Past research (reported in our Phase 1, 2, and 3 progress reports) is summarized and more details of final Phase 3 research is provided. While research emphasis is on nonlinear control, other tasks such as associated model development, system identification, stability analysis, and simulation are performed in some detail as well. An overview of various models that were investigated for different purposes such as an approximate model reference for control adaptation, as well as another model for accurate rigid-body longitudinal motion is provided. Only a very cursory analysis was made relative to type 8 (flexible body dynamics). Standard nonlinear longitudinal airframe dynamics (type 7) with the available modified F18 stability derivatives, thrust vectoring, actuator dynamics, and control constraints are utilized for simulated flight evaluation of derived controller performance in all cases studied.

  5. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

    The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
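
    The Monitor Capacitor method mentioned above is the classical Lissajous technique: the charge on a series monitor capacitor is plotted against the applied voltage, and the area of the closed Q-V loop gives the energy dissipated per cycle. A sketch with a synthetic elliptical loop (values are illustrative, not measurements from the paper):

```python
import math

# Energy per cycle from a closed V-Q Lissajous loop via the shoelace
# formula; multiplying by the drive frequency gives average power.
def loop_energy(V, Q):
    area = 0.0
    n = len(V)
    for i in range(n):
        j = (i + 1) % n
        area += V[i] * Q[j] - V[j] * Q[i]
    return abs(area) / 2.0

# Synthetic elliptical loop: 5 kV voltage amplitude, 2 uC charge amplitude.
N = 1000
V = [5000.0 * math.cos(2 * math.pi * i / N) for i in range(N)]
Q = [2e-6 * math.sin(2 * math.pi * i / N) for i in range(N)]
energy = loop_energy(V, Q)     # ellipse area pi*a*b, about 0.0314 J
print(energy * 5000.0)         # average power at a 5 kHz drive, in W
```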

  6. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance.

    PubMed

    Talamas, Sean N; Mavor, Kenneth I; Perrett, David I

    2016-01-01

    Despite the old adage not to 'judge a book by its cover', facial cues often guide first impressions, and these first impressions guide our decisions. The literature suggests there are valid facial cues that assist us in assessing someone's health or intelligence, but such cues are overshadowed by an 'attractiveness halo' whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning, as this has been shown to influence students' future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli.

  7. Highly accurate nephelometric titrimetry.

    PubMed

    Zhan, Xiancheng; Li, Chengrong; Li, Zhiyi; Yang, Xiucen; Zhong, Shuguang; Yi, Tao

    2004-02-01

    A method that accurately indicates the end-point of precipitation reactions by measurement of the relative intensity of the scattered light in the titrate is presented. A new nephelometric titrator with an internal nephelometric sensor has been devised. The operation of the titrator, including the sensor, and the changes in the turbidity of the titrate and in the intensity of the scattered light are described. The accuracy of the nephelometric titrimetry is discussed theoretically. The titration of NaCl with AgNO(3) serves as a model. The relative error as well as the deviation is within 0.2% under the experimental conditions. The applicability of the titrimetry in pharmaceutical analyses, for example, phenytoin sodium and procaine hydrochloride, is illustrated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  8. Broad screening of illicit ingredients in cosmetics using ultra-high-performance liquid chromatography-hybrid quadrupole-Orbitrap mass spectrometry with customized accurate-mass database and mass spectral library.

    PubMed

    Meng, Xianshuang; Bai, Hua; Guo, Teng; Niu, Zengyuan; Ma, Qiang

    2017-12-15

    Comprehensive identification and quantitation of 100 multi-class regulated ingredients in cosmetics was achieved using ultra-high-performance liquid chromatography (UHPLC) coupled with hybrid quadrupole-Orbitrap high-resolution mass spectrometry (Q-Orbitrap HRMS). A simple, efficient, and inexpensive sample pretreatment protocol was developed using ultrasound-assisted extraction (UAE), followed by dispersive solid-phase extraction (dSPE). The cosmetic samples were analyzed by UHPLC-Q-Orbitrap HRMS under synchronous full-scan MS and data-dependent MS/MS (full-scan MS1/dd-MS2) acquisition mode. The mass resolution was set to 70,000 FWHM (full width at half maximum) for full-scan MS1 and 17,500 FWHM for the dd-MS2 stage, with experimentally measured mass deviations of less than 2 ppm (parts per million) for quasi-molecular ions and 5 ppm for characteristic fragment ions for each individual analyte. An accurate-mass database and a mass spectral library were built in-house for searching the 100 target compounds. Broad screening was conducted by comparing the experimentally measured exact mass of precursor and fragment ions, retention time, isotopic pattern, and ionic ratio with the accurate-mass database and by matching the acquired MS/MS spectra against the mass spectral library. The developed methodology was evaluated and validated in terms of limits of detection (LODs), limits of quantitation (LOQs), linearity, stability, accuracy, and matrix effect. The UHPLC-Q-Orbitrap HRMS approach was applied for the analysis of the 100 target illicit ingredients in 123 genuine cosmetic samples, and exhibited great potential for high-throughput, sensitive, and reliable screening of multi-class illicit compounds in cosmetics. Copyright © 2017 Elsevier B.V. All rights reserved.
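
    The ppm mass-deviation criterion used above is a one-line calculation; for instance (using the well-known protonated caffeine ion as an illustrative target, not a compound named in the paper):

```python
# Mass accuracy in parts per million between measured and theoretical m/z.
def mass_error_ppm(measured_mz, theoretical_mz):
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Protonated caffeine [C8H10N4O2 + H]+ has theoretical m/z 195.0877;
# a reading of 195.0880 lies within the 2 ppm precursor-ion tolerance.
err = mass_error_ppm(195.0880, 195.0877)
print(round(err, 2))
```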

  9. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  10. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.

    PubMed

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R

    2009-07-01

    The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/

  11. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance

    PubMed Central

    Talamas, Sean N.; Mavor, Kenneth I.; Perrett, David I.

    2016-01-01

    Despite the old adage not to ‘judge a book by its cover’, facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone’s health or intelligence, but such cues are overshadowed by an ‘attractiveness halo’ whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning as this has been shown to influence students’ future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  12. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    NASA Astrophysics Data System (ADS)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
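
    The canonical polyadic format stores a tensor as a sum of rank-1 outer products, which is why the memory footprint grows linearly rather than exponentially with the number of modes. A minimal sketch of reconstructing a 3-way tensor from CP factors (generic CP algebra with NumPy; this is not the CP-VCC code itself):

```python
import numpy as np

def cp_reconstruct(factors):
    """Rebuild a 3-way tensor from canonical polyadic (CP) factor matrices:
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
    Storing only A, B, C (each n x R) avoids ever holding T in full dimension."""
    A, B, C = factors
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# Rank-1 example: factors of shape (2, 1) reconstruct a 2x2x2 tensor.
A = np.array([[1.0], [2.0]])
B = np.array([[3.0], [4.0]])
C = np.array([[5.0], [6.0]])
T = cp_reconstruct((A, B, C))
```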

  13. Accurate RNA consensus sequencing for high-fidelity detection of transcriptional mutagenesis-induced epimutations.

    PubMed

    Reid-Bayliss, Kate S; Loeb, Lawrence A

    2017-08-29

    Transcriptional mutagenesis (TM) due to misincorporation during RNA transcription can result in mutant RNAs, or epimutations, that generate proteins with altered properties. TM has long been hypothesized to play a role in aging, cancer, and viral and bacterial evolution. However, inadequate methodologies have limited progress in elucidating a causal association. We present a high-throughput, highly accurate RNA sequencing method to measure epimutations with single-molecule sensitivity. Accurate RNA consensus sequencing (ARC-seq) uniquely combines RNA barcoding and generation of multiple cDNA copies per RNA molecule to eliminate errors introduced during cDNA synthesis, PCR, and sequencing. The stringency of ARC-seq can be scaled to accommodate the quality of input RNAs. We apply ARC-seq to directly assess transcriptome-wide epimutations resulting from RNA polymerase mutants and oxidative stress.
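
    The error-elimination idea described above, trusting a base call only when all cDNA copies of the same barcoded RNA molecule agree, can be sketched as follows. This is a toy illustration of consensus calling under assumed thresholds, not the published ARC-seq pipeline:

```python
from typing import List, Optional

def consensus_base(copies: List[str], min_copies: int = 3) -> Optional[str]:
    """Call a consensus base for one position across multiple cDNA copies of
    the same barcoded RNA molecule. A base is accepted only if every copy
    agrees, so an error introduced during cDNA synthesis, PCR, or sequencing
    of any single copy causes the position to be discarded (None).
    The min_copies threshold is illustrative."""
    if len(copies) < min_copies:
        return None
    first = copies[0]
    return first if all(b == first for b in copies) else None
```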

  14. Comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on accurate mass databases combined with MS/MS spectra.

    PubMed

    Liu, Yan-Chun; Xiao, Sa; Yang, Kun; Ling, Li; Sun, Zhi-Liang; Liu, Zhao-Ying

    2017-06-01

    This study reports an applicable analytical strategy of comprehensive identification and structure characterization of target components from Gelsemium elegans by using high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QqTOF MS) based on the use of accurate mass databases combined with MS/MS spectra. The databases created included accurate masses and elemental compositions of 204 components from Gelsemium and their structural data. The accurate MS and MS/MS spectra were acquired through data-dependent auto MS/MS mode followed by an extraction of the potential compounds from the LC-QqTOF MS raw data of the sample. These were matched against the databases to search for targeted components in the sample. The structures of detected components were tentatively characterized by manually interpreting the accurate MS/MS spectra for the first time. A total of 57 components were successfully detected and structurally characterized from the crude extracts of G. elegans, although some isomers could not be differentiated. This analytical strategy is generic and efficient, avoids isolation and purification procedures, enables a comprehensive structure characterization of target components of Gelsemium and would be widely applicable for complicated mixtures that are derived from Gelsemium preparations. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Lincoln Laboratory demonstrates highly accurate vehicle localization under adverse weather conditions

    DTIC Science & Technology

    2016-05-25

    2016 Lincoln Laboratory demonstrates highly accurate vehicle localization under adverse weather conditions A ground-penetrating radar system...the problems limiting the development and adoption of self-driving vehicles: how can a vehicle navigate to stay within its lane when bad weather ... weather conditions, but it is challenging, even impossible, for them to work when snow covers the markings and surfaces or precipitation obscures points

  16. High sample throughput genotyping for estimating C-lineage introgression in the dark honeybee: an accurate and cost-effective SNP-based tool.

    PubMed

    Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Johnston, J Spencer; McCormack, Grace P; Pinto, M Alice

    2018-06-04

    The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly most widespread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.
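
    With ancestry-informative SNPs, a naive introgression proxy is simply the mean frequency of the C-lineage-diagnostic allele across loci. The sketch below illustrates that idea only; it is an assumed toy estimator, not the study's actual admixture model:

```python
from typing import List

def introgression_estimate(genotypes: List[int]) -> float:
    """Toy introgression proxy for one diploid individual: mean frequency of
    the C-lineage-informative allele across ancestry-informative SNPs, with
    each genotype coded as 0, 1, or 2 copies of the C-lineage allele.
    A sketch of the principle only, not the published estimator."""
    if not genotypes:
        raise ValueError("need at least one genotyped SNP")
    return sum(genotypes) / (2 * len(genotypes))
```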

  17. Do dual-route models accurately predict reading and spelling performance in individuals with acquired alexia and agraphia?

    PubMed

    Rapcsak, Steven Z; Henry, Maya L; Teague, Sommer L; Carnahan, Susan D; Beeson, Pélagie M

    2007-06-18

    Coltheart and co-workers [Castles, A., Bates, T. C., & Coltheart, M. (2006). John Marshall and the developmental dyslexias. Aphasiology, 20, 871-892; Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256] have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper, we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult neurological patients with acquired alexia and agraphia. These findings provide empirical support for dual-route theories of written language processing.
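
    A common formalization of the dual-route prediction treats the lexical and nonlexical routes as independent success probabilities, so a regular word is read correctly if either route succeeds. Whether this is exactly the equation used by Coltheart and colleagues is an assumption here; the sketch below shows only that standard independent-routes form:

```python
def dual_route_prediction(p_lexical: float, p_nonlexical: float) -> float:
    """Predicted probability of reading a regular word correctly when either
    route can succeed, assuming route independence: P = L + N - L*N.
    This particular functional form is an assumption for illustration."""
    return p_lexical + p_nonlexical - p_lexical * p_nonlexical
```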

  18. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  19. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    PubMed Central

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-01-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127 day continuous test performed in the presence of volatile organic compounds and high humidity levels, showed that the sensor with an electrodeposited sensitive layer had 260% higher response magnitude, 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs. 35%) over a Au control based QCM (unmodified) when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospikes sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor. PMID:25338965

  20. Gold nanospikes based microsensor as a highly accurate mercury emission monitoring system

    NASA Astrophysics Data System (ADS)

    Sabri, Ylias M.; Ippolito, Samuel J.; Tardio, James; Bansal, Vipul; O'Mullane, Anthony P.; Bhargava, Suresh K.

    2014-10-01

    Anthropogenic elemental mercury (Hg0) emission is a serious worldwide environmental problem due to the extreme toxicity of the heavy metal to humans, plants and wildlife. Development of an accurate and cheap microsensor based online monitoring system which can be integrated as part of Hg0 removal and control processes in industry is still a major challenge. Here, we demonstrate that forming Au nanospike structures directly onto the electrodes of a quartz crystal microbalance (QCM) using a novel electrochemical route results in a self-regenerating, highly robust, stable, sensitive and selective Hg0 vapor sensor. The data from a 127 day continuous test performed in the presence of volatile organic compounds and high humidity levels, showed that the sensor with an electrodeposited sensitive layer had 260% higher response magnitude, 3.4 times lower detection limit (~22 μg/m3 or ~2.46 ppbv) and higher accuracy (98% vs. 35%) over a Au control based QCM (unmodified) when exposed to a Hg0 vapor concentration of 10.55 mg/m3 at 101°C. Statistical analysis of the long term data showed that the nano-engineered Hg0 sorption sites on the developed Au nanospikes sensitive layer play a critical role in the enhanced sensitivity and selectivity of the developed sensor towards Hg0 vapor.
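
    QCM sensors such as the one above transduce adsorbed mass into a resonance-frequency shift. A sketch of the standard Sauerbrey conversion from generic QCM theory (not taken from this paper), using the usual AT-cut quartz constants:

```python
import math

def sauerbrey_mass(delta_f_hz: float, f0_hz: float, area_cm2: float,
                   rho_q: float = 2.648, mu_q: float = 2.947e11) -> float:
    """Adsorbed mass (grams) from a QCM frequency shift via the Sauerbrey
    equation: delta_f = -2 f0^2 dm / (A sqrt(rho_q mu_q)).
    rho_q (g/cm^3) and mu_q (g cm^-1 s^-2) are constants of AT-cut quartz;
    the relation assumes a thin, rigid, uniformly distributed film."""
    return -delta_f_hz * area_cm2 * math.sqrt(rho_q * mu_q) / (2.0 * f0_hz ** 2)
```

    For a 5 MHz crystal this gives the familiar sensitivity of roughly 56.6 Hz per microgram per square centimetre.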

  1. Accurate high-throughput structure mapping and prediction with transition metal ion FRET

    PubMed Central

    Yu, Xiaozhen; Wu, Xiongwu; Bermejo, Guillermo A.; Brooks, Bernard R.; Taraska, Justin W.

    2013-01-01

    Mapping the landscape of a protein’s conformational space is essential to understanding its functions and regulation. The limitations of many structural methods have made this process challenging for most proteins. Here, we report that transition metal ion FRET (tmFRET) can be used in a rapid, highly parallel screen, to determine distances from multiple locations within a protein at extremely low concentrations. The distances generated through this screen for the protein Maltose Binding Protein (MBP) match distances from the crystal structure to within a few angstroms. Furthermore, energy transfer accurately detects structural changes during ligand binding. Finally, fluorescence-derived distances can be used to guide molecular simulations to find low energy states. Our results open the door to rapid, accurate mapping and prediction of protein structures at low concentrations, in large complex systems, and in living cells. PMID:23273426
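
    Distances in any FRET screen follow from the Förster relation between transfer efficiency and donor-acceptor separation. A sketch of that standard conversion (generic FRET theory, not the paper's analysis code; R0 for tmFRET metal-ion pairs is short, roughly 10-20 Å, but the numbers below are illustrative):

```python
def fret_distance(efficiency: float, r0: float) -> float:
    """Donor-acceptor distance from FRET efficiency via the Förster relation
    E = 1 / (1 + (r/R0)**6), inverted to r = R0 * (1/E - 1)**(1/6).
    r0 is the Förster radius of the donor-acceptor pair (same units out)."""
    if not 0.0 < efficiency < 1.0:
        raise ValueError("efficiency must be strictly between 0 and 1")
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)
```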

  2. Highly accurate potential energy surface for the He-H2 dimer

    NASA Astrophysics Data System (ADS)

    Bakr, Brandon W.; Smith, Daniel G. A.; Patkowski, Konrad

    2013-10-01

    A new highly accurate interaction potential is constructed for the He-H2 van der Waals complex. This potential is fitted to 1900 ab initio energies computed at the very large-basis coupled-cluster level and augmented by corrections for higher-order excitations (up to full configuration interaction level) and the diagonal Born-Oppenheimer correction. At the vibrationally averaged H-H bond length of 1.448736 bohrs, the well depth of our potential, 15.870 ± 0.065 K, is nearly 1 K larger than the most accurate previous studies have indicated. In addition to constructing our own three-dimensional potential in the van der Waals region, we present a reparameterization of the Boothroyd-Martin-Peterson potential surface [A. I. Boothroyd, P. G. Martin, and M. R. Peterson, J. Chem. Phys. 119, 3187 (2003)] that is suitable for all configurations of the triatomic system. Finally, we use the newly developed potentials to compute the properties of the lone bound states of 4He-H2 and 3He-H2 and the interaction second virial coefficient of the hydrogen-helium mixture.

  3. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.

  4. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  5. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding of the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U LIII edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  6. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement. PMID:27713496

  7. Engineering the Charge Transport of Ag Nanocrystals for Highly Accurate, Wearable Temperature Sensors through All-Solution Processes.

    PubMed

    Joh, Hyungmok; Lee, Seung-Wook; Seong, Mingi; Lee, Woo Seok; Oh, Soong Ju

    2017-06-01

    All-nanocrystal (NC)-based and all-solution-processed wearable resistance temperature detectors (RTDs) are introduced. The charge transport mechanisms of Ag NC thin films are engineered through various ligand treatments to design high performance RTDs. Highly conductive Ag NC thin films exhibiting metallic transport behavior with high positive temperature coefficients of resistance (TCRs) are achieved through tetrabutylammonium bromide treatment. Ag NC thin films showing hopping transport with high negative TCRs are created through organic ligand treatment. All-solution-based, one-step photolithography techniques that integrate two distinct opposite-sign TCR Ag NC thin films into an ultrathin single device are developed to decouple mechanical effects such as human motion. The unconventional materials design and strategy enable highly accurate, sensitive, wearable and motion-free RTDs, demonstrated by experiments on moving or curved objects such as human skin, and simulation results based on charge transport analysis. This strategy provides a low cost and simple method to design wearable multifunctional sensors with high sensitivity which could be utilized in various fields such as biointegrated sensors or electronic skin. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
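
    An RTD reports temperature by inverting the first-order resistance-temperature relation that defines the TCR. A sketch with illustrative numbers (the paper's films have both positive and negative TCRs; the values below are not from the paper):

```python
def rtd_temperature(resistance: float, r_ref: float, tcr: float,
                    t_ref: float = 25.0) -> float:
    """Invert the first-order RTD relation R(T) = R_ref * (1 + TCR*(T - T_ref))
    to recover temperature from a measured resistance. tcr is in 1/degC;
    works for positive or negative TCR films (tcr must be nonzero)."""
    return t_ref + (resistance / r_ref - 1.0) / tcr

# Illustrative positive-TCR film: 100 ohm at 25 degC, TCR = 0.003 /degC,
# so 103 ohm corresponds to 35 degC.
t = rtd_temperature(103.0, 100.0, 0.003)
```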

  8. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserving relevant information and saliency, as well as speed, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distribution and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
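
    The patch-similarity principle behind NLB can be illustrated with a much simpler relative, non-local means, where each sample is replaced by a weighted average over samples whose surrounding patches look alike. This is only a 1-D toy of the principle, not the Non Local Bayes algorithm, and the parameter values are illustrative:

```python
import numpy as np

def nl_means_1d(x: np.ndarray, patch_radius: int = 1, h: float = 0.5) -> np.ndarray:
    """Minimal 1-D non-local means: each sample becomes a weighted average of
    all samples, with weights decaying in the squared distance between their
    surrounding patches. Illustrates the 'similar structures share one noise
    estimate' idea; a toy sketch, not the NLB algorithm from the paper."""
    n = len(x)
    pad = np.pad(x, patch_radius, mode="edge")
    # One patch of length 2*patch_radius+1 centred on every sample.
    patches = np.stack([pad[i:i + 2 * patch_radius + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / (h * h))       # self-weight is always 1, so sum > 0
        out[i] = (w * x).sum() / w.sum()
    return out
```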

  9. Development of highly accurate approximate scheme for computing the charge transfer integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pershin, Anton; Szalay, Péter G.

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
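
    The Taylor-expansion idea, evaluating an expensive property once and then extrapolating it along a coordinate instead of recomputing it, can be illustrated generically with finite-difference derivatives. A sketch only, not the authors' implementation:

```python
from typing import Callable

def taylor_approx(f: Callable[[float], float], x0: float, dx: float,
                  h: float = 1e-4) -> float:
    """Second-order Taylor expansion of f around x0, evaluated at x0 + dx,
    with first and second derivatives estimated by central finite differences.
    Three evaluations of f then cover a whole region of the coordinate."""
    f0 = f(x0)
    d1 = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    d2 = (f(x0 + h) - 2.0 * f0 + f(x0 - h)) / (h * h)
    return f0 + d1 * dx + 0.5 * d2 * dx * dx
```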

  10. Turbulence modeling of free shear layers for high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.

    1993-01-01

    The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast, and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model will be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.

  11. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it is of significant faster convergence rate.

  12. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
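    The abstract does not reproduce the model's equations; a common two-parameter Weibull conversion curve consistent with the description is f(t) = 1 - exp(-(t/λ)^n), with λ the characteristic time. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
import math

def weibull_conversion(t, lam, n):
    """Fraction of lignocellulose saccharified at time t, assuming a
    two-parameter Weibull curve: f(t) = 1 - exp(-(t / lam)**n)."""
    return 1.0 - math.exp(-((t / lam) ** n))

# At t = lam the conversion is 1 - 1/e ~ 63.2% regardless of n, which is
# why lam alone can summarize the overall speed of the system.
for n in (0.6, 1.0, 1.4):
    print(round(weibull_conversion(24.0, 24.0, n), 3))  # 0.632 each time
```

    The λ-as-characteristic-time interpretation falls out of the functional form: whatever the shape parameter n, the curve passes through the same conversion level at t = λ.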

  13. High-speed engine/component performance assessment using exergy and thrust-based methods

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.

    1996-01-01

    This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.

  14. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    PubMed Central

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-01-01

    A percolation theory based on variation of the conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails to properly analyze the materials under large deformation, since its assumptions may not be valid in that regime. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson's ratio, and a strain-dependent percolation threshold. The digital image correlation (DIC) method was used to determine actual Poisson's ratios at various strain levels, which were used to accurately estimate the variation of conductive filler volume fraction under deformation. We also adopted a strain-dependent percolation threshold caused by filler relocation with deformation. When the three key factors were considered, the electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856
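    As a rough illustration of the geometric part of the argument (my own simplification, not the paper's model): under uniaxial strain ε with Poisson's ratio ν, the composite volume scales approximately as (1+ε)(1-νε)², which dilutes the filler volume fraction entering the classic percolation power law:

```python
def filler_fraction(phi0, strain, poisson):
    """Filler volume fraction after uniaxial stretch, assuming the filler
    volume is fixed while the composite volume scales as
    (1 + strain) * (1 - poisson * strain)**2 (length up, lateral dims down)."""
    return phi0 / ((1 + strain) * (1 - poisson * strain) ** 2)

def conductivity(phi, phi_c, t=2.0, sigma0=1.0):
    """Classic percolation power law; non-conductive below the threshold."""
    return sigma0 * max(phi - phi_c, 0.0) ** t

phi0, phi_c = 0.25, 0.15  # illustrative values only
for strain in (0.0, 0.25, 0.5):
    phi = filler_fraction(phi0, strain, poisson=0.35)
    print(round(conductivity(phi, phi_c), 4))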

  15. High-performance time-resolved fluorescence by direct waveform recording.

    PubMed

    Muretta, Joseph M; Kyrychenko, Alexander; Ladokhin, Alexey S; Kast, David J; Gillispie, Gregory D; Thomas, David D

    2010-10-01

    We describe a high-performance time-resolved fluorescence (HPTRF) spectrometer that dramatically increases the rate at which precise and accurate subnanosecond-resolved fluorescence emission waveforms can be acquired in response to pulsed excitation. The key features of this instrument are an intense (1 μJ/pulse), high-repetition rate (10 kHz), and short (1 ns full width at half maximum) laser excitation source and a transient digitizer (0.125 ns per time point) that records a complete and accurate fluorescence decay curve for every laser pulse. For a typical fluorescent sample containing a few nanomoles of dye, a waveform with a signal/noise of about 100 can be acquired in response to a single laser pulse every 0.1 ms, at least 10^5 times faster than the conventional method of time-correlated single photon counting, with equal accuracy and precision in lifetime determination for lifetimes as short as 100 ps. Using standard single-lifetime samples, the detected signals are extremely reproducible, with waveform precision and linearity to within 1% error for single-pulse experiments. Waveforms acquired in 0.1 s (1000 pulses) with the HPTRF instrument were of sufficient precision to analyze two samples having different lifetimes, resolving minor components with high accuracy with respect to both lifetime and mole fraction. The instrument makes possible a new class of high-throughput time-resolved fluorescence experiments that should be especially powerful for biological applications, including transient kinetics, multidimensional fluorescence, and microplate formats.
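    As a toy illustration of what a directly recorded waveform allows (the numbers and the fitting method are hypothetical, not the instrument's analysis pipeline): a mono-exponential decay sampled at 0.125 ns per point with ~1% noise can be fit for its lifetime by log-linear least squares.

```python
import math
import random

random.seed(0)
dt, tau_true = 0.125, 4.0  # ns per point (as quoted above); hypothetical lifetime
t = [i * dt for i in range(160)]
y = [1000.0 * math.exp(-ti / tau_true) * (1 + random.gauss(0, 0.01)) for ti in t]

# Log-linear least squares: ln y = ln A - t / tau, so tau = -1 / slope
ln_y = [math.log(v) for v in y]
n = len(t)
mt, ml = sum(t) / n, sum(ln_y) / n
slope = (sum((a - mt) * (b - ml) for a, b in zip(t, ln_y))
         / sum((a - mt) ** 2 for a in t))
tau_fit = -1.0 / slope
print(round(tau_fit, 2))  # close to 4.0
```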

  16. Development and validation of high-performance liquid chromatography and high-performance thin-layer chromatography methods for the quantification of khellin in Ammi visnaga seed

    PubMed Central

    Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar

    2015-01-01

    Objective: The present study was designed to develop simple, accurate, and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as a mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as a mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both HPLC and HPTLC methods, with concentration ranges of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The % relative standard deviation values for method precision were found to be 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy of the method was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and several formulations containing it as an ingredient. PMID:26681890
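    The precision figures quoted above are percent relative standard deviations; for reference, the computation (shown with made-up peak-area values, not the study's data) is simply:

```python
def rsd_percent(values):
    """Percent relative standard deviation: sample SD / mean * 100."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return 100.0 * sd / m

# Hypothetical intraday peak areas for one khellin concentration level
areas = [1021.4, 1008.9, 1015.2, 1030.1, 1012.7, 1018.3]
print(round(rsd_percent(areas), 2))
```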

  17. pyQms enables universal and accurate quantification of mass spectrometry data.

    PubMed

    Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian

    2017-10-01

    Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching, which offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. pyQms is, due to its universal design, applicable to every research field, labeling strategy, and acquisition technique. This offers ultimate flexibility for researchers to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well in accurately quantifying partially labeled proteomes at large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  18. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  19. High-accurate optical vector analysis based on optical single-sideband modulation

    NASA Astrophysics Data System (ADS)

    Xue, Min; Pan, Shilong

    2016-11-01

    Most of the effort devoted to optical communications has focused on improving optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve a resolution of sub-Hz. However, its accuracy is restricted by measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical and optical-to-electrical conversions are essentially required to achieve high-resolution frequency sweeping and to extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the influence of the measurement errors induced by the unwanted sidebands and techniques for implementing highly accurate OSSB-based OVAs are discussed.

  20. Accurate determination of high-risk coronary lesion type by multidetector cardiac computed tomography.

    PubMed

    Alasnag, Mirvat; Umakanthan, Branavan; Foster, Gary P

    2008-07-01

    Coronary arteriography (CA) is the standard method to image coronary lesions. Multidetector cardiac computerized tomography (MDCT) provides high-resolution images of coronary arteries, allowing a noninvasive alternative to determine lesion type. To date, no studies have assessed the ability of MDCT to categorize coronary lesion types. The objective of this study was to determine the accuracy of lesion type categorization by MDCT using CA as a reference standard. Patients who underwent both MDCT and CA within 2 months of each other were enrolled. MDCT and CA images were reviewed in a blinded fashion. Lesions were categorized according to the SCAI classification system (Types I-IV). The origin, proximal and middle segments of the major arteries were analyzed. Each segment comprised a data point for comparison. Analysis was performed using the Spearman Correlation Test. Four hundred eleven segments were studied, of which 110 had lesions. The lesion distribution was as follows: 35 left anterior descending (LAD), 29 circumflex (Cx), 31 right coronary artery (RCA), 2 ramus intermedius, 8 diagonal, 4 obtuse marginal and 2 left internal mammary arteries. Correlations between MDCT and CA were significant in all major vessels (LAD, Cx, RCA) (p < 0.001). The overall correlation coefficient was 0.67. Concordance was strong for lesion Types II-IV (97%) and poor for Type I (30%). High-risk coronary lesion types can be accurately categorized by MDCT. This ability may allow MDCT to play an important noninvasive role in the planning of coronary interventions.
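    The comparison statistic used above is Spearman's rank correlation. For reference, a self-contained implementation with tie-aware average ranks (illustrative, not the study's code):

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):      # tied values share the average rank
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

    A perfectly monotone relationship gives 1.0 (or -1.0); the study's overall coefficient of 0.67 between MDCT and CA lesion types sits between those extremes.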

  1. Energy-switching potential energy surface for the water molecule revisited: A highly accurate single-sheeted form.

    PubMed

    Galvão, B R L; Rodrigues, S P J; Varandas, A J C

    2008-07-28

    A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function mimics also the complicated Sigma/Pi crossing that arises at linear geometries of the water molecule.

  2. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge easily to evaluate accurately energy efficient measures for K-5 schools, which would contribute to the…

  3. Projection lenses for high-resolution ablation with excimer lasers: high-performance, wide-field and high-UV laser power

    NASA Astrophysics Data System (ADS)

    Schlichting, Johannes; Winkler, Kerstin; Koerner, Lienhard; Schletterer, Thomas; Burghardt, Berthold; Kahlert, Hans-Juergen

    2000-10-01

    The productive and accurate ablation of microstructures demands precise imaging of a mask pattern onto the substrate under work. The job can be done with high-performance wide-field lenses as a key component of the ablation equipment. The image field has dimensions of 20 to 30 mm. Typical dimensions and accuracy of the microstructures are on the order of a few microns. On the other hand, the working depth of focus (DOF) has to be on the order of some tens of microns to be successful in drilling through 20 to 50 μm substrates. All these features have to be achieved under the conditions of high-power UV laser light. Several design principles for such systems are applied: an optimum number of elements, minimum tolerance sensitivity, material restrictions for the lens elements as well as the mechanical parts (mounting), restrictions on the possible power densities on lens surfaces (including ghosts), and matched quality for the manufactured system. The special applications require appropriate performance criteria for theoretical calculation and measurement, which allow conclusions about performance in the application. The basis is wavefront calculation and measurement (using a Shack-Hartmann sensor) in the UV. Derived criteria are calculated and compared with application results.

  4. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. Usage of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure e.g. the heart activity and wirelessly transfer the data to the patient's mobile phone. One significant restriction for these devices is energy usage, which leads to a requirement for low sampling rates. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then restored by interpolating the signals to their original sampling rate. The HRV parameters obtained from the ECG signals with lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters are almost accurate, with a reasonable error.
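    The effect being studied can be sketched with synthetic data (a toy model, not the authors' pipeline): quantizing R-peak times to a 50 Hz grid perturbs each peak by at most ±10 ms, which barely moves a time-domain HRV statistic such as SDNN when the underlying variability is tens of milliseconds.

```python
import random

def sdnn(rr):
    """Standard deviation of RR intervals (ms)."""
    m = sum(rr) / len(rr)
    return (sum((x - m) ** 2 for x in rr) / len(rr)) ** 0.5

random.seed(1)
# Synthetic R-peak times (s): ~60 bpm with Gaussian beat-to-beat variability
t, peaks = 0.0, []
for _ in range(300):
    t += random.gauss(1.0, 0.05)
    peaks.append(t)

fs = 50.0  # Hz; quantize peak times to the sampling grid
peaks_q = [round(p * fs) / fs for p in peaks]

rr = [(b - a) * 1000 for a, b in zip(peaks, peaks[1:])]
rr_q = [(b - a) * 1000 for a, b in zip(peaks_q, peaks_q[1:])]
print(round(sdnn(rr), 1), round(sdnn(rr_q), 1))  # nearly identical
```

    Interpolating the ECG before peak detection, as the authors do, reduces the quantization error further still.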

  5. Denaturing high-performance liquid chromatography for mutation detection and genotyping.

    PubMed

    Fackenthal, Donna Lee; Chen, Pei Xian; Howe, Ted; Das, Soma

    2013-01-01

    Denaturing high-performance liquid chromatography (DHPLC) is an accurate and efficient screening technique used for detecting DNA sequence changes by heteroduplex analysis. It can also be used for genotyping of single nucleotide polymorphisms (SNPs). The high sensitivity of DHPLC has made this technique one of the most reliable approaches to mutation analysis and, therefore, used in various areas of genetics, both in the research and clinical arena. This chapter describes the methods used for mutation detection analysis and the genotyping of SNPs by DHPLC on the WAVE™ system from Transgenomic Inc. ("WAVE" and "DNASep" are registered trademarks, and "Navigator" is a trademark, of Transgenomic, used with permission. All other trademarks are property of the respective owners).

  6. Energy stable and high-order-accurate finite difference methods on staggered grids

    NASA Astrophysics Data System (ADS)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
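    A minimal second-order example of the staggering idea (not the paper's high-order summation-by-parts operators): 1D acoustics with pressure on integer nodes and velocity on half nodes, leapfrogged in time. The scheme is non-dissipative, so a discrete energy stays close to its initial value.

```python
import math

n, dx = 200, 1.0 / 200
dt = 0.5 * dx                       # CFL number 0.5 for wave speed c = 1
p = [math.exp(-((i * dx - 0.5) / 0.05) ** 2) for i in range(n + 1)]
v = [0.0] * n                       # v[i] lives at x = (i + 0.5) * dx

def energy():
    return dx * (sum(q * q for q in p) + sum(q * q for q in v))

e0 = energy()
for _ in range(400):
    for i in range(n):              # velocity update from pressure gradient
        v[i] -= dt / dx * (p[i + 1] - p[i])
    for i in range(1, n):           # interior pressure update from velocity divergence
        p[i] -= dt / dx * (v[i] - v[i - 1])
print(abs(energy() - e0) / e0)      # small relative drift; no blow-up
```

    The boundary pressures are simply held fixed here, a crude reflecting condition; the weak (penalty) boundary enforcement described in the abstract is what makes such stability provable for general boundary and interface conditions at high order.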

  7. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  9. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    For the stability of numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may be even more advantageous for fracture problems due to its better accuracy of stresses.

  10. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper and its sequel, new theoretical work is contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper and its sequel support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding with minimal computational complexity.

  11. Histamine quantification in human plasma using high resolution accurate mass LC-MS technology.

    PubMed

    Laurichesse, Mathieu; Gicquel, Thomas; Moreau, Caroline; Tribut, Olivier; Tarte, Karin; Morel, Isabelle; Bendavid, Claude; Amé-Thomas, Patricia

    2016-01-01

    Histamine (HA) is a small amine playing an important role in anaphylactic reactions. In order to identify and quantify HA in a plasma matrix, different methods have been developed but present several disadvantages. Here, we developed an alternative method using liquid chromatography coupled with an ultra-high-resolution accurate-mass instrument, the Q Exactive™ (Thermo Fisher) (LC-HRMS). The method includes protein precipitation of plasma samples spiked with HA-d4 as internal standard (IS). LC separation was performed on a C18 Accucore column (100 × 2.1 mm, 2.6 μm) using a mobile phase containing nonafluoropentanoic acid (3 nM) and acetonitrile with 0.1% (v/v) formic acid in gradient mode. Separation of analytes was obtained within 10 min. Analysis was performed in full-scan mode and targeted MS2 mode using a 5 ppm mass window. Ion transitions monitored in targeted MS2 mode were 112.0869 > 95.0607 m/z for HA and 116.1120 > 99.0855 m/z for HA-d4. Calibration curves were obtained by adding standard calibration dilutions at 1 to 180 nM in Tris-BSA. Elution of HA and IS occurred at 4.1 min. The method was validated over a range of concentrations from 1 nM to 100 nM. The intra- and inter-run precisions were <15% for quality controls. Human plasma samples from 30 patients were analyzed by LC-HRMS, and the results were highly correlated with those obtained using the gold-standard radioimmunoassay (RIA) method. Overall, we demonstrate here that LC-HRMS is a sensitive method for histamine quantification in human plasma, suitable for routine use in medical laboratories. In addition, LC-HRMS is less time-consuming than RIA, avoids the use of radioactivity, and can therefore be considered an alternative quantitative method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
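    Internal-standard calibration of the kind described above reduces to a linear fit of the HA/HA-d4 peak-area ratio against concentration, then back-calculation of unknowns. A sketch with invented peak-area ratios (not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration: HA/HA-d4 peak-area ratio vs concentration (nM)
conc = [1, 5, 10, 50, 100, 180]
ratio = [0.011, 0.052, 0.103, 0.501, 1.004, 1.792]
a, b = fit_line(conc, ratio)

unknown_ratio = 0.35
print(round((unknown_ratio - b) / a, 1))  # back-calculated concentration, nM
```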

  12. Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry Vladimirovich; Yee, Helen M C.

    2013-01-01

    The performance of the filter scheme with improved dissipation control has been demonstrated for different flow types. The scheme with a local dissipation-control parameter is shown to obtain more accurate results than its counterparts with a global or constant parameter. At the same time, no additional tuning is needed to achieve high accuracy of the method when using the local-parameter technique. However, further improvement of the method might be needed for even more complex and/or extreme flows.

  13. Predicting High-Power Performance in Professional Cyclists.

    PubMed

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
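    The power-duration relationship described above has the general shape of an exponential decay from sprint peak power toward maximal aerobic power. A sketch with invented parameter values (the per-rider constants are fit from the cyclists' testing data, not fixed numbers):

```python
import math

def apr_predicted_power(t, map_w=400.0, sprint_peak_w=1200.0, k=0.024):
    """Hypothetical anaerobic-power-reserve curve: predicted mean power (W)
    for an all-out effort of duration t (s), decaying exponentially from
    sprint peak power toward maximal aerobic power (MAP)."""
    return map_w + (sprint_peak_w - map_w) * math.exp(-k * t)

for t in (5, 30, 120, 300):
    print(t, round(apr_predicted_power(t)))
```

    By ~300 s the predicted power has essentially collapsed onto MAP, consistent with the model covering the 5 to ~300 s range mentioned above.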

  14. Highly accurate bound state calculations of the two-center molecular ions by using the universal variational expansion for three-body systems

    NASA Astrophysics Data System (ADS)

    Frolov, Alexei M.

    2018-03-01

    The universal variational expansion for the non-relativistic three-body systems is explicitly constructed. This universal expansion can be used to perform highly accurate numerical computations of the bound state spectra in various three-body systems, including Coulomb three-body systems with arbitrary particle masses and electric charges. Our main interest is related to the adiabatic three-body systems which contain one bound electron and two heavy nuclei of hydrogen isotopes: the protium p, deuterium d and tritium t. We also consider the analogous (model) hydrogen ion ∞H2+ with the two infinitely heavy nuclei.

  15. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    PubMed

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately assign the purity value of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.
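
    The total impurity mass fraction and its expanded uncertainty combine the individual impurity estimates. A minimal sketch, with invented impurity values and standard uncertainties (chosen only so the total lands near the reported figure) and the usual uncorrelated root-sum-square combination:

```python
import math

# Hypothetical related-impurity mass fractions (mg/g) with standard
# uncertainties; these are illustrative values, not the paper's data
impurities = [(40.1, 1.0), (25.0, 0.8), (18.2, 0.6)]

total = sum(m for m, _ in impurities)
u_c = math.sqrt(sum(u ** 2 for _, u in impurities))  # uncorrelated RSS
big_u = 2 * u_c  # expanded uncertainty, coverage factor k = 2
print(f"{total:.1f} mg/g, U(k=2) = {big_u:.1f} mg/g")
```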

  16. Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography

    PubMed Central

    Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila

    2016-01-01

    Objective: Benzodiazepines are frequently screened drugs in emergency toxicology, drugs-of-abuse testing, and forensic cases. Because benzodiazepine concentrations in biological samples can vary with bleeding, postmortem changes, and redistribution, potentially biasing forensic examinations, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of this major drug category. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid–liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out using liquid–liquid extraction with n-hexane: ethyl acetate and subsequent detection by a high-performance liquid chromatography method coupled to a diode-array detector. This method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. A linear calibration curve for each drug was obtained over the range of 30–3000 ng/mL, with correlation coefficients higher than 0.99. Results: The limits of detection and quantitation were 30 and 100 ng/mL, respectively, for all four drugs. The method showed appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepine recoveries were estimated to be over 80%. The method showed high selectivity; no additional peak due to interfering substances in samples was observed. Conclusion: The present method was selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in a forensic toxicology laboratory. PMID:27635251
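
    Quantification against a linear calibration curve works by least-squares fitting of standards and back-calculating the unknown. A sketch with hypothetical diazepam standards (the concentrations span the 30–3000 ng/mL range from the abstract; the peak areas are invented):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical diazepam standards: concentration (ng/mL) vs peak area
conc = [30, 100, 300, 1000, 3000]
area = [0.9, 3.1, 9.0, 30.2, 90.1]
slope, intercept = fit_line(conc, area)
unknown_conc = (12.4 - intercept) / slope  # back-calculate a sample's level
print(round(slope, 4), round(unknown_conc, 1))
```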

  17. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of brushless DC motors.
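
    The link between the permeance (or inductance) derivative and torque can be illustrated with a simple reluctance-torque expression. A sketch with an idealized sinusoidal inductance variation; the numbers are invented, and a real hybrid stepper needs the finite-element-computed permeance the abstract describes:

```python
import math

def reluctance_torque(theta, current, l1=1e-3, n_teeth=50):
    """T = 0.5 * i^2 * dL/dtheta for an idealized inductance variation
    L(theta) = L0 + L1*cos(N*theta); L0 drops out of the derivative.
    Illustrative values only (L1 in H, N rotor teeth)."""
    return -0.5 * current ** 2 * l1 * n_teeth * math.sin(n_teeth * theta)

# Peak torque magnitude occurs where sin(N*theta) = +/-1
print(abs(reluctance_torque(math.pi / (2 * 50), 2.0)))
```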

  18. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of brushless DC motors.

  19. A highly accurate symmetric optical flow based high-dimensional nonlinear spatial normalization of brain images.

    PubMed

    Wen, Ying; Hou, Lili; He, Lianghua; Peterson, Bradley S; Xu, Dongrong

    2015-05-01

    Spatial normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional spatial normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model under the assumptions of intensity constancy and intensity-gradient constancy, with a discontinuity-preserving spatio-temporal smoothness constraint. Then, an efficient inverse-consistent optical flow is proposed with the aim of higher registration accuracy, where the flow is naturally symmetric. By employing a hierarchical coarse-to-fine strategy and Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to that of other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 1: Cell embedding method and performance

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Kiesel, Peter; Sommer, Lars Wilko; Schwartz, Julian; Lochbaum, Alexander; Hegyi, Alex; Schuh, Andreas; Arakaki, Kyle; Saha, Bhaskar; Ganguli, Anurag; Kim, Kyung Ho; Kim, ChaeAh; Hah, Hoe Jin; Kim, SeokKoo; Hwang, Gyu-Ok; Chung, Geun-Chang; Choi, Bokkyu; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS), reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance, usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring: fiber-optic sensors. High-performance large-format pouch cells with embedded fiber-optic sensors were fabricated. The first of this two-part paper focuses on the embedding method details and performance of these cells. The seal integrity, capacity retention, cycle life, compatibility with existing module designs, and mass, volume, and cost estimates indicate their suitability for xEV and other advanced battery applications. The second part of the paper focuses on the internal strain and temperature signals obtained from these sensors under various conditions and their utility for high-accuracy cell state estimation algorithms.
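
    Fiber-optic strain/temperature sensing of the kind described is commonly based on fiber Bragg gratings, where the fractional wavelength shift is linear in strain and temperature. A sketch using typical silica-fiber coefficients (the values are textbook-typical assumptions, not this paper's calibration):

```python
def fbg_shift_nm(strain, d_temp, lam0_nm=1550.0, p_e=0.22,
                 alpha=0.55e-6, xi=6.7e-6):
    """Bragg wavelength shift (nm):
    d_lambda / lambda0 = (1 - p_e)*strain + (alpha + xi)*dT,
    with photoelastic coefficient p_e, thermal expansion alpha, and
    thermo-optic coefficient xi. Coefficients are typical values."""
    return lam0_nm * ((1 - p_e) * strain + (alpha + xi) * d_temp)

# Roughly 1.2 pm per microstrain and ~11 pm/K at 1550 nm
print(fbg_shift_nm(100e-6, 0.0), fbg_shift_nm(0.0, 10.0))
```

    Separating strain from temperature in practice requires two measurements (e.g. a strain-isolated reference grating), i.e. inverting a 2x2 linear system.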

  1. A highly accurate wireless digital sun sensor based on profile detecting and detector multiplexing technologies

    NASA Astrophysics Data System (ADS)

    Wei, Minsong; Xing, Fei; You, Zheng

    2017-01-01

    The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed, which transforms a two-dimensional image into a two-line profile output so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method for accuracy improvement of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of FOV regions was also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, was utilized to achieve wireless, self-powered functionality. The prototype of the sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be mounted flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
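
    The core of a profile-based digital sun sensor is recovering the sun angle from the centroid of a 1-D intensity profile. A minimal sketch of that geometry; the pixel pitch and mask-to-detector gap are hypothetical, not the prototype's parameters:

```python
import math

def sun_angle_deg(profile, pixel_um=10.0, mask_gap_mm=5.0):
    """Incidence angle from a 1-D intensity profile: the illuminated
    spot's centroid offset from the detector centre, divided by the
    mask-to-detector gap (hypothetical geometry values)."""
    total = sum(profile)
    centroid = sum(i * v for i, v in enumerate(profile)) / total
    offset_mm = (centroid - (len(profile) - 1) / 2) * pixel_um / 1000.0
    return math.degrees(math.atan2(offset_mm, mask_gap_mm))

profile = [0, 0, 1, 4, 9, 4, 1, 0, 0]  # symmetric spot over the centre pixel
print(sun_angle_deg(profile))
```

    Shifting the spot by one pixel tilts the recovered angle accordingly; sub-pixel centroiding is what makes 0.01°-class accuracy plausible.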

  2. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 °C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  3. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency at present. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method can efficiently reduce the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be efficiently solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd degree NURBS curve and can interpolate 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
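
    The quartic-in-the-parameter-increment idea can be sketched from a second-order Taylor expansion of the curve: matching the chord length to the commanded feed step L = F·Ts gives |C'(u)·du + 0.5·C''(u)·du²|² = L², a quartic in du. This is an illustration of the idea only; the paper's exact coefficients may differ:

```python
import numpy as np

def param_increment(c1, c2, step_len):
    """Smallest positive root of the quartic
    |C'(u) du + 0.5 C''(u) du^2|^2 = L^2
    built from the first/second derivative vectors c1, c2 at u."""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    coeffs = [0.25 * c2 @ c2, c1 @ c2, c1 @ c1, 0.0, -step_len ** 2]
    roots = np.roots(coeffs)  # np.roots strips leading zero coefficients
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return real[real > 0].min()

# Straight segment: the increment reduces to L / |C'|
print(param_increment([1.0, 0.0], [0.0, 0.0], 0.01))
```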

  4. Workplace Learning of High Performance Sports Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Clifford J.; Tinning, Richard

    2010-01-01

    The Australian coaching workplace (to be referred to as the State Institute of Sport; SIS) under consideration in this study employs significant numbers of full-time performance sport coaches and can be accurately characterized as a genuine workplace. Through a consideration of the interaction between what the workplace (SIS) affords the…

  5. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however, it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
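
    The linear cascade idea can be illustrated at zero spatial frequency: each Poisson gain stage adds an excess-noise term 1/(cumulative gain), and additive readout noise adds a term that shrinks as the squared total gain grows. This is simplified bookkeeping, not the paper's full frequency-dependent model with blurring stages:

```python
def dqe_zero_freq(n0, gains, sigma_add=0.0):
    """Zero-frequency DQE of a serial cascade of Poisson gain stages
    with additive readout noise (simplified linear-cascade form).
    n0: mean incident quanta; gains: mean gain of each stage;
    sigma_add: additive electronic noise referred to the output."""
    excess, gprod = 0.0, 1.0
    for g in gains:
        gprod *= g
        excess += 1.0 / gprod
    excess += sigma_add ** 2 / (n0 * gprod ** 2)
    return 1.0 / (1.0 + excess)

# Higher (LII-style) gain swamps readout noise at low, fluoroscopic exposures
print(dqe_zero_freq(10, [5], sigma_add=50), dqe_zero_freq(10, [500], sigma_add=50))
```

    The comparison mirrors the abstract's conclusion: with large variable gain ahead of the readout, DQE survives at low exposure; without it, additive noise dominates.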

  6. Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian

    2015-08-13

    Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction, a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including
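
    The pH-dependent boundaries in a Pourbaix diagram follow the Nernst equation: a couple exchanging n_h protons and n_e electrons has a potential line with slope -(RT ln 10 / F)·(n_h/n_e) versus pH. A small sketch using the well-known water-oxidation line:

```python
import math

def nernst_line(e0, n_e, n_h, ph, temp=298.15):
    """Pourbaix boundary (V vs SHE) for a couple with n_h protons and
    n_e electrons: E = E0 - (RT ln 10 / F) * (n_h / n_e) * pH."""
    gas_r, faraday = 8.314462618, 96485.33212
    return e0 - (gas_r * temp * math.log(10) / faraday) * (n_h / n_e) * ph

# O2 + 4 H+ + 4 e- -> 2 H2O, E0 = 1.229 V: upper water-stability line
print(round(nernst_line(1.229, 4, 4, 7.0), 3))
```

    At 25 °C the slope is the familiar -59.2 mV per pH unit when n_h = n_e; the DFT work above supplies the E0-type free energies that standard tables cannot provide for complex (hydroxy)oxides.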

  7. Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach

    PubMed Central

    Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire taxon set. A 'quartet' is an unrooted tree over four taxa; hence, quartet-based supertree methods combine many four-taxon unrooted trees into a single, coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474
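
    The quartet induced by a tree on four taxa can be read off path distances via the four-point condition: of the three possible pairings, the one with the smallest sum of within-pair distances is the tree's split. A sketch on a small additive tree (this illustrates quartets generally, not the paper's specific heuristic):

```python
from itertools import combinations

def quartet_topology(dist, quartet):
    """Induced split of four taxa via the four-point condition."""
    a, b, c, d = quartet
    pairings = {
        (frozenset((a, b)), frozenset((c, d))): dist[a][b] + dist[c][d],
        (frozenset((a, c)), frozenset((b, d))): dist[a][c] + dist[b][d],
        (frozenset((a, d)), frozenset((b, c))): dist[a][d] + dist[b][c],
    }
    return min(pairings, key=pairings.get)

# Path-length matrix of the 5-taxon tree ((A,B),C,(D,E)) with unit branches
dist = {
    'A': {'A': 0, 'B': 2, 'C': 3, 'D': 4, 'E': 4},
    'B': {'A': 2, 'B': 0, 'C': 3, 'D': 4, 'E': 4},
    'C': {'A': 3, 'B': 3, 'C': 0, 'D': 3, 'E': 3},
    'D': {'A': 4, 'B': 4, 'C': 3, 'D': 0, 'E': 2},
    'E': {'A': 4, 'B': 4, 'C': 3, 'D': 2, 'E': 0},
}
for q in combinations('ABCDE', 4):
    left, right = quartet_topology(dist, q)
    print(q, sorted(left), '|', sorted(right))
```

    A supertree heuristic then seeks the full tree whose induced quartets agree with as many input quartets as possible.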

  8. Process Performance of Optima XEx Single Wafer High Energy Implanter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.

    2011-01-07

    To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using Faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.

  9. Highly accurate thickness measurement of multi-layered automotive paints using terahertz technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krimi, Soufiene; Beigang, René; Klier, Jens

    2016-07-11

    In this contribution, we present a highly accurate approach for thickness measurements of multi-layered automotive paints using terahertz time domain spectroscopy in reflection geometry. The proposed method combines the benefits of a model-based material parameters extraction method to calibrate the paint coatings, a generalized Rouard's method to simulate the terahertz radiation behavior within arbitrary thin films, and the robustness of a powerful evolutionary optimization algorithm to increase the sensitivity of the minimum thickness measurement limit. Within the framework of this work, a self-calibration model is introduced, which takes into consideration real industrial challenges such as the effect of wet-on-wet spray in the painting process.

  10. Alumina ceramic based high-temperature performance of wireless passive pressure sensor

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Wu, Guozhu; Guo, Tao; Tan, Qiulin

    2016-12-01

    A wireless passive pressure sensor, equivalent to an inductive-capacitive (LC) resonance circuit and based on alumina ceramic, is fabricated using high-temperature ceramic sintering and post-fire metallization processes. A cylindrical copper spiral reader antenna and an insulation layer are designed to realize wireless measurement for the sensor in high-temperature environments. The high-temperature performance of the sensor is analyzed and discussed by studying the phase-frequency and amplitude-frequency characteristics of the reader antenna. The average frequency change of the sensor is 0.68 kHz/°C as the temperature rises from 27 °C to 700 °C, and the relative change between two measurements is 2.12%, indicating high repeatability. The study of the temperature-drift characteristic of the pressure sensor in high-temperature environments lays a good basis for temperature compensation methods and ensures that the pressure signal is read out accurately.
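
    An LC passive sensor encodes the measurand in its resonant frequency, f = 1/(2π√(LC)): pressure deflects a membrane, changing C, which the reader antenna detects as a frequency shift. A minimal sketch with invented component values:

```python
import math

def resonant_freq(ind, cap):
    """f = 1 / (2*pi*sqrt(L*C)) for the passive LC tank."""
    return 1.0 / (2 * math.pi * math.sqrt(ind * cap))

def capacitance_from_freq(ind, freq):
    """Invert the resonance to recover the pressure-sensitive capacitance."""
    return 1.0 / (ind * (2 * math.pi * freq) ** 2)

# Hypothetical tank: 2 uH coil, 10 pF pressure-variable capacitor
f0 = resonant_freq(2e-6, 10e-12)
print(f"{f0 / 1e6:.1f} MHz")
```

    The reader tracks f0 from the dip in the antenna's impedance phase; temperature drift of L and C is exactly why the abstract studies the frequency-temperature characteristic.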

  11. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
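
    The accuracy gain from raising the order can be seen in a tiny spatial-derivative experiment (a simpler setting than the space-time propagation schemes of the abstract): a 6th-order central stencil beats the 2nd-order one by several orders of magnitude at the same grid spacing.

```python
import math

def d1_central(f, x, h, order=6):
    """First derivative by central differences: the classic 2nd-order
    stencil, or the standard 6th-order central stencil."""
    if order == 2:
        return (f(x + h) - f(x - h)) / (2 * h)
    coeff = [-1/60, 3/20, -3/4, 3/4, -3/20, 1/60]
    offset = [-3, -2, -1, 1, 2, 3]
    return sum(c * f(x + o * h) for c, o in zip(coeff, offset)) / h

err2 = abs(d1_central(math.sin, 1.0, 0.1, order=2) - math.cos(1.0))
err6 = abs(d1_central(math.sin, 1.0, 0.1, order=6) - math.cos(1.0))
print(err2, err6)
```

    This is why high-order schemes can hold phase accuracy over O(10^6) wave periods with only eight points per wavelength.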

  12. A Comprehensive Strategy for Accurate Mutation Detection of the Highly Homologous PMS2.

    PubMed

    Li, Jianli; Dai, Hongzheng; Feng, Yanming; Tang, Jia; Chen, Stella; Tian, Xia; Gorman, Elizabeth; Schmitt, Eric S; Hansen, Terah A A; Wang, Jing; Plon, Sharon E; Zhang, Victor Wei; Wong, Lee-Jun C

    2015-09-01

    Germline mutations in the DNA mismatch repair gene PMS2 underlie the cancer susceptibility syndrome, Lynch syndrome. However, accurate molecular testing of PMS2 is complicated by a large number of highly homologous sequences. To establish a comprehensive approach for mutation detection of PMS2, we have designed a strategy combining targeted capture next-generation sequencing (NGS), multiplex ligation-dependent probe amplification, and long-range PCR followed by NGS to simultaneously detect point mutations and copy number changes of PMS2. Exonic deletions (E2 to E9, E5 to E9, E8, E10, E14, and E1 to E15), duplications (E11 to E12), and a nonsense mutation, p.S22*, were identified. Traditional multiplex ligation-dependent probe amplification and Sanger sequencing approaches cannot differentiate the origin of the exonic deletions in the 3' region when PMS2 and PMS2CL share identical sequences as a result of gene conversion. Our approach allows unambiguous identification of mutations in the active gene with a straightforward long-range-PCR/NGS method. Breakpoint analysis of multiple samples revealed that recurrent exon 14 deletions are mediated by homologous Alu sequences. Our comprehensive approach provides a reliable tool for accurate molecular analysis of genes containing multiple copies of highly homologous sequences and should improve PMS2 molecular analysis for patients with Lynch syndrome. Copyright © 2015 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  13. Experimental and Numerical Examination of the Thermal Transmittance of High Performance Window Frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustavsen Ph.D., Arild; Goudey, Howdy; Kohler, Christian

    2010-06-17

    While window frames typically represent 20-30 percent of the overall window area, their impact on the total window heat transfer rates may be much larger. This effect is even greater in low-conductance (highly insulating) windows which incorporate very low conductance glazings. Developing low-conductance window frames requires accurate simulation tools for product research and development. The Passivhaus Institute in Germany states that windows (glazing and frames, combined) should have U-values not exceeding 0.80 W/(m²·K). This has created a niche market for highly insulating frames, with frame U-values typically around 0.7-1.0 W/(m²·K). The U-values reported are often based on numerical simulations according to international simulation standards. It is prudent to check the accuracy of these calculation standards, especially for high performance products, before more manufacturers begin to use them to improve other product offerings. In this paper the thermal transmittance of five highly insulating window frames (three wooden frames, one aluminum frame and one PVC frame), found from numerical simulations and experiments, are compared. Hot box calorimeter results are compared with numerical simulations according to ISO 10077-2 and ISO 15099. In addition CFD simulations have been carried out, in order to use the most accurate tool available to investigate the convection and radiation effects inside the frame cavities. Our results show that available tools commonly used to evaluate window performance, based on ISO standards, give good overall agreement, but specific areas need improvement.
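
    How frame and glazing transmittances combine into a whole-window U-value can be sketched with the ISO 10077-1-style area weighting (the component values below are invented, chosen to be representative of a highly insulating window):

```python
def window_u_value(u_f, a_f, u_g, a_g, psi_g, l_g):
    """Whole-window U-value, ISO 10077-1 style: area-weighted frame
    and glazing U-values plus the glazing-edge linear transmittance
    term psi_g (W/(m K)) times the glazing perimeter l_g (m)."""
    return (u_f * a_f + u_g * a_g + psi_g * l_g) / (a_f + a_g)

# Hypothetical window: frame U 0.8 over 0.455 m2, glazing U 0.6 over
# 1.365 m2, edge psi 0.03 W/(m K) along a 4.5 m glazing perimeter
u_window = window_u_value(0.8, 0.455, 0.6, 1.365, 0.03, 4.5)
print(round(u_window, 2))  # below the 0.80 W/(m2 K) Passivhaus limit
```

    The example shows why frame performance matters so much: even at only 25% of the area, the frame and edge terms contribute a large share of the total transmittance.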

  14. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. Investigating current methods of measuring PEF and other pulmonary quantities gave a greater understanding of the limitations of current measurement methods and of the quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  15. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  16. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90-day-long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, which do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and
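
    A multi-taper power spectrum averages periodograms computed with several orthogonal tapers, trading spectral resolution for lower realization noise, exactly the trade exploited above. A minimal sketch using simple sine tapers (a stand-in for the estimator family; the paper's tapers may differ):

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers):
    """Average the periodograms of x windowed by orthogonal sine tapers
    (Riedel-Sidorenko family); each taper is unit-energy."""
    n = len(x)
    k = np.arange(1, n_tapers + 1)[:, None]
    t = np.arange(n)[None, :]
    tapers = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * (t + 1) / (n + 1))
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# Synthetic ridge: a pure tone at bin 32 of a 256-sample record
x = np.sin(2 * np.pi * 32 * np.arange(256) / 256)
psd = sine_multitaper_psd(x, n_tapers=4)
print(int(np.argmax(psd)))  # peak should fall at or near bin 32
```

    More tapers smooth the spectrum further, broadening the peak, which is acceptable here because blended ridges make high spectral resolution pointless.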

  17. Children Can Accurately Monitor and Control Their Number-Line Estimation Performance

    ERIC Educational Resources Information Center

    Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.

    2016-01-01

    Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…

  18. Accurate masking technology for high-resolution powder blasting

    NASA Astrophysics Data System (ADS)

    Pawlowski, Anne-Gabrielle; Sayah, Abdeljalil; Gijs, Martin A. M.

    2005-07-01

    We have combined eroding 10 µm diameter Al2O3 particles with a new masking technology to realize the smallest and most accurate structures possible by powder blasting. Our masking technology is based on the sequential combination of two polymers: (i) the brittle epoxy resin SU8, for its photosensitivity, and (ii) the elastic and thermocurable poly-dimethylsiloxane, for its large erosion resistance. We have micropatterned various types of structures with a minimum width of 20 µm for test structures with an aspect ratio of 1, and 50 µm for test structures with an aspect ratio of 2.

  19. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operations and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics, used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions, is presented. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case where the targeted output power is 140 W and the input pumping power is 400 W. With a 95% output coupler reflectivity, the measured slope efficiency of approximately 35% showed good agreement with the simulation; this confirms the ability of the proposed model to accurately predict the design parameters of practical, high-power DPSSLs.

  20. Accurate Sample Assignment in a Multiplexed, Ultrasensitive, High-Throughput Sequencing Assay for Minimal Residual Disease.

    PubMed

    Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike

    2016-07-01

    High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10⁶ copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
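
An error-aware assignment step of the kind described can be sketched as a dual-index match with a mismatch tolerance that rejects ambiguous reads. The index sequences, sample names, and tolerance below are hypothetical, and this is a simplified illustration, not the published Error-Aware Demultiplexer:

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(read_index_pair, samples, max_mismatch=1):
    """Assign a read to a sample by its (i7, i5) dual index pair.

    samples: dict mapping sample name -> (i7, i5) expected index sequences.
    A read is assigned only if exactly one sample matches both indices
    within max_mismatch; ambiguous or too-distant reads return None.
    """
    i7, i5 = read_index_pair
    hits = [name for name, (e7, e5) in samples.items()
            if hamming(i7, e7) <= max_mismatch and hamming(i5, e5) <= max_mismatch]
    return hits[0] if len(hits) == 1 else None

# hypothetical dual-index table
samples = {
    "S1": ("ACGTAC", "TTGCAA"),
    "S2": ("GGATCC", "TTGCAA"),
}
assigned = demultiplex(("ACGTAG", "TTGCAA"), samples)  # one i7 mismatch: still unique
rejected = demultiplex(("AAAAAA", "TTGCAA"), samples)  # i7 too distant: unassigned
```

Dual indexing makes index-hopping errors visible: a read whose i7 matches one sample and whose i5 matches another fails the both-indices test and is dropped rather than misassigned.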

  1. A Dual-Mode Large-Arrayed CMOS ISFET Sensor for Accurate and High-Throughput pH Sensing in Biomedical Diagnosis.

    PubMed

    Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping

    2015-09-01

    The existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection occurs at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection in DNA sequencing. Dual-mode sensing (optical and chemical) is realized by integrating a CMOS image sensor (CIS) with the ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. By accurately determining microbead physical locations with the CIS pixels through contact imaging, the dual-mode sensor can correlate the local pH for one DNA slice with one location-determined microbead, improving pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays in both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed-pattern-noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is thus demonstrated with state-of-the-art measured results toward accurate and high-throughput DNA sequencing, and has great potential for future personal genome diagnostics with high accuracy and low cost.
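
The correlated-double-sampling step above can be illustrated with a toy numerical sketch. The offsets and signal levels below are invented, not the sensor's measured values; the point is only that reading each pixel twice (reset, then signal) and subtracting cancels any static per-pixel offset such as threshold-voltage mismatch:

```python
import random

def cds_read(reset_level, signal_level, offset):
    """Correlated double sampling: sample the pixel at reset and at signal,
    then subtract; the pixel's fixed offset appears in both and cancels."""
    return (signal_level + offset) - (reset_level + offset)

random.seed(1)
offsets = [random.gauss(0.0, 5.0) for _ in range(1000)]  # per-pixel mismatch, mV (invented)
true_signal = 26.2  # e.g. one pH unit at the reported 26.2 mV/pH sensitivity

raw = [true_signal + o for o in offsets]                 # single-sample readout
cds = [cds_read(0.0, true_signal, o) for o in offsets]   # double-sample readout

def spread(vals):
    """Fixed-pattern noise proxy: standard deviation across the pixel array."""
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
```

With the offsets cancelled, `spread(cds)` collapses to essentially zero while `spread(raw)` retains the full pixel-to-pixel mismatch, which is the mechanism behind the FPN reduction reported above.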

  2. Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties

    PubMed Central

    Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi

    2016-01-01

    It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274

  3. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 10⁷) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881

  4. Utility of high-resolution accurate MS to eliminate interferences in the bioanalysis of ribavirin and its phosphate metabolites.

    PubMed

    Wei, Cong; Grace, James E; Zvyaga, Tatyana A; Drexler, Dieter M

    2012-08-01

    The polar nucleoside drug ribavirin (RBV) combined with IFN-α is a front-line treatment for chronic hepatitis C virus infection. RBV acts as a prodrug and exerts its broad antiviral activity primarily through its active phosphorylated metabolite ribavirin 5´-triphosphate (RTP), and also possibly through ribavirin 5´-monophosphate (RMP). To study RBV transport, diffusion, metabolic clearance and its impact on drug-metabolizing enzymes, an LC-MS method is needed to simultaneously quantify RBV and its phosphorylated metabolites (RTP, ribavirin 5´-diphosphate and RMP). In a recombinant human UGT1A1 assay, the assay buffer components uridine and its phosphorylated derivatives are isobaric with RBV and its phosphorylated metabolites, leading to significant interference when analyzed by LC-MS in nominal mass resolution mode. Presented here is an LC-MS method employing LC coupled with full-scan high-resolution accurate MS analysis for the simultaneous quantitative determination of RBV, RMP, ribavirin 5´-diphosphate and RTP, differentiating RBV and its phosphorylated metabolites from uridine and its phosphorylated derivatives by accurate mass and thus avoiding interference. The developed LC-high-resolution accurate MS method allows for quantitation of RBV and its phosphorylated metabolites, eliminating the interferences from uridine and its phosphorylated derivatives in recombinant human UGT1A1 assays.

  5. Quantification of sulphur amino acids by ultra-high performance liquid chromatography in aquatic invertebrates.

    PubMed

    Thera, Jennifer C; Kidd, Karen A; Dodge-Lynch, M Elaine; Bertolo, Robert F

    2017-12-15

    We examined the performance of an ultra-high performance liquid chromatography method to quantify protein-bound sulphur amino acids in zooplankton. Both cysteic acid and methionine sulfone were linear from 5 to 250 pmol (r² = 0.99), with method detection limits of 13 pmol and 9 pmol, respectively. Although there was no matrix effect on linearity, adjacent peaks and co-eluting noise from the invertebrate proteins increased the detection limits compared to common standards. Overall, performance characteristics were reproducible and accurate, providing a means for quantifying sulphur amino acids in aquatic invertebrates, an understudied group. Copyright © 2017 Elsevier Inc. All rights reserved.
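
Linearity figures like the r² = 0.99 above come from an ordinary least-squares calibration fit of detector response against injected amount. A minimal sketch with hypothetical peak-area values (not the study's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning slope, intercept,
    and the coefficient of determination r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical calibration points over the 5-250 pmol range (amount, peak area)
amounts = [5, 25, 50, 100, 250]
areas = [0.052, 0.249, 0.502, 1.004, 2.497]
slope, intercept, r2 = linear_fit(amounts, areas)
```

An r² near 1 over the stated range is what "linear from 5 to 250 pmol" summarizes; the fitted slope is the response factor used to convert sample peak areas back to amounts.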

  6. Performance of Gas Scintillation Proportional Counter Array for High-Energy X-Ray Observatory

    NASA Technical Reports Server (NTRS)

    Gubarev, Mikhail; Ramsey, Brian; Apple, Jeffery

    2004-01-01

    A focal plane array of high-pressure gas scintillation proportional counters (GSPC) for a High Energy X-Ray Observatory (HERO) has been developed at the Marshall Space Flight Center. The array consists of eight GSPCs and is part of a balloon-borne payload scheduled to fly in May 2004. These detectors have an active area of approximately 20 square centimeters and are filled with a high-pressure (10⁶ Pa) xenon-helium mixture. Imaging is via crossed-grid position-sensitive phototubes sensitive in the UV region. The performance of the GSPC is well matched to that of the telescope's x-ray optics, which have a response up to 75 keV and a focal spot size of approximately 500 microns. The detector's energy resolution, 4% FWHM at 60 keV, is adequate for resolving the broad spectral lines of astrophysical importance and for accurate continuum measurements. Results of the ground calibration will be presented, and in-flight detector performance will be provided as available.

  7. An Image-Based Algorithm for Precise and Accurate High Throughput Assessment of Drug Activity against the Human Parasite Trypanosoma cruzi

    PubMed Central

    Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.

    2014-01-01

    We present a customized high content (image-based) and high throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells and the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows for quantification of both compound activity against parasites and compound cytotoxicity, thus eliminating the need for an additional toxicity assay and thereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi: benznidazole and nifurtimox. We also verified the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity. PMID:24503652
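
The summary statistics listed above follow directly from the per-image segmentation counts; a minimal aggregation sketch with invented counts (the segmentation itself, which the paper performs on DNA-stained images, is not reproduced here):

```python
def infection_stats(per_image_counts):
    """Aggregate per-image (host cells, infected cells, parasites) counts
    into the screening statistics: totals, infection ratio, and the
    average number of parasites per infected cell."""
    total_cells = sum(c for c, _, _ in per_image_counts)
    infected = sum(i for _, i, _ in per_image_counts)
    parasites = sum(p for _, _, p in per_image_counts)
    return {
        "total_cells": total_cells,
        "infected_cells": infected,
        "total_parasites": parasites,
        "infection_ratio": infected / total_cells if total_cells else 0.0,
        "parasites_per_infected_cell": parasites / infected if infected else 0.0,
    }

# three hypothetical images: (cells, infected cells, parasites)
stats = infection_stats([(120, 30, 150), (100, 20, 80), (80, 10, 40)])
```

A drop in `total_cells` under treatment signals cytotoxicity while a drop in `infection_ratio` signals anti-parasitic activity, which is why one readout can replace a separate toxicity assay.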

  8. Performance analysis and optimization of high capacity pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Ghahremani, Amir R.; Saidi, M. H.; Jahanbakhshi, R.; Roshanghalb, F.

    The high capacity pulse tube refrigerator (HCPTR) is a new generation of cryocooler tailored to provide more than 250 W of cooling power at cryogenic temperatures. The most important characteristics of the HCPTR compared to other types of pulse tube refrigerators are its powerful pressure wave generator and its accurate design. In this paper the influence of geometrical and operating parameters on the performance of a double inlet pulse tube refrigerator (DIPTR) is studied. The model is validated against existing experimental data. As a result of this optimization, a new configuration of HCPTR is proposed, providing 335 W of cooling at an 80 K cold-end temperature with a frequency of 50 Hz and a COP of 0.05.

  9. Highly accurate analytic formulae for projectile motion subjected to quadratic drag

    NASA Astrophysics Data System (ADS)

    Turkyilmazoglu, Mustafa

    2016-05-01

    The classical problem of a projectile fired (thrown) toward the horizon through resistive air that exerts a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. The principal target is to find elegant analytical approximations for the most interesting engineering features of the projectile's dynamical behavior. To this end, some explicit analytical expressions are derived that accurately predict the maximum height, its arrival time, as well as the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted in the initial speed and firing angle of the object, nor in the drag coefficient of the medium. In combination with the available approximations in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
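
The closed-form approximations themselves are not reproduced in this record, but the governing quadratic-drag equations they approximate are straightforward to integrate numerically for comparison. A sketch using semi-implicit Euler and illustrative parameter values (not the paper's method):

```python
import math

def simulate(v0, angle_deg, k, g=9.81, dt=1e-4):
    """Integrate planar projectile motion with quadratic drag,
    (ax, ay) = (-k*v*vx, -g - k*v*vy) with v = |velocity|,
    until the projectile returns to launch height.
    Returns (max height, time at max height, horizontal range)."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    h_max = t_max = 0.0
    while True:
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt          # update velocity first (semi-implicit Euler)
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
        if y > h_max:
            h_max, t_max = y, t
        if y <= 0.0 and vy < 0.0:      # crossed back through launch height
            return h_max, t_max, x

# sanity check: a nearly drag-free shot approaches the vacuum formulas
h, t_apex, r = simulate(20.0, 45.0, k=1e-9)
```

With k effectively zero the integrator should recover the textbook values h = v0² sin²θ / (2g), t = v0 sinθ / g, and R = v0² sin 2θ / g, which is a useful check before trusting it at realistic drag coefficients.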

  10. Design of a new high-performance pointing controller for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1993-01-01

    A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.

  11. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  12. High performance concrete bridges

    DOT National Transportation Integrated Search

    2000-08-01

    This compilation of FHWA reports focuses on high performance concrete bridges. High performance concrete is described as concrete with enhanced durability and strength characteristics. Under the Strategic Highway Research Program (SHRP), more than 40...

  13. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    NASA Astrophysics Data System (ADS)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows one to accurately predict the evolution of the free surface even in the presence of violent wave-breaking phenomena, keeping the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  14. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.

  15. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes of around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  16. Performance Evaluation of a High Bandwidth Liquid Fuel Modulation Valve for Active Combustion Control

    NASA Technical Reports Server (NTRS)

    Saus, Joseph R.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.

    2012-01-01

    At the NASA Glenn Research Center, a characterization rig was designed and constructed for the purpose of evaluating high bandwidth liquid fuel modulation devices to determine their suitability for active combustion control research. Incorporated into the rig's design are features that approximate conditions similar to those that would be encountered by a candidate device if it were installed on an actual combustion research rig. The characterized dynamic performance measures obtained through testing in the rig are intended to be accurate indicators of expected performance in an actual combustion testing environment. To evaluate how well the characterization rig predicts fuel modulator dynamic performance, characterization rig data were compared with performance data for a fuel modulator candidate when the candidate was in operation during combustion testing. Specifically, the nominal and off-nominal performance data for a magnetostrictive-actuated proportional fuel modulation valve are described. Valve performance data were collected with the characterization rig configured to emulate two different combustion rig fuel feed systems. Fuel mass flows and pressures, fuel feed line lengths, and fuel injector orifice sizes were approximated in the characterization rig. Valve performance data were also collected with the valve modulating the fuel into the two combustor rigs. Comparison of the predicted and actual valve performance data shows that when the valve is operated near its design condition the characterization rig can appropriately predict the installed performance of the valve. Improvements to the characterization rig and accompanying modeling activities are underway to more accurately predict performance, especially for the devices under development to modulate fuel into the much smaller fuel injectors anticipated in future lean-burning low-emissions aircraft engine combustors.

  17. Lightweight Provenance Service for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  18. High performance systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18-21, 1994.

  19. Higs-instrument: design and demonstration of a high performance gas concentration imager

    NASA Astrophysics Data System (ADS)

    Verlaan, A. L.; Klop, W. A.; Visser, H.; van Brug, H.; Human, J.

    2017-09-01

    Climate change and environmental conditions are high on the political agenda of international governments. Laws and regulations are being set up all around the world to improve air quality and to reduce the impact. The growth of a number of trace gases, including CO2, methane and NOx, is especially interesting due to their environmental impact. The regulations are based on both models and measurements of the trends of those trace gases over the years. Now that the regulations are in place, enforcement, and therewith measurement, becomes more and more important. Instruments enabling high spectral and spatial resolution as well as highly accurate measurements of trace gases are required to deliver the necessary inputs. Nowadays those measurements are usually performed by space-based spectrometers. The requirement for high spectral resolution and measurement accuracy significantly increases the size of the instruments. As a result the instrument and satellite become very expensive to develop and to launch. Specialized instruments with a small volume and the required performance will offer significant advantages in both cost and performance. Huib's Innovative Gas Sensor (HIGS, named after its inventor Huib Visser), currently being developed at TNO, is an instrument that achieves exactly that. Designed to measure only a single gas concentration, as opposed to deriving it from a spectrum, it achieves high performance within a small design volume. The instrument enables instantaneous imaging of the distribution of the selected gas. An instrument demonstrator has been developed for NO2 detection. Laboratory measurements proved the measurement technique successful. An on-sky measurement campaign is in preparation. This paper addresses both the instrument design and the demonstrated performance.

  20. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a significant number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error of the inclined plane method to less than a certain value, even for different types of liquids.
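
The core of such an approach, fitting a conic to drop-profile points by least squares and reading the contact angle from the implicit tangent at the baseline, can be sketched as follows. This is a minimal illustration on a synthetic symmetric profile on a horizontal baseline, not the paper's validated inclined-surface algorithm:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ellipse(pts):
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    via the normal equations of the linear system."""
    rows = [[x * x, x * y, y * y, x, y] for x, y in pts]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(5)] for i in range(5)]
    atb = [sum(r[i] for r in rows) for i in range(5)]
    return solve(ata, atb)

def contact_angle_deg(coeffs, x, y):
    """Angle between the fitted ellipse's tangent at (x, y) and the baseline,
    from the implicit gradient of F = a x^2 + b x y + c y^2 + d x + e y - 1."""
    a, b, c, d, e = coeffs
    fx = 2 * a * x + b * y + d
    fy = b * x + 2 * c * y + e
    return math.degrees(math.atan2(abs(fx), abs(fy)))

# synthetic drop profile: the upper arc of an ellipse centred on the baseline y = 0
pts = [(3.0 * math.cos(t), 1.5 * math.sin(t))
       for t in (i * math.pi / 20.0 for i in range(1, 20))]
coeffs = fit_ellipse(pts)
theta = contact_angle_deg(coeffs, 3.0, 0.0)  # right-hand contact point
```

For this profile the tangent at the contact point is vertical, so the recovered angle is 90 degrees; on an inclined surface the angle would instead be measured relative to the tilted baseline, which is where the inclination-dependent errors studied above arise.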

  1. High symptom reporters are less interoceptively accurate in a symptom-related context.

    PubMed

    Bogaerts, Katleen; Millen, An; Li, Wan; De Peuter, Steven; Van Diest, Ilse; Vlemincx, Elke; Fannes, Stien; Van den Bergh, Omer

    2008-11-01

    We investigated the role of a symptom interpretation frame on the accuracy of interoception and on retrospective symptom reporting in nonclinical high and low reporters of medically unexplained symptoms. All participants (N=74) went through two subsequent trials of the Rebreathing Test, inducing altered respiration and other physical sensations as a result of a gradually increasing pCO2 level in the blood. Each trial consisted of a baseline (60 s), a rebreathing phase (150 s), and a recovery phase (150 s). In one trial, the sensations were framed in a neutral way ("the gas mixture might alter breathing behavior and induce respiratory sensations"). In the other trial, a symptom frame was induced ("the gas mixture might alter breathing behavior and induce respiratory symptoms"). Breathing behavior was continuously monitored, subjective sensations were rated every 10 s, and after each trial, participants filled out a symptom checklist. Within-subject correlations between the subjective rating and its physiological referent were calculated separately for the rebreathing and recovery phases of each trial. High symptom reporters had more (retrospective) complaints than low symptom reporters, especially in the symptom trial. Only in the symptom frame were high symptom reporters less accurate than low symptom reporters. The reduction in interoceptive accuracy (IA) in high symptom reporters was most striking in the recovery phase of the symptom frame trial. A contextual cue, such as a reference to symptoms, thus reduced IA in high symptom reporters, and this effect was strongest during recovery from the symptom induction.
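
    The interoceptive accuracy measure described, a within-subject correlation between repeated subjective ratings and their physiological referent, can be sketched as follows (a minimal illustration; variable names and numbers are assumptions, not the study's data):

```python
import math

def within_subject_r(ratings, physiology):
    """Pearson correlation between one subject's repeated subjective
    ratings and the matching physiological values (e.g. one pair per
    10-s epoch of a trial phase)."""
    n = len(ratings)
    assert n == len(physiology) and n > 1
    mx = sum(ratings) / n
    my = sum(physiology) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(ratings, physiology))
    sxx = sum((a - mx) ** 2 for a in ratings)
    syy = sum((b - my) ** 2 for b in physiology)
    return sxy / math.sqrt(sxx * syy)

# A subject whose ratings track the physiological signal closely has
# r near 1; a poor tracker has r near 0
r = within_subject_r([1, 2, 4, 7, 6, 3], [10, 22, 41, 69, 58, 33])
```

    Computing r separately per phase and per trial, as the study does, then allows the interoceptive accuracy of high and low symptom reporters to be compared across frames.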

  2. Applications of high spectral resolution FTIR observations demonstrated by the radiometrically accurate ground-based AERI and the scanning HIS aircraft instruments

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Knuteson, Robert O.; Best, Fred A.; Tobin, David C.; Smith, William L.; Feltz, Wayne F.; Petersen, Ralph A.; Antonelli, Paolo; Olson, Erik R.; LaPorte, Daniel D.; Ellington, Scott D.; Werner, Mark W.; Dedecker, Ralph G.; Garcia, Raymond K.; Ciganovich, Nick N.; Howell, H. Benjamin; Vinson, Kenneth; Ackerman, Steven A.

    2003-06-01

    Development in the mid-1980s of the High-resolution Interferometer Sounder (HIS) for the high altitude NASA ER2 aircraft demonstrated the capability for advanced atmospheric temperature and water vapor sounding and set the stage for new satellite instruments that are now becoming a reality [AIRS (2002), CrIS (2006), IASI (2006), GIFTS (2005/6)]. Follow-on developments at the University of Wisconsin-Madison that employ interferometry for a wide range of Earth observations include the ground-based Atmospheric Emitted Radiance Interferometer (AERI) and the Scanning HIS aircraft instrument (S-HIS). The AERI was developed for the US DOE Atmospheric Radiation Measurement (ARM) Program, primarily to provide highly accurate radiance spectra for improving radiative transfer models. The continuously operating AERI soon demonstrated valuable new capabilities for sensing the rapidly changing state of the boundary layer and properties of the surface and clouds. The S-HIS is a smaller version of the original HIS that uses cross-track scanning to enhance spatial coverage. S-HIS and its close cousin, the NPOESS Airborne Sounder Testbed (NAST) operated by NASA Langley, are being used for satellite instrument validation and for atmospheric research. The calibration and noise performance of these and future satellite instruments is key to optimizing their remote sensing products. Recently developed techniques for improving effective radiometric performance by removing noise in post-processing are a primary subject of this paper.

  3. Accurate dipole moment curve and non-adiabatic effects on the high resolution spectroscopic properties of the LiH molecule

    NASA Astrophysics Data System (ADS)

    Diniz, Leonardo G.; Kirnosov, Nikita; Alijah, Alexander; Mohallem, José R.; Adamowicz, Ludwik

    2016-04-01

    A very accurate dipole moment curve (DMC) for the ground X1Σ+ electronic state of the 7LiH molecule is reported. It is calculated with the use of all-particle explicitly correlated Gaussian functions with shifted centers. The DMC - the most accurate to our knowledge - and the corresponding highly accurate potential energy curve are used to calculate the transition energies, the transition dipole moments, and the Einstein coefficients for the rovibrational transitions with ΔJ = -1 and Δv ≤ 5. The importance of the non-adiabatic effects in determining these properties is evaluated using the model of a vibrational R-dependent effective reduced mass in the rovibrational calculations introduced earlier (Diniz et al., 2015). The results of the present calculations are used to assess the quality of the two complete linelists of 7LiH available in the literature.

  4. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  5. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
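
    The wavelength-routing property the switch exploits, namely that in a cyclic N×N AWGR the output port is a fixed function of input port and wavelength, so routing and contention resolution reduce to wavelength choice, can be sketched with a toy port map (illustrative, not the fabricated device's actual mapping):

```python
def awgr_output(input_port, wavelength, n_ports):
    """Cyclic AWGR port map: each (input, wavelength) pair maps to a
    unique output, so selecting a wavelength performs the routing."""
    return (input_port + wavelength) % n_ports

N = 8
# From any single input, the N wavelengths reach all N distinct outputs
reach = {awgr_output(2, w, N) for w in range(N)}

# Two inputs transmitting on the same wavelength never land on the
# same output, which is what allows contention resolution in the
# wavelength domain
no_collision = all(
    awgr_output(i, w, N) != awgr_output(j, w, N)
    for w in range(N)
    for i in range(N)
    for j in range(N)
    if i != j
)
```

    With per-node tunable transmitters, a scheduler only has to assign wavelengths; the passive AWGR then delivers each packet to the intended output without any switching fabric reconfiguration.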

  6. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    PubMed

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

    Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed in respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had increased response bias toward fear (p = 0.03), and increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  7. High performance computing and communications: Advancing the frontiers of information technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  8. Bioinspired Polymeric Photonic Crystals for High Cycling pH-Sensing Performance.

    PubMed

    Fei, Xiang; Lu, Tao; Ma, Jun; Wang, Wanlin; Zhu, Shenmin; Zhang, Di

    2016-10-12

    Artificial photonic crystals (PCs) have been extensively studied to improve the sensing performance of poly(acrylic acid) (PAAc), as they can transform the PAAc volume change into an optical signal that is easier to read. Nevertheless, these PCs are limited by their monostructure. We herein developed new PCs by coating acrylic acid and acrylamide (AAm) via in situ copolymerization onto Papilio paris wings, which have a hierarchical, lamellar structure. Our PCs exhibited highly tunable color response to environmental pH, as detected by reflectance spectra and visual observation. The introduction of AAm into the system created covalent bonding that robustly bridged the polymer with the wings, leading to an accurate yet broad variation of reflection wavelength with which to gauge environmental pH. The reflection wavelength can be tailored by the refractive index of the lamellar interspacing owing to the swelling/deswelling of the polymer. The mechanism is not only supported by experimental results but also confirmed by finite-difference time-domain simulation. Moreover, it is worth noting that the covalent bonding has provided the PC-based pH sensor with high cycling performance, implying great potential in practical applications. The simple fabrication process is applicable to the development of a wide variety of stimuli-responsive PCs taking advantage of other polymers.

  9. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin capable of performing in the 200 to 300 °C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistance.

  10. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-09

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
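
    The write-and-verify idea behind the adaptive pulsed algorithm, adjusting the transistor gate voltage between SET pulses until the cell reads within tolerance of the target conductance, can be sketched with a toy device model (the conductance dynamics and all parameter values below are illustrative assumptions, not TaOx physics or the paper's exact algorithm):

```python
def program_cell(read, set_pulse, target, tol=0.005, gain=0.2,
                 v_start=0.3, max_pulses=100):
    """Write-and-verify loop: after each SET pulse, read the cell and
    nudge the transistor gate voltage in proportion to the remaining
    fractional conductance error."""
    v_gate = v_start
    for pulse in range(1, max_pulses + 1):
        set_pulse(v_gate)
        g = read()
        err = (target - g) / target
        if abs(err) <= tol:
            return g, pulse
        v_gate += gain * err   # raise gate V if under target, lower if over
    return read(), max_pulses

# Toy memristor: each SET pulse relaxes the conductance halfway toward
# a level set by the gate voltage (purely illustrative dynamics)
state = {"g": 1e-6}          # conductance in siemens

def read():
    return state["g"]

def set_pulse(v_gate):
    g_limit = 2e-4 * v_gate  # compliance-limited level grows with gate V
    state["g"] += 0.5 * (g_limit - state["g"])

g_final, n_pulses = program_cell(read, set_pulse, target=1.0e-4)
```

    The feedback through the gate voltage is what makes the loop converge on an arbitrary analog level rather than a fixed binary state, which is the property the paper exploits for multi-level programming.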

  11. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS)-based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations (IRDs), optic atrophy, and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, and reproducibility, as well as the clinical sensitivity, of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that, based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400
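
    The sensitivity and specificity figures quoted above reduce to simple counts of a call set against a reference ("truth") variant set; a minimal sketch of how such metrics are computed (the positions and totals below are illustrative, not the GEDi validation data):

```python
def sensitivity(tp, fn):
    """Fraction of true variants the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-variant positions correctly called: TN / (TN + FP)."""
    return tn / (tn + fp)

# Compare a call set against a truth set of variant positions
truth = {101, 205, 309, 412, 555}
calls = {101, 205, 309, 412, 777}
assayed_positions = 1000                  # total sites examined

tp = len(truth & calls)                   # true variants detected
fn = len(truth - calls)                   # true variants missed
fp = len(calls - truth)                   # spurious calls
tn = assayed_positions - tp - fn - fp     # correctly negative sites

sens = sensitivity(tp, fn)
spec = specificity(tn, fp)
```

    Because specificity is dominated by the vast number of true-negative sites, even a handful of false positives barely moves it, which is why per-variant sensitivity is usually the more discriminating comparison between a targeted panel and exome sequencing.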

  12. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
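
    The grid-ID idea can be sketched as follows: each user reports only the ID of the grid cell they occupy, never exact coordinates, and the anonymizer grows a block of cells around the requester's cell until it covers at least K reported users (a minimal sketch under assumed data structures, not the paper's exact algorithms):

```python
def cloak(requester_cell, user_cells, k):
    """Grow a square block of grid cells centered on the requester's
    cell until it contains at least k users, using only cell IDs."""
    rx, ry = requester_cell
    radius = 0
    while True:
        block = {(x, y)
                 for x in range(rx - radius, rx + radius + 1)
                 for y in range(ry - radius, ry + radius + 1)}
        covered = sum(1 for cell in user_cells.values() if cell in block)
        if covered >= k:
            return block   # the anonymous spatial region (ASR)
        radius += 1

# Users report grid-cell IDs only; exact coordinates stay on-device
users = {"u1": (5, 5), "u2": (5, 6), "u3": (6, 5),
         "u4": (9, 9), "u5": (4, 4)}
asr = cloak(users["u1"], users, k=4)
```

    Since the anonymizer only ever sees cell IDs, no party learns an accurate location, yet the returned region still satisfies K-anonymity for the query.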

  13. A systematic approach for the accurate and rapid measurement of water vapor transmission through ultra-high barrier films

    NASA Astrophysics Data System (ADS)

    Kiese, Sandra; Kücükpinar, Esra; Reinelt, Matthias; Miesbauer, Oliver; Ewender, Johann; Langowski, Horst-Christian

    2017-02-01

    Flexible organic electronic devices are often protected from degradation by encapsulation in multilayered films with very high barrier properties against moisture and oxygen. However, metrology must be improved to detect such low quantities of permeants. We therefore developed a modified ultra-low permeation measurement device based on a constant-flow carrier-gas system to measure both the transient and stationary water vapor permeation through high-performance barrier films. The accumulation of permeated water vapor before its transport to the detector allows the measurement of very low water vapor transmission rates (WVTRs) down to 2 × 10⁻⁵ g m⁻² d⁻¹. The measurement cells are stored in a temperature-controlled chamber, allowing WVTR measurements within the temperature range 23-80 °C. Differences in relative humidity can be controlled within the range 15%-90%. The WVTR values determined using the novel measurement device agree with those measured using a commercially available carrier-gas device from MOCON®. Depending on the structure and quality of the barrier film, it may take a long time for the WVTR to reach a steady-state value. However, by using a combination of the time-dependent measurement and the finite element method, we were able to estimate the steady-state WVTR accurately with significantly shorter measurement times.
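
    The idea of extrapolating the steady-state WVTR from the transient curve (done in the paper with a finite element model) can be sketched with the classic analytic series solution for Fickian permeation through a single layer; all parameter values below are illustrative assumptions:

```python
import math

def transient_factor(t, D, l, terms=50):
    """J(t)/J_ss for Fickian permeation through a layer of thickness l
    and diffusivity D (classic single-layer series solution)."""
    s = sum((-1) ** n * math.exp(-(n * math.pi) ** 2 * D * t / l ** 2)
            for n in range(1, terms + 1))
    return 1.0 + 2.0 * s

D, l = 1e-16, 1e-6       # illustrative diffusivity (m^2/s) and thickness (m)
lag = l ** 2 / (6 * D)   # diffusion time lag (~1667 s here)
J_true = 2e-5            # assumed steady-state WVTR, g m^-2 d^-1

# Flux sampled at one time lag is only ~62% of its steady-state value...
J_at_lag = J_true * transient_factor(lag, D, l)
# ...but dividing the early reading by the model's transient factor
# recovers J_ss without waiting for the curve to level off (exact here
# because the "measurement" is synthetic; with real data the model
# correction removes most of the transient bias)
J_est = J_at_lag / transient_factor(lag, D, l)
```

    Multilayer barriers with interfacial effects need the numerical (FEM) treatment the paper describes, but the principle is the same: a fitted diffusion model converts an early transient reading into a steady-state estimate.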

  14. [Screening and confirmation of 24 hormones in cosmetics by ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry].

    PubMed

    Li, Zhaoyong; Wang, Fengmei; Niu, Zengyuan; Luo, Xin; Zhang, Gang; Chen, Junhui

    2014-05-01

    A method of ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry (UPLC-LTQ/Orbitrap MS) was established to screen and confirm 24 hormones in cosmetics. Various cosmetic samples were extracted with methanol. The extract was loaded onto a Waters ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 µm) using a gradient elution of acetonitrile/water containing 0.1% (v/v) formic acid for the separation. The accurate mass of the quasi-molecular ion was acquired by full scanning of the electrostatic field orbitrap. The rapid screening was carried out using the accurate mass of the quasi-molecular ion. The confirmation analysis for targeted compounds was performed with the retention time and qualitative fragments obtained by the data dependent scan mode. Under the optimal conditions, the 24 hormones were routinely detected with mass accuracy error below 3 × 10⁻⁶ (3 ppm), and good linearities were obtained in their respective linear ranges with correlation coefficients higher than 0.99. The LODs (S/N = 3) of the 24 compounds were ≤ 10 µg/kg, which can meet the requirements for the actual screening of cosmetic samples. The developed method was applied to screen the hormones in 50 cosmetic samples. The results demonstrate that the method is a useful tool for the rapid screening and identification of the hormones in cosmetics.
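
    Screening by accurate mass, as described, reduces to checking whether the measured m/z of the quasi-molecular ion falls within a small ppm window of each candidate's theoretical value; a minimal sketch (the candidate list is illustrative, not the paper's 24-hormone panel):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def screen(measured_mz, candidates, tol_ppm=3.0):
    """Return candidates whose theoretical [M+H]+ m/z lies within
    tol_ppm of the measured value."""
    return [name for name, mz in candidates.items()
            if abs(ppm_error(measured_mz, mz)) <= tol_ppm]

# Monoisotopic [M+H]+ values for a few steroid hormones
candidates = {
    "testosterone": 289.2162,    # C19H28O2 + H
    "estradiol": 273.1849,       # C18H24O2 + H
    "hydrocortisone": 363.2166,  # C21H30O5 + H
}
hits = screen(289.2159, candidates)
```

    A hit at this stage only triggers the confirmation step; retention time and data-dependent fragment spectra are still required before the compound is reported.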

  15. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268

  16. A High-Performance Neural Prosthesis Incorporating Discrete State Selection With Hidden Markov Models.

    PubMed

    Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V

    2017-04-01

    Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys. We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.
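
    The discrete decoding component described, an HMM whose transition model proved critical, can be sketched as online forward filtering of a hidden state (e.g. "move" vs. "click") from a per-timestep neural feature, run alongside the analog decoder (toy emissions and parameter values; illustrative only, not the paper's model):

```python
import math

def forward_step(prior, trans, likelihoods):
    """One step of HMM forward filtering: propagate the state posterior
    through the transition model, then weight by the new evidence."""
    n = len(prior)
    pred = [sum(prior[i] * trans[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * likelihoods[j] for j in range(n)]
    z = sum(post)
    return [p / z for p in post]

def gauss(x, mu, sigma):
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

# Two discrete states: 0 = "move", 1 = "click". Sticky transitions keep
# the decoder from flickering between states on noisy evidence, which
# is the role the transition model plays
trans = [[0.95, 0.05],
         [0.05, 0.95]]
mus, sigma = [0.0, 3.0], 1.0   # toy 1-D neural feature per state

belief = [0.5, 0.5]
for obs in [0.1, -0.2, 2.9, 3.2, 3.1]:   # feature drifts toward "click"
    belief = forward_step(belief, trans,
                          [gauss(obs, mus[s], sigma) for s in range(2)])
```

    Thresholding the filtered posterior yields the discrete action (the click), while the analog cursor velocity continues to be decoded in parallel.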

  17. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  18. An on-spot internal standard addition approach for accurately determining colistin A and colistin B in dried blood spots using ultra high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Tsai, I-Lin; Kuo, Ching-Hua; Sun, Hsin-Yun; Chuang, Yu-Chung; Chepyala, Divyabharathi; Lin, Shu-Wen; Tsai, Yun-Jung

    2017-10-25

    Outbreaks of multidrug-resistant Gram-negative bacterial infections have been reported worldwide. Colistin, an antibiotic with known nephrotoxicity and neurotoxicity, is now being used to treat multidrug-resistant Gram-negative strains. In this study, we applied an on-spot internal standard addition approach coupled with an ultra high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify colistin A and B from dried blood spots (DBSs). Only 15 µL of whole blood was required for each sample. An internal standard with the same extraction recovery as colistin was added to the spot before sample extraction for accurate quantification. Formic acid in water (0.15%) with an equal volume of acetonitrile (50:50, v/v) was used as the extraction solution. With the optimized extraction process and LC-MS/MS conditions, colistin A and B could be quantified from a DBS with respective limits of quantification of 0.13 and 0.27 µg mL⁻¹, and the retention times were < 2 min. The relative standard deviations of within-run and between-run precisions for peak area ratios were all < 17.3%. Accuracies were 91.5-111.2% for lower limit of quantification, low, medium, and high QC samples. The stability of the easily hydrolyzed prodrug, colistin methanesulfonate, was investigated in DBSs. Less than 4% of the prodrug was found to be hydrolyzed in DBSs at room temperature after 48 h. The developed method applied an on-spot internal standard addition approach which benefited the precision and accuracy. Results showed that DBS sampling coupled with the sensitive LC-MS/MS method has the potential to be an alternative approach for colistin quantification, where the bias of prodrug hydrolysis in liquid samples is decreased. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Fast and accurate: high-speed metrological large-range AFM for surface and nanometrology

    NASA Astrophysics Data System (ADS)

    Dai, Gaoliang; Koenders, Ludger; Fluegge, Jens; Hemmleb, Matthias

    2018-05-01

    Low measurement speed remains a major shortcoming of the scanning probe microscopic technique. It not only leads to low measurement throughput, but also to significant measurement drift over the long measurement times needed (up to hours or even days). To overcome this challenge, PTB, the national metrology institute of Germany, has developed a high-speed metrological large-range atomic force microscope (HS Met. LR-AFM) capable of measurement speeds up to 1 mm s‑1. This paper introduces the design concept in detail. After modelling scanning probe microscopic measurements, our results suggest that the signal spectrum of the surface to be measured is the spatial spectrum of the surface scaled by the scanning speed. The higher the scanning speed, the broader the spectrum to be measured. To realise an accurate HS Met. LR-AFM, our solution is to combine different stages/sensors synchronously in measurements, which provide a much larger spectrum area for high-speed measurement capability. Two application examples have been demonstrated. The first is a new concept called reference areal surface metrology. Using the developed HS Met. LR-AFM, surfaces are measured accurately and traceably at a speed of 500 µm s‑1 and the results are applied as a reference 3D data map of the surfaces. By correlating the reference 3D data sets and 3D data sets of tools under calibration, which are measured at the same surface, it has the potential to comprehensively characterise the tools, for instance, the spectrum properties of the tools. The investigation results of two commercial confocal microscopes are demonstrated, indicating very promising results. The second example is the calibration of a kind of 3D nano standard, which has spatially distributed landmarks, i.e. special unique features defined by 3D-coordinates. Experimental investigations confirmed that the calibration accuracy is maintained at a measurement speed of 100 µm s‑1, which improves the calibration efficiency by a
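
    The spectrum-scaling observation above, that the temporal signal spectrum is the surface's spatial spectrum multiplied by the scanning speed, directly sets the detection bandwidth the instrument needs; a one-line sketch with illustrative numbers:

```python
def required_bandwidth(scan_speed, min_feature):
    """Temporal frequency (Hz) produced by a surface wavelength
    `min_feature` (m) scanned at `scan_speed` (m/s): the signal
    spectrum is the spatial spectrum scaled by the speed."""
    return scan_speed / min_feature

# At 1 mm/s, a 100 nm surface period appears as a 10 kHz signal, so the
# position sensors and control loop must cover at least that band
f = required_bandwidth(1e-3, 100e-9)
```

    This is why raising the scan speed by orders of magnitude forces the combined-stage/sensor design: each stage or sensor covers a different part of the widened spectrum.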

  20. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock-containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock-containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  1. Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node

    NASA Astrophysics Data System (ADS)

    Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi

    2015-03-01

    In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D with extremely accurate overlay while maintaining throughput under various conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities for maintaining long-term overlay stability and an improved user interface, both enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance under various conditions similar to real production. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we have performed several experiments under high-dose conditions to demonstrate the NSR's thermal aberration capabilities in order to achieve world-class overlay performance. Furthermore, we introduce our new software that enables long-term overlay performance.

  2. Highly accurate quantitative spectroscopy of massive stars in the Galaxy

    NASA Astrophysics Data System (ADS)

    Nieva, María-Fernanda; Przybilla, Norbert

    2017-11-01

    Achieving high accuracy and precision in stellar parameter and chemical composition determinations is challenging in massive star spectroscopy. On one hand, target selection for an unbiased sample build-up is complicated by several types of peculiarities that can occur in individual objects. On the other hand, composite spectra are often not recognized as such even at medium-high spectral resolution and typical signal-to-noise ratios, even though multiplicity among massive stars is widespread. In particular, surveys that produce large amounts of automatically reduced data are prone to overlooking details that become hazardous for an analysis with techniques developed under a set of standard assumptions applicable to the spectrum of a single star. Much larger systematic errors than anticipated may therefore result if the true nature of the investigated objects goes unrecognized, or the sample sizes available for analysis may turn out much smaller than initially planned if it is recognized. Further factors to be taken care of are the multiple steps from the choice of instrument, through the details of the data-reduction chain, to the choice of modelling code, input data, analysis technique and the selection of the spectral lines to be analyzed. Only when all possible pitfalls are avoided can a precise and accurate characterization of the stars be achieved in terms of fundamental parameters and chemical fingerprints, which form the basis for further investigations regarding e.g. stellar structure and evolution or the chemical evolution of the Galaxy. The scope of the present work is to provide the massive star and other astrophysical communities with criteria to evaluate the quality of spectroscopic investigations of massive stars before interpreting them in a broader context. The discussion is guided by our experience from over a decade of studies of massive star spectroscopy, ranging from the simplest single objects to multiple systems.

  3. High- and low-pressure pneumotachometers measure respiration rates accurately in adverse environments

    NASA Technical Reports Server (NTRS)

    Fagot, R. J.; Mc Donald, R. T.; Roman, J. A.

    1968-01-01

    Respiration-rate transducers in the form of pneumotachometers measure the respiration rates of pilots operating high performance research aircraft. In each low-pressure or high-pressure oxygen system, a sensor is placed in series with the pilot's oxygen supply line to detect gas flow accompanying respiration.

  4. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Alkhalifah, Tariq

    2018-07-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. In fact, it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the scheme's numerical dispersion with the exact form. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
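    The dispersion-matching idea can be sketched for the simplest 1-D second-derivative case (stencil width, wavenumber band and sampling are assumptions for illustration; this is not the authors' TI-media formulation):

    ```python
    import numpy as np

    # The exact second derivative of exp(ikx) multiplies it by -k^2.  A symmetric
    # stencil of half-width N has the numerical symbol
    #   D(k) h^2 = c0 + sum_{n=1..N} 2*c_n*cos(n*k*h),
    # so instead of Taylor matching at k=0, pick the coefficients by least squares
    # to make D(k) h^2 ~ -(k h)^2 over the whole wavenumber band of interest.
    def dispersion_optimized_coeffs(N=4, kh_max=2.0, samples=200):
        kh = np.linspace(1e-3, kh_max, samples)
        A = np.ones((samples, N + 1))            # column 0 -> c0
        for n in range(1, N + 1):
            A[:, n] = 2.0 * np.cos(n * kh)       # columns 1..N -> c_n
        target = -kh**2
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        return coeffs                            # [c0, c1, ..., cN]

    # Apply the optimized stencil to sin(kx) and compare with -k^2 sin(kx).
    h, k = 0.2, 5.0                              # k*h = 1.0, inside the band
    c = dispersion_optimized_coeffs()
    x = np.arange(-4, 1005) * h                  # padding for the stencil ends
    f = np.sin(k * x)
    i = np.arange(4, 1001)                       # interior points
    d2f = (c[0] * f[i] + sum(c[n] * (f[i + n] + f[i - n]) for n in range(1, 5))) / h**2
    print(np.max(np.abs(d2f + k**2 * np.sin(k * x[i]))))   # small dispersion error
    ```

    The same fit-over-a-band strategy generalizes to multi-dimensional and anisotropic dispersion relations, which is the setting the paper addresses.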

  5. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons’ combat effectiveness. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents a theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  6. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents a theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min.
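    The essence of the approach, iteratively refining a rough position from angle measurements to surveyed targets, can be sketched in two dimensions with azimuths only (coordinates and noise-free measurements are made up; the paper's algorithm additionally estimates azimuth and attitude in 3-D):

    ```python
    import math
    import numpy as np

    def refine_position(rough_xy, targets, azimuths, iters=10):
        """Gauss-Newton refinement of a 2-D position from azimuths (radians,
        measured clockwise from north) to targets with known coordinates."""
        p = np.array(rough_xy, dtype=float)
        for _ in range(iters):
            J, r = [], []
            for (tx, ty), theta in zip(targets, azimuths):
                u, v = tx - p[0], ty - p[1]
                d2 = u * u + v * v
                pred = math.atan2(u, v)                        # azimuth from north
                res = (theta - pred + math.pi) % (2 * math.pi) - math.pi  # wrap
                J.append([-v / d2, u / d2])                    # d(azimuth)/d(x, y)
                r.append(res)
            delta, *_ = np.linalg.lstsq(np.array(J), np.array(r), rcond=None)
            p += delta
        return p

    # Synthetic check: three surveyed targets, true position (100, 50),
    # rough initial guess about 50 m off.
    targets = [(0.0, 500.0), (600.0, 400.0), (-300.0, -200.0)]
    true_p = np.array([100.0, 50.0])
    az = [math.atan2(tx - true_p[0], ty - true_p[1]) for tx, ty in targets]
    print(refine_position((140.0, 20.0), targets, az))  # converges to ~(100, 50)
    ```

    Because only a linearization point is needed, a rough initial position from the vehicle's navigation system suffices, which mirrors the low-accuracy requirement placed on the INS and odometer in the paper.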

  7. Accurate calibration of a molecular beam time-of-flight mass spectrometer for on-line analysis of high molecular weight species.

    PubMed

    Apicella, B; Wang, X; Passaro, M; Ciajolo, A; Russo, C

    2016-10-15

    Time-of-Flight (TOF) Mass Spectrometry is a powerful analytical technique, provided that an accurate calibration with standard molecules in the same m/z range as the analytes is performed. Calibration over a very large m/z range is a difficult task, particularly in studies focusing on the detection of high molecular weight clusters of different molecules or high molecular weight species. External calibration is the most common procedure used for TOF mass spectrometric analysis in the gas phase and, generally, the only available standards are mixtures of noble gases, covering a small mass range for calibration, up to m/z 136 (the highest-mass isotope of xenon). In this work, an accurate calibration of a Molecular Beam Time-of-Flight Mass Spectrometer (MB-TOFMS) is presented, based on the use of water clusters up to m/z 3000. The advantages of calibrating an MB-TOFMS with water clusters for the detection of analytes with masses above those of traditional calibrants such as noble gases were quantitatively shown by statistical calculations. A comparison of the water cluster and noble gas calibration procedures in assigning the masses of a test mixture extending up to m/z 800 is also reported. In the case of the analysis of combustion products, another important feature of water cluster calibration was shown, namely the possibility of using the clusters as an "internal standard" formed directly from the combustion water under suitable experimental conditions. The water cluster calibration of an MB-TOFMS gives rise to a ten-fold reduction in error compared to the traditional calibration with noble gases. The consequent improvement in mass accuracy in the calibration of an MB-TOFMS has important implications in various fields where detection of high molecular mass species is required. In the analysis of combustion products, it is also possible to obtain a new calibration spectrum before the acquisition of each spectrum, simply by modifying some operating conditions.
Copyright © 2016
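    The water-cluster calibration idea can be sketched with the classic TOF relation t = t0 + c·sqrt(m/z) (the instrument constants and the mass formula for protonated water clusters (H2O)nH+ are illustrative assumptions, not the paper's values):

    ```python
    import numpy as np

    # Classic TOF relation: flight time t = t0 + c * sqrt(m/z).  Protonated water
    # clusters (H2O)nH+ provide a ladder of known masses up to high m/z, so a
    # linear fit of t against sqrt(m) yields the calibration constants.
    M_H2O, M_H = 18.0106, 1.0073          # approximate monoisotopic masses (u)

    def calibrate(times, n_clusters):
        masses = np.array([M_H2O * n + M_H for n in n_clusters])
        c, t0 = np.polyfit(np.sqrt(masses), times, 1)   # t = c*sqrt(m) + t0
        return c, t0

    def mass_from_time(t, c, t0):
        return ((t - t0) / c) ** 2

    # Synthetic example: simulate flight times for clusters n = 5..150 with
    # assumed instrument constants, then recover the mass of an unknown ion.
    c_true, t0_true = 0.65, 0.12          # us per sqrt(u), us offset (made up)
    n = np.arange(5, 151)
    times = t0_true + c_true * np.sqrt(M_H2O * n + M_H)
    c_fit, t0_fit = calibrate(times, n)
    print(mass_from_time(t0_true + c_true * np.sqrt(800.0), c_fit, t0_fit))  # ~800
    ```

    Because the cluster ladder extends far beyond m/z 136, the fit is anchored across the whole mass range of interest instead of being extrapolated from noble-gas calibrants.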

  8. How accurately does high tibial osteotomy correct the mechanical axis of an arthritic varus knee? A systematic review.

    PubMed

    Van den Bempt, Maxim; Van Genechten, Wouter; Claes, Toon; Claes, Steven

    2016-12-01

    The aim of this study was to give an overview of the accuracy of coronal limb alignment correction after high tibial osteotomy (HTO) for the arthritic varus knee by performing a systematic review of the literature. The databases PubMed, MEDLINE and Cochrane Library were screened for relevant articles. Only prospective clinical studies with the accuracy of alignment correction by HTO as a primary or secondary objective were included. Fifteen studies were included in this systematic review and were subdivided into 23 cohorts. A total of 966 procedures were considered. Nine cohorts used computer navigation during HTO and the other 14 cohorts used a conventional method. In seven computer navigation cohorts, at least 75% of the study population fell into the accepted "range of accuracy" (AR) as proposed by the different studies, but only six out of 14 conventional cohorts reached this percentage. Four out of eight conventional cohorts that provided data on under- and overcorrection had a tendency toward undercorrection. The accuracy of coronal alignment corrections using conventional HTO falls short. The number of procedures outside the proposed AR is surprising and exposes a critical concern for modern HTO. Computer navigation might improve the accuracy of correction, but its use is not widespread among orthopedic surgeons. Although HTO procedures have been shown to be successful in the treatment of unicompartmental knee arthritis when performed accurately, the results of this review stress the importance of ongoing efforts to improve correction accuracy in modern HTO. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive, in situ tool for meat sample testing that could provide an accurate indication of the storage time of meat would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any report in the literature.
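    A simplified stand-in for the classification step, using a nearest-centroid rule on synthetic age-drifting spectra rather than the paper's linear discriminant analysis of real Raman data, might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the Raman data: each "spectrum" is a noisy feature
    # vector whose shape drifts linearly with storage age (0, 2, ..., 14 days).
    ages = np.arange(0, 16, 2)

    def make_spectrum(age):
        base = np.sin(np.linspace(0, 6 * np.pi, 64))                # fixed peaks
        drift = 0.05 * age * np.cos(np.linspace(0, 3 * np.pi, 64))  # age-dependent
        return base + drift + rng.normal(0, 0.05, 64)

    # Train: per-age centroids from labelled spectra; classify by nearest centroid.
    train = {a: np.mean([make_spectrum(a) for _ in range(20)], axis=0) for a in ages}

    def classify(spectrum):
        return min(train, key=lambda a: np.linalg.norm(spectrum - train[a]))

    test_acc = np.mean([classify(make_spectrum(a)) == a
                        for a in ages for _ in range(10)])
    print(test_acc)
    ```

    With real spectra the class structure is far less clean, which is why the paper relies on linear discriminant analysis with cross-validation rather than a raw distance rule.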

  10. High-performance electronic image stabilisation for shift and rotation correction

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, D. L.; Wu, F.

    2014-06-01

    A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a system-on-chip (SoC) to combine the high processing bandwidth of an FPGA with the signal processing flexibility of a CPU. An image-based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame to accurately calculate the unwanted rigid-body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin for VirtualDub. Eight different videos are used for benchmarking, simulating battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements, and can be used to host other video processing functionality such as image distortion correction, fusion and contrast enhancement.

  11. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  12. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  13. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  14. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing

  15. 12-GHz thin-film transistors on transferrable silicon nanomembranes for high-performance flexible electronics.

    PubMed

    Sun, Lei; Qin, Guoxuan; Seo, Jung-Hun; Celler, George K; Zhou, Weidong; Ma, Zhenqiang

    2010-11-22

    Multigigahertz flexible electronics are attractive and have broad applications. A gate-after-source/drain fabrication process using preselectively doped single-crystal silicon nanomembranes (SiNMs) is an effective approach to realizing high device speed. However, further downscaling of this approach has been hindered by difficulties in lithography alignment. In this full paper, a local alignment scheme, in combination with more accurate SiNM transfer measures for minimizing alignment errors, is reported. By realizing 1 μm channel alignment for the SiNMs on a soft plastic substrate, thin-film transistors with a record speed of 12 GHz maximum oscillation frequency are demonstrated. These results indicate the great potential of properly processed SiNMs for high-performance flexible electronics.

  16. Large Aperture "Photon Bucket" Optical Receiver Performance in High Background Environments

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.; Hoppe, D.

    2011-01-01

    The potential development of large-aperture ground-based "photon bucket" optical receivers for deep space communications, with acceptable performance even when pointing close to the sun, is receiving considerable attention. Sunlight scattered by the atmosphere becomes significant at micron wavelengths when pointing within a few degrees of the sun, even with the narrowest-bandwidth optical filters. In addition, high quality optical apertures in the 10-30 meter range are costly and difficult to build with surfaces accurate enough to ensure narrow fields-of-view (FOV). One approach currently under consideration is to polish the aluminum reflector panels of large 34-meter microwave antennas to high reflectance, and accept the relatively large FOV generated by state-of-the-art polished aluminum panels with rms surface accuracies on the order of a few microns, corresponding to several-hundred-microradian FOVs, hence generating centimeter-diameter focused spots at the Cassegrain focus of 34-meter antennas. Assuming pulse-position modulation (PPM) and Poisson-distributed photon-counting detection, a "polished panel" photon-bucket receiver with a large FOV will collect hundreds of background photons per PPM slot, along with comparable signal photons due to its large aperture. It is demonstrated that communications performance, in terms of PPM symbol-error probability in high-background, high-signal environments, depends more strongly on signal than on background photons, implying that large increases in background energy can be compensated by a disproportionally small increase in signal energy. This surprising result suggests that large optical apertures with relatively poor surface quality may nevertheless provide acceptable performance for deep-space optical communications, potentially enabling the construction of cost-effective hybrid RF/optical receivers in the future.
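    The claim that symbol-error probability depends more strongly on signal than on background photons can be checked with a small Monte Carlo sketch (slot count M and the photon numbers are made-up illustrative values, not the paper's link budget):

    ```python
    import numpy as np

    def ppm_symbol_error_rate(Ks, Kb, M=16, trials=200_000, seed=1):
        """Monte-Carlo symbol error rate for M-ary PPM with photon counting:
        the signal slot receives Poisson(Ks + Kb) counts, the other M-1 slots
        Poisson(Kb); a symbol error is declared whenever any non-signal slot
        matches or exceeds the signal slot (ties counted as errors)."""
        rng = np.random.default_rng(seed)
        sig = rng.poisson(Ks + Kb, size=trials)
        bkg = rng.poisson(Kb, size=(trials, M - 1))
        return np.mean(bkg.max(axis=1) >= sig)

    # Doubling the background degrades the link far less than a modest
    # increase in mean signal count restores it:
    for Ks, Kb in [(20, 5), (20, 10), (28, 10)]:
        print(Ks, Kb, ppm_symbol_error_rate(Ks, Kb))
    ```

    Running the sketch shows the error rate falling sharply with Ks while rising only gradually with Kb, which is the qualitative behaviour the abstract reports.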

  17. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  18. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm is proposed for improving the performance of the local sine-wave modeling (SinMod) method, a good motion estimation method for tagged cardiac magnetic resonance (MR) images. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method when the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
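    The mean-shift step named as key technique (1) can be sketched in one dimension on a synthetic spectrum (the bandwidth, peak location and spectrum itself are illustrative assumptions, not the RACE implementation):

    ```python
    import numpy as np

    def mean_shift_peak(freqs, power, bandwidth, x0, iters=50):
        """1-D mean-shift: starting from x0, repeatedly move to the
        power-weighted mean of the frequencies inside a Gaussian window,
        which converges to a local mode of the spectrum (the center frequency)."""
        x = float(x0)
        for _ in range(iters):
            w = power * np.exp(-0.5 * ((freqs - x) / bandwidth) ** 2)
            x_new = np.sum(w * freqs) / np.sum(w)
            if abs(x_new - x) < 1e-9:
                break
            x = x_new
        return x

    # Synthetic spectrum: a Gaussian peak at 0.25 cycles/pixel over a noise floor.
    freqs = np.linspace(0.0, 0.5, 512)
    power = np.exp(-0.5 * ((freqs - 0.25) / 0.02) ** 2) + 0.05
    cf = mean_shift_peak(freqs, power, bandwidth=0.03, x0=0.20)
    print(cf)  # close to 0.25
    ```

    Because each update only needs spectrum samples near the current estimate, the iteration is fast and needs no prior knowledge of the tagging parameters, only a rough starting frequency.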

  19. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  20. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yan

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next-generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly. Overall, our project produced 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.

  1. Performance seeking control (PSC) for the F-15 highly integrated digital electronic control (HIDEC) aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance, but, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines; thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increased thrust.

  2. Determination of dasatinib in the tablet dosage form by ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis.

    PubMed

    Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel

    2017-01-01

    Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Therapeutic Drug Monitoring of Phenytoin by Simple, Rapid, Accurate, Highly Sensitive and Novel Method and Its Clinical Applications.

    PubMed

    Shaikh, Abdul S; Guo, Ruichen

    2017-01-01

    Phenytoin has very challenging pharmacokinetic properties. To prevent its toxicity and ensure efficacy, continuous therapeutic monitoring is required. It is hard to find a simple, accurate, rapid, readily available, economical and highly sensitive assay in one method for therapeutic monitoring of phenytoin. The present study is directed towards establishing and validating a simple, rapid, accurate, highly sensitive, novel and environmentally friendly liquid chromatography/mass spectrometry (LC/MS) method for offering rapid and reliable TDM results of phenytoin in epileptic patients to physicians and clinicians for making immediate and rational decisions. Twenty-seven epileptic patients with uncontrolled seizures, or suspected of non-compliance or phenytoin toxicity, were selected and advised to undergo TDM of phenytoin by neurologists of Qilu Hospital, Jinan, China. The Agilent 1100 LC/MS system was used for TDM. The mobile phase was a mixture of 5 mM ammonium acetate and methanol (35:65, v/v). A Diamonsil C18 (150 mm × 4.6 mm, 5 μm) column was used for separation of the analytes in plasma. The samples were prepared with a one-step simple protein precipitation method. The technique was validated according to the guidelines of the International Conference on Harmonisation (ICH). The calibration curve demonstrated good linearity within the 0.2-20 µg/mL concentration range, with the linearity equation y = 0.0667855x + 0.00241785 and a correlation coefficient (R2) of 0.99928. The specificity, recovery, linearity, accuracy, precision and stability results were within the accepted limits. A concentration of 0.2 µg/mL was observed as the lower limit of quantitation (LLOQ), which is 12.5 times lower than that of the currently available enzyme-multiplied immunoassay technique (EMIT) for measurement of phenytoin in epilepsy patients. A rapid, simple, economical, precise, highly sensitive and novel LC/MS assay has been established.
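
The reported calibration line can be inverted to back-calculate a plasma concentration from an instrument response; a minimal sketch using the paper's fitted coefficients (the function name and the form of `response` are assumptions, not the authors' code):

```python
SLOPE = 0.0667855       # slope of the reported calibration line
INTERCEPT = 0.00241785  # intercept of the reported calibration line

def phenytoin_conc(response):
    """Invert y = SLOPE * x + INTERCEPT to back-calculate a plasma
    concentration x (ug/mL) from an instrument response y. Only meaningful
    inside the validated 0.2-20 ug/mL range (LLOQ 0.2 ug/mL)."""
    return (response - INTERCEPT) / SLOPE
```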

  4. Comprehensive identification of bioactive compounds of avocado peel by liquid chromatography coupled to ultra-high-definition accurate-mass Q-TOF.

    PubMed

    Figueroa, Jorge G; Borrás-Linares, Isabel; Lozano-Sánchez, Jesús; Segura-Carretero, Antonio

    2018-04-15

    Industrially, avocado pulp is exploited principally as oil and paste, generating huge quantities of peel and seed as by-products. Avocado peel is a promising, inexpensive candidate for the recovery of phenolic compounds. The aim of this work was to identify the bioactive compounds present in an extract of avocado peel obtained by a green extraction technique. Accelerated solvent extraction was performed using water and ethanol as extraction solvents. Liquid chromatography coupled to ultra-high-definition accurate-mass spectrometry was used to identify the bioactive compounds. A total of sixty-one compounds belonging to eleven families were identified. Procyanidins, flavonols, and hydroxybenzoic and hydroxycinnamic acids were the most common compounds. Thirty-five compounds have been identified here for the first time in avocado peel. These results confirm the potential of avocado peel as a source of bioactive ingredients for use in the food, cosmetic or pharmaceutical sectors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed, accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.

  6. [High performance liquid chromatogram (HPLC) determination of adenosine phosphates in rat myocardium].

    PubMed

    Miao, Yu; Wang, Cheng-long; Yin, Hui-jun; Shi, Da-zhuo; Chen, Ke-ji

    2005-04-18

    To establish a method for the quantitative determination of adenosine phosphates in rat myocardium by optimized high performance liquid chromatography (HPLC). An ODS HYPERSIL C(18) column and a mobile phase of 50 mmol/L tribasic potassium phosphate buffer solution (pH 6.5), with UV detection at 254 nm, were used. The average recovery rates of myocardial adenosine triphosphate (ATP), adenosine diphosphate (ADP) and adenosine monophosphate (AMP) were 99%-107%, 96%-104% and 95%-119%, respectively; the within-day and between-day relative standard deviations (RSDs) were less than 1.5% and 5.1%, respectively. The method is simple, rapid and accurate, and can be used to analyse the adenosine phosphates in myocardium.

  7. High-Performance Thermoelectric Semiconductors

    NASA Technical Reports Server (NTRS)

    Fleurial, Jean-Pierre; Caillat, Thierry; Borshchevsky, Alexander

    1994-01-01

    Figures of merit almost double those of current state-of-the-art thermoelectric materials. IrSb3 is a semiconductor found to exhibit exceptional thermoelectric properties. CoSb3 and RhSb3 have the same skutterudite crystallographic structure as IrSb3, and exhibit exceptional transport properties expected to contribute to high thermoelectric performance. These three compounds form solid solutions. This combination of properties offers potential for development of new high-performance thermoelectric materials for more efficient thermoelectric power generators, coolers, and detectors.

  8. How do gender and anxiety affect students' self-assessment and actual performance on a high-stakes clinical skills examination?

    PubMed

    Colbert-Getz, Jorie M; Fleishman, Carol; Jung, Julianna; Shilkofski, Nicole

    2013-01-01

    Research suggests that medical students are not accurate in self-assessment, but it is not clear whether students over- or underestimate their skills or how certain characteristics correlate with accuracy in self-assessment. The goal of this study was to determine the effect of gender and anxiety on accuracy of students' self-assessment and on actual performance in the context of a high-stakes assessment. Prior to their fourth year of medical school, two classes of medical students at Johns Hopkins University School of Medicine completed a required clinical skills exam in fall 2010 and 2011, respectively. Two hundred two students rated their anxiety in anticipation of the exam and predicted their overall scores in the history taking and physical examination performance domains. A self-assessment deviation score was calculated by subtracting each student's predicted score from his or her score as rated by standardized patients. When students self-assessed their data gathering performance, there was a weak negative correlation between their predicted scores and their actual scores on the examination. Additionally, there was an interaction effect of anxiety and gender on both self-assessment deviation scores and actual performance. Specifically, females with high anxiety were more accurate in self-assessment and achieved higher actual scores compared with males with high anxiety. No differences by gender emerged for students with moderate or low anxiety. Educators should take into account not only gender but also the role of emotion, in this case anxiety, when planning interventions to help improve accuracy of students' self-assessment.
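
The deviation score defined above is a one-line computation; a minimal sketch (function name and example scores are hypothetical):

```python
def self_assessment_deviation(predicted, actual):
    """Deviation score as defined in the study: standardized-patient-rated
    score minus the student's own predicted score. Negative values mean
    the student overestimated his or her performance; positive values mean
    underestimation."""
    return [a - p for p, a in zip(predicted, actual)]
```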

  9. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled a 2D transport MOC solver and a 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, a radial 2D MOC solver and an axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core remains time-consuming, which limits its application. Therefore, a second part of this research focused on designing specific hardware, based on reconfigurable computing, to accelerate AGENT computations. This is the first application of its kind to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high-performance AGENT acceleration system will drastically shorten the time required for large-scale neutron transport computations.

  10. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Multiple-frequency continuous wave ultrasonic system for accurate distance measurement

    NASA Astrophysics Data System (ADS)

    Huang, C. F.; Young, M. S.; Li, Y. C.

    1999-02-01

    A highly accurate multiple-frequency continuous wave ultrasonic range-measuring system for use in air is described. The proposed system uses a method heretofore applied to radio frequency distance measurement but not to air-based ultrasonic systems. The method presented here is based upon the comparative phase shifts generated by three continuous ultrasonic waves of different but closely spaced frequencies. In the test embodiment to confirm concept feasibility, two low cost 40 kHz ultrasonic transducers are set face to face and used to transmit and receive ultrasound. Individual frequencies are transmitted serially, each generating its own phase shift. For any given frequency, the transmitter/receiver distance modulates the phase shift between the transmitted and received signals. Comparison of the phase shifts allows a highly accurate evaluation of target distance. A single-chip microcomputer-based multiple-frequency continuous wave generator and phase detector was designed to record and compute the phase shift information and the resulting distance, which is then sent to either a LCD or a PC. The PC is necessary only for calibration of the system, which can be run independently after calibration. Experiments were conducted to test the performance of the whole system. Experimentally, ranging accuracy was found to be within ±0.05 mm, with a range of over 1.5 m. The main advantages of this ultrasonic range measurement system are high resolution, low cost, narrow bandwidth requirements, and ease of implementation.
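
The core idea — comparing the wrapped phase shifts of closely spaced frequencies so that their phase difference resolves the cycle ambiguity of a single 40 kHz tone — can be sketched in a simplified two-frequency form. The function names, the assumed sound speed, and the noise-free simulated phases are illustrative assumptions, not the paper's implementation:

```python
import math

C = 343.0  # assumed speed of sound in air (m/s, roughly 20 degC)

def phase(f, d):
    """Wrapped phase shift (rad) a tone of frequency f accumulates over
    transmitter-receiver distance d (ideal, noise-free simulation)."""
    return (2.0 * math.pi * f * d / C) % (2.0 * math.pi)

def range_from_phases(f1, f2, phi1, phi2):
    """Two-frequency phase comparison: the phase *difference* between two
    closely spaced tones behaves like a low 'beat' frequency (f2 - f1)
    whose phase is unambiguous out to C / (f2 - f1); that coarse estimate
    selects the whole-cycle count for the fine single-frequency phase."""
    dphi = (phi2 - phi1) % (2.0 * math.pi)
    coarse = dphi * C / (2.0 * math.pi * (f2 - f1))  # unambiguous but coarse
    lam1 = C / f1                                    # fine wavelength (~8.6 mm at 40 kHz)
    n = round((coarse - phi1 * lam1 / (2.0 * math.pi)) / lam1)  # cycle count
    return (n + phi1 / (2.0 * math.pi)) * lam1       # refined distance
```

With f1 = 40.0 kHz and f2 = 40.2 kHz, the 200 Hz difference gives an unambiguous range of about 1.7 m, consistent with the >1.5 m range reported, while the fine phase supplies sub-millimeter resolution.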

  12. Directional RNA-seq reveals highly complex condition-dependent transcriptomes in E. coli K12 through accurate full-length transcripts assembling.

    PubMed

    Li, Shan; Dong, Xia; Su, Zhengchang

    2013-07-30

    Although prokaryotic gene transcription has been studied for decades, many aspects of the process remain poorly understood. In particular, recent studies have revealed that transcriptomes in many prokaryotes are far more complex than previously thought. Genes in an operon are often alternatively and dynamically transcribed under different conditions, and a large portion of genes and intergenic regions have antisense RNA (asRNA) and non-coding RNA (ncRNA) transcripts, respectively. Ironically, similar studies have not been conducted in the model bacterium E. coli K12, so it is unknown whether or not the bacterium possesses similarly complex transcriptomes. Furthermore, although RNA-seq has become the major method for analyzing the complexity of prokaryotic transcriptomes, it is still a challenging task to accurately assemble full-length transcripts from short RNA-seq reads. To fill these gaps, we have profiled the transcriptomes of E. coli K12 under different culture conditions and growth phases using a highly specific directional RNA-seq technique that can capture various types of transcripts in the bacterial cells, combined with a highly accurate and robust algorithm and tool, TruHMM (http://bioinfolab.uncc.edu/TruHmm_package/), for assembling full-length transcripts. We found that 46.9 ~ 63.4% of expressed operons were utilized in their putative alternative forms, 72.23 ~ 89.54% of genes had putative asRNA transcripts and 51.37 ~ 72.74% of intergenic regions had putative ncRNA transcripts under different culture conditions and growth phases. As has been demonstrated in many other prokaryotes, E. coli K12 also has highly complex and dynamic transcriptomes under different culture conditions and growth phases. Such complex and dynamic transcriptomes might play important roles in the physiology of the bacterium. TruHMM is a highly accurate and robust algorithm for assembling full-length transcripts in prokaryotes from directional RNA-seq short reads.

  13. A tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1992-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. An improved high temperature matrix resin is provided which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance and moisture and solvent resistances.

  14. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  15. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (in space and time) numerical methods for coupled multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of off-fault resolution required in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  16. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work range in fidelity from simple codes that run on a single computer to those that require high-performance computing capabilities. Over the past several years, high-fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models at the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort, on modeling the economic impact of new wind development as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  17. Detection of Free Polyamines in Plants Subjected to Abiotic Stresses by High-Performance Liquid Chromatography (HPLC).

    PubMed

    Gong, Xiaoqing; Liu, Ji-Hong

    2017-01-01

    High-performance liquid chromatography (HPLC) is a sensitive, rapid, and accurate technique to detect and characterize various metabolites from plants. The metabolites are extracted with different solvents and eluted with appropriate mobile phases in a designed HPLC program. Polyamines are known to accumulate under abiotic stress conditions in various plant species and thought to provide protection against oxidative stress by scavenging reactive oxygen species. Here, we describe a common method to detect the free polyamines in plant tissues both qualitatively and quantitatively.

  18. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged, by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart, and takes advantage of both optical and electronic communication.

  19. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  20. High-performance fused indium gallium arsenide/silicon photodiode

    NASA Astrophysics Data System (ADS)

    Kang, Yimin

    Modern long-haul, high-bit-rate fiber-optic communication systems demand photodetectors with high sensitivity. Avalanche photodiodes (APDs) exhibit sensitivity performance superior to other types of photodetectors by virtue of their internal gain mechanism. This dissertation work further advances APD performance by applying a novel materials integration technique. It is the first successful demonstration of wafer-fused InGaAs/Si APDs with low dark current and low noise. APDs generally adopt a separate absorption and multiplication (SAM) structure, which allows independent optimization of materials properties in two distinct regions. While the absorption material needs a high absorption coefficient in the target wavelength range to achieve high quantum efficiency, it is desirable for the multiplication material to have a large discrepancy between its electron and hole ionization coefficients to reduce noise. According to these criteria, InGaAs and Si are the ideal materials combination. Wafer fusion is the enabling technique that makes this theoretical ideal an experimental possibility. APDs fabricated on the fused InGaAs/Si wafer with a mesa structure exhibit low dark current and low noise. Special device fabrication techniques and high-quality wafer fusion reduce dark current to the nanoampere level at unity gain, comparable to state-of-the-art commercial III/V APDs. The small excess noise is attributed to the large difference between the electron and hole ionization coefficients in silicon. Detailed layer structure designs are developed specifically for fused InGaAs/Si APDs based on principles similar to those used in traditional InGaAs/InP APDs. An accurate yet straightforward technique for extracting device structural parameters is also proposed. The extracted results from the fabricated APDs agree with the device design parameters. This agreement also confirms that the fusion interface has a negligible effect on the electric field distributions of the fabricated devices.

  1. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    PubMed

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with adsorbent scraped from the aforementioned chromatographic plate. An additional HPLC column was packed with a 5 μm particle diameter, C18-type silica-based adsorbent (LiChrosorb RP-18 from Merck). The dependence of plate height on mobile phase flow velocity for the HPLC and PPEC systems, and on mobile phase migration distance for the TLC system, was presented using a test solute (prednisolone succinate). The highest performance amongst the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture of six hormones. 2010 Elsevier B.V. All rights reserved.

  2. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environment constraints or bad trace elimination, can degrade the performance of subsequent multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these algorithms can partially tolerate irregular sampling. Accurate interpolation to provide the necessary complete data is therefore a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, a curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued principal frequency components characterize the original signal with high accuracy while being only about half its size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing interpolation accuracy. The performance of POCS-based algorithms using the complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
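
    The POCS iteration at the heart of this family of methods alternates between promoting sparsity in a transform domain and re-inserting the observed samples. A minimal single-trace sketch, using a plain FFT as a stand-in for the curvelet/PFK machinery of the paper:

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100):
    """Generic POCS sketch: threshold in a transform domain, then restore
    the observed samples.  `mask` is 1 where a sample was recorded."""
    x = observed.copy()
    for k in range(n_iter):
        coeffs = np.fft.fft(x)
        # Threshold decays linearly from the largest coefficient to zero.
        tau = np.abs(coeffs).max() * (1.0 - (k + 1) / n_iter)
        coeffs[np.abs(coeffs) < tau] = 0.0            # sparsity projection
        x_hat = np.real(np.fft.ifft(coeffs))
        x = mask * observed + (1 - mask) * x_hat      # data-consistency projection
    return x

# Demo: a two-tone signal with roughly 40% of samples missing at random.
rng = np.random.default_rng(0)
t = np.arange(256)
signal = np.sin(2 * np.pi * 5 * t / 256) + 0.5 * np.sin(2 * np.pi * 12 * t / 256)
mask = (rng.random(256) > 0.4).astype(float)
recovered = pocs_interpolate(signal * mask, mask)
print(np.max(np.abs(recovered - signal)))   # reconstruction error, small for sparse spectra
```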

  3. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.
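
    The value recommendation idea can be illustrated in its simplest possible form: ranking previously entered values for a field, constrained to the terms an ontology allows. This is only a frequency-count sketch with invented data; the paper's recommendation framework is considerably richer:

```python
from collections import Counter

def recommend_values(previous_entries, field, allowed_terms, top_n=3):
    """Rank values previously entered for `field`, keeping only
    ontology-sanctioned terms, most frequent first."""
    counts = Counter(entry[field] for entry in previous_entries
                     if entry.get(field) in allowed_terms)
    return [term for term, _ in counts.most_common(top_n)]

entries = [{"organism": "Homo sapiens"}, {"organism": "Mus musculus"},
           {"organism": "Homo sapiens"}, {"organism": "homo sapien"}]
ontology = {"Homo sapiens", "Mus musculus", "Rattus norvegicus"}
print(recommend_values(entries, "organism", ontology))
# The misspelled 'homo sapien' is filtered out by the ontology constraint.
```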

  4. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  5. Putative identification of new p-coumaroyl glycoside flavonoids in grape by ultra-high performance liquid chromatography/high-resolution mass spectrometry.

    PubMed

    Panighel, Annarita; De Rosso, Mirko; Dalla Vedova, Antonio; Flamini, Riccardo

    2015-02-28

    Grape polyphenols are antioxidant compounds, markers in vine chemotaxonomy, and contributors to color stabilization of red wines. Sugar acylation usually confers higher stability on glycoside derivatives, and this effect is enhanced by an aromatic substituent such as p-coumaric acid. Until now, only p-coumaroyl anthocyanins have been found in grape. A 'suspect screening analysis' method based on ultra-high-performance liquid chromatography/high-resolution mass spectrometry (UHPLC/QTOFMS) has recently been developed to study grape metabolomics. In the present study, this approach was used to identify new polyphenols in grape by accurate mass measurement, MS/MS fragmentation, and study of the correlations between the fragments observed and putative structures. Three putative p-coumaroyl flavonoids were identified in Raboso Piave grape extract: a dihydrokaempferide-3-O-p-coumaroylhexoside-like flavanone, isorhamnetin-3-O-p-coumaroylglucoside, and a chrysoeriol-p-coumaroylhexoside-like flavone. Accurate MS provided structural characterization of functional groups, and literature data indicate their probable positions in the molecule. A fragmentation scheme is proposed for each compound. Compounds were identified by overlapping various analytical methods according to recommendations in the MS-based metabolomics literature. Stereochemistry and the definitive positions of substituents in the molecule can only be confirmed by isolation and characterization, or by synthesis, of each compound. These findings suggest extending the search for acylated polyphenol glycosides to other grape varieties. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), was synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1: methylamine to C12: dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl; catalyst: none; molar ratio: 4.3; time: 8 min; temperature: 80°C), a micro amount of sample (40 μL or 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. Interfering substances (aromatic amines, aliphatic alcohols and phenols) show derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further distinguished from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations to remove impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73-2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE PAGES

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong; ...

    2017-07-11

    Transient thermo-mechanical simulation of a stainless steel plate laser welding process was performed with a highly efficient and accurate approach: a hybrid iterative substructure and adaptive mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on the residual stress and welding distortion of thin stainless sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effects of strain hardening, annealing and melting on residual stress prediction were clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among the residual stresses obtained by the numerical method, XRD and the contour method was found. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  8. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong

    Transient thermo-mechanical simulation of a stainless steel plate laser welding process was performed with a highly efficient and accurate approach: a hybrid iterative substructure and adaptive mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on the residual stress and welding distortion of thin stainless sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effects of strain hardening, annealing and melting on residual stress prediction were clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among the residual stresses obtained by the numerical method, XRD and the contour method was found. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  9. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    PubMed

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model faces significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions--a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, may be more efficient, accurate, and authoritative than is commonly assumed.

  10. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  11. Robust high-performance nanoliter-volume single-cell multiple displacement amplification on planar substrates.

    PubMed

    Leung, Kaston; Klaus, Anders; Lin, Bill K; Laks, Emma; Biele, Justina; Lai, Daniel; Bashashati, Ali; Huang, Yi-Fei; Aniba, Radhouane; Moksa, Michelle; Steif, Adi; Mes-Masson, Anne-Marie; Hirst, Martin; Shah, Sohrab P; Aparicio, Samuel; Hansen, Carl L

    2016-07-26

    The genomes of large numbers of single cells must be sequenced to further understanding of the biological significance of genomic heterogeneity in complex systems. Whole genome amplification (WGA) of single cells is generally the first step in such studies, but is prone to nonuniformity that can compromise genomic measurement accuracy. Despite recent advances, robust performance in high-throughput single-cell WGA remains elusive. Here, we introduce droplet multiple displacement amplification (MDA), a method that uses commercially available liquid dispensing to perform high-throughput single-cell MDA in nanoliter volumes. The performance of droplet MDA is characterized using a large dataset of 129 normal diploid cells, and is shown to exceed previously reported single-cell WGA methods in amplification uniformity, genome coverage, and/or robustness. We achieve up to 80% coverage of a single-cell genome at 5× sequencing depth, and demonstrate excellent single-nucleotide variant (SNV) detection using targeted sequencing of droplet MDA product to achieve a median allelic dropout of 15%, and using whole genome sequencing to achieve false and true positive rates of 9.66 × 10(-6) and 68.8%, respectively, in a G1-phase cell. We further show that droplet MDA allows for the detection of copy number variants (CNVs) as small as 30 kb in single cells of an ovarian cancer cell line and as small as 9 Mb in two high-grade serous ovarian cancer samples using only 0.02× depth. Droplet MDA provides an accessible and scalable method for performing robust and accurate CNV and SNV measurements on large numbers of single cells.

  12. Methodology for comparing worldwide performance of diverse weight-constrained high energy laser systems

    NASA Astrophysics Data System (ADS)

    Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.

    2005-06-01

    The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of diverse weight-constrained high energy laser system types across a broad range of engagement scenarios. HELEEOS has been designed to meet the JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, constraining weapon effectiveness through accurate engineering performance assessments (allowing its use as an investment strategy tool), and establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. the dwell time required to achieve a prescribed probability of kill and the effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on the results of performance comparisons between chemical and solid state lasers.

  13. Low-Voltage High-Performance UV Photodetectors: An Interplay between Grain Boundaries and Debye Length.

    PubMed

    Bo, Renheng; Nasiri, Noushin; Chen, Hongjun; Caputo, Domenico; Fu, Lan; Tricoli, Antonio

    2017-01-25

    Accurate detection of UV light by wearable low-power devices has many important applications, including environmental monitoring, space-to-space communication, and defense. Here, we report the structural engineering of ultraporous ZnO nanoparticle networks for the fabrication of very low-voltage, high-performance UV photodetectors. A record-high photo- to dark-current ratio of 3.3 × 10(5) and detectivity of 3.2 × 10(12) Jones at an ultralow operation bias of 2 mV and low UV-light intensity of 86 μW·cm(-2) are achieved by controlling the interplay between grain boundaries and the surface depletion depth of ZnO nanoscale semiconductors. An optimal window of structural properties is determined by varying the particle size of the ultraporous nanoparticle networks from 10 to 42 nm. We find that small electron-depleted nanoparticles (≤40 nm) are necessary to minimize the dark current; however, the rise in photocurrent is tempered with decreasing particle size by the increasing density of grain boundaries. These findings reveal that nanoparticles with a size close to twice their Debye length are required for a high photo- to dark-current ratio and detectivity, while further decreasing their size degrades photodetector performance.
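
    The reported design rule (particle size close to twice the Debye length) can be checked against the standard Debye length formula L_D = sqrt(ε0·εr·kB·T / (q²·n)). The permittivity and carrier density below are assumed, order-of-magnitude values for ZnO, not figures taken from the paper:

```python
import math

def debye_length(eps_r, n_cm3, T=300.0):
    """Debye length L_D = sqrt(eps0*eps_r*kB*T / (q^2 * n)) for a
    semiconductor with free-carrier density n."""
    eps0, kB, q = 8.854e-12, 1.381e-23, 1.602e-19
    n = n_cm3 * 1e6                      # convert cm^-3 to m^-3
    return math.sqrt(eps0 * eps_r * kB * T / (q * q * n))

# Assumed ZnO parameters: eps_r ~ 8.5, n ~ 1e17 cm^-3.
L_D = debye_length(8.5, 1e17)
print(round(2 * L_D * 1e9, 1), "nm")     # optimal particle size ~ 2 * L_D
```

With these assumptions, twice the Debye length lands in the low tens of nanometers, consistent with the 10-42 nm window explored in the abstract.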

  14. MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology

    PubMed Central

    Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota

    2015-01-01

    We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from miRBase 19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes and MiRdup, as well as a Simple Geometric Locator, when applied on the same training datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and generalizes very well on plant hairpins without any special optimization; (c) the tool has a number of important applications, such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In line with the confidence evaluation methods recently used in miRBase, MiRduplexSVM was successful in identifying high-confidence potential miRNAs. PMID:25961860

  15. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag of fragments"--a vector that counts the number of occurrences of each fragment--and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine over the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions of only parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
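
    The bag-of-fragments idea reduces structure comparison to vector arithmetic. A toy sketch, assuming the per-segment fragment assignment has already been done (the fragment indices below are invented):

```python
import numpy as np
from collections import Counter

def bag_of_fragments(fragment_labels, library_size):
    """Count how many backbone segments were assigned to each library
    fragment; the count vector is the whole structure representation."""
    vec = np.zeros(library_size)
    for frag, n in Counter(fragment_labels).items():
        vec[frag] = n
    return vec

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented fragment assignments over a 6-fragment library:
a = bag_of_fragments([0, 0, 1, 2, 2, 2, 5], library_size=6)
b = bag_of_fragments([0, 1, 2, 2, 5, 5], library_size=6)   # similar fold
c = bag_of_fragments([3, 3, 4, 4, 4, 3], library_size=6)   # unrelated fold
print(cosine_similarity(a, b) > cosine_similarity(a, c))   # True
```

Because each structure is a fixed-length count vector, candidate retrieval becomes a nearest-vector search, which is what makes an inverted index over the whole PDB feasible.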

  16. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    PubMed

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

    In the analysis of homologous sequences, the computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA), used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high-throughput performance demands. SINA was evaluated in comparison with the commonly used high-throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA achieved higher accuracy than PyNAST and mothur in all benchmarks performed. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as the reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, a user manual and a tutorial. SINA is made available under a personal use license.
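
    The k-mer search stage that SINA uses as a cheap prefilter can be sketched as shared-k-mer counting against candidate references. The sequences below are invented toys, and real rRNA work would use much longer sequences and k-mers:

```python
def kmers(seq, k=4):
    """Set of all overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_reference(query, references, k=4):
    """Pick the reference sharing the most k-mers with the query -- a toy
    version of the candidate-selection stage that precedes the partial
    order alignment (POA) step described in the abstract."""
    q = kmers(query, k)
    return max(references, key=lambda ref: len(q & kmers(ref, k)))

refs = ["ACGTTGCAACGTTGCC",   # near-identical to the query
        "GGGGCCCCGGGGCCCC"]   # unrelated
print(best_reference("ACGTTGCAACGTTGCA", refs))  # picks the first reference
```

Counting set intersections is far cheaper than alignment, so only the top-scoring references need to be aligned in full.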

  17. Determination of Betaine in Forsythia Suspensa by High Performance Capillary Electrophoresis

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Dong, Guoliang; Wang, Lintong

    2017-12-01

    This paper presents the determination of the betaine content of Forsythia suspensa by a high performance capillary electrophoresis (HPCE) method. Borax solution (40 mmol/L) was chosen as the buffer, with a capillary column (75 μm × 52/60 cm) at a constant voltage of 20 kV, an injection pressure time of 10 s, and a temperature of 20°C. Linearity held over the concentration range 0.0113-1.45 mg·mL(-1) of betaine, with a correlation coefficient of 0.999. The recovery was in the range 97%-117% (n=5). The content of betaine in Forsythia suspensa was 281.5 mg·g(-1), with an RSD of 9.6% (n=6). The method is rapid and accurate, with good repeatability, for the separation and determination of betaine in Forsythia suspensa.
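
    The calibration arithmetic behind the reported linearity and recovery figures is a straight-line fit followed by back-calculation. The peak areas below are invented for illustration; only the concentration range comes from the abstract:

```python
import numpy as np

# Hypothetical calibration points inside the reported linear range
# (0.0113-1.45 mg/mL); the peak areas are invented.
conc = np.array([0.0113, 0.1, 0.4, 0.8, 1.45])    # mg/mL
area = np.array([0.9, 8.1, 32.5, 64.8, 117.2])    # arbitrary units

slope, intercept = np.polyfit(conc, area, 1)       # straight-line fit
r = np.corrcoef(conc, area)[0, 1]                  # correlation coefficient

def quantify(sample_area):
    """Back-calculate concentration from the calibration line."""
    return (sample_area - intercept) / slope

# Spike-recovery check on a known 0.4 mg/mL spike:
recovery = 100.0 * quantify(slope * 0.4 + intercept) / 0.4
print(round(r, 4), round(recovery, 1))
```

In practice the recovery is computed from a measured spiked-sample area rather than a synthetic one, so values scatter around 100% as in the reported 97%-117% range.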

  18. High-speed separation and characterization of major constituents in Radix Paeoniae Rubra by fast high-performance liquid chromatography coupled with diode-array detection and time-of-flight mass spectrometry.

    PubMed

    Liu, E-Hu; Qi, Lian-Wen; Li, Bin; Peng, Yong-Bo; Li, Ping; Li, Chang-Yin; Cao, Jun

    2009-01-01

    A fast high-performance liquid chromatography (HPLC) method coupled with diode-array detection (DAD) and electrospray ionization time-of-flight mass spectrometry (ESI-TOFMS) has been developed for the rapid separation and sensitive identification of major constituents in Radix Paeoniae Rubra (RPR). The total analysis time on a short column packed with 1.8-μm porous particles was about 20 min without a loss in resolution, six times faster than a conventional column analysis (115 min). The MS fragmentation behavior and structural characterization of the major compounds in RPR were investigated here for the first time. Targets were rapidly screened from the RPR matrix using a narrow mass window of 0.01 Da to reconstruct extracted ion chromatograms. Accurate mass measurements (less than 5 ppm error) for both the deprotonated molecule and characteristic fragment ions represent reliable identification criteria for these compounds in complex matrices, with performance similar to, if not better than, tandem mass spectrometry. A total of 26 components were screened and identified in RPR, including 11 monoterpene glycosides, 11 galloyl glucoses and 4 other phenolic compounds. In terms of time savings, resolving power, accurate mass measurement capability and full spectral sensitivity, the established fast HPLC/DAD/TOFMS method proves to be a highly useful technique for identifying constituents in complex herbal medicines. (c) 2008 John Wiley & Sons, Ltd.

  19. CodingQuarry: highly accurate hidden Markov model gene prediction in fungal genomes using RNA-seq transcripts.

    PubMed

    Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P

    2015-03-11

    The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or to significantly improve ab initio predictions. Generalised hidden Markov models (GHMMs) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources benefits fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data inform annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed that CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq-driven gene predictor tested.

  20. Complexity of physiological responses decreases in high-stress musical performance.

    PubMed

    Williamon, Aaron; Aufegger, Lisa; Wasley, David; Looney, David; Mandic, Danilo P

    2013-12-06

    For musicians, performing in front of an audience can cause considerable apprehension; indeed, performance anxiety is felt throughout the profession, with wide-ranging symptoms arising irrespective of age, skill level and amount of practice. A key indicator of stress is frequency-specific fluctuation in the dynamics of heart rate, known as heart rate variability (HRV). Recent developments in sensor technology have made possible the non-invasive measurement of physiological parameters reflecting HRV outside the laboratory, opening research avenues for real-time performer feedback to help improve stress management. However, the study of stress using standard algorithms has led to conflicting and inconsistent results. Here, we present an innovative and rigorous approach which combines: (i) a controlled and repeatable experiment in which the physiological response of an expert musician was evaluated in a low-stress performance and a high-stress recital for an audience of 400 people, (ii) a piece of music with varying physical and cognitive demands, and (iii) dynamic stress level assessment with standard and state-of-the-art HRV analysis algorithms, such as those within the domain of complexity science, which account for higher order stress signatures. We show that this offers new scope for interpreting the autonomic nervous system response to stress in real-world scenarios, with the evolution of stress levels being consistent with the difficulty of the music being played, superimposed on the stress caused by performing in front of an audience. For an emerging class of algorithms that can analyse HRV independent of absolute data scaling, it is shown that complexity science provides a more accurate assessment of average stress levels, thus providing greater insight into the degree of physiological change experienced by musicians when performing in public.
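
    One widely used complexity measure for HRV of the kind the abstract alludes to is sample entropy (not necessarily the specific algorithm used in this study). A compact sketch, with a regular and an irregular synthetic signal standing in for low- and high-complexity heartbeat series:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: -log of the conditional probability that runs of
    length m that match (within tolerance r) also match at length m+1.
    Lower values mean a more regular, less complex signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += int(np.sum(dist <= r)) - 1   # exclude the self-match
        return total
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular signal
irregular = rng.standard_normal(500)                # white noise
print(sample_entropy(regular) < sample_entropy(irregular))  # True
```

Under the paper's hypothesis, high-stress performance shifts HRV toward the regular, low-entropy end of this scale.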

  1. Complexity of physiological responses decreases in high-stress musical performance

    PubMed Central

    Williamon, Aaron; Aufegger, Lisa; Wasley, David; Looney, David; Mandic, Danilo P.

    2013-01-01

    For musicians, performing in front of an audience can cause considerable apprehension; indeed, performance anxiety is felt throughout the profession, with wide-ranging symptoms arising irrespective of age, skill level and amount of practice. A key indicator of stress is frequency-specific fluctuation in the dynamics of heart rate, known as heart rate variability (HRV). Recent developments in sensor technology have made possible the non-invasive measurement of physiological parameters reflecting HRV outside the laboratory, opening research avenues for real-time performer feedback to help improve stress management. However, the study of stress using standard algorithms has led to conflicting and inconsistent results. Here, we present an innovative and rigorous approach which combines: (i) a controlled and repeatable experiment in which the physiological response of an expert musician was evaluated in a low-stress performance and a high-stress recital for an audience of 400 people, (ii) a piece of music with varying physical and cognitive demands, and (iii) dynamic stress level assessment with standard and state-of-the-art HRV analysis algorithms, such as those within the domain of complexity science, which account for higher order stress signatures. We show that this offers new scope for interpreting the autonomic nervous system response to stress in real-world scenarios, with the evolution of stress levels being consistent with the difficulty of the music being played, superimposed on the stress caused by performing in front of an audience. For an emerging class of algorithms that can analyse HRV independent of absolute data scaling, it is shown that complexity science provides a more accurate assessment of average stress levels, thus providing greater insight into the degree of physiological change experienced by musicians when performing in public. PMID:24068177

  2. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
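    The BleND source is not included in this record, so the following is only a schematic sketch of the general idea behind "optimal path finding based on dynamic programming" for contour refinement: treat the nucleus boundary in polar coordinates as a minimum-cost path through a cost matrix (rows: angular positions, columns: candidate radii), with a smoothness constraint on radius jumps between neighbouring angles. The function name and cost model are hypothetical:

    ```python
    def refine_contour(cost, max_step=1):
        """Pick one radius index per angular position so that the total cost is
        minimal and adjacent radii differ by at most max_step.
        Returns (path, total_cost)."""
        n_angles, n_radii = len(cost), len(cost[0])
        INF = float("inf")
        dp = [row[:] for row in cost]            # dp[i][j]: best cost ending at (i, j)
        back = [[0] * n_radii for _ in range(n_angles)]
        for i in range(1, n_angles):
            for j in range(n_radii):
                best, arg = INF, j
                for k in range(max(0, j - max_step), min(n_radii, j + max_step + 1)):
                    if dp[i - 1][k] < best:
                        best, arg = dp[i - 1][k], k
                dp[i][j] = cost[i][j] + best
                back[i][j] = arg
        # Backtrack from the cheapest final radius.
        j = min(range(n_radii), key=lambda c: dp[-1][c])
        path = [j]
        for i in range(n_angles - 1, 0, -1):
            j = back[i][j]
            path.append(j)
        return path[::-1], min(dp[-1])
    ```

    In a real pipeline the cost matrix would come from image gradients around each initial contour point; here it is simply any penalty grid.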

  3. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  4. [Determination of glycyrrhizinic acid in biotransformation system by reversed-phase high performance liquid chromatography].

    PubMed

    Li, Hui; Lu, Dingqiang; Liu, Weimin

    2004-05-01

    A method for determining glycyrrhizinic acid in the biotransformation system by reversed-phase high performance liquid chromatography (RP-HPLC) was developed. The HPLC conditions were as follows: Hypersil C18 column (4.6 mm i.d. x 250 mm, 5 microm) with a mixture of methanol-water-acetic acid (70:30:1, v/v) as the mobile phase; flow rate at 1.0 mL/min; and UV detection at 254 nm. The linear range of glycyrrhizinic acid was 0.2-20 microg. The recoveries were 98%-103% with relative standard deviations between 0.16% and 1.58% (n = 3). The method is simple, rapid and accurate for determining glycyrrhizinic acid.
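    As a generic illustration of the external-standard quantification underlying such an RP-HPLC assay (the function names and numbers below are hypothetical, not taken from the paper), the calibration line, the back-calculated amount, and the spike recovery can be computed as:

    ```python
    def fit_calibration(amounts, areas):
        """Ordinary least-squares line (area = slope * amount + intercept)
        for an external-standard calibration curve."""
        n = len(amounts)
        mx = sum(amounts) / n
        my = sum(areas) / n
        sxx = sum((x - mx) ** 2 for x in amounts)
        sxy = sum((x - mx) * (y - my) for x, y in zip(amounts, areas))
        slope = sxy / sxx
        intercept = my - slope * mx
        return slope, intercept

    def quantify(area, slope, intercept):
        """Invert the calibration line to get the injected amount."""
        return (area - intercept) / slope

    def recovery_percent(measured, spiked):
        """Spike recovery; the paper reports 98%-103% for glycyrrhizinic acid."""
        return 100.0 * measured / spiked
    ```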

  5. Directional RNA-seq reveals highly complex condition-dependent transcriptomes in E. coli K12 through accurate full-length transcripts assembling

    PubMed Central

    2013-01-01

    Background Although prokaryotic gene transcription has been studied over decades, many aspects of the process remain poorly understood. Particularly, recent studies have revealed that transcriptomes in many prokaryotes are far more complex than previously thought. Genes in an operon are often alternatively and dynamically transcribed under different conditions, and a large portion of genes and intergenic regions have antisense RNA (asRNA) and non-coding RNA (ncRNA) transcripts, respectively. Ironically, similar studies have not been conducted in the model bacterium E. coli K12, thus it is unknown whether or not the bacterium possesses similarly complex transcriptomes. Furthermore, although RNA-seq has become the major method for analyzing the complexity of prokaryotic transcriptomes, it is still a challenging task to accurately assemble full-length transcripts from short RNA-seq reads. Results To fill these gaps, we have profiled the transcriptomes of E. coli K12 under different culture conditions and growth phases using a highly specific directional RNA-seq technique that can capture various types of transcripts in the bacterial cells, combined with a highly accurate and robust algorithm and tool, TruHMM (http://bioinfolab.uncc.edu/TruHmm_package/), for assembling full-length transcripts. We found that 46.9 ~ 63.4% of expressed operons were utilized in their putative alternative forms, 72.23 ~ 89.54% of genes had putative asRNA transcripts and 51.37 ~ 72.74% of intergenic regions had putative ncRNA transcripts under different culture conditions and growth phases. Conclusions As has been demonstrated in many other prokaryotes, E. coli K12 also has highly complex and dynamic transcriptomes under different culture conditions and growth phases. Such complex and dynamic transcriptomes might play important roles in the physiology of the bacterium. TruHMM is a highly accurate and robust algorithm for assembling full-length transcripts in prokaryotes using directional RNA

  6. High performance TWT development for the microwave power module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whaley, D.R.; Armstrong, C.M.; Groshart, G.

    1996-12-31

    Northrop Grumman's ongoing development of microwave power modules (MPM) provides microwave power at various power levels, frequencies, and bandwidths for a variety of applications. Present-day requirements for the vacuum power booster traveling wave tubes of the microwave power module are becoming increasingly demanding, necessitating further enhancement of tube performance. The MPM development program at Northrop Grumman is designed specifically to meet this need by construction and test of a series of new tubes aimed at verifying computation and reaching high-efficiency design goals. Tubes under test incorporate several different helix designs, as well as varying electron gun and magnetic confinement configurations. Current efforts also include further development of state-of-the-art TWT modeling and computational methods at Northrop Grumman, incorporating new, more accurate models into existing design tools and developing new tools to be used in all aspects of traveling wave tube design. The current status of the Northrop Grumman MPM TWT development program will be presented.

  7. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria as geometry, texture, accuracy, resolution, recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with really better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. 
Two case studies with merged photogrammetric and TLS data are finally presented: - The Gallo-Roman Theatre of Mandeure (France); - The

  8. INL High Performance Building Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennifer D. Morton

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental

  9. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    Facility | NREL. High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system

  10. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  11. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The high performance system has a service oriented architecture and can be deployed to a cluster of computers that may be configured with high end computing power. The high performance is gained, first, by parallelizing image processing algorithms using high performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, by distributing processing tasks to a cluster of computing nodes. While achieving performance up to thirty times (and even more) faster than traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCP) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production, and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods and application

  12. Determination of Betaine in Lycium Barbarum L. by High Performance Capillary Electrophoresis

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Wang, Chunyan; Peng, Xuewei

    2017-12-01

    This paper presents the determination of betaine content in Lycium barbarum L. by a high performance capillary electrophoresis (HPCE) method. A borax solution was chosen as the buffer solution, at a concentration of 40 mmol/L, a constant voltage of 20 kV, an injection pressure time of 10 s, and a temperature of 20°C. Linearity was maintained in the concentration range of 0.0113∼1.45 mg of betaine with a correlation coefficient of 0.9. The recovery was in the range of 97.95%∼126% (n=4). The sample content of betaine was 29.3 mg/g with an RSD of 6.4% (n=6). This method is specific, simple, rapid and accurate, and is suitable for the detection of betaine content in Lycium barbarum L.

  13. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.

  14. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric

    2006-01-01

    With the ever-increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort, Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  15. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  16. Probabilistic performance-based design for high performance control systems

    NASA Astrophysics Data System (ADS)

    Micheli, Laura; Cao, Liang; Gong, Yongqiang; Cancelli, Alessandro; Laflamme, Simon; Alipour, Alice

    2017-04-01

    High performance control systems (HPCS) are advanced damping systems capable of high damping performance over a wide frequency bandwidth, ideal for multi-hazard mitigation. They include active, semi-active, and hybrid damping systems. However, HPCS are more expensive than typical passive mitigation systems, rely on power and hardware (e.g., sensors, actuators) to operate, and require maintenance. In this paper, a life cycle cost analysis (LCA) approach is proposed to estimate the economic benefit of these systems over the entire life of the structure. The novelty resides in integrating life cycle cost analysis into performance-based design (PBD) tailored to multi-level wind hazards. This yields a probabilistic performance-based design approach for HPCS. Numerical simulations are conducted on a building located in Boston, MA. LCAs are conducted for passive control systems and HPCS, and the concept of controller robustness is demonstrated. Results highlight the promise of the proposed performance-based design procedure.
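    The abstract does not give the cost model, but the basic life cycle cost idea can be sketched as initial cost plus discounted maintenance plus discounted expected hazard losses. Everything below (function name, parameters, numbers) is an illustrative assumption, not the authors' formulation:

    ```python
    def expected_life_cycle_cost(initial_cost, annual_maintenance,
                                 hazard_losses, years, discount_rate):
        """Expected life cycle cost of a mitigation system:
        initial cost + sum over years of discounted (maintenance + expected
        annual hazard loss), where hazard_losses is a list of
        (annual exceedance probability, consequence cost) pairs."""
        expected_annual_loss = sum(p * loss for p, loss in hazard_losses)
        total = initial_cost
        for year in range(1, years + 1):
            discount = (1.0 + discount_rate) ** -year
            total += (annual_maintenance + expected_annual_loss) * discount
        return total
    ```

    Comparing this quantity for a passive system versus an HPCS (higher initial and maintenance cost, smaller expected losses) is the kind of trade-off the probabilistic PBD procedure formalizes.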

  17. [Analysis of the preservative chlorphenesin in cosmetics by high performance liquid chromatography].

    PubMed

    Zhu, Huijuan; Zhang, Weiqiang; Yang, Yanwei; Zhu, Ying

    2014-01-01

    An analytical method was developed for the determination of the preservative chlorphenesin in cosmetics by high performance liquid chromatography (HPLC). A C18 column (250 mm x 4.6 mm, 5 microm) and a photodiode array detector were used. The mobile phase was methanol-water (55:45, v/v) with a flow rate of 1.0 mL/min. The detection wavelength was set at 280 nm and the column temperature was 25 degrees C. The limit of detection was 3 ng. A good linear relationship was obtained between the peak area and the mass concentration of chlorphenesin in the range of 1 - 500 mg/L, with a correlation coefficient of 1.000 0. The recoveries of chlorphenesin at different spiked levels were 99.0% - 103% with relative standard deviations (RSD) < or = 1.2%. Interference and sample tests were also carried out, and validation experiments were performed by three laboratories. The method is simple, sensitive, accurate, stable and suitable for the determination of chlorphenesin in cosmetics.

  18. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  19. Individual Differences in Accurately Judging Personality From Text.

    PubMed

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.

  20. Accurate continuous geographic assignment from low- to high-density SNP data.

    PubMed

    Guillot, Gilles; Jónsson, Hákon; Hinge, Antoine; Manchih, Nabil; Orlando, Ludovic

    2016-04-01

    Large-scale genotype datasets can help track the dispersal patterns of epidemiological outbreaks and predict the geographic origins of individuals. Such genetically-based geographic assignments also show a range of possible applications in forensics for profiling both victims and criminals, and in wildlife management, where poaching hotspot areas can be located. They, however, require fast and accurate statistical methods to handle the growing amount of genetic information made available from genotype arrays and next-generation sequencing technologies. We introduce a novel statistical method for geopositioning individuals of unknown origin from genotypes. Our method is based on a geostatistical model trained with a dataset of georeferenced genotypes. Statistical inference under this model can be implemented within the theoretical framework of Integrated Nested Laplace Approximation, which represents one of the major recent breakthroughs in statistics, as it does not require Monte Carlo simulations. We compare the performance of our method and an alternative method for geospatial inference, SPA, in a simulation framework. We highlight the accuracy and limits of continuous spatial assignment methods at various scales by analyzing genotype datasets from a diversity of species, including Florida Scrub-jay birds Aphelocoma coerulescens, Arabidopsis thaliana and humans, representing 41-197,146 SNPs. Our method appears to be best suited for the analysis of medium-sized datasets (a few tens of thousands of loci), such as reduced-representation sequencing data that are becoming increasingly available in ecology. http://www2.imm.dtu.dk/∼gigu/Spasiba/ gilles.b.guillot@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. High performance direct absorption spectroscopy of pure and binary mixture hydrocarbon gases in the 6-11 μm range

    NASA Astrophysics Data System (ADS)

    Heinrich, Robert; Popescu, Alexandru; Hangauer, Andreas; Strzoda, Rainer; Höfling, Sven

    2017-08-01

    The availability of accurate and fast hydrocarbon analyzers, capable of real-time operation while enabling feedback loops, would lead to a paradigm change in the petrochemical industry. Currently, gas chromatographs are the primary means of measuring the composition of hydrocarbon process streams; due to sophisticated gas sampling, these analyzers are limited in response time. As hydrocarbons absorb in the mid-infrared spectral range, the employment of fast spectroscopic systems is highly attractive due to significantly reduced maintenance costs and the capability to set up real-time process control. New developments in mid-infrared laser systems pave the way for the development of high-performance analyzers, provided that accurate spectral models are available for multi-species detection. In order to overcome current deficiencies in the availability of spectroscopic data, we developed a laser-based setup covering the 6-11 μm wavelength range. The presented system is designated as a laboratory reference system. Its spectral accuracy is at least 6.6 × 10^-3 cm^-1, with a precision of 3 × 10^-3 cm^-1. With a "per point" minimum detectable absorption of 1.3 × 10^-3 cm^-1 Hz^-1/2, it allows us to perform systematic measurements of hydrocarbon spectra of the first seven alkanes under conditions which are not tabulated in spectroscopic databases. We exemplify the system performance with measured direct absorption spectra of methane, propane, iso-butane, and a mixture of methane and propane.

  2. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  3. New Synthesis Of High-Performance Bismaleimides

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H.; Lowther, Sharon; Cannon, Michelle; Smith, Janice; Whitely, Karen

    1991-01-01

    New general synthesis of tough and easy-to-process high-performance bismaleimides (BMIs) developed. Involves reaction of acetylene-terminated compounds with BMIs or biscitraconimides. Offers matrix resins and adhesives having the combined advantages of the toughness characteristic of thermoplastics and the easy processability characteristic of thermosetting materials. Scheme has potential for providing high-performance matrix resins that survive well at high temperatures and absorb little moisture.

  4. High-performance coatings for micromechanical mirrors.

    PubMed

    Gatto, Alexandre; Yang, Minghong; Kaiser, Norbert; Heber, Jörg; Schmidt, Jan Uwe; Sandner, Thilo; Schenk, Harald; Lakner, Hubert

    2006-03-01

    High-performance coatings for micromechanical mirrors were developed. The highly reflective metal systems can be integrated into MOEMS technology, such as spatial light modulators and microscanning mirrors, from the near-infrared down to the vacuum-ultraviolet spectral region. The reported metal designs permit high optical performance to be merged with suitable mechanical properties and complementary metal-oxide semiconductor compatibility.

  5. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  6. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
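
    The conjugate-gradient iteration at the heart of HPCG can be illustrated in a few lines. The following is a minimal sketch of unpreconditioned CG (HPCG itself runs a preconditioned variant on a large sparse stencil problem); the small dense test system here is hypothetical.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal unpreconditioned CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # matrix-vector product (the SpMV kernel in HPCG)
        alpha = rs_old / (p @ Ap)       # step length (dot-product kernel)
        x += alpha * p                  # vector update (AXPY kernel)
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new conjugate search direction
        rs_old = rs_new
    return x

# Small SPD test system (illustrative only)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

    The kernels exercised here, the matrix-vector product, the dot products, and the vector updates, are the memory-bandwidth-bound data-access patterns that HPCG is designed to measure, in contrast to the dense-matrix compute patterns of HPL.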

  9. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  10. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy-making purposes. Therefore, there is a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a four-year period in our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis raised certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets for the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r²=89.80%), dissolved oxygen (r²=88.53%), conductivity (r²=88.18%), ammonium (r²=87.2%) and pH (r²=86.35%), while total phosphorus (r²=70.55%) and nitrates (r²=55.50%) resulted in lower correlation rates.
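
    A single empirical model of the kind evaluated in this study reduces to an ordinary least-squares fit and its coefficient of determination. A minimal sketch with hypothetical band-ratio and chl-a values (the numbers are illustrative, not from the study):

```python
import numpy as np

# Hypothetical reflectance band-ratio predictor vs. in-situ chl-a concentrations
band_ratio = np.array([0.52, 0.61, 0.70, 0.83, 0.95, 1.10])
chl_a      = np.array([3.1, 4.0, 4.8, 6.2, 7.0, 8.4])   # mg/m^3 (made up)

# Ordinary least-squares fit: chl_a ~ slope * band_ratio + intercept
slope, intercept = np.polyfit(band_ratio, chl_a, 1)
predicted = slope * band_ratio + intercept

# Coefficient of determination r^2, as reported per indicator in the study
ss_res = np.sum((chl_a - predicted) ** 2)
ss_tot = np.sum((chl_a - chl_a.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

    Per-sensor, per-depth models, as the study suggests, would simply repeat such a fit on each data subset.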

  11. Brief assessment of food insecurity accurately identifies high-risk US adults.

    PubMed

    Gundersen, Craig; Engelhard, Emily E; Crumbaugh, Amy S; Seligman, Hilary K

    2017-06-01

    To facilitate the introduction of food insecurity screening into clinical settings, we examined the test performance of two-item screening questions for food insecurity against the US Department of Agriculture's Core Food Security Module. We examined sensitivity, specificity and accuracy of various two-item combinations of questions assessing food insecurity in the general population and high-risk population subgroups. 2013 Current Population Survey December Supplement, a population-based US survey. All survey participants from the general population and high-risk subgroups. The test characteristics of multiple two-item combinations of questions assessing food insecurity had adequate sensitivity (>97 %) and specificity (>70 %) for widespread adoption as clinical screening measures. We recommend two specific items for clinical screening programmes based on their widespread current use and high sensitivity for detecting food insecurity. These items query how often the household 'worried whether food would run out before we got money to buy more' and how often 'the food that we bought just didn't last and we didn't have money to get more'. The recommended items have sensitivity across high-risk population subgroups of ≥97 % and a specificity of ≥74 % for food insecurity.
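
    The reported test characteristics follow from the standard 2×2 screening table comparing the two-item screen against the full Core Food Security Module. A sketch with hypothetical counts chosen to land near the sensitivity and specificity thresholds quoted above:

```python
def screen_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # fraction of food-insecure correctly flagged
    specificity = tn / (tn + fp)   # fraction of food-secure correctly cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 200 food-insecure and 800 food-secure respondents
sens, spec, acc = screen_performance(tp=195, fp=200, fn=5, tn=600)
```

    For clinical screening, high sensitivity is the priority: missing a food-insecure household (a false negative) is costlier than a follow-up question for a false positive.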

  12. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats

    PubMed Central

    Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
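
    The quantitative core of the Paralogue Ratio Test is the ratio of co-amplified test and reference products. A minimal sketch, assuming a diploid (two-copy) reference locus and equal amplification efficiency at both loci; the peak areas are hypothetical:

```python
def copy_number(test_signal, reference_signal, reference_copies=2):
    """Estimate test-locus copy number from the paralogue product ratio,
    assuming equal amplification efficiency at both co-amplified loci."""
    return reference_copies * test_signal / reference_signal

# Hypothetical capillary-electrophoresis peak areas for DEFB4 vs. a reference locus
raw_estimate = copy_number(test_signal=30000.0, reference_signal=15000.0)
genotype_call = round(raw_estimate)   # integer copy-number call
```

    In practice the raw ratio is calibrated against samples of known copy number before rounding to an integer call.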

  13. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  14. Dinosaurs can fly -- High performance refining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  15. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system.

  16. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
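
    One statistic the article's terminology centres on, mass measurement error in parts per million, is worth making concrete. A sketch using the protonated caffeine ion as the example species; the measured value is hypothetical:

```python
def mass_error_ppm(measured, theoretical):
    """Relative mass measurement error in parts per million (ppm)."""
    return (measured - theoretical) / theoretical * 1e6

# Protonated caffeine [M+H]+: theoretical monoisotopic m/z 195.0877
theoretical = 195.0877
measured = 195.0881          # hypothetical instrument reading
error = mass_error_ppm(measured, theoretical)
```

    An error of a few ppm at this mass corresponds to well under a millidalton, which is why sub-5-ppm accuracy is routinely demanded for elemental composition confirmation.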

  17. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing renewable energy and energy efficiency research.

  18. A validated method for measurement of serum total, serum free, and salivary cortisol, using high-performance liquid chromatography coupled with high-resolution ESI-TOF mass spectrometry.

    PubMed

    Montskó, Gergely; Tarjányi, Zita; Mezősi, Emese; Kovács, Gábor L

    2014-04-01

    Blood cortisol level is routinely analysed in laboratory medicine, but the immunoassays in widespread use have the disadvantage of cross-reactivity with some commonly used steroid drugs. Mass spectrometry has become a method of increasing importance for cortisol estimation. However, current methods do not offer the option of accurate mass identification. Our objective was to develop a mass spectrometry method to analyse salivary, serum total, and serum free cortisol via accurate mass identification. The analysis was performed on a Bruker micrOTOF high-resolution mass spectrometer. Sample preparation involved protein precipitation, serum ultrafiltration, and solid-phase extraction. Limit of quantification was 12.5 nmol L⁻¹ for total cortisol, 440 pmol L⁻¹ for serum ultrafiltrate, and 600 pmol L⁻¹ for saliva. Average intra-assay variation was 4.7%, and inter-assay variation was 6.6%. Mass accuracy was <2.5 ppm. Serum total cortisol levels were in the range 35.6-1088 nmol L⁻¹, and serum free cortisol levels were in the range 0.5-12.4 nmol L⁻¹. Salivary cortisol levels were in the range 0.7-10.4 nmol L⁻¹. Mass accuracy was equal to or below 2.5 ppm, resulting in a mass error less than 1 mDa and thus providing high specificity. We did not observe any interference with routinely used steroidal drugs. The method is capable of specific cortisol quantification in different matrices on the basis of accurate mass identification.

  19. Anthropometric, biomechanical, and isokinetic strength predictors of ball release speed in high-performance cricket fast bowlers.

    PubMed

    Wormgoor, Shohn; Harden, Lois; Mckinon, Warrick

    2010-07-01

    Fast bowling is fundamental to all forms of cricket. The purpose of this study was to identify parameters that contribute to high ball release speeds in cricket fast bowlers. We assessed anthropometric dimensions, concentric and eccentric isokinetic strength of selected knee and shoulder muscle groups, and specific aspects of technique from a single delivery in 28 high-performance fast bowlers (age 22.0 ± 3.0 years, ball release speed 34.0 ± 1.3 m·s⁻¹). Six 50-Hz cameras and the Ariel Performance Analysis System software were used to analyse the fast and accurate deliveries. Using Pearson's correlation, parameters that showed significant associations with ball release speed were identified. The findings suggest that greater front leg knee extension at ball release (r=0.52), shoulder alignment in the transverse plane rotated further away from the batsman at front foot strike (r=0.47), greater ankle height during the delivery stride (r=0.44), and greater shoulder extension strength (r=0.39) contribute significantly to higher ball release speeds. The predictor variables did not permit their incorporation into a multivariate model, which is known to exist in less accomplished bowlers, suggesting that factors that determine ball release speed found in other groups may not apply to high-performance fast bowlers.

  20. Methodology for the preliminary design of high performance schools in hot and humid climates

    NASA Astrophysics Data System (ADS)

    Im, Piljae

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to accurately evaluate energy efficient measures for K-5 schools, which would contribute to the accelerated dissemination of energy efficient design. For the development of the toolkit, first, a survey was performed to identify high performance measures available today being implemented in new K-5 school buildings. Then an existing case-study school building in a hot and humid climate was selected and analyzed to understand the energy use pattern in a school building and to be used in developing a calibrated simulation. Based on the information from the previous step, an as-built and calibrated simulation was then developed. To accomplish this, five calibration steps were performed to match the simulation results with the measured energy use. The five steps include: (1) Using an actual 2006 weather file with measured solar radiation, (2) Modifying lighting & equipment schedules using ASHRAE's RP-1093 methods, (3) Using actual equipment performance curves (i.e., scroll chiller), (4) Using Winkelmann's method for the underground floor heat transfer, and (5) Modifying the HVAC and room setpoint temperatures based on the measured field data. Next, the calibrated simulation of the case-study K-5 school was compared to an ASHRAE Standard 90.1-1999 code-compliant school. In the next step, the energy savings potentials from the application of several high performance measures to an equivalent ASHRAE Standard 90.1-1999 code-compliant school were evaluated. The high performance measures applied included the recommendations from the ASHRAE Advanced Energy Design Guides (AEDG) for K-12 and other high performance measures from the literature review, as well as a daylighting strategy and solar PV and thermal systems. The results show that the net

  1. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
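
    The scheme itself is compact: a predictor using one-sided fourth-order forward differences followed by a corrector using the mirrored backward differences. A sketch for linear advection on a periodic domain, following the Gottlieb-Turkel form of the 2-4 scheme (the grid size and wave are illustrative choices, not from the paper):

```python
import numpy as np

def maccormack_24_step(u, cfl):
    """One Gottlieb-Turkel 2-4 MacCormack step for u_t + c*u_x = 0, periodic BCs."""
    # Predictor: one-sided forward difference (-u[i+2] + 8u[i+1] - 7u[i]) / 6
    up = u - (cfl / 6.0) * (-np.roll(u, -2) + 8.0 * np.roll(u, -1) - 7.0 * u)
    # Corrector: mirrored backward difference on the predicted field
    return 0.5 * (u + up
                  - (cfl / 6.0) * (7.0 * up - 8.0 * np.roll(up, 1) + np.roll(up, 2)))

# Advect a smooth sine wave once around a unit periodic domain (c = 1)
n, cfl = 200, 0.25                     # CFL number of 0.25, as used in the paper
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)
steps = int(round(n / cfl))            # time for one full traversal of the domain
for _ in range(steps):
    u = maccormack_24_step(u, cfl)
```

    After one traversal, the smooth wave should return essentially unchanged; a discontinuous initial profile would instead exhibit the Gibbs oscillations that motivated the artificial viscosity study above.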

  2. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
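
    The predicted spectral pattern is the channelled spectrum of an unbalanced interferometer, I(λ) ∝ 1 + cos(2πd/λ) for path difference d. The sketch below recovers a hypothetical constant wavelength-assignment error by matching this model to a simulated measurement; the path difference, pixel grid, and 0.3 nm offset are all assumed values:

```python
import numpy as np

def fringe_pattern(wavelengths_nm, path_diff_nm):
    """Ideal channelled spectrum of white light through an unbalanced Michelson."""
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * path_diff_nm / wavelengths_nm))

# Hypothetical setup: 20 um path imbalance, 1024 CCD pixels spanning 500-600 nm
d = 20000.0                                   # path difference in nm (assumed)
true_wl = np.linspace(500.0, 600.0, 1024)     # actual wavelength at each pixel
measured = fringe_pattern(true_wl, d)         # what the CCD records

# Suppose the factory calibration is off by a constant 0.3 nm (hypothetical)
nominal_wl = true_wl + 0.3

# Recover the offset by least-squares matching of model to measurement
offsets = np.linspace(-1.0, 1.0, 2001)
residuals = [np.sum((fringe_pattern(nominal_wl - o, d) - measured) ** 2)
             for o in offsets]
best_offset = offsets[int(np.argmin(residuals))]
```

    Because the fringe pattern is chirped in wavelength, only the correct offset aligns the model across the whole spectrum, which is what gives the method its sub-pixel sensitivity.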

  3. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  4. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    NASA Technical Reports Server (NTRS)

    Wallace, D. B.

    1980-01-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  5. Striving for Excellence Sometimes Hinders High Achievers: Performance-Approach Goals Deplete Arithmetical Performance in Students with High Working Memory Capacity

    PubMed Central

    Crouzevialle, Marie; Smeding, Annique; Butera, Fabrizio

    2015-01-01

    We tested whether the goal to attain normative superiority over other students, referred to as performance-approach goals, is particularly distractive for high-Working Memory Capacity (WMC) students—that is, those who are used to being high achievers. Indeed, WMC is positively related to high-order cognitive performance and academic success, a record of success that confers benefits on high-WMC as compared to low-WMC students. We tested whether such benefits may turn out to be a burden under performance-approach goal pursuit. Indeed, for high achievers, aiming to rise above others may represent an opportunity to reaffirm their positive status—a stake susceptible to trigger disruptive outcome concerns that interfere with task processing. Results revealed that with performance-approach goals—as compared to goals with no emphasis on social comparison—the higher the students’ WMC, the lower their performance at a complex arithmetic task (Experiment 1). Crucially, this pattern appeared to be driven by uncertainty regarding the chances to outclass others (Experiment 2). Moreover, an accessibility measure suggested the mediational role played by status-related concerns in the observed disruption of performance. We discuss why high-stake situations can paradoxically lead high-achievers to sub-optimally perform when high-order cognitive performance is at play. PMID:26407097

  6. Accurate temperature measurement by temperature field analysis in diamond anvil cell for thermal transport study of matter under high pressures

    NASA Astrophysics Data System (ADS)

    Yue, Donghui; Ji, Tingting; Qin, Tianru; Wang, Jia; Liu, Cailong; Jiao, Hui; Zhao, Lin; Han, Yonghao; Gao, Chunxiao

    2018-02-01

    The study of the thermal transport properties of matter under high pressure is important but hard to carry out in a diamond anvil cell (DAC) because accurate measurement of the temperature gradient within the sample of a DAC is very difficult. In most cases, the sample temperature can be read accurately from thermocouples that are directly attached to the lateral edges of the diamond anvils, because both the sample and the diamond anvils can be uniformly heated up to a given temperature. But for thermal transport property studies in a DAC, an artificial temperature distribution along the compression axis is a prerequisite. Obviously, the temperature of the top or bottom surface of the sample cannot be substituted by that of the diamond anvils, although diamond anvils can be considered a good medium for heat conduction. With temperature field simulation by finite element analysis, it is found that large measurement errors can occur, which are fatal to the correct analysis of thermal transport properties of materials. Thus, a method combining the four-thermocouple configuration with temperature field analysis is presented for accurate temperature distribution measurement in a DAC, based on the single-function relationship between the temperature distribution and the sample thermal conductivity.
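
    Why the anvil-edge reading cannot stand in for the sample surface temperature is clear from a series-resistance estimate of steady one-dimensional conduction through the anvil/sample/anvil stack. A sketch with assumed layer thicknesses and conductivities (illustrative values, not from the paper):

```python
def series_interface_temperatures(t_hot, t_cold, layers):
    """Steady 1-D conduction through stacked layers (thickness in m,
    conductivity in W/m/K); returns interface temperatures, hot to cold."""
    resistances = [thickness / k for thickness, k in layers]   # per unit area, K*m^2/W
    q = (t_hot - t_cold) / sum(resistances)                    # heat flux, W/m^2
    temps = [t_hot]
    for r in resistances:
        temps.append(temps[-1] - q * r)                        # drop across each layer
    return temps

# Hypothetical DAC stack: diamond anvil / sample / diamond anvil
layers = [(1.5e-3, 2000.0),   # diamond: 1.5 mm, ~2000 W/m/K
          (30e-6, 5.0),       # sample: 30 um, 5 W/m/K (assumed)
          (1.5e-3, 2000.0)]
temps = series_interface_temperatures(500.0, 300.0, layers)
sample_dT = temps[1] - temps[2]   # temperature drop across the sample itself
```

    Even with diamond's very high conductivity, a finite fraction of the imposed temperature difference drops across the anvils, so anvil-mounted thermocouples overstate the hot-face and understate the cold-face sample temperatures; hence the four-thermocouple configuration combined with field analysis.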

  7. Accurate van der Waals force field for gas adsorption in porous materials.

    PubMed

    Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao

    2017-09-05

    An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore van der Waals interactions between gas molecules and porous materials. The parameters of the accurate van der Waals force field were determined by QM calculations. To validate the force field, the prediction results from the VDW FF were compared with standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with the experimental measurements. This force field can be applied to the prediction of the gas density (H2, CO2, C2H4, CH4, N2, O2) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.

  8. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  9. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-01-01

    Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced. This is usually noticed as significant reductions in matrix-dominated properties, such as compression and shear strength. Voids in composite materials are areas that are absent of the composite components: matrix and fibers. Accurately determining the characteristics of the voids is critical for high performance composite structures. One widely used method of performing void analysis on a composite structure sample is acquiring optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieving the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas within the micrographs/images is challenging because the gray-scale values of the void areas are close to those of the matrix, leading to the need to perform the segmentation manually based on the histogram of the micrographs/images. The use of an algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved to overcome the difficulty of suitably differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
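
    The principle of deciding each pixel by competing class memberships rather than a single crisp threshold can be sketched as follows. This is an illustration of fuzzy-membership segmentation in general, not NASA's actual FR algorithm; the gray-level centers, widths, and the synthetic patch are all assumed:

```python
import numpy as np

def fuzzy_membership(gray, center, width):
    """Triangular membership: 1 at the class center, falling to 0 at +/- width."""
    return np.clip(1.0 - np.abs(gray - center) / width, 0.0, 1.0)

def segment_voids(image, void_center=60.0, matrix_center=90.0, width=40.0):
    """Label pixels as void where void-class membership exceeds matrix-class
    membership; overlapping memberships accommodate the close gray levels."""
    m_void = fuzzy_membership(image, void_center, width)
    m_matrix = fuzzy_membership(image, matrix_center, width)
    return m_void > m_matrix

# Synthetic micrograph patch: a dark-ish void (gray ~55) inside matrix (gray ~88)
rng = np.random.default_rng(0)
patch = np.full((64, 64), 88.0) + rng.normal(0.0, 3.0, (64, 64))
patch[20:30, 20:30] = 55.0 + rng.normal(0.0, 3.0, (10, 10))

void_mask = segment_voids(patch)
void_fraction = void_mask.mean()
```

    Because the memberships overlap, the decision boundary adapts smoothly between the two close gray-level populations instead of requiring a manually tuned histogram cut.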

  10. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
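
The idea of learning a correction on top of a cheap method can be sketched far more simply than the kernel-ridge machinery the ML-SQC work actually uses. Below is a toy delta-learning example: a closed-form least-squares fit of the cheap method's systematic error against one molecular descriptor. All numbers and the single-descriptor setup are hypothetical, chosen only to illustrate the workflow.

```python
# Simplified "delta-learning" sketch: learn a correction to a cheap method's
# predictions from a single molecular descriptor via closed-form least squares
# (the actual ML-SQC work tunes OM2 parameters with far richer ML models).

def fit_linear(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical training data: descriptor value, cheap-method energy, reference energy.
descriptor = [1.0, 2.0, 3.0, 4.0]
cheap      = [10.0, 19.0, 31.0, 39.0]
reference  = [12.0, 23.0, 36.0, 46.0]

# Learn the systematic error of the cheap method as a function of the descriptor.
errors = [r - c for r, c in zip(reference, cheap)]
a, b = fit_linear(descriptor, errors)

def corrected(x, cheap_value):
    """Cheap prediction plus the learned correction."""
    return cheap_value + a * x + b

# Correct a new prediction (descriptor 5.0, cheap estimate 50.0).
print(corrected(5.0, 50.0))
```

The key property this toy shares with ML-SQC is that the baseline method carries the physics, so the ML component only has to model the (much smoother) residual error.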

  11. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  12. High-performance IR detector modules

    NASA Astrophysics Data System (ADS)

    Wendler, Joachim; Cabanski, Wolfgang; Rühlich, Ingo; Ziegler, Johann

    2004-02-01

    The 3rd generation of infrared (IR) detection modules is expected to provide higher video resolution, advanced functions like multi-band or multi-color capability, higher frame rates, and better thermal resolution. AIM has developed staring and linear high-performance focal plane arrays (FPAs) integrated into detector/dewar cooler assemblies (IDCAs). Linear FPAs support high-resolution formats such as 1920 x 1152 (HDTV), 1280 x 960, or 1536 x 1152. The standard format for staring FPAs is 640 x 512. In this configuration, QWIP devices sensitive in the 8-10 µm band as well as MCT devices sensitive in the 3.4-5.0 µm band are available. A 256 x 256 high-speed detection module allows a full frame rate >800 Hz. The usability of long-wavelength devices in high-performance FLIR systems depends not only on the classical electro-optical performance parameters such as NEDT, detectivity, and response homogeneity, but mainly on the stability of the correction coefficients used for image correction. The FPAs are available in suited integrated detector/dewar cooler assemblies. The linear cooling engines are designed for maximum stability of the focal plane temperature, low operating temperatures down to 60 K, and high MTTF lifetimes of 6000 h and above even under high ambient temperature conditions. The IDCAs are equipped with AIM standard or customer-specific command and control electronics (CCE) providing a well-defined interface to the system electronics. Video output signals are provided as 14-bit digital data at rates up to 80 MHz for the high-speed devices.

  13. Teaching high-performance skills using above-real-time training

    NASA Technical Reports Server (NTRS)

    Guckenberger, Dutch; Uliano, Kevin C.; Lane, Norman E.

    1993-01-01

    The above real-time training (ARTT) concept is an approach to teaching high-performance skills. ARTT refers to a training paradigm that places the operator in a simulated environment that functions at faster than normal time. It represents a departure from the intuitive, but not often supported, feeling that the best practice is determined by the training environment with the highest fidelity. This approach is hypothesized to provide greater 'transfer value' per simulation trial, by incorporating training techniques and instructional features into the simulator. Two related experiments are discussed. In the first, 25 naive male subjects performed three tank gunnery tasks on a simulator under varying levels of time acceleration (i.e., 1.0x, 1.6x, 2.0x, sequential, and mixed). They were then transferred to a standard (1.0x) condition for testing. Every accelerated condition or combination of conditions produced better training and transfer than the standard condition. Most effective was the presentation of trials at 1.0x, 1.6x, and 2.0x in a random order during training. Overall, the best ARTT group scored about 50 percent higher and trained in 25 percent less time compared to the real-time control group. In the second experiment, 24 mission-capable F-16 pilots performed three tasks on a part-task F-16A flight simulator under varying levels of time compression (i.e., 1.0x, 1.5x, 2.0x, and random). All subjects were then tested in a real-time environment. The emergency procedure (EP) task results showed increased accuracy for the ARTT groups. In testing (transfer), the ARTT groups not only performed the EP more accurately, but dealt with a simultaneous enemy significantly better than a real-time control group. Although the findings on an air combat maneuvering task and stern conversion task were mixed, most measures indicated that the ARTT groups performed better and faster than a real-time control group. Other implications for ARTT are discussed along with future

  14. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin weighted-residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selected linear, quadratic, and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution, to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, yielding a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
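
Richardson extrapolation, used in the abstract above to manufacture a higher-order reference solution for isolating truncation error, combines two solutions at step sizes h and h/2 to cancel the leading O(h^p) error term. A minimal self-contained sketch, using a second-order central difference (p = 2) as the stand-in "solver":

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h, p=2):
    """Combine step sizes h and h/2 to cancel the leading O(h^p) error term:
    R = (2^p * A(h/2) - A(h)) / (2^p - 1)."""
    coarse = central_diff(f, x, h)
    fine = central_diff(f, x, h / 2)
    return (2 ** p * fine - coarse) / (2 ** p - 1)

h = 0.1
exact = math.cos(1.0)                      # d/dx sin(x) at x = 1
plain = central_diff(math.sin, 1.0, h)
extrap = richardson(math.sin, 1.0, h)
print(abs(plain - exact), abs(extrap - exact))  # extrapolated error is far smaller
```

The extrapolated value is fourth-order accurate, so comparing the base solution against it exposes the base scheme's truncation error without ever knowing the exact solution, which is precisely its role in the convergence study described above.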

  15. A highly accurate analytical solution for the surface fields of a short vertical wire antenna lying on a multilayer ground

    NASA Astrophysics Data System (ADS)

    Parise, M.

    2018-01-01

    A highly accurate analytical solution is derived for the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This decomposes the radiated field into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square-root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square-root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square-root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in time cost over standard numerical integration procedures.
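
The "rational representations according to Newton's square-root algorithm" mentioned above refers to the classical Newton iteration x_{k+1} = (x_k + s/x_k)/2, whose iterates are exact rational numbers when started from a rational seed, and which converges quadratically to sqrt(s). A short sketch using Python's exact rational arithmetic:

```python
from fractions import Fraction

def newton_sqrt_rationals(s, x0=Fraction(1), iterations=5):
    """Newton iterates x_{k+1} = (x_k + s/x_k) / 2. Each iterate is an exact
    rational approximation to sqrt(s); convergence is quadratic."""
    x = Fraction(x0)
    out = [x]
    for _ in range(iterations):
        x = (x + Fraction(s) / x) / 2
        out.append(x)
    return out

# Rational approximants to sqrt(2): 1, 3/2, 17/12, 577/408, ...
approx = newton_sqrt_rationals(2)
for r in approx:
    print(r, float(r))
```

In the paper's setting, substituting such a rational approximant for the square root turns a branch-cut integrand into a rational function of the integration variable, which is what makes the residue theorem applicable and yields closed-form series for the fields.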

  16. High performance data transfer

    NASA Astrophysics Data System (ADS)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software, which has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services across multiple cores and links. The PoCs are based on SSD flash storage managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory-to-memory over two 100 Gbps links, and 70 Gbps parallel-file-to-parallel-file with encryption over a 5000-mile 100 Gbps link.

  17. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
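
The sensitivity, specificity, PPV, and NPV figures quoted in the study above all derive from a standard 2x2 contingency table. The sketch below shows the computation; the counts are hypothetical, chosen only to roughly reproduce the reported percentages for acute cholecystitis, and are not the study's raw data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 contingency-table metrics for a diagnostic test."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | disease present)
        "specificity": tn / (tn + fp),   # P(test negative | disease absent)
        "ppv": tp / (tp + fp),           # P(disease present | test positive)
        "npv": tn / (tn + fn),           # P(disease absent | test negative)
    }

# Hypothetical counts, not the study's raw data: 50 patients with acute
# cholecystitis, 26 without, and an ultrasound test that misses many cases.
m = diagnostic_metrics(tp=27, fp=5, fn=23, tn=21)
print(m)
```

Note how a test can have decent specificity yet a poor NPV when the disease is prevalent in the tested population, which is exactly why the authors recommend combining ultrasonography with the Murphy sign and neutrophil count rather than relying on a negative scan alone.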

  18. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method for computing resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.

  19. Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks

    PubMed Central

    Khan, Komal Saifullah; Tariq, Muhammad

    2014-01-01

    Many wind energy projects report poor performance as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through a WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in practical implementation and study its accuracy by comparing the simulation results with experimental results to analyze the variation in the results obtained from both simulation model and implemented model. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and low false alarm rate when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed and performance level of implemented projects is increased above 86% of the simulated models. PMID:25421739
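
The paper's sink-node algorithm sequentially singles out faulty anemometers by comparing readings; the exact algorithm is not reproduced in this record, so the following is only a plausible sketch of that idea: iteratively flag the sensor that deviates most from the median of the remaining sensors until all residuals fall below a threshold. Sensor names, readings, and the 2 m/s threshold are all hypothetical.

```python
from statistics import median

def detect_faulty(readings, threshold=2.0):
    """Iteratively flag the sensor deviating most from the median of the
    remaining sensors, until every residual is below 'threshold' (m/s).
    'readings' maps sensor id -> wind speed."""
    active = dict(readings)
    faulty = []
    while len(active) > 2:
        m = median(active.values())
        worst_id = max(active, key=lambda k: abs(active[k] - m))
        if abs(active[worst_id] - m) < threshold:
            break
        faulty.append(worst_id)
        del active[worst_id]
    return faulty

# Hypothetical site: sensors 'a'..'e'; 'c' is stuck at zero, 'e' reads high.
readings = {"a": 7.9, "b": 8.1, "c": 0.0, "d": 8.0, "e": 14.5}
print(detect_faulty(readings))  # ['c', 'e'], flagged in order of severity
```

Using the median rather than the mean keeps the reference robust while gross outliers are still present, which is why the removal has to be iterative: after the worst sensor is dropped, the reference tightens and subtler faults become detectable.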

  20. Separation and quantitation of polyethylene glycols 400 and 3350 from human urine by high-performance liquid chromatography.

    PubMed

    Ryan, C M; Yarmush, M L; Tompkins, R G

    1992-04-01

    Polyethylene glycol 3350 (PEG 3350) is useful as an orally administered probe to measure in vivo intestinal permeability to macromolecules. Previous methods to detect polyethylene glycol (PEG) excreted in the urine have been hampered by inherent inaccuracies associated with liquid-liquid extraction and turbidimetric analysis. For accurate quantitation by previous methods, radioactive labels were required. This paper describes a method to separate and quantitate PEG 3350 and PEG 400 in human urine that is independent of radioactive labels and is accurate in clinical practice. The method uses sized regenerated cellulose membranes and mixed ion-exchange resin for sample preparation and high-performance liquid chromatography with refractive index detection for analysis. The 24-h excretion for normal individuals after an oral dose of 40 g of PEG 3350 and 5 g of PEG 400 was 0.12 +/- 0.04% of the original dose of PEG 3350 and 26.3 +/- 5.1% of the original dose of PEG 400.

  1. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
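
The "convolution-based diffusion" mentioned above treats the spread of signaling chemicals as repeated convolution of the concentration field with a small stencil, which maps naturally onto GPUs. A minimal 2D sketch of one explicit diffusion step using the 5-point Laplacian stencil (the actual framework's kernels, boundary handling, and rate constants are not specified in this record; the clamped edges and rate below are illustrative assumptions):

```python
def diffuse(grid, rate=0.1):
    """One explicit diffusion step: convolve with the 5-point Laplacian
    stencil. Edges are handled by clamping neighbour indices (no-flux)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            up    = grid[max(i - 1, 0)][j]
            down  = grid[min(i + 1, rows - 1)][j]
            left  = grid[i][max(j - 1, 0)]
            right = grid[i][min(j + 1, cols - 1)]
            lap = up + down + left + right - 4 * grid[i][j]
            new[i][j] = grid[i][j] + rate * lap
    return new

# A point source of chemical in the middle of a 3x3 patch spreads outward.
field = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
field = diffuse(field)
print(field[1][1], field[0][1])  # centre loses mass, neighbours gain it
```

Because every output cell depends only on a fixed neighbourhood of input cells, all cells can be updated independently in parallel, which is the property the framework exploits on many-core GPUs.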

  2. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair

    PubMed Central

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894

  3. Accurate and Standardized Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Patterson, Tiffany; Asrress, Kaleab N; Marber, Michael; Redwood, Simon; Smith, Nicolas P; Lee, Jack

    2017-05-01

    Coronary wave intensity analysis (cWIA) has increasingly been applied in the clinical research setting to distinguish between the proximal and distal mechanical influences on coronary blood flow. Recently, a cWIA-derived clinical index demonstrated prognostic value in predicting functional recovery post-myocardial infarction. Nevertheless, the known operator dependence of the cWIA metrics currently hampers their routine application in clinical practice. Specifically, it was recently demonstrated that the cWIA metrics are highly dependent on the Savitzky-Golay filter parameters chosen to smooth the acquired traces. Therefore, a novel method to make cWIA standardized and automatic was proposed and evaluated in vivo. The novel approach combines an adaptive Savitzky-Golay filter with high-order central finite differencing after ensemble-averaging the acquired waveforms. Its accuracy was assessed using in vivo human data. The proposed approach was then modified to automatically perform beatwise cWIA. Finally, the feasibility (accuracy and robustness) of the method was evaluated. The automatic cWIA algorithm provided satisfactory accuracy under a wide range of noise scenarios (≤10% and ≤20% error in the estimation of wave areas and peaks, respectively). These results were confirmed when beat-by-beat cWIA was performed. An accurate, standardized, and automated cWIA was developed, and the feasibility of beatwise cWIA was demonstrated for the first time. The proposed algorithm provides practitioners with a standardized technique that could broaden the application of cWIA in clinical practice, for example by enabling multicenter trials. Furthermore, the demonstrated potential of beatwise cWIA opens the possibility of investigating coronary physiology in real time.

  4. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  5. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren; Kelley, Maxwell; Marshall, John

    2017-08-01

    Discretisation of the horizontal pressure gradient force in layered ocean models is a challenging task, with non-trivial interactions between the thermodynamics of the fluid and the geometry of the layers often leading to numerical difficulties. We present two new finite-volume schemes for the pressure gradient operator designed to address these issues. In each case, the horizontal acceleration is computed as an integration of the contact pressure force that acts along the perimeter of an associated momentum control-volume. A pair of new schemes are developed by exploring different control-volume geometries. Non-linearities in the underlying equation-of-state definitions and thermodynamic profiles are treated using a high-order accurate numerical integration framework, designed to preserve hydrostatic balance in a non-linear manner. Numerical experiments show that the new methods achieve high levels of consistency, maintaining hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer geometries, non-linear equations-of-state and non-uniform vertical stratification profiles. These results suggest that the new pressure gradient formulations may be appropriate for general circulation models that employ hybrid vertical coordinates and/or terrain-following representations.

  6. High Performance Computing Meets Energy Efficiency - Continuum Magazine |

    Science.gov Websites

    [Web-page residue; recoverable content:] The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data. Wind turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.

  7. INTERCOMPARISON OF PERFORMANCE OF RF COIL GEOMETRIES FOR HIGH FIELD MOUSE CARDIAC MRI

    PubMed Central

    Constantinides, Christakis; Angeli, S.; Gkagkarellis, S.; Cofer, G.

    2012-01-01

    Multi-turn spiral surface coils were constructed in flat and cylindrical arrangements and used for high-field (7.1 T) mouse cardiac MRI. Their electrical and imaging performance, based on experimental measurements, simulations, and MRI experiments in free space and under phantom and animal loading conditions, is compared with that of a commercially available birdcage coil. Results show that the four-turn cylindrical spiral coil exhibits improved relative SNR (rSNR) performance compared with its flat counterpart, and compares fairly well with a commercially available birdcage coil. Phantom experiments indicate a 50% improvement in SNR for penetration depths ≤ 6.1 mm from the coil surface compared to the birdcage coil, and an increased penetration depth at the half-maximum field response of 8 mm for the four-turn cylindrical spiral coil, in contrast to 2.9 mm for the flat four-turn spiral. Quantitative comparison of the performance of the two spiral coil geometries in anterior, lateral, inferior, and septal regions of the murine heart yields maximum mean percentage rSNR increases of the order of 27-167% for the cylindrical coil compared to the flat coil, in vivo and post mortem. The commercially available birdcage outperforms the cylindrical spiral coil in rSNR by a factor of 3-5. The comprehensive approach and methodology adopted to accurately design, simulate, implement, and test radiofrequency coils of any geometry and type, under any loading conditions, can be generalized to any application of high-field mouse cardiac MRI. PMID:23204945

  8. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present the findings and insights gained by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) for time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and the input data. In our empirical experiments, we demonstrate significant acceleration by all three approaches compared with a C-implemented sequential-processing method. We also discuss the pros and cons of each method in terms of usability, infrastructure complexity, and platform limitations, to give readers a better understanding of using these parallel approaches for gridding purposes.

  9. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

    To respond to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). A favorable way to do so is to explore and learn from the KM features of high-performance companies. However, identifying the critical KM features of high-performance companies is a qualitative analysis problem. For this kind of problem the rough set approach is suitable, because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. This paper therefore explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.
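
The rough set approach used above rests on two constructions: partitioning objects into indiscernibility classes over chosen attributes, and bounding a target set by its lower approximation (classes fully inside it) and upper approximation (classes that intersect it). A minimal sketch with hypothetical companies and KM attributes (the study's actual attributes and data are not given in this record):

```python
from collections import defaultdict

def partition(objects, attrs):
    """Group objects that are indiscernible on the chosen attributes."""
    blocks = defaultdict(set)
    for name, desc in objects.items():
        blocks[tuple(desc[a] for a in attrs)].add(name)
    return list(blocks.values())

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximations of the set 'target'."""
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        if block <= target:   # class entirely inside the target: certain members
            lower |= block
        if block & target:    # class overlapping the target: possible members
            upper |= block
    return lower, upper

# Hypothetical companies described by two KM attributes; the target set is
# the high performers. c3 and c4 are indiscernible yet differ in outcome,
# so they fall in the boundary region.
companies = {
    "c1": {"tacit": "high", "incentives": "yes"},
    "c2": {"tacit": "high", "incentives": "yes"},
    "c3": {"tacit": "low",  "incentives": "yes"},
    "c4": {"tacit": "low",  "incentives": "yes"},
}
high = {"c1", "c2", "c3"}
lower, upper = approximations(companies, ["tacit", "incentives"], high)
print(sorted(lower), sorted(upper))
```

The gap between upper and lower approximations (here {c3, c4}) is exactly the "boundary" where the chosen attributes cannot explain performance, which is how the rough set method identifies which KM features are decisive without any statistical model.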

  10. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms that are easily available to researchers without extensive machine learning knowledge. Because they uphold the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data-set-specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of

  11. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary as well as public data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms that are easily available to researchers without extensive machine learning knowledge. In keeping with the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, to be used for large-scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full workflow of QSAR modeling, from descriptor calculation to automated model building, validation, and selection. The automated workflow relies upon customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient, data-set-specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. AZOrange is a step towards meeting the need for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models.
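The generalized, automated hyper-parameter selection described above can be illustrated with a toy validation-split search. This is a generic sketch of the idea, not AZOrange's actual implementation; the function name and the polynomial-degree setting are invented for illustration:

```python
import numpy as np

def select_degree(x, y, degrees, val_frac=0.3, seed=0):
    """Toy automated hyper-parameter selection: fit polynomials of
    several degrees on a training split and keep the degree with the
    lowest validation error (illustrative only)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_frac)
    val, tr = idx[:n_val], idx[n_val:]
    best, best_err = None, np.inf
    for d in degrees:
        coef = np.polyfit(x[tr], y[tr], d)
        err = np.mean((np.polyval(coef, x[val]) - y[val]) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best
```

In AZOrange the search runs over each interfaced learner's own hyper-parameters, but the pattern is the same: train candidate settings on one split, score them on another, and keep the best.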

  12. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    PubMed Central

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-01-01

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. PMID:18502191

  13. Study of High-Performance Satellite Bus System

    NASA Astrophysics Data System (ADS)

    Shirai, Tatsuya; Noda, Atsushi; Tsuiki, Atsuo

    2002-01-01

    For Low Earth Orbit (LEO) satellites such as earth observation satellites, a lightweight, high performance bus system will make a great contribution to mission component development. Also, raising the ratio of payload to total mass will reduce the launch cost. The Office of Research and Development in the National Space Development Agency of Japan (NASDA) is studying such a sophisticated satellite bus system. The system is expected to consist of the following advanced components and subsystems, which the Office has been developing in parallel from the element level. (a) Attitude control system (ACS): this subsystem will very accurately determine and control the satellite attitude with a next-generation star tracker, a GPS receiver, and the onboard software required to achieve this function. (b) Electric power system (EPS): this subsystem will become much lighter and more powerful by utilizing more efficient solar battery cells, power MOSFETs, and DC/DC converters. In addition, to store and supply power, the Office is also studying a lithium battery for space that is light and small enough to contribute to reducing the size and weight of the EPS. (c) Onboard computing system (OCS): this computing system will provide high-speed processing. The MPU (Multi Processing Unit) cell in the OCS is capable of executing approximately 200 MIPS (Mega Instructions Per Second). The OCS will play an important role not only in allowing the ACS to function well but also in handling image processing data. (d) Thermal control system (TCS): a mission-friendly thermal control system is under study. A small hybrid fluid thermal control system that the Office is studying, combining a mechanical pump loop and a capillary pump loop, will be robust to changes in thermal load and will facilitate temperature control. (e) Communications system (CS): in order to transmit high-rate data, the Office is studying an optical link system.

  14. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.

  15. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
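The abstract's key ingredient, Gaussian process regression with predictive uncertainties, can be sketched in a few lines of NumPy. This is a minimal single-fidelity version for illustration only; the paper's co-kriging framework couples two fidelity levels, and all names below are invented:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and standard deviation at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)          # only its diagonal is needed
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))
```

Far from the training data the posterior standard deviation approaches the prior scale, which is exactly the "measure of our confidence" the abstract refers to.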

  16. High-Resolution Photoionization, Photoelectron and Photodissociation Studies. Determination of Accurate Energetic and Spectroscopic Database for Combustion Radicals and Molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, Cheuk-Yiu

    2016-04-25

    The main goal of this research program was to obtain accurate thermochemical and spectroscopic data, such as ionization energies (IEs), 0 K bond dissociation energies, 0 K heats of formation, and spectroscopic constants for radicals and molecules and their ions of relevance to combustion chemistry. Two unique, generally applicable vacuum ultraviolet (VUV) laser photoion-photoelectron apparatuses have been developed in our group, which have been used for high-resolution photoionization, photoelectron, and photodissociation studies of many small molecules of combustion relevance.

  17. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618

  18. Design and performance investigation of a highly accurate apodized fiber Bragg grating-based strain sensor in single and quasi-distributed systems.

    PubMed

    Ali, Taha A; Shehata, Mohamed I; Mohamed, Nazmi A

    2015-06-01

    In this work, fiber Bragg grating (FBG) strain sensors in single and quasi-distributed systems are investigated, seeking high-accuracy measurement. Since FBG-based strain sensors of small length are preferred in medical applications, and a small length causes the full width at half-maximum (FWHM) to be larger, a new apodization profile is introduced, for the first time to the best of our knowledge, with a remarkable FWHM at small sensor lengths compared to the Gaussian and Nuttall profiles, in addition to a higher mainlobe slope at these lengths. A careful selection of apodization profiles with detailed investigation is performed, using sidelobe analysis and the FWHM, which are primary judgment factors, especially in a quasi-distributed configuration. A comparison between the elite selection of apodization profiles (extracted from the related literature) and the proposed new profile is carried out, covering the reflectivity peak, FWHM, and sidelobe analysis. The optimization process concludes that the proposed new profile with a chosen small length (L) of 10 mm and Δn_ac of 1.4×10⁻⁴ is the optimum choice for single-stage and quasi-distributed strain-sensor networks, performing better than even the Gaussian profile at small sensor lengths. The proposed profile achieves the smallest FWHM of 15 GHz (suitable for UDWDM) and the highest mainlobe slope of 130 dB/nm. For the quasi-distributed scenario, a noteworthy high isolation of 6.953 dB is achieved while applying a high strain value of 1500 μstrain (με) in a five-stage strain-sensing network. Further investigation proved that consistency in choosing the apodization profile across the quasi-distributed network is mandatory; this was tested by including a uniformly apodized sensor among sensors apodized with the proposed profile in an FBG strain-sensor network.
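FWHM serves above as a primary judgment factor between apodization profiles. As a generic illustration (not code from the paper), it can be measured numerically from any sampled, single-peaked reflectivity spectrum by interpolating the half-maximum crossings:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked curve,
    using linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # x position where the segment (i, j) crosses the half level
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else cross(i1, i1 + 1)
    return right - left
```

For a Gaussian-shaped peak exp(-x²/2σ²) this recovers the analytic value 2σ√(2 ln 2).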

  19. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments for items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  20. Influence relevance voting: an accurate and interpretable virtual high throughput screening method.

    PubMed

    Swamidass, S Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Shiou-Chuan; Baldi, Pierre

    2009-04-01

    Given activity training data from high-throughput screening (HTS) experiments, virtual high-throughput screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by nonlinearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also nonlinearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open competitions, and its performance compared to that of other participating methods, as well as of an in-house support vector machine (SVM) method. On these benchmark data sets, IRV achieves state-of-the-art results, comparable to the SVM in one case and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have probabilistic semantics; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS.
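The core idea, combining neighbor influences that factor into relevance and vote components, can be caricatured in a few lines. This is a simplified sketch of the concept, not the published IRV network; the weighting and squashing functions here are invented stand-ins for what the real method learns:

```python
import numpy as np

def irv_style_score(x, X_train, y_train, k=3):
    """Simplified influence-relevance vote: each of the k nearest
    neighbours contributes relevance (an inverse-distance weight)
    times vote (+1 active / -1 inactive); the signed sum of
    influences is squashed to a probability-like score."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    relevance = 1.0 / (1.0 + d[idx])            # closer => more relevant
    votes = np.where(y_train[idx] == 1, 1.0, -1.0)
    influence = np.sum(relevance * votes)
    return 1.0 / (1.0 + np.exp(-influence))     # logistic squashing
```

The actual IRV learns the relevance and vote transformations with a low-parameter neural network rather than fixing them by hand, which is what makes its predictions both calibrated and interpretable.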

  1. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    PubMed

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive

  3. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering

    PubMed Central

    Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu

    2009-01-01

    Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
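The co-membership idea at the heart of MULTI-K can be sketched as follows: run k-means many times while varying k, and count how often each pair of samples lands in the same cluster. This is a generic consensus-clustering sketch written from that description, not the authors' C++ implementation:

```python
import numpy as np

def kmeans(X, k, rng, iters=50):
    """Plain Lloyd's k-means; returns one cluster label per row of X."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_matrix(X, ks, runs_per_k=10, seed=0):
    """MULTI-K-style co-membership: the fraction of runs (over several
    values of k) in which each pair of samples shares a cluster.
    Robustly co-clustered pairs have entries near 1."""
    rng = np.random.default_rng(seed)
    C = np.zeros((len(X), len(X)))
    total = 0
    for k in ks:
        for _ in range(runs_per_k):
            lab = kmeans(X, k, rng)
            C += (lab[:, None] == lab[None, :])
            total += 1
    return C / total
```

MULTI-K then extracts the clusters that manifest the most robust co-memberships, which is how it captures cluster shapes that a single k-means run would miss.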

  4. Simultaneous quantification of withanolides in Withania somnifera by a validated high-performance thin-layer chromatographic method.

    PubMed

    Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S

    2008-01-01

    This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.

  5. Performance of the first structure built with high performance concrete in Virginia.

    DOT National Transportation Integrated Search

    2001-08-01

    This study evaluated the preparation and placement operations, concrete properties, cost-effectiveness, and performance over 5 years of the first bridge containing high performance concrete built by the Virginia Department of Transportation. High per...

  6. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  7. Human salivary glucose analysis by high-performance ion-exchange chromatography and pulsed amperometric detection.

    PubMed

    Gough, H; Luke, G A; Beeley, J A; Geddes, D A

    1996-02-01

    The aim of this project was to develop an analytical procedure with the required level of sensitivity for the determination of glucose concentrations in small volumes of unstimulated fasting whole saliva. The technique involves high-performance ion-exchange chromatography at high pH and pulsed amperometric detection. It has a high level of reproducibility, a sensitivity as low as 0.1 μmol/l, and requires only 50 μl samples (sensitivity = 0.002 pmol). Inhibition of glucose metabolism, by procedures such as collection into 0.1% (w/v) sodium fluoride, was shown to be essential if accurate results are to be obtained. Collection onto ice followed by storage at -20 °C was shown to be unsuitable and resulted in glucose loss by degradation. There were inter- and intraindividual variations in the glucose concentration of unstimulated mixed saliva (range: 0.02-0.4 mmol/l). The procedure can be used for the analysis of other salivary carbohydrates and for monitoring the clearance of dietary carbohydrates from the mouth.

  8. Rehabilitation Characteristics in High-Performance Hospitals after Acute Stroke.

    PubMed

    Sawabe, Masashi; Momosaki, Ryo; Hasebe, Kiyotaka; Sawaguchi, Akira; Kasuga, Seiji; Asanuma, Daichi; Suzuki, Shoya; Miyauchi, Narimi; Abo, Masahiro

    2018-05-22

    Rehabilitation characteristics in high-performance hospitals after acute stroke have not been clarified. This retrospective observational study aimed to clarify the characteristics of high-performance hospitals in acute stroke rehabilitation. Patients with stroke discharged from participating acute hospitals were extracted from the Japan Rehabilitation Database for the period 2006-2015. We found 6855 patients from 14 acute hospitals who were eligible for analysis in this study after applying the exclusion criteria. We divided facilities into high-performance hospitals and low-performance hospitals using the median Functional Independence Measure efficiency for each hospital. We then compared rehabilitation characteristics between high- and low-performance hospitals. High-performance hospitals had significantly shorter lengths of stay. More patients were discharged to home from the high-performance hospitals than from low-performance hospitals. Patients in high-performance hospitals received greater amounts of physical, occupational, and speech therapy. Patients in high-performance hospitals engaged in more self-exercise, weekend exercise, and exercise in wards. There was more participation of board-certified physiatrists and social workers in high-performance hospitals. Our data suggest that the amount, timing, and type of rehabilitation, and the participation of multidisciplinary staff, are essential for high performance in acute stroke rehabilitation. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  9. High performance dielectric materials development

    NASA Astrophysics Data System (ADS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-09-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  10. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  11. Indoor Air Quality in High Performance Schools

    EPA Pesticide Factsheets

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  12. Indoor Air Quality in High Performance Schools

    EPA Pesticide Factsheets

    2017-02-14

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  13. Quality Evaluation of Potentilla fruticosa L. by High Performance Liquid Chromatography Fingerprinting Associated with Chemometric Methods.

    PubMed

    Liu, Wei; Wang, Dongmei; Liu, Jianjun; Li, Dengwu; Yin, Dongxue

    2016-01-01

    The present study was performed to assess the quality of Potentilla fruticosa L. sampled from distinct regions of China using high performance liquid chromatography (HPLC) fingerprinting coupled with a suite of chemometric methods. For this quantitative analysis, the main active phytochemical compositions and the antioxidant activity in P. fruticosa were also investigated. Considering the high percentages and antioxidant activities of their phytochemicals, P. fruticosa samples from Kangding, Sichuan were selected as the most valuable raw materials. Similarity analysis (SA) of HPLC fingerprints, hierarchical cluster analysis (HCA), principal component analysis (PCA), and discriminant analysis (DA) were further employed to provide accurate classification and quality estimates of P. fruticosa. Two principal components (PCs) were extracted by PCA. PC1 separated samples from Kangding, Sichuan, capturing 57.64% of the variance, whereas PC2 contributed to further separation, capturing 18.97% of the variance. Two kinds of discriminant functions with a 100% discrimination ratio were constructed. The results strongly supported the conclusion that the eight samples from different regions clustered into three major groups, corresponding with their morphological classification, for which HPLC analysis confirmed the considerable variation in phytochemical compositions and that P. fruticosa samples from Kangding, Sichuan were of high quality. The results of SA, HCA, PCA, and DA were in agreement and performed well for the quality assessment of P. fruticosa. Consequently, HPLC fingerprinting coupled with chemometric techniques provides a highly flexible and reliable method for the quality evaluation of traditional Chinese medicines.
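The variance figures quoted for PC1 (57.64%) and PC2 (18.97%) are explained-variance ratios, which fall out of a singular value decomposition of the centered data matrix. A generic sketch (not the paper's software):

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance captured by each principal
    component, via SVD of the column-centered data matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return var / var.sum()
```

The first two entries of the returned array play the role of the PC1 and PC2 percentages reported in the abstract.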

  14. Quality Evaluation of Potentilla fruticosa L. by High Performance Liquid Chromatography Fingerprinting Associated with Chemometric Methods

    PubMed Central

    Liu, Wei; Wang, Dongmei; Liu, Jianjun; Li, Dengwu; Yin, Dongxue

    2016-01-01

The present study was performed to assess the quality of Potentilla fruticosa L. sampled from distinct regions of China using high performance liquid chromatography (HPLC) fingerprinting coupled with a suite of chemometric methods. For this quantitative analysis, the main active phytochemical compositions and the antioxidant activity in P. fruticosa were also investigated. Considering the high percentages and antioxidant activities of phytochemicals, P. fruticosa samples from Kangding, Sichuan were selected as the most valuable raw materials. Similarity analysis (SA) of HPLC fingerprints, hierarchical cluster analysis (HCA), principal component analysis (PCA), and discriminant analysis (DA) were further employed to provide accurate classification and quality estimates of P. fruticosa. Two principal components (PCs) were extracted by PCA. PC1 separated samples from Kangding, Sichuan, capturing 57.64% of the variance, whereas PC2 contributed to further separation, capturing 18.97% of the variance. Two kinds of discriminant functions with a 100% discrimination ratio were constructed. The results strongly supported the conclusion that the eight samples from different regions clustered into three major groups, corresponding to their morphological classification, for which HPLC analysis confirmed the considerable variation in phytochemical compositions and that P. fruticosa samples from Kangding, Sichuan were of high quality. The results of SA, HCA, PCA, and DA were in agreement and performed well for the quality assessment of P. fruticosa. Consequently, HPLC fingerprinting coupled with chemometric techniques provides a highly flexible and reliable method for the quality evaluation of traditional Chinese medicines. PMID:26890416

  15. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.

  16. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, short time frames in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated with simulated images, empirical phantoms, and real human brain data. The time-activity curve was adopted to compare the reconstruction performance of the PDS-OSEM and OP-OSEM algorithms on dynamic data. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under conditions of low count rates in dynamic studies with short scan times.
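The PDS-OSEM specifics are in the paper itself; as a hedged sketch of the algorithm family it belongs to, here is the classic MLEM update that OSEM-type reconstructions accelerate by sweeping over subsets of the projection data, run on a toy noiseless system:

```python
# MLEM update:  x_j <- x_j / (sum_i A_ij) * sum_i A_ij * y_i / (A x)_i
# A is a toy 3x2 "system matrix", y the corresponding noiseless sinogram.
# This is an illustration of the OSEM family, not the paper's PDS-OSEM.

def mlem(A, y, iters=50):
    n_bins, n_pix = len(A), len(A[0])
    x = [1.0] * n_pix                      # uniform positive start image
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_bins)]
        ratio = [y[i] / proj[i] for i in range(n_bins)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_bins)) for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy projection geometry
x_true = [2.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]
x_hat = mlem(A, y)                          # converges toward x_true
```

On consistent, noiseless data the multiplicative update recovers the true image; the low-count regime discussed above is exactly where variants like PDS-OSEM aim to behave better.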

  17. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

This work, which is related to the activities of the CERGOP Study Group Geodynamics of the Balkan Peninsula, presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components xi, eta of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Δ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the coordinate values is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable, and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components xi, eta of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations, where other methods cannot give accurate and reliable results.
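The astrogeodetic relations the method rests on can be sketched with standard textbook formulas (these are not the paper's code, and the coordinates below are invented): the deflection components are xi = Phi - phi and eta = (Lambda - lambda)·cos(phi), and over a short line of length s and azimuth alpha the geoid undulation changes by dN ≈ -(xi·cos(alpha) + eta·sin(alpha))·s.

```python
import math

# Astrogeodetic leveling sketch: deflection of the vertical from
# astronomical (Phi, Lam) vs. geodetic (phi, lam) coordinates, then the
# geoid change along a short line. All numbers are illustrative.

ARCSEC = math.pi / (180.0 * 3600.0)      # arcseconds to radians

def deflection(Phi, Lam, phi, lam):
    xi = Phi - phi                        # meridian component
    eta = (Lam - lam) * math.cos(phi)     # prime-vertical component
    return xi, eta

def delta_N(xi, eta, alpha, s):
    # Geoid change (metres) along azimuth alpha over distance s (metres).
    return -(xi * math.cos(alpha) + eta * math.sin(alpha)) * s

# Hypothetical 5 arcsec meridian deflection over a 1 km north-going line.
phi, lam = math.radians(38.0), math.radians(23.7)
Phi, Lam = phi + 5.0 * ARCSEC, lam
xi, eta = deflection(Phi, Lam, phi, lam)
dN = delta_N(xi, eta, alpha=0.0, s=1000.0)   # about -24 mm
```

A 5 arcsec deflection thus moves the geoid by roughly 24 mm per kilometre, which is why 0.01 arcsec astronomical accuracy translates into millimetre-level ΔN.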

  18. Noise of High-Performance Aircraft at Afterburner

    DTIC Science & Technology

    2015-10-07

Naval Research Project title: Noise of High-Performance Aircraft at Afterburner. Principal Investigator: Dr. Christopher Tam, Department… Reporting period: …to 08/14/2015. Tam, Christopher; Sponsored Research Administration, Florida State University.

  19. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  20. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
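The "simple learned weighted sums" above are linear readouts: behavioral performance is predicted as a dot product w·r of a learned weight vector with a firing-rate vector, fit by regression. A toy version on synthetic rates (the numbers are invented, not the study's recordings):

```python
# Fit a linear readout w so that w . r predicts a behavioral score.
# Plain gradient descent on mean squared error; the "firing rates" R and
# target scores y are synthetic and exactly linearly related by w_true.

def fit_weights(R, y, lr=0.05, epochs=2000):
    n, d = len(R), len(R[0])
    w = [0.0] * d
    for _ in range(epochs):
        for j in range(d):
            grad = sum((sum(w[k] * R[i][k] for k in range(d)) - y[i]) * R[i][j]
                       for i in range(n)) / n
            w[j] -= lr * grad
    return w

# Rows: images; columns: "IT neurons". Target is a weighted sum of rates.
R = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.5, 1.5]]
w_true = [0.4, 0.6]
y = [sum(wt * r for wt, r in zip(w_true, row)) for row in R]
w_hat = fit_weights(R, y)        # recovers w_true on this noiseless toy
```

With noiseless, exactly linear data the fitted weights recover the generating ones; the study's result is that the same readout class, fit to real IT rates, matches human performance.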

  1. Moisture Performance of High-R Wall Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nay B.; Kochkin, Vladimir

High-performance homes offer improved comfort, lower utility bills, and assured durability. The next generation of building enclosures is a key step toward achieving high-performance goals through decreasing energy load demand and enabling advanced space-conditioning systems. Yet the adoption of high-R enclosures, and particularly high-R walls, has been a slow-growing trend because mainstream builders are hesitant to make the transition. In a survey of builders on this topic, one of the challenges identified is an industry-wide concern about the long-term moisture performance of energy-efficient walls. This study takes a step toward addressing this concern through direct monitoring of the moisture performance of high-R walls in occupied homes in several climate zones. In addition, the robustness of the design and modeling tools for selecting high-R wall solutions is evaluated using the monitored data from the field. The information and knowledge gained through this research will provide an objective basis for decision-making so that builders can implement advanced designs with confidence.

  2. PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides.

    PubMed

    Islam, S M Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J

    2015-07-05

Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination of toxicity and stability makes these peptides remarkably valuable for their potential as bio-insecticides, antimicrobial peptides, and peptide drug candidates. However, the wide sequence variation, sources, and modalities of group members impose serious limitations on our ability to rapidly identify potential members. As a result, there is a need for automated high-throughput member classification approaches that leverage their demonstrated tertiary and functional homology. We developed an SVM-based model to predict sequential tri-disulfide peptide (STP) toxins from peptide sequences. One optimized model, called PredSTP, predicted STPs from the training set with a sensitivity, specificity, precision, accuracy, and Matthews correlation coefficient of 94.86%, 94.11%, 84.31%, 94.30%, and 0.86, respectively, using 200-fold cross-validation. The same model outperforms existing prediction approaches in three independent out-of-sample test sets derived from the PDB. PredSTP can accurately identify a wide range of cystine-stabilized peptide toxins directly from sequences in a species-agnostic fashion. The ability to rapidly filter sequences for potential bioactive peptides can greatly compress the time between peptide identification and testing structural and functional properties for possible antimicrobial and insecticidal candidates. A web interface is freely available at http://crick.ecs.baylor.edu/.
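All five reported figures (sensitivity, specificity, precision, accuracy, MCC) are functions of the confusion matrix. A small sketch of the definitions, applied to made-up predictions rather than PredSTP's output:

```python
# Classification metrics from a binary confusion matrix.
# y_true / y_pred below are invented labels for illustration.

def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(tp, tn, fp, fn):
    sens = tp / (tp + fn)                      # recall on positives
    spec = tn / (tn + fp)                      # recall on negatives
    prec = tp / (tp + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    return sens, spec, prec, acc, mcc

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
tp, tn, fp, fn = confusion(y_true, y_pred)
sens, spec, prec, acc, mcc = metrics(tp, tn, fp, fn)
```

Unlike accuracy, MCC stays informative when the classes are imbalanced, which is why peptide-classification papers routinely report it alongside the other four.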

  3. Rapid separation and characterization of diterpenoid alkaloids in processed roots of Aconitum carmichaeli using ultra high performance liquid chromatography coupled with hybrid linear ion trap-Orbitrap tandem mass spectrometry.

    PubMed

    Xu, Wen; Zhang, Jing; Zhu, Dayuan; Huang, Juan; Huang, Zhihai; Bai, Junqi; Qiu, Xiaohui

    2014-10-01

The lateral root of Aconitum carmichaeli, a popular traditional Chinese medicine, has been widely used to treat rheumatic diseases. For decades, diterpenoid alkaloids have dominated the phytochemical and biomedical research on this plant. In this study, a rapid and sensitive method based on ultra high performance liquid chromatography coupled with linear ion trap-Orbitrap tandem mass spectrometry was developed to characterize the diterpenoid alkaloids in Aconitum carmichaeli. Based on an optimized chromatographic condition, more than 120 diterpenoid alkaloids were separated with good resolution. Using a systematic strategy that combines high resolution separation, highly accurate mass measurements, and a good understanding of the diagnostic fragment-based fragmentation patterns, these diterpenoid alkaloids were identified or tentatively identified. The identification of these chemicals provided essential data for further phytochemical studies and toxicity research of Aconitum carmichaeli. Moreover, the ultra high performance liquid chromatography with linear ion trap-Orbitrap mass spectrometry platform was an effective and accurate tool for rapid qualitative analysis of secondary metabolite products from natural sources.

  4. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  6. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
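The experience-sampling side of these studies boils down to estimating empirical transition rates between emotions, which is a count-and-normalize operation. A sketch on an invented emotion sequence:

```python
# Estimate a first-order emotion transition matrix from a sampled
# sequence: count (current, next) pairs, then normalize each row.
# The sequence below is invented for illustration.

from collections import Counter

def transition_matrix(seq, states):
    counts = Counter(zip(seq, seq[1:]))
    matrix = {}
    for a in states:
        total = sum(counts[(a, b)] for b in states)
        matrix[a] = {b: (counts[(a, b)] / total if total else 0.0)
                     for b in states}
    return matrix

states = ["calm", "happy", "sad"]
seq = ["calm", "happy", "happy", "calm", "sad", "calm", "happy", "calm"]
P = transition_matrix(seq, states)   # e.g. P["calm"]["happy"] == 2/3
```

Participants' rated likelihoods can then be correlated against such empirical rows, which is how the studies quantify the accuracy of people's mental models.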

  7. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  8. High performance aluminum–cerium alloys for high-temperature applications

    DOE PAGES

    Sims, Zachary C.; Rios, Orlando R.; Weiss, David; ...

    2017-08-01

Light-weight high-temperature alloys are important to the transportation industry, where weight, cost, and operating temperature are major factors in the design of energy efficient vehicles. Aluminum alloys fill this gap economically but lack high-temperature mechanical performance. Alloying aluminum with cerium creates a highly castable alloy, compatible with traditional aluminum alloy additions, that exhibits dramatically improved high-temperature performance. These compositions display a room temperature ultimate tensile strength of 400 MPa and yield strength of 320 MPa, with 80% mechanical property retention at 240 °C. A mechanism is identified that addresses the mechanical property stability of the Al alloys to at least 300 °C and their microstructural stability to above 500 °C, which may enable applications without the need for heat treatment. Lastly, neutron diffraction under load provides insight into the unusual mechanisms driving the mechanical strength.

  9. Accurate on-chip measurement of the Seebeck coefficient of high mobility small molecule organic semiconductors

    NASA Astrophysics Data System (ADS)

    Warwick, C. N.; Venkateshvaran, D.; Sirringhaus, H.

    2015-09-01

We present measurements of the Seebeck coefficient in two high mobility organic small molecules, 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-didecyl-dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (C10-DNTT). The measurements are performed in a field effect transistor structure with high field effect mobilities of approximately 3 cm²/(V·s). This allows us to observe both the charge concentration and temperature dependence of the Seebeck coefficient. We find a strong logarithmic dependence upon charge concentration and a temperature dependence within the measurement uncertainty. Despite performing the measurements on highly polycrystalline evaporated films, we find agreement of the Seebeck coefficient with values modelled by Shi et al. [Chem. Mater. 26, 2669 (2014)] at high charge concentrations. We attribute deviations from the model at lower charge concentrations to charge trapping.
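The logarithmic charge-concentration dependence reported above is what a simple entropy-per-carrier picture predicts: S ≈ (kB/e)·ln(N0/n), with N0 an effective density of accessible states. A sketch of that law with placeholder values (N0 and the carrier densities below are assumptions, not the paper's fitted parameters):

```python
import math

# Entropy-per-carrier sketch of the Seebeck coefficient's logarithmic
# dependence on carrier density n. N0 and all n values are placeholders.

KB_OVER_E = 86.17e-6          # Boltzmann constant / elementary charge, V/K

def seebeck(n, n0=1e21):      # carrier / effective state density, cm^-3
    return KB_OVER_E * math.log(n0 / n)

# Signature of the log law: doubling n shifts S by a constant (kB/e) ln 2
# (~60 uV/K), independent of where on the n axis you start.
s1 = seebeck(1e18)
s2 = seebeck(2e18)
shift = s1 - s2
ln2_shift = KB_OVER_E * math.log(2.0)
```

In a transistor geometry the gate sets n, so sweeping gate voltage and watching S fall logarithmically is exactly the kind of dependence the measurement resolves.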

  10. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ,χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
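For orientation, the ℓ = 0 case of the projection being computed is ξ₀(r) = (1/2π²) ∫ dk k² P(k) j₀(kr), with j₀(x) = sin(x)/x. Brute-force quadrature like the sketch below is exactly what 2-FAST replaces, since the integrand oscillates rapidly at large r (the toy P(k) here is invented, not a real power spectrum):

```python
import math

# Naive midpoint-rule evaluation of the ell = 0 projection integral
#   xi_0(r) = (1/2 pi^2) * Int dk k^2 P(k) j_0(k r).
# Toy smooth decaying P(k); a real spectrum would need far more care.

def j0(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def P(k):
    return k * math.exp(-k * k)      # toy "power spectrum"

def xi0(r, kmax=20.0, n=20000):
    dk = kmax / n
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * dk           # midpoint rule
        s += k * k * P(k) * j0(k * r) * dk
    return s / (2.0 * math.pi ** 2)
```

At small r the integrand is smooth and the naive sum is fine; at large r, j₀(kr) oscillates and the correlation damps away, which is the regime where FFTLog-based methods pay off.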

  11. Accurate Mobile Urban Mapping via Digital Map-Based SLAM †

    PubMed Central

    Roh, Hyunchul; Jeong, Jinyong; Cho, Younggun; Kim, Ayoung

    2016-01-01

    This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Throughout this work, our main objective is generating a 3D and lane map aiming for sub-meter accuracy. In conventional mapping approaches, achieving extremely high accuracy was performed by either (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system in a stationary platform. Mobile scanning systems recently have gathered popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm generates an efficient graph SLAM and achieves a framework running in real-time and targeting sub-meter accuracy with a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for the Inverse Perspective Mapping (IPM). Using motion estimation derived from SLAM, the experimental results show that the proposed approaches provide stable bird’s-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS. PMID:27548175

  12. Determination of accurate vertical atmospheric profiles of extinction and turbulence

    NASA Astrophysics Data System (ADS)

    Hammel, Steve; Campbell, James; Hallenborg, Eric

    2017-09-01

Our ability to generate an accurate vertical profile characterizing the atmosphere from the surface to a point above the boundary layer top is quite rudimentary. The region from a land or sea surface to an altitude of 3000 meters is dynamic and particularly important to the performance of many active optical systems. Accurate and agile instruments are necessary to provide measurements in various conditions, and models are needed to provide the framework and predictive capability necessary for system design and optimization. We introduce some of the path characterization instruments and describe the first work to calibrate and validate them. Along with a verification of measurement accuracy, the tests must also establish each instrument's performance envelope. Measurement of these profiles in the field is a problem, and we present a discussion of recent field test activity to address this issue. The Comprehensive Atmospheric Boundary Layer Extinction/Turbulence Resolution Analysis eXperiment (CABLE/TRAX) was conducted in late June 2017. There were two distinct objectives for the experiment: 1) a comparison test of various scintillometers and transmissometers on a homogeneous horizontal path; 2) a vertical profile experiment. In this paper we discuss only the vertical profiling effort, and we describe the instruments that generated data for vertical profiles of absorption, scattering, and turbulence. These three profiles are the core requirements for an accurate assessment of laser beam propagation.

  13. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulations to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing to relate algorithmic with expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
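The Normalized Gradient Fields (NGF) distance used above scores alignment by how parallel the image gradients are, which makes it robust to the very different intensity scales of CT and MR. A 1D sketch (the edge parameter eps and the toy signals are illustrative choices, not the paper's settings):

```python
# 1D Normalized Gradient Fields distance: sum over positions of
#   1 - (<grad_a, grad_b> / (|grad_a| |grad_b|))^2,
# with an eps term that suppresses the contribution of noise-level
# gradients. Toy signals stand in for CT and MR intensity profiles.

def gradients(img):
    return [img[i + 1] - img[i] for i in range(len(img) - 1)]

def ngf_distance(a, b, eps=0.1):
    ga, gb = gradients(a), gradients(b)
    d = 0.0
    for x, y in zip(ga, gb):
        na = (x * x + eps * eps) ** 0.5
        nb = (y * y + eps * eps) ** 0.5
        d += 1.0 - (x * y) ** 2 / (na * nb) ** 2
    return d

ct = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]          # toy edge profile
mr_aligned = [0.2, 0.2, 0.9, 0.9, 0.2, 0.2]  # same edges, other intensities
mr_shifted = [0.2, 0.9, 0.9, 0.2, 0.2, 0.2]  # edges displaced by one voxel
d_good = ngf_distance(ct, mr_aligned)        # lower: edges coincide
d_bad = ngf_distance(ct, mr_shifted)         # higher: edges misaligned
```

Because only gradient orientation matters, the aligned CT/MR pair scores lower than the shifted one even though their absolute intensities never match, which is the property that makes NGF suitable for multimodal rigid registration.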

  14. A spectroscopic transfer standard for accurate atmospheric CO measurements

    NASA Astrophysics Data System (ADS)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect effect on enhancing global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard provides results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available, or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration-free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement: Parts of this work have been
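The "calibration-free" evaluation rests on the Beer-Lambert law: the integrated absorbance A of an isolated line yields the absorber amount fraction from first principles as x = A / (S(T)·n·L), with S(T) the traceable line intensity and n the total number density from pressure and temperature. A sketch with round placeholder values (not the paper's 2-0 band line data):

```python
# Beer-Lambert amount-fraction sketch for direct TDLAS:
#   x = A / (S(T) * n_total * L)
# area: integrated absorbance (cm^-1); S: line intensity
# (cm^-1 / (molecule cm^-2)); L: path length (cm); p: Pa; T: K.
# All numeric inputs below are illustrative placeholders.

KB = 1.380649e-23             # Boltzmann constant, J/K

def amount_fraction(area, S, L, p, T):
    n_total = p / (KB * T) * 1e-6      # ideal-gas density, molecules/cm^3
    return area / (S * n_total * L)

x = amount_fraction(area=1.0e-3, S=2.0e-19, L=2000.0, p=101325.0, T=296.0)
ppb = x * 1e9                          # about 100 ppb for these inputs
```

Since every factor is either measured (A, p, T, L) or traceable (S), the result carries an SI-traceable uncertainty budget, which is what makes the sensor usable as a transfer standard.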

  15. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091

  16. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds.

    PubMed

    Dean, David; Jonathan, Wallace; Siblani, Ali; Wang, Martha O; Kim, Kyobum; Mikos, Antonios G; Fisher, John P

    2012-03-01

Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds affects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or better accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light Processing (cDLP), which utilizes a DLP® (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory®. To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer poly(propylene fumarate) (PPF), titanium dioxide (TiO2) as a dye, Irgacure® 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity.
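The dye's role of capping polymerization depth can be made concrete with the standard working-curve model for photopolymerization: cure depth Cd = Dp·ln(E/Ec), where Dp is the resin's light penetration depth (lowered by adding dye) and Ec the critical exposure. The parameter values below are illustrative, not the PPF/TiO2 values from the study:

```python
import math

# Working-curve sketch of dye-limited cure depth in vat
# photopolymerization: Cd = Dp * ln(E / Ec) for exposure E above the
# critical exposure Ec. All parameter values are placeholders.

def cure_depth(E, Ec, Dp):
    # E, Ec: exposure dose (same units); Dp: penetration depth (microns).
    return Dp * math.log(E / Ec) if E > Ec else 0.0

# Adding dye shrinks the penetration depth, so the same exposure cures a
# much thinner layer -- this is how ~50 um layer control is achieved.
no_dye = cure_depth(E=200.0, Ec=10.0, Dp=120.0)   # microns
with_dye = cure_depth(E=200.0, Ec=10.0, Dp=20.0)  # microns
```

With the dye-reduced penetration depth, the cured layer drops from several hundred microns to the tens-of-microns range needed for accurate scaffold geometry.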

  17. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines the analog video signals from the red and blue channels of a graphics card with different weights, using a passive resistor network (4) and an active circuit, to deliver identical video signals to the three channels of a color monitor. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and a high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
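
    The two-channel combination that the VideoSwitcher performs in analog hardware can be sketched digitally. The function name and the 128:1 weighting below are illustrative assumptions, not the device's actual circuit values:

```python
def combined_luminance(high_byte: int, low_byte: int, ratio: float = 128.0) -> float:
    """Combine two 8-bit video channels into one high-resolution gray level.

    The high channel carries the coarse value; the low channel is
    attenuated by `ratio`, mimicking a passive resistor divider.
    """
    if not (0 <= high_byte <= 255 and 0 <= low_byte <= 255):
        raise ValueError("channel values must be 8-bit")
    return high_byte + low_byte / ratio

# With a 128:1 weight, each coarse step is subdivided into 128 sub-steps,
# so two 8-bit channels yield 256 * 128 = 32768 distinct levels (~15 bits).
levels = {combined_luminance(h, l) for h in range(256) for l in range(128)}
```

    With a 128:1 weight the blue channel subdivides each red-channel step into 128 sub-steps, giving roughly 15 bits of usable gray-level resolution from two 8-bit outputs.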

  18. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High-Performance Computing (HPC) User Facility at NREL supports computational science and technology work, providing access to systems such as the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Learn more about these systems and how to access them.

  19. High-temperature testing of high performance fiber reinforced concrete

    NASA Astrophysics Data System (ADS)

    Fořt, Jan; Vejmelková, Eva; Pavlíková, Milena; Trník, Anton; Čítek, David; Kolísko, Jiří; Černý, Robert; Pavlík, Zbyšek

    2016-06-01

    The effect of high-temperature exposure on the properties of High Performance Fiber Reinforced Concrete (HPFRC) is researched in the paper. At first, reference measurements are done on HPFRC samples without high-temperature loading. Then, the HPFRC samples are exposed to temperatures of 200, 400, 600, 800, and 1000 °C. For the temperature-loaded samples, residual mechanical and basic physical properties are measured. The linear thermal expansion coefficient as a function of temperature is assessed on the basis of the measured thermal strain data. Additionally, simultaneous differential scanning calorimetry (DSC) and thermogravimetry (TG) analysis is performed in order to observe and explain material changes at elevated temperatures. It is found that the applied high-temperature loading significantly increases material porosity due to physical, chemical, and combined damage of the material's inner structure, and also negatively affects the mechanical strength. The linear thermal expansion coefficient exhibits a significant dependence on temperature and on changes of the material structure. The obtained data will find use as input material parameters for modelling the damage of HPFRC structures exposed to fire and high-temperature action.

  20. Preschoolers can make highly accurate judgments of learning.

    PubMed

    Lipowski, Stacy L; Merriman, William E; Dunlosky, John

    2013-08-01

    Preschoolers' ability to make judgments of learning (JOLs) was examined in 3 experiments in which they were taught proper names for animals. In Experiment 1, when judgments were made immediately after studying, nearly every child predicted subsequent recall of every name. When judgments were made after a delay, fewer showed this response tendency. The delayed JOLs of those who predicted at least 1 recall failure were still overconfident, however, and were not correlated with final recall. In Experiment 2, children received a second study trial with feedback, made JOLs after a delay, and completed an additional forced-choice judgment task. In this task, an animal whose name had been recalled was pitted against an animal whose name had not been recalled, and the children chose the one they were more likely to remember later. Compared with Experiment 1, more children predicted at least 1 recall failure and predictions were moderately accurate. In the forced-choice task, animal names that had just been successfully recalled were typically chosen over ones that had not. Experiment 3 examined the effect of providing an additional retrieval attempt on delayed JOLs. Half of the children received a single study session, and half received an additional study session with feedback. Children in the practice group showed less overconfidence than those in the no-practice group. Taken together, the results suggest that, with minimal task experience, most preschoolers understand that they will not remember everything and that if they cannot recall something at present, they are unlikely to recall it in the future. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  1. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    PubMed

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of the needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with a simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was thus developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy, and it demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
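
    The physical-measurement principle reduces to a one-line calculation; the numbers and the linear transformation below are hypothetical placeholders, not the paper's calibrated values:

```python
def needle_tip_depth(needle_length_mm, residual_length_mm,
                     template_offset_mm, transform_factor=1.0):
    """Estimate the needle-tip position along the insertion axis.

    The inserted length (total needle length minus the residual length
    left outside the template) is scaled by a pre-established
    transformation factor mapping template coordinates onto TRUS image
    coordinates, then offset to the image origin.
    """
    inserted = needle_length_mm - residual_length_mm
    if inserted < 0:
        raise ValueError("residual length exceeds needle length")
    return template_offset_mm + transform_factor * inserted

tip = needle_tip_depth(240.0, 80.0, 5.0)  # 165.0 mm from the image origin
```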

  2. Identification of allocryptopine and protopine metabolites in rat liver S9 by high-performance liquid chromatography/quadrupole-time-of-flight mass spectrometry.

    PubMed

    Huang, Ya-Jun; Xiao, Sa; Sun, Zhi-Liang; Zeng, Jian-Guo; Liu, Yi-Song; Liu, Zhao-Ying

    2016-07-15

    Allocryptopine (AL) and protopine (PR) have been extensively studied because of their anti-parasitic, anti-arrhythmic, anti-thrombotic, anti-inflammatory and anti-bacterial activity. However, limited information on the pharmacokinetics and metabolism of AL and PR has been reported. Therefore, the purpose of the present study was to investigate the in vitro metabolism of AL and PR in rat liver S9 using a rapid and accurate high-performance liquid chromatography/quadrupole-time-of-flight mass spectrometry (HPLC/QqTOFMS) method. The incubation mixture was processed with 15% trichloroacetic acid (TCA). Multiple scans of AL and PR metabolites and accurate mass measurements were performed automatically and simultaneously through data-dependent acquisition in only a 30-min analysis. The structural elucidations of these metabolites were performed by comparing their changes in accurate molecular masses and product ions with those of the precursor ion or metabolite. Eight and five metabolites of AL and PR, respectively, were identified in rat liver S9. Among these, seven AL metabolites and two PR metabolites were identified for the first time. The demethylenation of the 2,3-methylenedioxy group, and the demethylation of the 9,10-vicinal methoxyl group and of the 2,3-methylenedioxy group, were the main metabolic pathways of AL and PR in liver S9, respectively. In addition, cleavage of the methylenedioxy group of the drugs with subsequent methylation or O-demethylation was a common metabolic pathway of both drugs in liver S9, and hydroxylation was a further metabolic pathway of AL. This was the first investigation of the in vitro metabolism of AL and PR in rat liver S9. The detailed structural elucidations of AL and PR metabolites were performed using a rapid and accurate HPLC/QqTOFMS method. The metabolic pathways of AL and PR in rat were tentatively proposed based on these characterized metabolites and earlier reports. Copyright © 2016 John Wiley

  3. Highly accurate prediction of emotions surrounding the attacks of September 11, 2001 over 1-, 2-, and 7-year prediction intervals.

    PubMed

    Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N

    2016-06-01

    In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA.

    PubMed

    Wright, Imogen A; Travers, Simon A

    2014-07-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
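
    The frame-preserving distinction at the heart of RAMICS can be illustrated with a trivial helper. The tool itself uses profile hidden Markov models over codons; this check is only the final classification rule, assuming gap lengths have already been extracted from an alignment:

```python
def classify_indels(gap_lengths):
    """Label each alignment gap as a codon-sized indel or a frameshift.

    Gaps whose length is a multiple of 3 preserve the reading frame
    (genuine codon insertions/deletions); any other length shifts the
    frame and typically indicates a sequencing error, e.g. at a
    homopolymer run.
    """
    return ["codon-indel" if g % 3 == 0 else "frameshift" for g in gap_lengths]

labels = classify_indels([3, 1, 6, 2])
```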

  6. Determination of three steroidal saponins from Ophiopogon japonicus (Liliaceae) via high-performance liquid chromatography with mass spectrometry.

    PubMed

    Wang, Yongyi; Xu, Jinzhong; Qu, Haibin

    2013-01-01

    A simple and accurate analytical method was developed in this study for the simultaneous quantification of three steroidal saponins in the roots of Ophiopogon japonicus via high-performance liquid chromatography (HPLC) with mass spectrometry (MS). Separation was performed on a Tigerkin C(18) column and detection was performed by mass spectrometry. A mobile phase consisting of 0.02% formic acid in water (v/v) and 0.02% formic acid in acetonitrile (v/v) was used at a flow rate of 0.5 mL min(-1). The quantitative HPLC-MS method was validated for linearity, precision, repeatability, stability, recovery, and limits of detection and quantification. The developed method provides good linearity (r > 0.9993), intra- and inter-day precision (RSD < 4.18%), repeatability (RSD < 5.05%), stability (RSD < 2.08%), and recovery (93.82-102.84%) for the three steroidal saponins. It can be considered a suitable quality control method for O. japonicus.
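
    The validation figures quoted above (recovery and RSD) are simple to compute; a minimal sketch with made-up concentration values:

```python
from statistics import mean, stdev

def percent_recovery(measured, spiked):
    """Recovery (%): amount measured over amount spiked into the matrix."""
    return 100.0 * measured / spiked

def rsd_percent(values):
    """Relative standard deviation (%), the precision figure quoted above."""
    return 100.0 * stdev(values) / mean(values)

r = percent_recovery(9.5, 10.0)          # 95.0 %
p = rsd_percent([9.5, 9.8, 9.6, 9.7])    # about 1.3 %
```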

  7. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order (NNLO) accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO-accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO-accurate Born distributions. Since the Born kinematics is more complex than in the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  8. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance their discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.
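
    The equal error rate (EER) reported for both databases is the operating point where the false-accept and false-reject rates coincide; a minimal sketch with made-up match scores (higher score means a better match):

```python
def equal_error_rate(genuine, impostor):
    """Find the threshold where false-accept and false-reject rates cross.

    Scans candidate thresholds over all observed scores and returns the
    average of FAR and FRR at the point where they are closest.
    """
    best = (2.0, None)  # (|FAR - FRR|, EER estimate)
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)     # rejected genuines
        far = sum(i >= t for i in impostor) / len(impostor)  # accepted impostors
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

eer = equal_error_rate([0.9, 0.8, 0.4, 0.95], [0.1, 0.2, 0.6, 0.3])
```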

  9. Promising high monetary rewards for future task performance increases intermediate task performance.

    PubMed

    Zedelius, Claire M; Veling, Harm; Bijleveld, Erik; Aarts, Henk

    2012-01-01

    In everyday life contexts and work settings, monetary rewards are often contingent on future performance. Based on research showing that the anticipation of rewards causes improved task performance through enhanced task preparation, the present study tested the hypothesis that the promise of monetary rewards for future performance would not only increase future performance, but also performance on an unrewarded intermediate task. Participants performed an auditory Simon task in which they responded to two consecutive tones. While participants could earn high vs. low monetary rewards for fast responses to every second tone, their responses to the first tone were not rewarded. Moreover, we compared performance under conditions in which reward information could prompt strategic performance adjustments (i.e., when reward information was presented for a relatively long duration) to conditions preventing strategic performance adjustments (i.e., when reward information was presented very briefly). Results showed that high (vs. low) rewards sped up both rewarded and intermediate, unrewarded responses, and the effect was independent of the duration of reward presentation. Moreover, long presentation led to a speed-accuracy trade-off for both rewarded and unrewarded tones, whereas short presentation sped up responses to rewarded and unrewarded tones without this trade-off. These results suggest that high rewards for future performance boost intermediate performance due to enhanced task preparation, and they do so regardless of whether people respond to rewards in a strategic or non-strategic manner.

  11. Comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones.

    PubMed

    Carnes, Stephanie; O'Brien, Stacey; Szewczak, Angelica; Tremeau-Cayel, Lauriane; Rowe, Walter F; McCord, Bruce; Lurie, Ira S

    2017-09-01

    A comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones has been conducted. Nine different mixtures of bath salts were analyzed in this study. The three different chromatographic techniques were examined using a general set of controlled synthetic cathinones as well as a variety of other synthetic cathinones that exist as positional isomers. Overall, 35 different synthetic cathinones were analyzed. A variety of column types and chromatographic modes were examined for developing each separation. For the ultra high performance supercritical fluid chromatography separations, analyses were performed using a series of Torus and Trefoil columns with either ammonium formate or ammonium hydroxide as additives, and methanol, ethanol, or isopropanol organic solvents as modifiers. Ultra high performance liquid chromatographic separations were performed in both reversed phase and hydrophilic interaction chromatographic modes using SPP C18 and SPP HILIC columns. Gas chromatography separations were performed using an Elite-5MS capillary column. The orthogonality of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography was examined using principal component analysis. For the best overall separation of synthetic cathinones, the use of ultra high performance supercritical fluid chromatography in combination with gas chromatography is recommended. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on a wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
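
    The FDTD kernel that benefits from GPU parallelization is a pair of loops whose per-point updates are independent within each time step; a textbook 1D sketch in normalized units (not the paper's 2D compact wave-equation solver):

```python
import math

def fdtd_1d(steps, n=200, src=100, courant=0.5):
    """Leapfrog FDTD update of the 1D Maxwell curl equations (free space).

    Ez and Hy live on staggered grids; a soft Gaussian source drives Ez.
    This is the serial kernel; GPU versions parallelize the two inner
    loops, since each grid point is updated independently.
    """
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for i in range(n - 1):                   # update H from the curl of E
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n):                    # update E from the curl of H
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez

field = fdtd_1d(80)
```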

  13. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with rather low raw-material costs, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high-performance perovskite solar cells, with findings that benefit this unique materials/device system. Our recent progress lies in perovskite film formation, defect passivation, transport material design, and interface engineering for high-performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor-assisted solution process (VASP) and a moisture-assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on the design of the carrier transport materials and the electrodes, in combination with high-quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  14. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    ERIC Educational Resources Information Center

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  15. Accurate detection for a wide range of mutation and editing sites of microRNAs from small RNA high-throughput sequencing profiles

    PubMed Central

    Zheng, Yun; Ji, Bo; Song, Renhua; Wang, Shengpeng; Li, Ting; Zhang, Xiaotuo; Chen, Kun; Li, Tianqing; Li, Jinyan

    2016-01-01

    Various types of mutation and editing (M/E) events in microRNAs (miRNAs) can change the stabilities of pre-miRNAs and/or the complementarities between miRNAs and their targets. Small RNA (sRNA) high-throughput sequencing (HTS) profiles can contain many mutated and edited miRNAs. Systematic detection of miRNA mutation and editing sites from the huge volume of sRNA HTS profiles is computationally difficult, as both high sensitivity and a low false positive rate (FPR) are required. We propose a novel method (named MiRME) for accurate and fast detection of miRNA M/E sites using a progressive sequence alignment approach that refines sensitivity and improves the FPR step by step. From 70 sRNA HTS profiles with over 1.3 billion reads, MiRME has detected thousands of statistically significant M/E sites, including 3′-editing sites and 57 A-to-I editing sites (of which 32 are novel), as well as some putative non-canonical editing sites. We demonstrated that a few non-canonical editing sites did not result from mutations in the genome by integrating the analysis of genome HTS profiles of two human cell lines, suggesting the existence of new editing types that further diversify the functions of miRNAs. Compared with six existing studies or methods, MiRME has shown far superior performance for the identification and visualization of the M/E sites of miRNAs from the ever-increasing sRNA HTS profiles. PMID:27229138
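
    The canonical A-to-I signature (sequenced as A-to-G) can be flagged with a simple coverage-and-fraction filter. MiRME's actual statistical testing and progressive alignment are far more involved, so the function and thresholds below are illustrative only:

```python
def candidate_editing_sites(ref, read_columns, min_frac=0.1, min_cov=10):
    """Flag reference A positions where a fraction of reads show G.

    A-to-I editing is read out as A-to-G by the sequencer, so an excess
    of G reads over a reference A is the canonical editing signature.
    `read_columns[i]` is the list of bases observed at reference
    position i across all aligned reads.
    """
    sites = []
    for i, base in enumerate(ref):
        col = read_columns[i]
        if base == "A" and len(col) >= min_cov:
            frac_g = col.count("G") / len(col)
            if frac_g >= min_frac:
                sites.append((i, frac_g))
    return sites

cols = [["A"] * 12, ["A"] * 9 + ["G"] * 3, ["C"] * 12]
sites = candidate_editing_sites("AAC", cols)  # position 1 edited in 25% of reads
```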

  16. [Determination of sulpride in human plasma by high performance liquid chromatography].

    PubMed

    Yu, X; Luo, Z; Tang, J; Yu, P

    1997-11-01

    This paper describes a reliable method for the pharmacokinetic study of Sulpride in human plasma by reversed-phase high performance liquid chromatography. To compensate for the loss of Sulpride during the extraction procedure, we used an internal standard very similar in chemical structure and UV absorbance to Sulpride. The mobile phase was methanol-water-acetic acid (60:30:1) at a flow rate of 1.2 mL/min. A UV detector was used at 290 nm. The linear range was 5-100 mg/L and the detection limit was 1.0 mg/L. The recovery and RSD were 97.95%-99.96% and 2.6%-5.1%, respectively. The results showed that this method is sensitive and accurate, making the pharmacokinetic study of Sulpride possible. If the concentration is too low to be detected by the UV monitor, a fluorescence detector can be used with the excitation wavelength at 299 nm and emission at 342 nm. We analyzed plasma samples from 30 day-treated psychotic patients and obtained satisfactory results.

  17. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multi-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954

  18. Accurate measurement of dispersion data through short and narrow tubes used in very high-pressure liquid chromatography.

    PubMed

    Gritti, Fabrice; McDonald, Thomas; Gilar, Martin

    2015-09-04

    An original method is proposed for the accurate and reproducible measurement of the time-based dispersion properties of short (L < 50 cm) and narrow (rc < 50 μm) tubes at mobile phase flow rates typically used in very high-pressure liquid chromatography (vHPLC). Such tubes are used to minimize sample dispersion in vHPLC; however, their dispersion characteristics cannot be accurately measured at such flow rates because of the dispersion contributions of the vHPLC injector and detector. It is shown that using longer and wider tubes (>10 μL) enables a reliable measurement of the dispersion data. We confirmed that the dimensionless plot of the reduced dispersion coefficient versus the reduced linear velocity (Peclet number) depends on the aspect ratio, L/rc, of the tube and, unexpectedly, also on the diffusion coefficient of the analyte. This dimensionless plot can easily be obtained for a large-volume tube that has the same aspect ratio as the short and narrow tube, for the same diffusion coefficient. The dispersion data for the small-volume tube are then directly extrapolated from this plot. For instance, it is found that the maximum volume variances of 75 μm × 30.5 cm and 100 μm × 30.5 cm prototype finger-tightened connecting tubes are 0.10 and 0.30 μL², respectively, with an accuracy of a few percent and a precision better than seven percent. Copyright © 2015 Elsevier B.V. All rights reserved.
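
    The dependence of tube dispersion on the Peclet number follows Taylor-Aris theory; a sketch of the classical open-tube relation (the paper's dimensionless extrapolation procedure goes beyond this textbook formula, and the example numbers are illustrative):

```python
def taylor_aris_dispersion(D_m, radius_m, velocity_m_s):
    """Effective axial dispersion coefficient in laminar tube flow.

    Classical Taylor-Aris result: D* = D * (1 + Pe**2 / 48), with the
    Peclet number Pe = u * r / D. Valid for tubes long enough that
    radial diffusion equilibrates the parabolic flow profile.
    """
    pe = velocity_m_s * radius_m / D_m
    return D_m * (1.0 + pe ** 2 / 48.0)

# Illustrative values: a 50 um i.d. tube (r = 25 um), u = 2 mm/s,
# and a small-molecule diffusivity D = 1e-9 m^2/s give Pe = 50.
D_eff = taylor_aris_dispersion(1e-9, 25e-6, 2e-3)
```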

  19. High-performance liquid chromatography of oligoguanylates at high pH

    NASA Technical Reports Server (NTRS)

    Stribling, R.; Deamer, D. (Principal Investigator)

    1991-01-01

    Because of the stable self-structures formed by oligomers of guanosine, standard high-performance liquid chromatography techniques for oligonucleotide fractionation are not applicable. Previously, oligoguanylate separations have been carried out at pH 12 using RPC-5 as the packing material. While RPC-5 provides excellent separations, it has several limitations, including the lack of a commercially available source. This report describes a new anion-exchange high-performance liquid chromatography method using HEMA-IEC BIO Q, which successfully separates different forms of the guanosine monomer as well as longer oligoguanylates. Its reproducibility and stability at high pH suggest a versatile role for this material.

  20. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of Configuration Aerodynamics (transonic and supersonic cruise drag, prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented, along with executive summaries for all the Aerodynamic Performance technology areas.

  1. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  2. Performance analysis of high-concentrated multi-junction solar cells in hot climate

    NASA Astrophysics Data System (ADS)

    Ghoneim, Adel A.; Kandil, Kandil M.; Alzanki, Talal H.; Alenezi, Mohammad R.

    2018-03-01

    Multi-junction concentrator solar cells are a promising technology as they can fulfill the increasing energy demand with renewable sources. Focusing sunlight upon the aperture of multi-junction photovoltaic (PV) cells can generate much greater power densities than conventional PV cells. So, concentrated PV multi-junction solar cells offer a promising way towards achieving minimum cost per kilowatt-hour. However, several issues must be addressed before these cells are feasible for large-scale energy generation. In this work, a model is developed to analyze the impact of various atmospheric factors on concentrator PV performance. A single-diode equivalent circuit model is developed to examine multi-junction cell performance in hot weather conditions, considering the impacts of both temperature and concentration ratio. The impacts of spectral variations of irradiance on annual performance of various high-concentrated photovoltaic (HCPV) panels are examined, adapting spectra simulations using the SMARTS model. Also, the diode shunt resistance neglected in existing models is considered in the present model. The present results are validated against published measurements to within 2% accuracy. Present predictions show that the single-diode model considering the shunt resistance gives accurate and reliable results. Also, aerosol optical depth (AOD) and air mass are the most important atmospheric parameters, having a significant impact on HCPV cell performance. In addition, the electrical efficiency (η) is observed to increase with concentration up to a certain concentration ratio, after which it decreases. Finally, based on the model predictions, we conclude that the present model can be applied to examine HCPV cells' performance over a broad range of operating conditions.
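    The single-diode equation with a shunt resistance, as used in models of this kind, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; all parameter values below are made-up placeholders, and a simple fixed-point iteration stands in for whatever solver the authors used:

```python
import math

def single_diode_current(v, i_ph, i_0, r_s, r_sh, n, t_cell=298.15, iters=50):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the cell current I by fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t_cell / q                     # thermal voltage (~25.7 mV at 25 degC)
    i = i_ph                                # initial guess: the photocurrent
    for _ in range(iters):
        i = i_ph - i_0 * (math.exp((v + i * r_s) / (n * vt)) - 1) - (v + i * r_s) / r_sh
    return i

# Illustrative parameters (not fitted values from the study)
i_sc_like = single_diode_current(0.0, i_ph=5.0, i_0=1e-10, r_s=0.01, r_sh=100.0, n=1.2)
i_at_0v6  = single_diode_current(0.6, i_ph=5.0, i_0=1e-10, r_s=0.01, r_sh=100.0, n=1.2)
```

    Note how the shunt term (V + I·Rs)/Rsh, the quantity the abstract says is neglected in existing models, reduces the current even at short circuit.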

  3. High Efficiency, High Performance Clothes Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter Pescatore; Phil Carbone

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle through heat modulation.

  4. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with similar reflectance to water extents. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in operational land imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and also the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods performed differently across regions. Among the AI methods, PSO had the best performance, with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations for accurately and stably extracting water extents in Landsat imagery.
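    As a much-simplified illustration of the index-based approach, McFeeters' NDWI (one of the family of water spectral indices such studies compare) thresholds a normalized green/NIR ratio. The reflectance values and threshold below are invented for illustration, and the meta-heuristic step of searching for optimal band-combination coefficients is omitted:

```python
def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + eps)

def classify_water(green_band, nir_band, threshold=0.0):
    """Label pixels with NDWI above the threshold as water (1), else non-water (0)."""
    return [[1 if ndwi(g, n) > threshold else 0 for g, n in zip(g_row, n_row)]
            for g_row, n_row in zip(green_band, nir_band)]

# Toy 2x2 scene: left column water-like (green > NIR), right column land-like
green = [[0.30, 0.20], [0.28, 0.22]]
nir   = [[0.05, 0.40], [0.06, 0.35]]
labels = classify_water(green, nir)  # -> [[1, 0], [1, 0]]
```

    A meta-heuristic such as PSO would, in effect, search over the coefficients of a band combination like this one rather than using fixed bands and weights.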

  5. Performance Analysis of Two Early NACA High Speed Propellers with Application to Civil Tiltrotor Configurations

    NASA Technical Reports Server (NTRS)

    Harris, Franklin D.

    1996-01-01

    The helicopter industry is vigorously pursuing development of civil tiltrotors. One key to efficient high speed performance of this rotorcraft is prop-rotor performance. Of equal, if not greater, importance is assurance that the flight envelope is free of aeroelastic instabilities well beyond currently envisioned cruise speeds. This latter condition requires study at helical tip Mach numbers well in excess of 1.0. Two 1940's 'supersonic' propeller experiments conducted by NACA have provided an immensely valuable data bank with which to study prop-rotor behavior at transonic and supersonic helical tip Mach numbers. Very accurate 'blades alone' data were obtained by using a nearly infinite hub. Tabulated data were recreated from the many thrust and power figures and are included in two Appendices to this report. This data set is exceptionally well suited to re-evaluating classical blade element theories as well as evolving computational fluid dynamic (CFD) analyses. A limited comparison of one propeller's experimental results to a modern rotorcraft CFD code is made. This code, referred to as TURNS, gives very encouraging results. Detailed analysis of the performance data from both propellers is provided in Appendix A. This appendix quantifies the minimum power required to produce usable prop-rotor thrust. The dependence of minimum profile power on Reynolds number is quantified. First order compressibility power losses are quantified as well, and a first approximation to design airfoil thickness ratio to avoid compressibility losses is provided. Appendix A's results are applied to study high speed civil tiltrotor cruise performance. Predicted tiltrotor performance is compared to two turboprop commercial transports. The comparison shows that there is no fundamental aerodynamic reason why the rotorcraft industry could not develop civil tiltrotor aircraft which have competitive cruise performance with today's regional, turboprop airlines. Recommendations for future study are also provided.

  6. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, so-called Interior Computed Tomography (ICT), which may reduce dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, yielding benefits such as reduced scatter, lower system cost, and reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, still preserving sharp edges in the images.
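    The CT reconstruction itself is beyond a short sketch, but the core compressed-sensing idea that lets few-view data suffice (recovering a sparse signal from fewer measurements than unknowns) can be illustrated with plain iterative soft-thresholding (ISTA). This toy 1-D example is not the authors' algorithm; the problem sizes and seed are arbitrary:

```python
import random

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding, the proximal step for the L1 penalty."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, y, lam=0.01, iters=10000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    At = [list(col) for col in zip(*A)]
    step = 1.0 / sum(a * a for row in A for a in row)  # <= 1/||A||^2, a safe step size
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - vi for yi, vi in zip(y, mat_vec(A, x))]      # residual y - Ax
        g = mat_vec(At, r)                                      # gradient of the data term
        x = soft([xi + step * gi for xi, gi in zip(x, g)], step * lam)
    return x

random.seed(0)
A = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(6)]  # 6 measurements, 8 unknowns
x_true = [0.0] * 8
x_true[2] = 1.0                     # a 1-sparse "image"
y = mat_vec(A, x_true)              # underdetermined measurements
x_hat = ista(A, y)                  # sparse recovery despite fewer equations than unknowns
```

    In the CT setting the sparsifying penalty is typically a total-variation term on the image rather than a plain L1 norm on pixel values, but the few-view recovery principle is the same.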

  7. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. To understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386

  8. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed the new image registration framework consistently demonstrated more accurate registration results when compared to state-of-the-art. PMID:26552069

  9. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to state of the art.

  10. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate Alignment of Anatomical Structures.

    PubMed

    Irfanoglu, M Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B; Sadeghi, Neda; Thomas, Cibu P; Pierpaoli, Carlo

    2016-05-15

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations returning to the initial conditions, motor equivalent deviations dominated. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions whose purpose is to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311

  12. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Army Research Laboratory final report ARL-TR-6761, January 2014, by Stephen C. Tratz (Computational and Information Sciences Directorate): Accurate Arabic Script Language/Dialect Classification. Approved for public release.

  13. High Performance Pulse Tube Cryocoolers

    NASA Astrophysics Data System (ADS)

    Olson, J. R.; Roth, E.; Champagne, P.; Evtimov, B.; Nast, T. C.

    2008-03-01

    Lockheed Martin's Advanced Technology Center has been developing pulse tube cryocoolers for more than ten years. Recent innovations include successful testing of four-stage coldheads, no-load temperature below 4 K, and the recent development of a high-efficiency compressor. This paper discusses the predicted performance of single and multiple stage pulse tube coldheads driven by our new 6 kg "M5Midi" compressor, which is capable of 90% efficiency with 200 W input power, and a maximum input power of 1000 W. This compressor retains the simplicity of earlier LM-ATC compressors: it has a moving magnet and an external electrical coil, minimizing organics in the working gas and requiring no electrical penetrations through the pressure wall. Motor losses were minimized during design, resulting in a simple, easily-manufactured compressor with state-of-the-art motor efficiency. The predicted cryocooler performance is presented as simple formulae, allowing an engineer to include the impact of a highly-optimized cryocooler into a full system analysis. Performance is given as a function of the heat rejection temperature and the cold tip temperatures and cooling loads.

  14. High performance HRM: NHS employee perspectives.

    PubMed

    Hyde, Paula; Sparrow, Paul; Boaden, Ruth; Harris, Claire

    2013-01-01

    The purpose of this paper is to examine National Health Service (NHS) employee perspectives of how high performance human resource (HR) practices contribute to their performance. The paper draws on an extensive qualitative study of the NHS. A novel two-part method was used; the first part used focus group data from managers to identify high-performance HR practices specific to the NHS. Employees then conducted a card-sort exercise where they were asked how or whether the practices related to each other and how each practice affected their work. In total, 11 high performance HR practices relevant to the NHS were identified. Also identified were four reactions to a range of HR practices, which the authors developed into a typology according to anticipated beneficiaries (personal gain, organisation gain, both gain, and no-one gains). Employees were able to form their own patterns (mental models) of performance contribution for a range of HR practices (60 interviewees produced 91 groupings). These groupings indicated three bundles particular to the NHS (professional development, employee contribution and NHS deal). These mental models indicate employee perceptions about how health services are organised and delivered in the NHS and illustrate the extant mental models of health care workers. As health services are rearranged and financial pressures begin to bite, these mental models will affect employee reactions to changes both positively and negatively. The novel method allows for identification of mental models that explain how NHS workers understand service delivery. It also delineates the complex and varied relationships between HR practices and individual performance.

  15. High skin temperature and hypohydration impair aerobic performance.

    PubMed

    Sawka, Michael N; Cheuvront, Samuel N; Kenefick, Robert W

    2012-03-01

    This paper reviews the roles of hot skin (>35°C) and body water deficits (>2% body mass; hypohydration) in impairing submaximal aerobic performance. Hot skin is associated with high skin blood flow requirements and hypohydration is associated with reduced cardiac filling, both of which act to reduce aerobic reserve. In euhydrated subjects, hot skin alone (with a modest core temperature elevation) impairs submaximal aerobic performance. Conversely, aerobic performance is sustained with core temperatures >40°C if skin temperatures are cool-warm when euhydrated. No study has demonstrated that high core temperature (∼40°C) alone, without coexisting hot skin, will impair aerobic performance. In hypohydrated subjects, aerobic performance begins to be impaired when skin temperatures exceed 27°C, and even warmer skin exacerbates the impairment (-1.5% for each additional 1°C of skin temperature). We conclude that hot skin (high skin blood flow requirements from narrow skin temperature to core temperature gradients), not high core temperature, is the 'primary' factor impairing aerobic exercise performance when euhydrated and that hypohydration exacerbates this effect.
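    The review's rule of thumb for hypohydrated subjects (impairment beginning above ~27°C skin temperature, roughly -1.5% per additional 1°C) reduces to simple arithmetic; this helper is only a literal reading of those two numbers, not a validated physiological model:

```python
def aerobic_impairment_pct(skin_temp_c):
    """Approximate submaximal aerobic performance decrement (in percent) for a
    hypohydrated subject: none below 27 degC skin temperature, then ~1.5% per degC."""
    return max(0.0, (skin_temp_c - 27.0) * 1.5)

aerobic_impairment_pct(36.0)  # -> 13.5 (percent decrement at 36 degC skin temperature)
aerobic_impairment_pct(25.0)  # -> 0.0  (below the ~27 degC onset)
```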

  16. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be measured.
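    The relation such algorithms invert is that a single-axis sensor at position r_i with axis d_i reads d_i · (a + α × r_i + ω × (ω × r_i)). Once ω is treated as known (which is what a finite-difference linearization of the centripetal term allows), recovering the linear acceleration a and angular acceleration α from six sensors is a 6×6 linear solve. The sensor layout and values below are invented for illustration and are deliberately a simple arrangement, not the broad arrangements the paper handles:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [vr - f * vc for vr, vc in zip(A[r], A[c])]
    return [A[i][n] for i in range(n)]

def recover(sensors, readings, omega):
    """Solve s_i = d_i.(a + alpha x r_i + omega x (omega x r_i)) for a and alpha,
    with omega taken as known (e.g. carried over from the previous time step)."""
    M, b = [], []
    for (r, d), s in zip(sensors, readings):
        centripetal = cross(omega, cross(omega, r))
        M.append(list(d) + list(cross(r, d)))   # d.a + (r x d).alpha
        b.append(s - dot(d, centripetal))        # move the known term to the RHS
    x = solve(M, b)
    return x[:3], x[3:]

# Six single-axis sensors: (position r, sensing direction d), illustrative layout
sensors = [((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0)), ((0, 0, 0), (0, 0, 1)),
           ((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (0, 0, 1)), ((0, 0, 1), (1, 0, 0))]
a_true, alpha_true, omega = (1.0, -2.0, 0.5), (0.3, 0.0, -1.0), (0.2, 0.1, 0.0)

def point_accel(r):  # rigid-body acceleration at sensor position r
    rot = cross(alpha_true, r)
    cen = cross(omega, cross(omega, r))
    return [ai + bi + ci for ai, bi, ci in zip(a_true, rot, cen)]

readings = [dot(d, point_accel(r)) for r, d in sensors]
a_hat, alpha_hat = recover(sensors, readings, omega)   # recovers a_true, alpha_true
```

    The identity d·(α × r) = α·(r × d) is what makes the system linear in the six unknowns.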

  17. Advanced Plasma Shape Control to Enable High-Performance Divertor Operation on NSTX-U

    NASA Astrophysics Data System (ADS)

    Vail, Patrick; Kolemen, Egemen; Boyer, Mark; Welander, Anders

    2017-10-01

    This work presents the development of an advanced framework for control of the global plasma shape and its application to a variety of shape control challenges on NSTX-U. Operations in high-performance plasma scenarios will require highly accurate and robust control of the plasma poloidal shape to accomplish such tasks as obtaining the strong shaping required for the avoidance of MHD instabilities and mitigating heat flux through regulation of the divertor magnetic geometry. The new control system employs a high-fidelity model of the toroidal current dynamics in NSTX-U poloidal field coils and conducting structures as well as a first-principles driven calculation of the axisymmetric plasma response. The model-based nature of the control system enables real-time optimization of controller parameters in response to time-varying plasma conditions and control objectives. The new control scheme is shown to enable stable and on-demand plasma operations in complicated magnetic geometries such as the snowflake divertor. A recently developed code that simulates the nonlinear evolution of the plasma equilibrium is used to demonstrate the capabilities of the designed shape controllers. Plans for future real-time implementations on NSTX-U and elsewhere are also presented. Supported by the US DOE under DE-AC02-09CH11466.

  18. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
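    The one-parameter idea can be sketched schematically: each family contributes a term log10(1 + δ·Z_i) built from its normalized allele-sharing score, and the LOD score is the maximum over the single parameter δ. This is only a simplified illustration in the spirit of the abstract; the exact per-family weights and constraints of the Kong-Cox model are not reproduced, and the sharing scores below are made up:

```python
import math

def lod(delta, z_scores):
    """Schematic one-parameter allele-sharing log-likelihood ratio (base 10):
    delta = 0 corresponds to no linkage, so lod(0, .) is exactly zero."""
    return sum(math.log10(1.0 + delta * z) for z in z_scores)

def max_lod(z_scores, grid=None):
    """Maximize the LOD over a grid of delta >= 0, keeping every family term positive."""
    grid = grid if grid is not None else [i / 1000.0 for i in range(0, 3300)]
    feasible = [d for d in grid if all(1.0 + d * z > 0.0 for z in z_scores)]
    best = max(feasible, key=lambda d: lod(d, z_scores))
    return best, lod(best, z_scores)

z = [0.8, 1.2, -0.3, 0.5, 0.9, 1.1]   # toy normalized sharing scores, one per family
delta_hat, lod_max = max_lod(z)        # excess sharing overall, so lod_max > 0
```

    Because the likelihood is exact in δ, a LOD of this form supports a proper likelihood-ratio test rather than the conservative approximations the abstract cautions against.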

  19. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  20. A highly sensitive and accurate gene expression analysis by sequencing ("bead-seq") for a single cell.

    PubMed

    Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki

    2015-02-15

    Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Determination of Decabrominated Diphenyl Ether in Soils by Soxhlet Extraction and High Performance Liquid Chromatography

    PubMed Central

    Yang, Xing-Jian; Dang, Zhi; Zhang, Fang-Li; Lin, Zhao-Ying; Zou, Meng-Yao; Tao, Xue-Qin; Lu, Gui-Ning

    2013-01-01

    This study describes the development of a method based on Soxhlet extraction combined with high performance liquid chromatography (Soxhlet-HPLC) for the accurate detection of BDE-209 in soils. The solvent effect of working standard solutions in HPLC was examined. Results showed that a 1:1 mixture of methanol and acetone was optimal: it completely dissolved the BDE-209 in environmental samples and avoided both the decrease in peak area and the peak deformation of BDE-209 in HPLC. A preliminary experiment was conducted on spiked grassland soil (1 μg/g) to validate the feasibility of the method. The method produced reliable reproducibility (simulated soils, n = 4, RSD 1.0%) and was further verified by the analysis of e-waste-contaminated soils (RSD range 5.9-11.4%). The contamination level of BDE-209 at the burning site was consistent with a previous study of Longtang town but lower than that of Guiyu town, and the higher concentration of BDE-209 in the paddy field mainly resulted from the long-standing disassembling area nearby. This accurate and fast method was successfully developed to extract and analyze BDE-209 in soil samples, showing its potential to replace GC for the determination of BDE-209 in soils. PMID:24302876


  2. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
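As a hedged illustration of the underlying computation (not the DistributedFBA.jl API), a single flux balance analysis is a linear program: maximize an objective flux v subject to steady-state mass balance S·v = 0 and per-reaction bounds. A toy three-reaction chain in Python with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 (uptake -> A), v2 (A -> B), v3 (B -> export).
# Rows of S are metabolites A and B; columns are reactions.
S = np.array([[1, -1,  0],   # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])  # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10
c = np.array([0, 0, -1])     # maximize v3 <=> minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v = res.x  # optimal flux distribution; v3 is limited by the uptake bound
```

Distributed variants such as the one described above solve many such LPs (e.g. one per reaction, as in flux variability analysis) in parallel across threads or nodes.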

  3. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.

  4. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures

    PubMed Central

    Irfanoglu, M. Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B.; Sadeghi, Neda; Thomas, Cibu P.; Pierpaoli, Carlo

    2016-01-01

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. PMID:26931817

  5. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels, with the four fundamentals of ¹²CH₄ reproduced with a root-mean-square error of 0.70 cm⁻¹. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values, although pure rotational energies display minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  6. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    NASA Astrophysics Data System (ADS)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and expensive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flooding, requires modelling the behavior of the different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, plays a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods able to manage and merge different kinds of data are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps

  7. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    PubMed Central

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7 – 11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  8. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing, precise time intervals must be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for 25 km of G.655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and is suited to a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
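As a rough plausibility check on the reported figure (the group index below is a typical assumed value, not taken from the paper), the one-way propagation time in fibre is simply t = n·L/c:

```python
C = 299_792_458.0   # vacuum speed of light, m/s
N_GROUP = 1.468     # assumed group index of single-mode fibre at 1550 nm
LENGTH = 25_000.0   # fibre length, m

one_way = N_GROUP * LENGTH / C  # one-way propagation time, seconds
# ~122 microseconds, close to the 125 us reported for 25 km of fibre
```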

  9. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  10. High-performance analysis of filtered semantic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Fox, Armando; Gilbert, John R.

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.

  11. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery were discussed. Resolution of problems in the experimental validation of the forces that influence rotordynamics was emphasized. Programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.

  12. [Determination of sugars, organic acids and alcohols in microbial consortium fermentation broth from cellulose using high performance liquid chromatography].

    PubMed

    Jiang, Yan; Fan, Guifang; Du, Ran; Li, Peipei; Jiang, Li

    2015-08-01

    A high performance liquid chromatographic method was established for the determination of metabolites (sugars, organic acids and alcohols) in microbial consortium fermentation broth from cellulose. Sulfate was first added to the samples to precipitate calcium ions in the microbial consortium culture medium and lower the pH of the solution to avoid the dissociation of organic acids; the filtrates were then effectively separated using high performance liquid chromatography. Cellobiose, glucose, ethanol, butanol, glycerol, acetic acid and butyric acid were quantitatively analyzed. The detection limits were in the range of 0.10-2.00 mg/L. The linear correlation coefficients were greater than 0.9996 in the range of 0.020 to 1.000 g/L. The recoveries were in the range of 85.41%-115.60%, with relative standard deviations of 0.22%-4.62% (n = 6). This method is accurate for the quantitative analysis of the alcohols, organic acids and saccharides in microbial consortium fermentation broth from cellulose.
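The validation figures quoted above (linearity, recovery, RSD) follow a standard recipe: fit a linear calibration curve, back-calculate spiked samples, and report recovery and relative standard deviation. A minimal sketch with hypothetical calibration and spike-recovery data (not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration data for one analyte:
# standard concentrations (g/L) vs detector peak areas.
conc = np.array([0.020, 0.100, 0.250, 0.500, 1.000])
area = np.array([41.0, 205.0, 512.0, 1021.0, 2040.0])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]  # linear correlation coefficient

def quantify(peak_area):
    """Back-calculate concentration from a measured peak area."""
    return (peak_area - intercept) / slope

# Spike-recovery check: known amount added vs amounts found (n = 6).
spiked = 0.500
found = np.array([0.492, 0.507, 0.488, 0.501, 0.495, 0.510])
recovery = found.mean() / spiked * 100.0        # percent recovery
rsd = found.std(ddof=1) / found.mean() * 100.0  # relative std. deviation, %
```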

  13. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, natural resources have declined, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information and reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms and to evaluate the resulting thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain) was chosen, Teide National Park, and WorldView-2 high resolution imagery was employed. The classes of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen to improve the data quality with the goal of analyzing the vegetation classes. Different classification algorithms were then applied at the pixel-based and object-based level, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier at the object-based level to the Weighted Wavelet `à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and small size of the species; accurate thematic maps are thus important for further studies in the management and conservation of natural resources.

  14. Parallel particle impactor - novel size-selective particle sampler for accurate fractioning of inhalable particles

    NASA Astrophysics Data System (ADS)

    Trakumas, S.; Salter, E.

    2009-02-01

    Adverse health effects due to exposure to airborne particles are associated with particle deposition within the human respiratory tract. Particle size, shape, chemical composition, and the individual physiological characteristics of each person determine to what depth inhaled particles may penetrate and deposit within the respiratory tract. Various particle inertial classification devices are available to fractionate airborne particles according to their aerodynamic size to approximate particle penetration through the human respiratory tract. Cyclones are most often used to sample thoracic or respirable fractions of inhaled particles. Extensive studies of different cyclonic samplers have shown, however, that the sampling characteristics of cyclones do not follow the entire selected convention accurately. In the search for a more accurate way to assess worker exposure to different fractions of inhaled dust, a novel sampler comprising several inertial impactors arranged in parallel was designed and tested. The new design includes a number of separated impactors arranged in parallel. Prototypes of respirable and thoracic samplers each comprising four impactors arranged in parallel were manufactured and tested. Results indicated that the prototype samplers followed closely the penetration characteristics for which they were designed. The new samplers were found to perform similarly for liquid and solid test particles; penetration characteristics remained unchanged even after prolonged exposure to coal mine dust at high concentration. The new parallel impactor design can be applied to approximate any monotonically decreasing penetration curve at a selected flow rate. Personal-size samplers that operate at a few L/min as well as area samplers that operate at higher flow rates can be made based on the suggested design. Performance of such samplers can be predicted with high accuracy employing well-established impaction theory.
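The parallel-impactor idea above can be sketched numerically: if each of N impactors receives an equal share of the flow and (ideally) removes all particles larger than its cutpoint, the combined penetration is a staircase with N equal steps that can be chosen to follow any monotonically decreasing target curve. A minimal sketch with hypothetical cutpoints (idealized sharp-cutoff impactors, not the prototype's measured characteristics):

```python
def parallel_penetration(d_um, cutpoints_um):
    """Penetration of an idealized parallel-impactor sampler.

    d_um          : particle aerodynamic diameter, micrometres
    cutpoints_um  : d50 cutpoint of each parallel impactor stage

    Each stage passes the particle only if it is smaller than that
    stage's cutpoint; with equal flow splits, total penetration is
    the fraction of stages that pass it.
    """
    n = len(cutpoints_um)
    return sum(1.0 for d50 in cutpoints_um if d_um < d50) / n

# Four-stage prototype with hypothetical cutpoints (micrometres):
cuts = [2.0, 4.0, 6.0, 8.0]
```

For example, a 1 μm particle passes all four stages (penetration 1.0), a 5 μm particle passes two of four (0.5), and a 9 μm particle passes none (0.0).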

  15. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
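The calibration-transport principle above can be illustrated in a simplified setting. The sketch below assumes the special case of a frame pair with an exact one-pixel horizontal shift and a calibrated first column standing in for the perimeter detectors (the actual algorithm handles arbitrary global motion); the scene terms cancel in the difference of shifted observations, leaving only bias differences to chain across the array:

```python
import numpy as np

def propagate_bias(y1, y2, b_col0):
    """Transport an absolute bias calibration across an array.

    y1, y2 : observed frames, model y = scene + bias, where y2 sees
             the scene of y1 shifted one column to the left
    b_col0 : absolutely calibrated bias of column 0 ("perimeter")

    Identity used: y1[:, j+1] - y2[:, j] = b[:, j+1] - b[:, j],
    since the (identical) scene values cancel.
    """
    rows, cols = y1.shape
    b = np.zeros((rows, cols))
    b[:, 0] = b_col0
    for j in range(cols - 1):
        b[:, j + 1] = b[:, j] + (y1[:, j + 1] - y2[:, j])
    return b

# Synthetic check: recover a random bias pattern exactly.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 100.0, size=(4, 7))
bias = rng.uniform(-5.0, 5.0, size=(4, 6))
y1 = scene[:, :6] + bias   # frame 1
y2 = scene[:, 1:7] + bias  # frame 2: scene shifted one column left
estimate = propagate_bias(y1, y2, bias[:, 0])
```

With noise-free synthetic frames the propagated estimate matches the true bias exactly; in practice, noise accumulates along the chain, which is why the real algorithm averages over many frame pairs.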

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  1. Coupled optical and thermal detailed simulations for the accurate evaluation and performance improvement of molten salts solar towers

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Mutuberria, Amaia; Palacin, Luis G.; Sanz, Javier L.; Pereira, Daniel; Bernardos, Ana; Sanchez, Marcelino; Rocha, Alberto R.

    2017-06-01

    The National Renewable Energy Centre of Spain, CENER, and the Technology & Innovation area of ACS Cobra, as a result of their long-term expertise in the CSP field, have developed a high-quality, highly detailed optical and thermal simulation software for the accurate evaluation of molten salts solar towers. The main purpose of this software is to take a step forward in the state of the art of solar tower simulation programs. Generally, these programs deal with the most critical systems of such plants, i.e. the solar field and the receiver, on an independent basis. Therefore, they typically neglect relevant aspects of plant operation such as heliostat aiming strategies, solar flux shapes on the receiver, material physical and operational limitations, and transient processes such as preheating and secure cloud-passing operating modes. The modelling approach implemented in the developed program consists of effectively coupling detailed optical simulations of the heliostat field with detailed, fully transient thermal simulations of the molten salts tube-based external receiver. The optical model is based on an accurate Monte Carlo ray-tracing method which solves the complete solar field by simulating each of the heliostats according to their specific layout in the field. On the thermal side, the tube-based cylindrical external receiver of a molten salts solar tower is modelled assuming one representative tube per panel, implementing the specific connection layout of the panels as well as the internal receiver pipes. Each tube is longitudinally discretized, and the transient energy and mass balances in the temperature-dependent molten salts and steel tube models are solved using a one-dimensional radial heat transfer model. The thermal model is completed with a detailed control and operation strategy module, able to represent the appropriate operation of the plant. An integration framework has been

  2. 1998 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    McMillin, S. Naomi (Editor)

    1999-01-01

    NASA's High-Speed Research Program sponsored the 1998 Aerodynamic Performance Technical Review on February 9-13, in Los Angeles, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, and Flight Controls. The review objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT simulation results were presented along with executive summaries for all the Aerodynamic Performance technology areas. The HSR Aerodynamic Performance Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas of the HSR Program.

  3. Analysis of lipoprotein profiles of healthy cats by gel-permeation high-performance liquid chromatography

    PubMed Central

    MIZUTANI, Hisashi; SAKO, Toshinori; OKUDA, Hiroko; ARAI, Nobuaki; KURIYAMA, Koji; MORI, Akihiro; YOSHIMURA, Itaru; KOYAMA, Hidekazu

    2016-01-01

    Density gradient ultracentrifugation (DGUC) and gel electrophoresis are conventionally used to obtain lipoprotein profiles of animals. We recently applied high-performance liquid chromatography with a gel permeation column (GP-HPLC) and an on-line dual enzymatic system to dogs for lipoprotein profile analysis. We compared the GP-HPLC with DGUC as a method to obtain a feline lipoprotein profile. The lipoprotein profiles showed large and small peaks, which corresponded to high-density lipoprotein (HDL) and low-density lipoprotein (LDL), respectively, whereas very low-density lipoprotein (VLDL) and chylomicron (CM) were only marginally detected. This profile was very similar to that of dogs reported previously. Healthy cats also had a small amount of cholesterol-rich particles distinct from the normal LDL or HDL profile. There was no difference in lipoprotein profiles between the sexes, but males had a significantly larger LDL particle size (P=0.015). This study shows the feasibility of GP-HPLC for obtaining accurate lipoprotein profiles with small sample volumes and provides valuable reference data for healthy cats that should facilitate diagnoses. PMID:27170431

  4. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement in amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  5. New analytical model for the ozone electronic ground state potential surface and accurate ab initio vibrational predictions at high energy range.

    PubMed

    Tyuterev, Vladimir G; Kochanov, Roman V; Tashkun, Sergey A; Holka, Filip; Szalay, Péter G

    2013-10-07

    An accurate description of the complicated shape of the potential energy surface (PES) and that of the highly excited vibration states is of crucial importance for various unsolved issues in the spectroscopy and dynamics of ozone and remains a challenge for the theory. In this work a new analytical representation is proposed for the PES of the ground electronic state of the ozone molecule in the range covering the main potential well and the transition state towards the dissociation. This model accounts for particular features specific to the ozone PES for large variations of nuclear displacements along the minimum energy path. The impact of the shape of the PES near the transition state (existence of the "reef structure") on vibration energy levels was studied for the first time. The major purpose of this work was to provide accurate theoretical predictions for ozone vibrational band centres at the energy range near the dissociation threshold, which would be helpful for understanding the very complicated high-resolution spectra and its analyses currently in progress. Extended ab initio electronic structure calculations were carried out enabling the determination of the parameters of a minimum energy path PES model resulting in a new set of theoretical vibrational levels of ozone. A comparison with recent high-resolution spectroscopic data on the vibrational levels gives the root-mean-square deviations below 1 cm(-1) for ozone band centres up to 90% of the dissociation energy. New ab initio vibrational predictions represent a significant improvement with respect to all previously available calculations.

  6. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  7. Quantum Tunneling Affects Engine Performance.

    PubMed

    Som, Sibendu; Liu, Wei; Zhou, Dingyu D Y; Magnotti, Gina M; Sivaramakrishnan, Raghu; Longman, Douglas E; Skodje, Rex T; Davis, Michael J

    2013-06-20

    We study the role of individual reaction rates on engine performance, with an emphasis on the contribution of quantum tunneling. It is demonstrated that the effect of quantum tunneling corrections for the reaction HO2 + HO2 = H2O2 + O2 can have a noticeable impact on the performance of a high-fidelity model of a compression-ignition (e.g., diesel) engine, and that an accurate prediction of ignition delay time for the engine model requires an accurate estimation of the tunneling correction for this reaction. The three-dimensional model includes detailed descriptions of the chemistry of a surrogate for a biodiesel fuel, as well as all the features of the engine, such as the liquid fuel spray and turbulence. This study is part of a larger investigation of how the features of the dynamics and potential energy surfaces of key reactions, as well as their reaction rate uncertainties, affect engine performance, and results in these directions are also presented here.
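
    The study above hinges on how strongly a quantum tunneling correction scales the HO2 + HO2 rate. The paper's own correction is not reproduced here; as a generic illustration of how a multiplicative tunneling factor grows with barrier frequency and falls with temperature, the lowest-order Wigner correction can be sketched (the 1000 cm^-1 imaginary frequency below is hypothetical, not taken from the study):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e10    # speed of light, cm/s

def wigner_correction(imag_wavenumber_cm, temperature_k):
    """Lowest-order Wigner tunneling correction kappa(T).

    kappa = 1 + (1/24) * (h c nu / (kB T))^2, where nu is the
    magnitude of the imaginary barrier frequency in cm^-1.
    """
    x = H * C * imag_wavenumber_cm / (KB * temperature_k)
    return 1.0 + x * x / 24.0

# Hypothetical 1000 cm^-1 barrier frequency at 1000 K:
kappa = wigner_correction(1000.0, 1000.0)
```

    The correction multiplies the classical rate constant; note that it grows rapidly as temperature drops, which is why low-temperature ignition chemistry is especially sensitive to tunneling.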

  8. ICSH recommendations for assessing automated high-performance liquid chromatography and capillary electrophoresis equipment for the quantitation of HbA2.

    PubMed

    Stephens, A D; Colah, R; Fucharoen, S; Hoyer, J; Keren, D; McFarlane, A; Perrett, D; Wild, B J

    2015-10-01

    Automated high-performance liquid chromatography and capillary electrophoresis are used to quantitate the proportion of hemoglobin A2 (HbA2) in blood samples in order to enable screening and diagnosis of carriers of β-thalassemia. Since there is only a very small difference in HbA2 levels between people who are carriers and people who are not, such analyses need to be both precise and accurate. This paper examines the different parameters of such equipment and discusses how they should be assessed. © 2015 John Wiley & Sons Ltd.

  9. Explicitly correlated benchmark calculations on C8H8 isomer energy separations: how accurate are DFT, double-hybrid, and composite ab initio procedures?

    NASA Astrophysics Data System (ADS)

    Karton, Amir; Martin, Jan M. L.

    2012-10-01

    Accurate isomerization energies are obtained for a set of 45 C8H8 isomers by means of the high-level, ab initio W1-F12 thermochemical protocol. The 45 isomers involve a range of hydrocarbon functional groups, including (linear and cyclic) polyacetylene, polyyne, and cumulene moieties, as well as aromatic, anti-aromatic, and highly-strained rings. Performance of a variety of DFT functionals for the isomerization energies is evaluated. This proves to be a challenging test: only six of the 56 tested functionals attain root mean square deviations (RMSDs) below 3 kcal mol-1 (the performance of MP2), namely: 2.9 (B972-D), 2.8 (PW6B95), 2.7 (B3PW91-D), 2.2 (PWPB95-D3), 2.1 (ωB97X-D), and 1.2 (DSD-PBEP86) kcal mol-1. Isomers involving highly-strained fused rings or long cumulenic chains provide a 'torture test' for most functionals. Finally, we evaluate the performance of composite procedures (e.g. G4, G4(MP2), CBS-QB3, and CBS-APNO), as well as that of standard ab initio procedures (e.g. MP2, SCS-MP2, MP4, CCSD, and SCS-CCSD). Both connected triples and post-MP4 singles and doubles are important for accurate results. SCS-MP2 actually outperforms MP4(SDQ) for this problem, while SCS-MP3 yields performance similar to CCSD and slightly outperforms MP4. All the tested empirical composite procedures show excellent performance with RMSDs below 1 kcal mol-1.
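
    The functional ranking above is based on root-mean-square deviations of each method's 45 isomerization energies from the W1-F12 reference values. A minimal sketch of that metric (the signed errors below are hypothetical, not values from the study):

```python
import math

def rmsd(errors):
    """Root-mean-square deviation of a list of signed errors (kcal/mol)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical signed errors of one functional vs. the W1-F12 reference:
errors = [0.8, -1.5, 2.1, -0.4, 1.1]
print(round(rmsd(errors), 2))  # → 1.32
```

    Because errors are squared, a single badly described isomer (e.g. a long cumulene) can dominate a functional's RMSD, which is why the strained and cumulenic cases act as the 'torture test' noted above.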

  10. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  11. Improved confinement in highly powered high performance scenarios on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrie, Thomas W.; Osborne, Thomas; Fenstermacher, Max E.

    DIII-D has recently demonstrated improved energy confinement by injecting neutral deuterium gas into high performance near-double null divertor (DND) plasmas during high power operation. Representative parameters for these plasmas are: q95 = 6, PIN up to 15 MW, H98 = 1.4–1.8, and βN = 2.5–4.0. The ion B×∇B direction is away from the primary X-point. While plasma conditions at lower to moderate power input (e.g., 11 MW) are shown to be favorable to successful puff-and-pump radiating divertor applications, particularly when using argon seeds, plasma behavior at higher powers (e.g., ≥14 MW) may make successful puff-and-pump operation more problematic. In contrast to lower powered high performance plasmas, both τE and βN in the high power cases (≥14 MW) increased and ELM frequency decreased as density was raised by deuterium gas injection. Improved performance in the higher power plasmas was tied to higher pedestal pressure, which according to peeling-ballooning mode stability analysis using the ELITE code could increase with density along the kink/peeling stability threshold, while the pedestal pressure gradient in the lower power discharges was limited by the ballooning threshold. This resulted in improved fueling efficiency and ≈10% higher τE and βN than is normally observed in comparable high performance plasmas on DIII-D. Applying the puff-and-pump radiating divertor approach at moderate versus high power input is shown to result in a much different evolution in core and pedestal plasma behavior. In conclusion, we find that injecting deuterium gas into these highly powered DND plasmas may open up a new avenue for achieving elevated plasma performance, including better fueling, but the resulting higher density may also complicate application of a radiating divertor approach to heat flux reduction in present-day tokamaks, if scenarios involving second-harmonic electron cyclotron heating are used.

  12. Improved confinement in highly powered high performance scenarios on DIII-D

    DOE PAGES

    Petrie, Thomas W.; Osborne, Thomas; Fenstermacher, Max E.; ...

    2017-06-09

    DIII-D has recently demonstrated improved energy confinement by injecting neutral deuterium gas into high performance near-double null divertor (DND) plasmas during high power operation. Representative parameters for these plasmas are: q95 = 6, PIN up to 15 MW, H98 = 1.4–1.8, and βN = 2.5–4.0. The ion B×∇B direction is away from the primary X-point. While plasma conditions at lower to moderate power input (e.g., 11 MW) are shown to be favorable to successful puff-and-pump radiating divertor applications, particularly when using argon seeds, plasma behavior at higher powers (e.g., ≥14 MW) may make successful puff-and-pump operation more problematic. In contrast to lower powered high performance plasmas, both τE and βN in the high power cases (≥14 MW) increased and ELM frequency decreased as density was raised by deuterium gas injection. Improved performance in the higher power plasmas was tied to higher pedestal pressure, which according to peeling-ballooning mode stability analysis using the ELITE code could increase with density along the kink/peeling stability threshold, while the pedestal pressure gradient in the lower power discharges was limited by the ballooning threshold. This resulted in improved fueling efficiency and ≈10% higher τE and βN than is normally observed in comparable high performance plasmas on DIII-D. Applying the puff-and-pump radiating divertor approach at moderate versus high power input is shown to result in a much different evolution in core and pedestal plasma behavior. In conclusion, we find that injecting deuterium gas into these highly powered DND plasmas may open up a new avenue for achieving elevated plasma performance, including better fueling, but the resulting higher density may also complicate application of a radiating divertor approach to heat flux reduction in present-day tokamaks, if scenarios involving second-harmonic electron cyclotron heating are used.

  13. High-Performance Bipropellant Engine

    NASA Technical Reports Server (NTRS)

    Biaglow, James A.; Schneider, Steven J.

    1999-01-01

    TRW, under contract to the NASA Lewis Research Center, has successfully completed over 10 000 sec of testing of a rhenium thrust chamber manufactured via a new-generation powder metallurgy. High performance was achieved for two different propellants, N2O4- N2H4 and N2O4 -MMH. TRW conducted 44 tests with N2O4-N2H4, accumulating 5230 sec of operating time with maximum burn times of 600 sec and a specific impulse Isp of 333 sec. Seventeen tests were conducted with N2O4-MMH for an additional 4789 sec and a maximum Isp of 324 sec, with a maximum firing duration of 700 sec. Together, the 61 tests totalled 10 019 sec of operating time, with the chamber remaining in excellent condition. Of these tests, 11 lasted 600 to 700 sec. The performance of radiation-cooled rocket engines is limited by their operating temperature. For the past two to three decades, the majority of radiation-cooled rockets were composed of a high-temperature niobium alloy (C103) with a disilicide oxide coating (R512) for oxidation resistance. The R512 coating practically limits the operating temperature to 1370 C. For the Earth-storable bipropellants commonly used in satellite and spacecraft propulsion systems, a significant amount of fuel film cooling is needed. The large film-cooling requirement extracts a large penalty in performance from incomplete mixing and combustion. A material system with a higher temperature capability has been matured to the point where engines are being readied for flight, particularly the 100-lb-thrust class engine. This system has powder rhenium (Re) as a substrate material with an iridium (Ir) oxidation-resistant coating. Again, the operating temperature is limited by the coating; however, Ir is capable of long-life operation at 2200 C. For Earth-storable bipropellants, this allows for the virtual elimination of fuel film cooling (some film cooling is used for thermal control of the head end). This has resulted in significant increases in specific impulse performance

  14. Influence of obesity on accurate and rapid arm movement performed from a standing posture.

    PubMed

    Berrigan, F; Simoneau, M; Tremblay, A; Hue, O; Teasdale, N

    2006-12-01

    Obesity yields a decreased postural stability. The potentially negative impact of obesity on the control of upper limb movements, however, has not been documented. This study sought to examine if obesity imposes an additional balance control constraint limiting the speed and accuracy with which an upper limb goal-directed movement performed from an upright standing position can be executed. Eight healthy lean subjects (body mass index (BMI) between 20.9 and 25.0 kg/m(2)) and nine healthy obese subjects (BMI between 30.5 and 48.6 kg/m(2)) pointed to a target located in front of them from an upright standing posture. The task was to aim at the target as fast and as precisely as possible after an auditory signal. The difficulty of the task was varied by using different target sizes (0.5, 1.0, 2.5 and 5.0 cm width). Hand movement time (MT) and velocity profiles were measured to quantify the aiming. Centre of pressure and segmental kinematics were analysed to document postural stability. When aiming, the forward centre of pressure (CP) displacement was greater for the obese group than for the normal BMI group (4.6 and 1.9 cm, respectively). For the obese group, a decrease in the target size was associated with an increase in backward CP displacement and CP peak speed whereas for the normal BMI group backward CP displacements and CP peak speed were about the same across all target sizes. Obese participants aimed at the target moving their whole body forward whereas the normal BMI subjects predominantly made an elbow extension and shoulder flexion. For both groups, MT increased with a decreasing target size. Compared with the normal BMI group, this effect was exacerbated for the obese group. For the two smallest targets, movements were on average 115 and 145 ms slower for the obese than for the normal BMI group, suggesting that obesity added a balance constraint and limited the speed with which an accurate movement could be done.
Obesity, because of its effects on the control

  15. Heart rate during basketball game play and volleyball drills accurately predicts oxygen uptake and energy expenditure.

    PubMed

    Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J

    2015-09-01

    There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD) and high-intensity interval (HI) exercise, as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3yrs) and volleyball drills (12 female; 20.8±1.0yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak=%HRmax x 1.008-17.17) or MOD (%VO2peak=%HRmax x 1.2-32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
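
    The two published equations map %HRmax directly to %VO2peak, from which VO2 follows given a measured VO2peak. A minimal sketch (function name mine; coefficients exactly as given in the abstract):

```python
def predict_pct_vo2peak(pct_hrmax, equation="END"):
    """Predict %VO2peak from %HRmax using the study's regression equations.

    END: %VO2peak = %HRmax * 1.008 - 17.17
    MOD: %VO2peak = %HRmax * 1.2   - 32
    """
    if equation == "END":
        return pct_hrmax * 1.008 - 17.17
    if equation == "MOD":
        return pct_hrmax * 1.2 - 32.0
    raise ValueError("equation must be 'END' or 'MOD'")

# At 80% of maximal heart rate:
print(round(predict_pct_vo2peak(80.0, "END"), 1))  # → 63.5
```

    Multiplying the predicted %VO2peak by an athlete's laboratory-measured VO2peak then yields absolute oxygen cost, from which energy expenditure can be estimated.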

  16. High-Quality Carbohydrates and Physical Performance

    PubMed Central

    Kanter, Mitch

    2018-01-01

    While all experts agreed that protein needs for performance are likely greater than believed in past generations, particularly for strength training athletes, and that dietary fat could sustain an active person through lower-intensity training bouts, current research still points to carbohydrate as an indispensable energy source for high-intensity performance. PMID:29449746

  17. High-wafer-yield, high-performance vertical cavity surface-emitting lasers

    NASA Astrophysics Data System (ADS)

    Li, Gabriel S.; Yuen, Wupen; Lim, Sui F.; Chang-Hasnain, Constance J.

    1996-04-01

    Vertical cavity surface-emitting lasers (VCSELs) with very low threshold current and voltage of 340 μA and 1.5 V are achieved. The wafers are grown by molecular beam epitaxy using a highly accurate, low-cost and versatile pre-growth calibration technique. One-hundred-percent VCSEL wafer yield is obtained. Low threshold current is achieved with a native-oxide-confined structure with excellent current confinement. Single transverse mode operation with stable, predetermined polarization direction up to 18 times threshold is also achieved, due to the stable index guiding provided by the structure. This is the highest value reported to date for VCSELs. We have established that p-contact annealing in these devices is crucial for low-voltage operation, contrary to the general belief. Uniform doping in the mirrors also appears not to be inferior to complicated doping engineering. With these design rules, very low threshold voltage VCSELs are achieved with very simple growth and fabrication steps.

  18. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  19. Highly curved image sensors: a practical approach for improved optical performance

    NASA Astrophysics Data System (ADS)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-01

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation and create a high-stress, stretch-dominated state. Our process creates a bridge between the high-precision, low-cost but planar CMOS process and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state-of-the-art image sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  20. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    DTIC Science & Technology

    2011-09-30

    Mobley, Curtis D., Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005. Related publications: EcoLight-S 1.0 Users' Guide and Technical Documentation, Sequoia Scientific, Inc., Bellevue, WA, 38 pages; Mobley, C. D., 2011, Fast light calculations

  1. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With availability of large-scale parallel platforms comprised of tens-of-thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated
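
    The pseudo-analytical approach described above parameterizes analytical per-phase cost models with measured runtimes. The authors' actual model is not reproduced here; as a hedged generic sketch of the idea, one can least-squares fit a simple T(p) = a/p + b·log2(p) + c model (serial work divided across p processes, a logarithmic reduction term, and a fixed overhead) to timing samples:

```python
import math

def fit_perf_model(samples):
    """Least-squares fit of T(p) ~ a/p + b*log2(p) + c.

    samples: list of (p, measured_time) pairs, p = process count.
    Returns (a, b, c). Normal equations are solved with a small
    Gaussian elimination to keep the sketch dependency-free.
    """
    rows = [[1.0 / p, math.log2(p), 1.0] for p, _ in samples]
    y = [t for _, t in samples]
    # Normal equations: (A^T A) x = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):  # forward elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for k in range(col, 4):
                m[r][k] -= f * m[col][k]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return tuple(x)

# Synthetic timings generated from a=100 s serial work, b=0.5, c=0.1 (illustrative):
samples = [(p, 100.0 / p + 0.5 * math.log2(p) + 0.1) for p in (1, 2, 4, 8, 16, 32)]
a, b, c = fit_perf_model(samples)
```

    Once fitted, such a model predicts runtimes at process counts that were never measured, and the relative sizes of the fitted terms point at the dominant bottleneck (compute versus communication versus overhead), mirroring the diagnostic role the abstract claims for its models.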

  2. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as the candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4 for more than 20 years. The poor performance in oxides is ascribed to the low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering and nanostructuring.

  3. An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions

    NASA Astrophysics Data System (ADS)

    Zahr, M. J.; Persson, P.-O.

    2018-07-01

    This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding Gibbs' phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs' phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately h / 2, whereas other popular discontinuity indicators fail to satisfy the latter. Another important contribution of this work is the observation that traditional reduced space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context since severe overshoot and
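
    The objective described above can be written schematically as follows (the notation is mine, not the paper's exact form):

```latex
f(\mathbf{u}, \mathbf{x}) \;=\;
  \big\| \mathbf{u} - \bar{\mathbf{u}} \big\|_2^2
  \;+\; \kappa\, R(\mathbf{x}),
\qquad \text{subject to } \mathbf{r}(\mathbf{u}, \mathbf{x}) = \mathbf{0},
```

    where u is the discrete solution, ū its element-wise average, x the mesh node coordinates, R a mesh distortion metric, κ a weighting parameter, and r the residual of the discretized conservation law. The first term is small only when each element's solution is nearly constant, i.e. when jumps are carried by element boundaries rather than by oscillatory intra-element variation; the second term discourages the mesh deformation from producing degenerate elements.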

  4. 1999 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Hahne, David E. (Editor)

    1999-01-01

    NASA's High-Speed Research Program sponsored the 1999 Aerodynamic Performance Technical Review on February 8-12, 1999 in Anaheim, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High Lift, and Flight Controls. The review objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single and midpoint optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT simulation results were presented, along with executive summaries for all the Aerodynamic Performance technology areas. The HSR Aerodynamic Performance Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas of the HSR Program. This Volume 2/Part 2 publication covers the tools and methods development session.

  5. ExScalibur: A High-Performance Cloud-Enabled Suite for Whole Exome Germline and Somatic Mutation Identification.

    PubMed

    Bao, Riyue; Hernandez, Kyle; Huang, Lei; Kang, Wenjun; Bartom, Elizabeth; Onel, Kenan; Volchenboum, Samuel; Andrade, Jorge

    2015-01-01

    Whole exome sequencing has facilitated the discovery of causal genetic variants associated with human diseases at deep coverage and low cost. In particular, the detection of somatic mutations from tumor/normal pairs has provided insights into the cancer genome. Although there is an abundance of publicly-available software for the detection of germline and somatic variants, concordance is generally limited among variant callers and alignment algorithms. Successful integration of variants detected by multiple methods requires in-depth knowledge of the software, access to high-performance computing resources, and advanced programming techniques. We present ExScalibur, a set of fully automated, highly scalable and modular pipelines for whole exome data analysis. The suite integrates multiple alignment and variant calling algorithms for the accurate detection of germline and somatic mutations with close to 99% sensitivity and specificity. ExScalibur implements streamlined execution of analytical modules, real-time monitoring of pipeline progress, robust handling of errors and intuitive documentation that allows for increased reproducibility and sharing of results and workflows. It runs on local computers, high-performance computing clusters and cloud environments. In addition, we provide a data analysis report utility to facilitate visualization of the results that offers interactive exploration of quality control files, read alignment and variant calls, assisting downstream customization of potential disease-causing mutations. ExScalibur is open-source and is also available as a public image on Amazon cloud.

  6. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    ERIC Educational Resources Information Center

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  7. Design and Performance Evaluation of Real-time Endovascular Interventional Surgical Robotic System with High Accuracy.

    PubMed

    Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan

    2018-05-15

    Endovascular interventional surgery (EIS) is performed under a high radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce radiation to surgeons and physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allowed the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons and two twist switches (to control the catheter manipulators) were presented. The entire robotic system was established on a master-slave control structure through CAN (Controller Area Network) bus communication; meanwhile, the slave side of this robotic system showed highly accurate control over velocity and displacement with a PID control method. The robotic system was tested in vitro and in animal experiments and passed both. Through functionality evaluation, the manipulators were able to complete interventional surgical motion both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system has the ability to imitate the movements of surgeons and to accomplish the axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
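
    The abstract states that the slave side regulates catheter velocity and displacement with a PID controller. As a generic illustration of that control law (not the authors' implementation; the gains, time step, and toy plant model below are hypothetical), a discrete-time PID update can be sketched as:

```python
class PID:
    """Minimal discrete-time PID controller (illustrative; gains are hypothetical)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a crude first-order plant toward a 10 mm displacement setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
position = 0.0
for _ in range(2000):
    command = pid.update(10.0, position)
    position += command * 0.01  # toy plant: velocity proportional to command
print(round(position, 3))
```

    In a real master-slave system of this kind, the setpoint would come from the console joysticks over the CAN bus and the measured value from the manipulator encoders; the toy plant above only demonstrates the controller converging.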

  8. Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB

    PubMed Central

    Sugiman-Marangos, Seiji N.; Weiss, Yoni M.; Junop, Murray S.

    2016-01-01

    Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate at 2.2 Å resolution. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing. PMID:27044084

  9. Accurate evaluation and analysis of functional genomics data and methods

    PubMed Central

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  10. Highly accurate calculation of rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Ansorg, M.; Kleinwächter, A.; Meinel, R.

    2002-01-01

    A new spectral code for constructing general-relativistic models of rapidly rotating stars with an unprecedented accuracy is presented. As a first application, we reexamine uniformly rotating homogeneous stars and compare our results with those obtained by several previous codes. Moreover, representative relativistic examples corresponding to highly flattened rotating bodies are given.

  11. Approaches to High-Performance Preparative Chromatography of Proteins

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Liu, Fu-Feng; Shi, Qing-Hong

    Preparative liquid chromatography is widely used for the purification of chemical and biological substances. Unlike high-performance liquid chromatography, which analyzes many different components at minimal sample loading, high-performance preparative chromatography operates at a much larger scale and should offer high resolution and high capacity at high operation speed and low to moderate pressure drop. There are various approaches to this end. For biochemical engineers, the traditional way is to model and optimize a purification process to make it exert its maximum capability. For high-performance separations, however, we need to improve chromatographic technology itself. We herein discuss four approaches in this review, mainly based on the recent studies in our group. The first is the development of high-performance matrices, because packing material is the central component of chromatography. Progress in the fabrication of superporous materials in both beaded and monolithic forms is reviewed. The second topic is the discovery and design of affinity ligands for proteins. In most chromatographic methods, proteins are separated based on their interactions with the ligands attached to the surface of porous media. A target-specific ligand can offer selective purification of desired proteins. Third, electrochromatography is discussed. An electric field applied to a chromatographic column can induce additional separation mechanisms besides chromatography, and result in electrokinetic transport of protein molecules and/or the fluid inside pores, thus leading to high-performance separations. Finally, expanded-bed adsorption is described for process integration to reduce separation steps and process time.

  12. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  13. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  14. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single-pixel detectors in the terahertz (THz) frequency range for space-based and astronomy applications. Used either for mixing or direct detection, they have been the object of several investigations, and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, so far they have usually also shown a considerable disagreement between calculated and measured performance, especially with respect to center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and we compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all the other elements in the detector on the coupled power has been analyzed. An extensive variety of measured and calculated data, as presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  15. Multishelled NiO Hollow Microspheres for High-performance Supercapacitors with Ultrahigh Energy Density and Robust Cycle Life

    NASA Astrophysics Data System (ADS)

    Qi, Xinhong; Zheng, Wenji; Li, Xiangcun; He, Gaohong

    2016-09-01

    Multishelled NiO hollow microspheres for high-performance supercapacitors have been prepared and their formation mechanism has been investigated. By using resin microspheres to adsorb Ni2+, followed by appropriate calcination, the shell number, shell spacing and exterior shell structure were facilely controlled by varying the synthetic parameters. In particular, the exterior shell structure, which is closely associated with ion transfer, is finely controlled by forming either a single exterior shell or closed exterior double shells. Among the multishelled NiO hollow microspheres, the triple-shelled NiO microspheres with a single exterior shell show a remarkable capacitance of 1280 F g-1 at 1 A g-1, and still retain a high value of 704 F g-1 even at 20 A g-1. The outstanding performance is attributed to fast ion/electron transfer, high specific surface area and large shell space. The specific capacitance gradually increases to 108% of its initial value after 2500 cycles, demonstrating high stability. Importantly, the 3S-NiO-HMS//RGO@Fe3O4 asymmetric supercapacitor shows an ultrahigh energy density of 51.0 Wh kg-1 at a power density of 800 W kg-1, and 78.8% capacitance retention after 10,000 cycles. Furthermore, the multishelled NiO can be converted into multishelled Ni microspheres with a highly efficient H2 generation rate of 598.5 mL H2 min-1 g-1 (Ni) for the catalytic hydrolysis of NH3BH3 (AB).

  16. Establishment of a Quick and Highly Accurate Breath Test for ALDH2 Genotyping

    PubMed Central

    Aoyama, Ikuo; Ohashi, Shinya; Amanuma, Yusuke; Hirohashi, Kenshiro; Mizumoto, Ayaka; Funakoshi, Makiko; Tsurumaki, Mihoko; Nakai, Yukie; Tanaka, Katsuyuki; Hanada, Mariko; Uesaka, Aki; Chiba, Tsutomu; Muto, Manabu

    2017-01-01

    Objectives: Acetaldehyde, the first metabolite of ethanol, is a definite carcinogen for the esophagus, head, and neck; and aldehyde dehydrogenase 2 (ALDH2) is a mitochondrial enzyme that catalyzes the metabolism of acetaldehyde. The ALDH2 genotype exists as ALDH2*1/*1 (active ALDH2), ALDH2*1/*2 (heterozygous inactive ALDH2), and ALDH2*2/*2 (homozygous inactive ALDH2). Many epidemiological studies have reported that ALDH2*2 carriers are at high risk for esophageal or head and neck squamous cell carcinomas through habitual drinking. Therefore, identification of ALDH2*2 carriers would be helpful for the prevention of those cancers, but there have been no methods suitable for mass screening to identify these individuals. Methods: One hundred and eleven healthy volunteers (ALDH2*1/*1 carriers: 53; ALDH2*1/*2 carriers: 48; and ALDH2*2/*2 carriers: 10) were recruited. Breath samples were collected after drinking 100 ml of 0.5% ethanol using specially designed gas bags, and breath ethanol and acetaldehyde levels were measured by semiconductor gas chromatography. Results: The median (range) breath acetaldehyde levels at 1 min after alcohol ingestion were 96.1 (18.1–399.0) parts per billion (p.p.b.) for the ALDH2*1/*1 genotype, 333.5 (78.4–1218.4) p.p.b. for the ALDH2*1/*2 genotype, and 537.1 (213.2–1353.8) p.p.b. for the ALDH2*2/*2 genotype. The breath acetaldehyde levels in ALDH2*2 carriers were significantly higher than for the ALDH2*1/*1 genotype. Notably, the ratio of breath acetaldehyde level to breath ethanol level could identify carriers of the ALDH2*2 allele very accurately (overall accuracy: 96.4%). Conclusions: Our novel breath test is a useful tool for identifying ALDH2*2 carriers, who are at high risk for esophageal and head and neck cancers. PMID:28594397
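
    The discriminating quantity here is the ratio of breath acetaldehyde to breath ethanol. A minimal sketch of such a ratio-based screen follows (the cutoff value and ethanol reading below are hypothetical; the paper's actual decision threshold is not given in this abstract):

```python
def classify_aldh2(acetaldehyde_ppb, ethanol_ppb, cutoff=0.5):
    """Flag a possible ALDH2*2 carrier when the acetaldehyde/ethanol
    breath ratio exceeds a cutoff (hypothetical value, for illustration only)."""
    if ethanol_ppb <= 0:
        raise ValueError("breath ethanol level must be positive")
    ratio = acetaldehyde_ppb / ethanol_ppb
    label = "possible ALDH2*2 carrier" if ratio > cutoff else "likely ALDH2*1/*1"
    return label, ratio

# Hypothetical readings one minute after the 0.5% ethanol drink:
label, ratio = classify_aldh2(acetaldehyde_ppb=333.5, ethanol_ppb=400.0)
print(label, round(ratio, 3))
```

    Using the ratio rather than the raw acetaldehyde level normalizes away how much ethanol actually reached the breath, which is presumably why it separates the genotypes so cleanly.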

  17. Noise of High-Performance Aircraft at Afterburner

    DTIC Science & Technology

    2015-03-30

    Quarterly progress report, covering 12-15-2014 to 04-03-2015. Title: Noise of High-Performance Aircraft at Afterburner. Abstract snippets: "...generation of a high-performance aircraft operating at afterburner condition. The new noise components are indirect combustion noise produced by the..."; "...spectrum is reported." Subject terms: jet noise at afterburner.

  18. SU-E-T-112: Experimental Characterization of a Novel Thermal Reservoir for Consistent and Accurate Annealing of High-Sensitivity TLDs.

    PubMed

    Donahue, W; Bongiorni, P; Hearn, R; Rodgers, J; Nath, R; Chen, Z

    2012-06-01

    To develop and characterize a novel thermal reservoir for consistent and accurate annealing of high-sensitivity thermoluminescence dosimeters (TLD-100H) for dosimetry of brachytherapy sources. The sensitivity of TLD-100H is about 18 times that of TLD-100, which has clear advantages for interstitial brachytherapy sources. However, TLD-100H requires a short high-temperature annealing cycle (15 min.), and opening and closing the oven door causes significant temperature fluctuations, leading to unreliable measurements. A new thermal reservoir made of aluminum alloy was developed to provide a stable temperature environment in a standard hot-air oven. The thermal reservoir consisted of a 20 cm × 20 cm × 8 cm Al block with a machine-milled chamber in the middle to house the aluminum TLD holding tray. The thermal reservoir was placed inside the oven until it reached thermal equilibrium with the oven chamber. The temperatures of the oven chamber, heat reservoir, and TLD holding tray were monitored by two independent thermocouples interfaced digitally to a control computer. A LabView interface was written for monitoring and recording the temperatures of the TLD holding tray, the thermal reservoir, and the oven chamber. The temperature profiles were measured as a function of oven-door open duration. The settings for the oven chamber temperature and the oven door open-close duration were optimized to achieve a stable temperature of 240 °C in the TLD holding tray. Complete temperature profiles of the TLD annealing tray over the entire annealing process were obtained. The use of the thermal reservoir significantly reduced the temperature fluctuations caused by the opening of the oven door when inserting the TLD holding tray into the oven chamber. It has enabled consistent annealing of high-sensitivity TLDs. A comprehensive characterization of a custom-built novel thermal reservoir for annealing

  19. 3D printed high performance strain sensors for high temperature applications

    NASA Astrophysics Data System (ADS)

    Rahman, Md Taibur; Moser, Russell; Zbib, Hussein M.; Ramana, C. V.; Panat, Rahul

    2018-01-01

    Realization of high temperature physical measurement sensors, which are needed in many of the current and emerging technologies, is challenging due to the degradation of their electrical stability by drift currents, material oxidation, thermal strain, and creep. In this paper, for the first time, we demonstrate that 3D printed sensors show a metamaterial-like behavior, resulting in superior performance such as high sensitivity, low thermal strain, and enhanced thermal stability. The sensors were fabricated from silver (Ag) nanoparticles (NPs), using an advanced Aerosol Jet based additive printing method followed by thermal sintering. The sensors were tested under cyclic strain up to a temperature of 500 °C and showed a gauge factor of 3.15 ± 0.086, which is about 57% higher than that of commercially available gauges. The sensor thermal strain was also an order of magnitude lower than that of commercial gauges for operation up to a temperature of 500 °C. An analytical model was developed to account for the enhanced performance of such printed sensors based on enhanced lateral contraction of the NP films due to the porosity, a behavior akin to cellular metamaterials. The results demonstrate the potential of 3D printing technology as a pathway to realize highly stable and high-performance sensors for high temperature applications.
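
    The sensitivity figure quoted above is a gauge factor, GF = (ΔR/R0)/ε, i.e. the relative resistance change per unit strain. A quick check of what GF = 3.15 implies (the resistance and strain values below are illustrative, not from the paper):

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor: relative resistance change per unit mechanical strain."""
    return (delta_r / r0) / strain

# Illustrative numbers: 0.1% strain on a 100-ohm printed sensor with GF = 3.15.
gf, r0, strain = 3.15, 100.0, 0.001
delta_r = gf * strain * r0  # expected resistance change, in ohms
print(round(delta_r, 3))  # 0.315
assert abs(gauge_factor(delta_r, r0, strain) - gf) < 1e-9
```

    A 57% higher gauge factor means a correspondingly larger resistance swing for the same strain, which is what makes the printed sensors easier to read out accurately.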

  20. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  1. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, the practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively in terms of the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
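
    The reliability-guided transfer of initial guesses described above can be sketched generically: register a seed point first, then always process the best-correlated point next and seed each unregistered neighbor with its displacement. The sketch below is a schematic of the idea only, not the authors' code; the correlation routine is a stand-in for the sub-voxel 3D IC-GN registration:

```python
import heapq

def reliability_guided_track(neighbors, correlate, seed_point):
    """Propagate displacement initial guesses from reliable points to neighbors.

    neighbors  : dict mapping point id -> list of adjacent point ids
    correlate  : function(point, initial_guess) -> (quality, displacement);
                 a stand-in for the sub-voxel registration step
    seed_point : starting point, registered with a zero initial guess
    """
    quality, disp = correlate(seed_point, (0.0, 0.0, 0.0))
    results = {seed_point: disp}
    heap = [(-quality, seed_point)]  # max-heap on quality via negation
    while heap:
        _, point = heapq.heappop(heap)
        for nb in neighbors[point]:
            if nb in results:
                continue
            # Seed the neighbor with the already-computed displacement.
            q, d = correlate(nb, results[point])
            results[nb] = d
            heapq.heappush(heap, (-q, nb))
    return results

# Toy example: a 3-point chain with a fake correlator (quality always 1.0).
def fake_correlate(point, initial_guess):
    return 1.0, (point * 0.1, 0.0, 0.0)

field = reliability_guided_track(
    neighbors={0: [1], 1: [0, 2], 2: [1]},
    correlate=fake_correlate,
    seed_point=0,
)
print(field[2])
```

    Processing points in order of correlation quality keeps bad matches from contaminating their neighbors' initial guesses, which is what lets the iterative registration skip the integer-voxel search.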

  2. Automatic identification approach for high-performance liquid chromatography-multiple reaction monitoring fatty acid global profiling.

    PubMed

    Tie, Cai; Hu, Ting; Jia, Zhi-Xin; Zhang, Jin-Lan

    2015-08-18

    Fatty acids (FAs) are a group of lipid molecules that are essential to organisms. As potential biomarkers for different diseases, FAs have attracted increasing attention from both biological researchers and the pharmaceutical industry. A sensitive and accurate method for globally profiling and identifying FAs is required for biomarker discovery. The high selectivity and sensitivity of high-performance liquid chromatography-multiple reaction monitoring (HPLC-MRM) gives it great potential to fulfill the need to identify FAs from complicated matrices. This paper develops a new approach to global FA profiling and identification for HPLC-MRM FA data mining. Mathematical models for identifying FAs were built from the isotope-induced retention time (RT) shift (IRS) and the peak area ratios between parallel isotope peaks for a series of FA standards. The FA structures were predicted using another model based on the RT and molecular weight. Fully automated FA identification software was coded on the Qt platform based on these mathematical models. Different samples were used to verify the software. A high identification efficiency (greater than 75%) was observed when 96 FA species were identified in plasma. This FA identification strategy promises to accelerate FA research and applications.

  3. Type-II Superlattice for High Performance LWIR Detectors

    DTIC Science & Technology

    2008-05-15

    Final technical report: Type-II Superlattice for High Performance LWIR Detectors. Funding number: F49620-03-1-0436. Author: M. Razeghi. Related publication: "Short-period InAs/GaSb type-II superlattices for mid-infrared detectors," Physica E: Low-dimensional Systems and Nanostructures, 2006.

  4. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  5. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  6. High Skin Temperature and Hypohydration Impair Aerobic Performance

    DTIC Science & Technology

    2012-01-01

    Report snippets: "...hypohydration) in impairing submaximal aerobic performance. Hot skin is associated with high skin blood flow requirements and hypohydration is..."; "...the aerobic performance impairment (-1.5% for each 1 °C rise in skin temperature). We conclude that hot skin (high skin blood flow requirements from narrow..."; "...hot skin is associated with high skin blood flow requirements and hypohydration is associated with reduced cardiac filling, both of which act to reduce aerobic..."

  7. Fully automated laboratory and field-portable goniometer used for performing accurate and precise multiangular reflectance measurements

    NASA Astrophysics Data System (ADS)

    Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily

    2017-10-01

    Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.

  8. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  9. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust, with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vectors' parameters is enabled by explicitly computing their feasibility domains. A theoretical performance analysis and an expression for the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
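
    The random-sampling segmentation step can be sketched in miniature as a RANSAC-style search: repeatedly sample two points, hypothesize a line, and keep the hypothesis with the most inliers. This is an illustrative reconstruction of the idea, not the authors' algorithm; the tolerance and iteration count below are arbitrary.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.5, rng=None):
    """Fit a line to 2-D points by random sampling, tolerating heavy
    outlier noise; returns a boolean inlier mask for the best line."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm   # unit normal of the line
        dist = np.abs((points - p) @ n)      # point-to-line distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers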

  10. Arbitrarily accurate twin composite π-pulse sequences

    NASA Astrophysics Data System (ADS)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas, and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform as well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
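
    The error-compensation mechanism can be seen numerically with a short sketch: multiply the SU(2) propagators of a train of nominal π pulses and read off the transition probability. As an example we use the standard three-pulse broadband sequence with phases (0, 2π/3, 0), a textbook composite π pulse rather than one of this record's twin sequences.

```python
import numpy as np

def pulse(area, phase):
    """SU(2) propagator of a resonant pulse with given area and phase."""
    c, s = np.cos(area / 2), np.sin(area / 2)
    return np.array([[c, -1j * np.exp(1j * phase) * s],
                     [-1j * np.exp(-1j * phase) * s, c]])

def transition_probability(phases, error):
    """|U_01|^2 after a train of nominal pi pulses, each carrying a
    relative pulse-area error `error`."""
    U = np.eye(2, dtype=complex)
    for phi in phases:
        U = pulse(np.pi * (1 + error), phi) @ U
    return abs(U[0, 1]) ** 2
```

    At 10% pulse-area error, a single π pulse reaches only about 0.976 transition probability, while the three-pulse composite stays above 0.999: the leading error term has been cancelled by the phase choice.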

  11. Simultaneous analysis of small organic acids and humic acids using high performance size exclusion chromatography.

    PubMed

    Qin, Xiaopeng; Liu, Fei; Wang, Guangcai; Weng, Liping

    2012-12-01

    An accurate and fast method for simultaneous determination of small organic acids and much larger humic acids was developed using high performance size exclusion chromatography. Two small organic acids, i.e. salicylic acid and 2,3-dihydroxybenzoic acid, and one purified humic acid material were used in this study. Under the experimental conditions, the UV peaks of salicylic acid and 2,3-dihydroxybenzoic acid were well separated from the peaks of humic acid in the chromatogram. Concentrations of the two small organic acids could be accurately determined from their peak areas. The concentration of humic acid in the mixture could then be derived from mass balance calculations. The measured results agreed well with the nominal concentrations. The detection limits are 0.05 mg/L and 0.01 mg/L for salicylic acid and 2,3-dihydroxybenzoic acid, respectively. Applicability of the method to natural samples was tested using groundwater, glacier, and river water samples (both original and spiked with salicylic acid and 2,3-dihydroxybenzoic acid) with a total organic carbon concentration ranging from 2.1 to 179.5 mg C/L. The results obtained are promising, especially for groundwater samples and river water samples with a total organic carbon concentration below 9 mg C/L. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
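
    The mass-balance step can be sketched as follows: organic carbon not accounted for by the directly quantified small acids is attributed to humic acid. This is a plausible reconstruction of the calculation described in the abstract, not the authors' published formula; all numbers and carbon fractions below are illustrative (e.g. salicylic acid, C7H6O3, is roughly 61% carbon by mass).

```python
def humic_acid_conc(total_oc, small_acid_concs, oc_fractions, humic_oc_fraction):
    """Mass balance for humic acid concentration (mg/L).

    total_oc          -- total organic carbon of the sample (mg C/L)
    small_acid_concs  -- measured small-acid concentrations (mg/L)
    oc_fractions      -- carbon mass fraction of each small acid
    humic_oc_fraction -- carbon mass fraction of the humic acid material
    """
    # carbon contributed by the small acids quantified from peak areas
    small_oc = sum(c * f for c, f in zip(small_acid_concs, oc_fractions))
    # remaining carbon is assigned to humic acid
    return (total_oc - small_oc) / humic_oc_fraction
```

    For example, with 10 mg C/L total, 2 mg/L and 1 mg/L of two small acids at assumed carbon fractions 0.5 and 0.4, and a humic material that is 50% carbon, the balance yields (10 − 1.4)/0.5 = 17.2 mg/L humic acid.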

  12. Determination of modafinil in plasma and urine by reversed phase high-performance liquid-chromatography.

    PubMed

    Schwertner, Harvey A; Kong, Suk Bin

    2005-03-09

    Modafinil (Provigil) is a new wake-promoting drug that is being used for the management of excessive sleepiness in patients with narcolepsy. It has pharmacological properties similar to those of amphetamine, but without some of the side effects associated with amphetamine-like stimulants. Since modafinil has the potential to be abused, accurate drug-screening methods are needed for its analysis. In this study, we developed a high-performance liquid-chromatographic (HPLC) procedure for the quantitative analysis of modafinil in plasma and urine. (Phenylthio)acetic acid was used as an internal standard for the analysis of both plasma and urine. Modafinil was extracted from urine and plasma with ethyl acetate and ethyl acetate-acetic acid (100:1, v/v), respectively, and analyzed on a C18 reversed-phase column with methanol-water-acetic acid (500:500:1, v/v) as the mobile phase. Recoveries from urine and plasma were 80.0 and 98.9%, respectively, and the limit of quantitation was 0.1 microg/mL at 233 nm. Forty-eight 2-h post-dose urine samples from sham controls and from individuals taking 200 or 400 mg of modafinil were analyzed without knowledge of drug administration. All 16 placebo urine samples and all 32 2-h post-dose urine samples were correctly classified. The analytical procedure is accurate and reproducible and can be used for therapeutic drug monitoring, pharmacokinetic studies, and drug abuse screening.

  13. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  14. Accurate expansion of cylindrical paraxial waves for its straightforward implementation in electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Naserpour, Mahin; Zapata-Rodríguez, Carlos J.

    2018-01-01

    The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally, we discuss how our procedure can be favorably implemented in scattering problems.

  15. High-performance, stretchable, wire-shaped supercapacitors.

    PubMed

    Chen, Tao; Hao, Rui; Peng, Huisheng; Dai, Liming

    2015-01-07

    A general approach toward extremely stretchable and highly conductive electrodes was developed. The method involves wrapping a continuous carbon nanotube (CNT) thin film around pre-stretched elastic wires, from which high-performance, stretchable wire-shaped supercapacitors were fabricated. The supercapacitors were made by twisting two such CNT-wrapped elastic wires, pre-coated with poly(vinyl alcohol)/H3PO4 hydrogel, as the electrolyte and separator. The resultant wire-shaped supercapacitors exhibited an extremely high elasticity of up to 350% strain with a high device capacitance up to 30.7 F g(-1), which is two times that of the state-of-the-art stretchable supercapacitor under only 100% strain. The wire-shaped structure facilitated the integration of multiple supercapacitors into a single wire device to meet specific energy and power needs for various potential applications. These supercapacitors can be repeatedly stretched from 0 to 200% strain for hundreds of cycles with no change in performance, thus outperforming all the reported state-of-the-art stretchable electronics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
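
    The volume step, computing a spheroid volume from the measured major and minor axial lengths, can be sketched with the common prolate-spheroid approximation V = (π/6)·L·W². This formula is our assumption for illustration; SpheroidSizer's exact volume formula is not stated in the abstract.

```python
import math

def spheroid_volume(major, minor):
    """Volume of a prolate spheroid from its two axial lengths,
    V = pi/6 * major * minor^2 (a common approximation for tumor
    spheroid volume estimated from a 2-D image)."""
    return math.pi / 6.0 * major * minor ** 2
```

    As a sanity check, equal axes reduce the formula to the volume of a sphere: spheroid_volume(2, 2) equals (4/3)π for a sphere of diameter 2.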

  17. Note: long range and accurate measurement of deep trench microstructures by a specialized scanning tunneling microscope.

    PubMed

    Ju, Bing-Feng; Chen, Yuan-Liu; Zhang, Wei; Zhu, Wule; Jin, Chao; Fang, F Z

    2012-05-01

    A compact but practical scanning tunneling microscope (STM) with high aspect ratio and high depth capability has been specially developed. A long-range scanning mechanism with a tilt-adjustment stage is adopted to adjust the probe-sample relative angle and compensate for non-parallel effects. A periodical trench microstructure with a pitch of 10 μm has been successfully imaged over a long scanning range of up to 2.0 mm. More notably, a deep trench with a depth and step height of 23.0 μm has also been successfully measured, with a sidewall slope angle of approximately 67°. The probe can continuously climb the high step and explore the trench bottom without tip crashes. The new STM can perform long-range measurements of deep trench and high step surfaces without image distortion. It enables accurate measurement and quality control of periodical trench microstructures.

  18. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  19. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
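
    The multigrid principle behind such a preconditioner can be illustrated with a geometric two-grid cycle on a 1-D Poisson problem: a cheap smoother damps high-frequency error, and a coarse-grid solve removes the smooth remainder. True AMG constructs its coarse spaces algebraically from the matrix, so this sketch only shows the cycle structure, not the paper's method; all sizes and sweep counts are arbitrary.

```python
import numpy as np

def poisson_matrix(n):
    """1-D Poisson operator with Dirichlet boundaries, n interior
    points, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def weighted_jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
    """Smoother: damps high-frequency error components."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x + omega * d_inv * (b - A @ x)
    return x

def restrict(r):
    """Full-weighting restriction; coarse point i sits at fine index 2i+1."""
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(ec, n_fine):
    """Linear interpolation of a coarse correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[1::2] = ec
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def two_grid_cycle(A, A_coarse, x, b):
    """Pre-smooth, coarse-grid correction (exact coarse solve), post-smooth."""
    x = weighted_jacobi(A, x, b)
    e_c = np.linalg.solve(A_coarse, restrict(b - A @ x))
    x = x + prolong(e_c, len(b))
    return weighted_jacobi(A, x, b)
```

    A few cycles reduce the residual by many orders of magnitude at O(n) cost per cycle, which is exactly the property that makes multigrid-preconditioned Krylov solvers scale to the hundreds of millions of DOF quoted above.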

  20. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  1. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
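
    The quaternion representation of spin rotations mentioned above can be sketched directly: a rotation is a unit quaternion, successive element rotations compose by quaternion multiplication, and the spin vector is rotated by conjugation. This is a generic illustration of the representation, not GPUSPINTRACK's internals.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about `axis`."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(q, p):
    """Hamilton product: the rotation p followed by the rotation q."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def rotate(q, v):
    """Rotate spin vector v by unit quaternion q (conjugation q v q*)."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]
```

    Composing per-element rotations as quaternion products needs only 16 multiplications each and keeps the rotation normalized to machine precision, which is one reason quaternions suit long spin-tracking runs.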

  2. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  3. 1999 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Hahne, David E. (Editor)

    1999-01-01

    The High-Speed Research Program sponsored the NASA High-Speed Research Program Aerodynamic Performance Review on February 8-12, 1999 in Anaheim, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization) and High-Lift. The review objectives were to: (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. The HSR AP Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas within the airframe element of the HSR Program. This Volume 2/Part 1 publication presents the High-Lift Configuration Development session.

  4. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
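
    The non-parametric half of the hybrid strategy, Gaussian kernel density estimation in the low-dimensional subspace, can be sketched in one function: every ensemble member contributes one Gaussian, so the estimated PDF is itself a Gaussian mixture. This illustrates only the KDE ingredient, not the paper's parametric conditional-Gaussian machinery; the bandwidth below is an arbitrary choice.

```python
import numpy as np

def gaussian_mixture_pdf(samples, bandwidth, grid):
    """Kernel density estimate on `grid`: a mixture of one Gaussian
    per ensemble member, each with the given bandwidth."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)  # equal-weight mixture over the ensemble
```

    Because each kernel integrates to one, the mixture is automatically a normalized density, and even a modest ensemble of O(100) members yields a smooth estimate in one dimension, the regime the hybrid method exploits.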

  5. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to

  6. High performance, high bandgap, lattice-mismatched, GaInP solar cells

    DOEpatents

    Wanlass, Mark W; Carapella, Jeffrey J; Steiner, Myles A

    2016-11-01

    High performance, high bandgap, lattice-mismatched, photovoltaic cells (10), both transparent and non-transparent to sub-bandgap light, are provided as devices for use alone or in combination with other cells in split spectrum apparatus or other applications.

  7. High performance, high bandgap, lattice-mismatched, GaInP solar cells

    DOEpatents

    Wanlass, Mark W.; Carapella, Jeffrey J.; Steiner, Myles A.

    2014-07-08

    High performance, high bandgap, lattice-mismatched, photovoltaic cells (10), both transparent and non-transparent to sub-bandgap light, are provided as devices for use alone or in combination with other cells in split spectrum apparatus or other applications.

  8. [Determination of five synthetic sweeteners in wines using high performance liquid chromatography-tandem mass spectrometry].

    PubMed

    Ji, Chao; Feng, Feng; Chen, Zhengxing; Sun, Li; Chu, Xiaogang

    2010-08-01

    A high performance liquid chromatography-electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS) method for the determination of five synthetic sweeteners (acesulfame, sodium saccharin, sodium cyclamate, aspartame and neotame) in wines has been developed. The HPLC separation was carried out on an Ultimate C18 column (100 mm x 2.1 mm, 3 microm). Several parameters, including the composition and pH of the mobile phase, the column temperature and the monitored ions, were optimized to improve the chromatographic performance and the sensitivity of determination. The results demonstrated that the separation can be completed in less than 5 min by gradient elution with 20 mmol/L ammonium formate and 0.1% (v/v) formic acid (pH 3.8) and methanol as the mobile phase. The column temperature was kept at 45 degrees C. When the analytes were detected by ESI-MS/MS under multiple reaction monitoring mode, the detection limits were 0.6, 5, 1, 0.8 and 0.2 microg/L for acesulfame, sodium saccharin, sodium cyclamate, aspartame and neotame, respectively. The average recoveries ranged from 87.2% to 103%. The relative standard deviations were not more than 1.2%. This method is rapid, accurate, highly sensitive, and suitable for quality control of the low concentrations of synthetic sweeteners illegally added to wines and other foods with complex matrices.

  9. Nonacademic Effects of Homework in Privileged, High-Performing High Schools

    ERIC Educational Resources Information Center

    Galloway, Mollie; Conner, Jerusha; Pope, Denise

    2013-01-01

    This study used survey data to examine relations among homework, student well-being, and behavioral engagement in a sample of 4,317 students from 10 high-performing high schools in upper middle class communities. Results indicated that students in these schools average more than 3 hr of homework per night. Students who did more hours of homework…

  10. Rapid and accurate intraoperative pathological diagnosis by artificial intelligence with deep learning technology.

    PubMed

    Zhang, Jing; Song, Yanlin; Xia, Fan; Zhu, Chenjing; Zhang, Yingying; Song, Wenpeng; Xu, Jianguo; Ma, Xuelei

    2017-09-01

    Frozen section is widely used for intraoperative pathological diagnosis (IOPD), which is essential for intraoperative decision making. However, frozen section suffers from drawbacks such as being time consuming and having a high misdiagnosis rate. Recently, artificial intelligence (AI) with deep learning technology has shown great promise in medicine. We hypothesize that AI with deep learning technology could help IOPD, with a computer trained on a dataset of intraoperative lesion images. Evidence supporting our hypothesis includes the successful use of AI with deep learning technology in diagnosing skin cancer, and the well-developed methods of deep-learning algorithms. A large training dataset is critical to increasing the diagnostic accuracy. The performance of the trained machine could be tested on new images before clinical use. Real-time diagnosis, ease of use, and potentially high accuracy are the advantages of AI for IOPD. In sum, AI with deep learning technology is a promising method for rapid and accurate IOPD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Ab initio study of the CO-N2 complex: a new highly accurate intermolecular potential energy surface and rovibrational spectrum.

    PubMed

    Cybulski, Hubert; Henriksen, Christian; Dawes, Richard; Wang, Xiao-Gang; Bora, Neha; Avila, Gustavo; Carrington, Tucker; Fernández, Berta

    2018-05-09

    A new, highly accurate ab initio ground-state intermolecular potential-energy surface (IPES) for the CO-N2 complex is presented. Thousands of interaction energies calculated with the CCSD(T) method and Dunning's aug-cc-pVQZ basis set extended with midbond functions were fitted to an analytical function. The global minimum of the potential is characterized by an almost T-shaped structure and has an energy of -118.2 cm-1. The symmetry-adapted Lanczos algorithm was used to compute rovibrational energies (up to J = 20) on the new IPES. The RMSE with respect to experiment was found to be on the order of 0.038 cm-1 which confirms the very high accuracy of the potential. This level of agreement is among the best reported in the literature for weakly bound systems and considerably improves on those of previously published potentials.
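
    The fitting step, thousands of ab initio interaction energies fitted to an analytical function, can be sketched in a drastically simplified form: a one-dimensional Lennard-Jones-type model E(r) = A/r¹² − B/r⁶, which is linear in its coefficients and therefore solvable by linear least squares. The paper's actual IPES is a far richer angular-dependent form; this only illustrates the least-squares idea, and the functional form is our assumption.

```python
import numpy as np

def fit_lj(r, energies):
    """Fit E(r) = A/r**12 - B/r**6 by linear least squares; the model
    is linear in A and B, so no nonlinear optimization is needed."""
    X = np.column_stack([r**-12.0, -r**-6.0])
    (A, B), *_ = np.linalg.lstsq(X, energies, rcond=None)
    return A, B
```

    On noiseless synthetic data generated from a known potential, the fit recovers the coefficients to near machine precision, a useful self-check before fitting real ab initio points.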

  12. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    DTIC Science & Technology

    2009-03-31

    AFRL-RV-HA-TR-2009-1055. Final scientific report, covering 02-08-2006 to 31-12-2008. Only fragments of the abstract survive in this record: "...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable..."
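
    The finite-difference time-domain (FDTD) method mentioned in this record can be illustrated with a minimal 1-D free-space Yee update in normalized units, run at the "magic" Courant number S = 1, where the scheme propagates a pulse exactly. This is a textbook sketch, entirely unrelated to the report's ionospheric model; all grid sizes are arbitrary.

```python
import numpy as np

def fdtd_1d(n=400, steps=100, width=10.0):
    """1-D free-space FDTD on a Yee grid, normalized so both update
    coefficients equal the Courant number S = 1. An initial Gaussian
    in Ez splits into two half-amplitude counter-propagating pulses."""
    Ez = np.exp(-0.5 * ((np.arange(n) - n // 2) / width) ** 2)
    Hy = np.zeros(n - 1)  # H lives on the half-grid between E nodes
    for _ in range(steps):
        Hy += Ez[1:] - Ez[:-1]        # update H from the curl of E
        Ez[1:-1] += Hy[1:] - Hy[:-1]  # update E from the curl of H
    return Ez
```

    After 100 steps the two half pulses have each traveled 100 cells, so the field at the original center is essentially zero while the peaks carry half the initial amplitude.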

  13. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    USDA-ARS?s Scientific Manuscript database

    Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes, which is more costly and time-consuming, or with only one fluorescent dye, which is less accurate. Methods and Results: A red primary ...

  14. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  15. A standardized framework for accurate, high-throughput genotyping of recombinant and non-recombinant viral sequences.

    PubMed

    Alcantara, Luiz Carlos Junior; Cassol, Sharon; Libin, Pieter; Deforche, Koen; Pybus, Oliver G; Van Ranst, Marc; Galvão-Castro, Bernardo; Vandamme, Anne-Mieke; de Oliveira, Tulio

    2009-07-01

    Human immunodeficiency virus type-1 (HIV-1), hepatitis B and C and other rapidly evolving viruses are characterized by extremely high levels of genetic diversity. To facilitate diagnosis and the development of prevention and treatment strategies that efficiently target the diversity of these viruses, and other pathogens such as human T-lymphotropic virus type-1 (HTLV-1), human herpes virus type-8 (HHV8) and human papillomavirus (HPV), we developed a rapid high-throughput-genotyping system. The method involves the alignment of a query sequence with a carefully selected set of pre-defined reference strains, followed by phylogenetic analysis of multiple overlapping segments of the alignment using a sliding window. Each segment of the query sequence is assigned the genotype and sub-genotype of the reference strain with the highest bootstrap (>70%) and bootscanning (>90%) scores. Results from all windows are combined and displayed graphically using color-coded genotypes. The new Virus-Genotyping Tools provide accurate classification of recombinant and non-recombinant viruses and are currently being assessed for their diagnostic utility. They have been incorporated into several HIV drug resistance algorithms including the Stanford (http://hivdb.stanford.edu) and two European databases (http://www.umcutrecht.nl/subsite/spread-programme/ and http://www.hivrdb.org.uk/) and have been successfully used to genotype a large number of sequences in these and other databases. The tools are a PHP/JAVA web application and are freely accessible on a number of servers including: http://bioafrica.mrc.ac.za/rega-genotype/html/, http://lasp.cpqgm.fiocruz.br/virus-genotype/html/, http://jose.med.kuleuven.be/genotypetool/html/.
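
    The windowed classification loop described above can be sketched as follows. The `classify` callback (standing in for the tool's phylogenetic bootstrap/bootscanning step) and all names other than the quoted 70%/90% cutoffs are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch of the sliding-window genotyping idea; classify()
# is an assumed stand-in for the phylogenetic analysis of one segment.

def sliding_window_genotypes(query_len, window, step, classify):
    """Classify overlapping windows of a query/reference alignment.

    `classify(start, end)` is assumed to return (genotype, bootstrap,
    bootscan) for the alignment segment [start, end); a segment is
    accepted only when bootstrap > 70 and bootscan > 90, as in the
    abstract.
    """
    calls = []
    for start in range(0, query_len - window + 1, step):
        genotype, bootstrap, bootscan = classify(start, start + window)
        if bootstrap > 70 and bootscan > 90:
            calls.append((start, start + window, genotype))
        else:
            calls.append((start, start + window, "unassigned"))
    return calls

# Toy usage: a fake classifier reporting subtype "B" with strong support
# in the first half of a 1000-nt query and subtype "C" afterwards, as a
# recombinant would.
def fake_classify(start, end):
    return ("B" if start < 500 else "C", 95, 98)

calls = sliding_window_genotypes(1000, window=400, step=200,
                                 classify=fake_classify)
```

    Combining the per-window calls into a color-coded plot, as the tools do, is then a straightforward rendering step over `calls`.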

  16. High-throughput accurate-wavelength lens-based visible spectrometer.

    PubMed

    Bell, Ronald E; Scotti, Filippo

    2010-10-01

    A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm(-1) grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arcsec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
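
    The quoted figures can be sanity-checked with the grating equation. Assuming a Littrow-like configuration, first diffraction order, and 500 nm light (none of which are stated in the abstract), a 0.075 arcsec angle error indeed maps to a wavelength error below the quoted 0.005 Å:

```python
import math

# Back-of-envelope check of the abstract's numbers. The geometry here
# (Littrow configuration, order m = 1, 500 nm light) is assumed for
# illustration, not taken from the paper.

groove_density = 2160.0          # grooves per mm, from the abstract
d = 1e7 / groove_density         # groove spacing in Angstroms
m = 1                            # assumed diffraction order
wavelength = 5000.0              # assumed wavelength in Angstroms (500 nm)

# Littrow grating equation: m * lambda = 2 * d * sin(theta)
theta = math.asin(m * wavelength / (2.0 * d))

# Sensitivity: d(lambda)/d(theta) = (2 * d / m) * cos(theta)
dlambda_dtheta = (2.0 * d / m) * math.cos(theta)

angle_error = 0.075 / 206265.0   # 0.075 arcsec in radians
wavelength_error = dlambda_dtheta * angle_error  # ~0.003 A, within 0.005 A
```

    Under these assumptions the error comes out near 0.003 Å, consistent with the stated ≤0.005 Å bound.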

  17. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  18. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. The authors acknowledge support of NIH Grant 2R01DC005642-10A1.

  19. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  20. Performance model-directed data sieving for high-performance I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving in short. It improves the existing data sieving approach in two aspects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces the memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing the memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
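
    As a rough illustration of the underlying data-sieving idea (not the authors' PMD implementation), the sketch below serves several small noncontiguous requests with one contiguous read, guarded by a simple utilization threshold standing in for the paper's performance model:

```python
# Illustrative data sieving: small, noncontiguous requests are served by
# one contiguous read. The utilization ratio is an assumed, simplistic
# proxy for the "when to sieve" decision that PMD makes with a model.

def sieve_read(read_contiguous, requests, min_utilization=0.3):
    """requests: list of (offset, length). Returns {offset: bytes}."""
    requests = sorted(requests)
    lo = requests[0][0]
    hi = max(off + length for off, length in requests)
    wanted = sum(length for _, length in requests)
    if wanted / (hi - lo) < min_utilization:
        # Sieving would mostly move unwanted bytes; fall back to
        # issuing the small reads directly.
        return {off: read_contiguous(off, length)
                for off, length in requests}
    buf = read_contiguous(lo, hi - lo)   # one large contiguous read
    return {off: buf[off - lo:off - lo + length]
            for off, length in requests}

# Toy usage against an in-memory "file".
data = bytes(range(256))
reader = lambda off, n: data[off:off + n]
out = sieve_read(reader, [(10, 4), (20, 4), (30, 4)])
```

    A real implementation would also weigh memory pressure, which is exactly the trade-off the PMD model quantifies.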

  1. WRF Test on IBM BG/L:Toward High Performance Application to Regional Climate Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, H S

    The effects of climate change will mostly be felt on local to regional scales (Solomon et al., 2007). To develop better forecast skill in regional climate change, an integrated multi-scale modeling capability (i.e., a pair of global and regional climate models) becomes crucially important in understanding and preparing for the impacts of climate change on the temporal and spatial scales that are critical to California's and the nation's future environmental quality and economic prosperity. Accurate knowledge of the detailed local impact of climate change on the water management system requires a resolution of 1 km or so. To this end, a high performance computing platform at the petascale appears to be an essential tool in providing such local scale information to formulate high quality adaptation strategies for local and regional climate change. As a key component of this modeling system at LLNL, the Weather Research and Forecast (WRF) model is implemented and tested on the IBM BG/L machine. The objective of this study is to examine the scaling behavior of WRF on BG/L for optimal performance, and to assess the numerical accuracy of the WRF solution on BG/L.

  2. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the high computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
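
    The symmetric rank-one update mentioned above can be sketched in a few lines; the surrounding RBDO machinery (probabilistic constraints, most-probable-point search) is not reproduced here, and the quadratic test problem is purely illustrative.

```python
# Minimal symmetric rank-one (SR1) Hessian update, written with plain
# Python lists. B is the current Hessian approximation, s the step,
# y the change in gradient.

def sr1_update(B, s, y, eps=1e-8):
    """B+ = B + (r r^T) / (r^T s), where r = y - B s."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Bs[i] for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    if abs(denom) < eps:          # standard SR1 safeguard: skip update
        return B
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)]
            for i in range(n)]

# For a quadratic with Hessian H, a single SR1 update makes B satisfy
# the secant condition B+ s = y exactly.
H = [[4.0, 1.0], [1.0, 3.0]]
B0 = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 2.0]
y = [H[0][0] * s[0] + H[0][1] * s[1], H[1][0] * s[0] + H[1][1] * s[1]]
B1 = sr1_update(B0, s, y)
```

    The appeal for RBDO is exactly the one the abstract names: the curvature information accumulates from gradients alone, avoiding explicit Hessian evaluations of the probabilistic constraints.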

  3. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics

  4. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrate cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.

  5. Simulation and performance analysis of a novel high-accuracy sheathless microfluidic impedance cytometer with coplanar electrode layout.

    PubMed

    Caselli, Federica; Bisegna, Paolo

    2017-10-01

    The performance of a novel microfluidic impedance cytometer (MIC) with a coplanar configuration is investigated in silico. The main feature of the device is its ability to provide accurate particle sizing despite the well-known sensitivity of the measurement to particle trajectory. The working principle of the device is presented and validated by means of an original virtual laboratory providing close-to-experimental synthetic data streams. It is shown that a metric correlating with particle trajectory can be extracted from the signal traces and used to compensate the trajectory-induced error in the estimated particle size, thus reaching high accuracy. An analysis of relevant parameters of the experimental setup is also presented. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

    The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of Patented AtheroEdge™ systems (Global Biomedical Technologies, Inc, CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided and the problem of the different distance metrics one can adopt is confronted. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and showed final segmentation results for both techniques. The overall rationale is to provide user-independent high-performance techniques suitable for screening and remote monitoring.

  7. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.

  8. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
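
    One classic post-processing calibrator in this family is histogram binning (a simpler relative of the Bayesian binning methods this dissertation studies, not the author's exact method): raw scores are replaced by the empirical positive rate of their bin on a calibration set.

```python
# Histogram binning calibration, shown as an illustrative example of
# post-processing classifier outputs into more accurate probabilities.

def fit_histogram_binning(scores, labels, n_bins=10):
    """Return per-bin calibrated probabilities from (score, label) pairs."""
    counts = [0] * n_bins
    positives = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        counts[b] += 1
        positives[b] += y
    # Bins with no calibration data fall back to the bin midpoint.
    return [positives[b] / counts[b] if counts[b] else (b + 0.5) / n_bins
            for b in range(n_bins)]

def calibrate(score, bin_probs):
    n_bins = len(bin_probs)
    return bin_probs[min(int(score * n_bins), n_bins - 1)]

# Toy usage: an overconfident classifier whose ~0.9 scores are right
# only 60% of the time gets pulled back toward 0.6.
scores = [0.92, 0.95, 0.91, 0.93, 0.94] * 2
labels = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
bins = fit_histogram_binning(scores, labels)
p = calibrate(0.93, bins)
```

    More refined methods (isotonic regression, Platt scaling, Bayesian binning) differ mainly in how the score-to-probability map is fitted, not in this overall post-processing shape.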

  9. A sensitive and accurate method for the determination of perfluoroalkyl and polyfluoroalkyl substances in human serum using a high performance liquid chromatography-online solid phase extraction-tandem mass spectrometry.

    PubMed

    Yu, Chang Ho; Patel, Bhupendra; Palencia, Marilou; Fan, Zhihua Tina

    2017-01-13

    A selective, sensitive, and accurate analytical method for the measurement of perfluoroalkyl and polyfluoroalkyl substances (PFASs) in human serum, utilizing LC-MS/MS (liquid chromatography-tandem mass spectrometry), was developed and validated according to the Centers for Disease Control and Prevention (CDC) guidelines for biological sample analysis. Tests were conducted to determine the optimal analytical column, mobile phase composition and pH, gradient program, and cleaning procedure. The final analytical column selected for analysis was an extra densely bonded silica-packed reverse-phase column (Agilent XDB-C8, 3.0×100mm, 3.5μm). Mobile phase A was an aqueous buffer solution containing 10mM ammonium acetate (pH=4.3). Mobile phase B was a mixture of methanol and acetonitrile (1:1, v/v). The gradient was programmed to initiate a fast elution (%B, from 40 to 65%) between 1.0 and 1.5min, followed by a slow elution (%B: 65-80%) from 1.5 to 7.5min. The cleanup procedures were augmented by (1) cleaning with various solvents (isopropyl alcohol, methanol, acetonitrile, and reverse osmosis-purified water); (2) extensive washing steps for the autosampler and solid phase extraction (SPE) cartridge; and (3) a post-analysis cleaning step for the whole system. Under the above conditions, the resolution and sensitivity were significantly improved. Twelve target PFASs were baseline-separated (2.5-7.0min) within a 10-min acquisition time. The limits of detection (LODs) were 0.01ng/mL or lower for all of the target compounds, making this method 5 times more sensitive than previously published methods. The newly developed method was validated in the linear range of 0.01-50ng/mL, and the accuracy (recovery between 80 and 120%) and precision (RSD<20%) were acceptable at three spiked levels (0.25, 2.5, and 25ng/mL). The method development and validation results demonstrated that this method was precise, accurate, and robust, with high-throughput (∼10min

  10. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident when investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  11. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. Travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
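
    The boundedly rational choice rule can be sketched directly; the costs and threshold below are illustrative values, not the paper's parameters.

```python
# Boundedly rational route choice for a two-route system: with feedback
# costs c1, c2 and threshold BR, a traveler treats the routes as
# equivalent when |c1 - c2| < BR and otherwise picks the cheaper one.
import random

def choose_route(c1, c2, BR, rng):
    if abs(c1 - c2) < BR:
        return rng.choice((1, 2))   # indifferent: equal probability
    return 1 if c1 < c2 else 2

rng = random.Random(0)
# With nearly equal (possibly stale) feedback, choices split roughly
# 50/50 instead of everyone flocking to the seemingly better route.
picks = [choose_route(10.0, 10.5, BR=2.0, rng=rng) for _ in range(1000)]
share_route1 = picks.count(1) / len(picks)
```

    The 50/50 split under near-equal feedback is what damps the oscillations described above: stale information can no longer push the whole population onto one route.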

  12. High-performance vertical organic transistors.

    PubMed

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly on the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. MASS MEASUREMENTS BY AN ACCURATE AND SENSITIVE SELECTED ION RECORDING TECHNIQUE

    EPA Science Inventory

    Trace-level components of mixtures were successfully identified or confirmed by mass spectrometric accurate mass measurements, made at high resolution with selected ion recording, using GC and LC sample introduction. Measurements were made at 20 000 or 10 000 resolution, respecti...

  14. ExScalibur: A High-Performance Cloud-Enabled Suite for Whole Exome Germline and Somatic Mutation Identification

    PubMed Central

    Huang, Lei; Kang, Wenjun; Bartom, Elizabeth; Onel, Kenan; Volchenboum, Samuel; Andrade, Jorge

    2015-01-01

    Whole exome sequencing has facilitated the discovery of causal genetic variants associated with human diseases at deep coverage and low cost. In particular, the detection of somatic mutations from tumor/normal pairs has provided insights into the cancer genome. Although there is an abundance of publicly-available software for the detection of germline and somatic variants, concordance is generally limited among variant callers and alignment algorithms. Successful integration of variants detected by multiple methods requires in-depth knowledge of the software, access to high-performance computing resources, and advanced programming techniques. We present ExScalibur, a set of fully automated, highly scalable and modulated pipelines for whole exome data analysis. The suite integrates multiple alignment and variant calling algorithms for the accurate detection of germline and somatic mutations with close to 99% sensitivity and specificity. ExScalibur implements streamlined execution of analytical modules, real-time monitoring of pipeline progress, robust handling of errors and intuitive documentation that allows for increased reproducibility and sharing of results and workflows. It runs on local computers, high-performance computing clusters and cloud environments. In addition, we provide a data analysis report utility to facilitate visualization of the results that offers interactive exploration of quality control files, read alignment and variant calls, assisting downstream customization of potential disease-causing mutations. ExScalibur is open-source and is also available as a public image on Amazon cloud. PMID:26271043

  15. Personality Types of Illinois Elementary Principals in High-Poverty, High-Performing Schools

    ERIC Educational Resources Information Center

    Hollowell, Daniel R.

    2016-01-01

    The socio-economic achievement gap is prevalent in schools across the country. There are many high-poverty, high-performing schools that have been successful in closing this achievement gap. This study investigated 30 Illinois elementary school principals from high-poverty, high-achieving schools. Principals were given the Myers-Briggs Type…

  16. Ultra-compact high-performance MCT MWIR engine

    NASA Astrophysics Data System (ADS)

    Lutz, H.; Breiter, R.; Eich, D.; Figgemeier, H.; Oelmaier, R.; Rutzinger, S.; Schenk, H.; Wendler, J.

    2017-02-01

    Size, weight and power (SWaP) reduction is highly desired by applications such as sights for the dismounted soldier or small gimbals for UAVs. But must high performance and small size inevitably exclude each other in IR systems? Recent development progress in the fields of miniature cryocoolers, short dewars and high operating temperature (HOT) FPAs, combined with pitch size reduction, opens the door to very compact MWIR modules while keeping high electro-optical performance. AIM has now realized first prototypes of an ultra-compact high-performance MWIR engine in a total volume of only 18cl (60mm length x 60mm height x 50mm width). Impressive SWaP characteristics are complemented by a total weight below 400g and a power consumption < 4W in basic imaging mode. The engine consists of an XGA-format (1024x768) MCT detector array with 10μm pitch and a low power consuming ROIC. It is cooled down to a typical operating temperature of 160K by the miniature linear cryocooler SX020. The dewar uses a short coldfinger and is designed to reduce the heat load as much as possible. The cooler drive electronics is implemented in the CCE layout in order to reduce the required space of the printed boards and to save power. Uncorrected 14bit video data is provided via Camera Link. Optionally, a small image processing board can be stacked on top of the CCE to gain access to basic functions such as BPR, 2-point NUC and dynamic reduction. This paper will present the design, functionalities and performance data of the ultra-compact MCT MWIR engine operated at HOT.

  17. High performance bonded neo magnets using high density compaction

    NASA Astrophysics Data System (ADS)

    Herchenroeder, J.; Miller, D.; Sheth, N. K.; Foo, M. C.; Nagarathnam, K.

    2011-04-01

    This paper presents a manufacturing method called Combustion Driven Compaction (CDC) for the manufacture of isotropic bonded NdFeB magnets (bonded Neo). Magnets produced by the CDC method have densities up to 6.5 g/cm3, which is 7-10% higher than commercially available bonded Neo magnets of the same shape. The performance of an actual seat motor with a representative CDC ring magnet is presented and compared with the seat motor's performance with both commercial isotropic bonded Neo and anisotropic NdFeB rings of the same geometry. The comparisons are made at both room and elevated temperatures. The airgap flux for the magnet produced by the proposed method is 6% higher than that of the commercial isotropic bonded Neo magnet. After exposure to high temperature, the motor performance with this material is comparable to the motor performance with an anisotropic NdFeB magnet, owing to the superior thermal aging stability of isotropic NdFeB powders.

  18. Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus.

    PubMed

    Hirschmann, J; Schoffelen, J M; Schnitzler, A; van Gerven, M A J

    2017-10-01

    To investigate the possibility of tremor detection based on deep brain activity, we re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs), which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features. Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of the receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15) using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p=0.03), yet tremor could often be detected accurately with either approach. High-frequency oscillations (>200 Hz) were the best-performing individual feature. LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used. LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
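    The HMM-based segment classification described above can be sketched in miniature: a two-state model (rest vs. tremor) decoded over a band-power feature with the Viterbi algorithm. Everything here is illustrative: the synthetic data, the single feature, and all model parameters are invented for the sketch and are not the features or trained models from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic band-power trace: 50 "rest" segments (low power) followed by
    # 50 "tremor" segments (high power), with ground-truth labels.
    power = np.concatenate([rng.normal(1.0, 0.3, 50), rng.normal(3.0, 0.5, 50)])
    truth = np.concatenate([np.zeros(50, int), np.ones(50, int)])

    # Illustrative HMM parameters: states 0=rest, 1=tremor. "Sticky" transition
    # probabilities encode that tremor episodes persist over consecutive segments.
    log_pi = np.log([0.5, 0.5])
    log_A = np.log([[0.95, 0.05], [0.05, 0.95]])
    means, stds = np.array([1.0, 3.0]), np.array([0.4, 0.6])

    def log_gauss(x, mu, sd):
        """Log-density of a Gaussian emission model."""
        return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

    # Viterbi decoding: most likely state sequence given the observations.
    T, S = len(power), 2
    delta = np.empty((T, S))
    back = np.empty((T, S), int)
    delta[0] = log_pi + log_gauss(power[0], means, stds)
    for t in range(1, T):
        trans = delta[t - 1][:, None] + log_A      # [from_state, to_state]
        back[t] = trans.argmax(axis=0)             # best predecessor per state
        delta[t] = trans.max(axis=0) + log_gauss(power[t], means, stds)

    states = np.empty(T, int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                 # backtrack
        states[t] = back[t + 1, states[t + 1]]

    accuracy = (states == truth).mean()
    print(f"detection accuracy: {accuracy:.2f}")
    ```

    The sticky transition matrix is the point of using an HMM here rather than a per-segment threshold: it smooths over single noisy segments, which is one plausible reason the multi-feature HMMs outperformed direct thresholding in the study.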

  19. Training | High-Performance Computing | NREL

    Science.gov Websites

    Training: Find training resources for using NREL's high-performance computing (HPC) systems, as well as related online tutorials. Upcoming training: HPC User Workshop, June 12th. A group meets to discuss best practices in HPC training; this group has developed a list of resources.

  20. High-Flux, High Performance H2O2 Catalyst Bed for ISTAR

    NASA Technical Reports Server (NTRS)

    Ponzo, J.

    2005-01-01

    On NASA's ISTAR RBCC program, packaging and performance requirements exceeded traditional H2O2 catalyst bed capabilities. Aerojet refined a high-performance, monolithic 90% H2O2 catalyst bed previously developed and demonstrated. This approach to catalyst bed design and fabrication was an enabling technology for the ISTAR tri-fluid engine. The catalyst bed demonstrated 55 starts at throughputs greater than 0.60 lbm/s/sq in for a total duration of over 900 seconds, in a physical envelope approximately 1/4 that of traditional designs. The catalyst bed uses photoetched metal plates bonded into a single-piece monolithic structure. The precise control of the geometry and complete mixing result in a repeatable, quick-starting, high-performing catalyst bed. Three different beds were designed and tested, with the best-performing bed used for tri-fluid engine testing.