Sample records for accurate high performance

  1. Performance Patterns of High, Medium, and Low Performers during and following a Reward versus Non-Reward Contingency Phase

    ERIC Educational Resources Information Center

    Oliver, Renee; Williams, Robert L.

    2006-01-01

    Three contingency conditions were applied to the math performance of 4th and 5th graders: bonus credit for accurately solving math problems, bonus credit for completing math problems, and no bonus credit for accurately answering or completing math problems. Mixed ANOVAs were used in tracking the performance of high, medium, and low performers…

  2. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  3. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

    The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
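    The shunt-resistor method described in this record can be illustrated in miniature. This is a hedged sketch, not the authors' instrumentation: the current is inferred from the voltage across a shunt resistor, and average power is the mean of instantaneous power over an integer number of AC cycles. The function name and signature are ours.

```python
def shunt_power(v_applied, v_shunt, r_shunt):
    """Shunt-resistor method: actuator current is inferred as i(t) = v_shunt(t)/R,
    and average power is the mean of instantaneous power v(t)*i(t), taken over
    an integer number of AC cycles."""
    n = len(v_applied)
    return sum(va * (vs / r_shunt) for va, vs in zip(v_applied, v_shunt)) / n
```

The monitor-capacitor method instead integrates the charge-voltage Lissajous loop once per cycle; the compression circuit discussed in the record addresses the dynamic-range problem posed by the micro-discharge current spikes, which this sketch ignores.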

  4. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618
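    The codon-aware idea in this record can be sketched in miniature. This is not RAMICS's profile-HMM machinery, just an illustration of the two distinctions it draws: an indel whose length is a multiple of three preserves the reading frame, and a crude frame search looks for the longest stop-free codon run.

```python
def classify_indel(length):
    """A length-3k indel preserves the reading frame (codon-sized);
    any other length causes a frameshift."""
    return "codon-sized" if length % 3 == 0 else "frameshift"

def best_reading_frame(seq):
    """Pick the frame (0, 1, or 2) with the longest run of stop-free codons;
    a toy stand-in for RAMICS's profile-HMM open-reading-frame discovery."""
    stops = {"TAA", "TAG", "TGA"}
    best_run, best_frame = -1, 0
    for frame in range(3):
        run = cur = 0
        for i in range(frame, len(seq) - 2, 3):
            cur = 0 if seq[i:i+3] in stops else cur + 1
            run = max(run, cur)
        if run > best_run:
            best_run, best_frame = run, frame
    return best_frame
```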

  5. Identifying nursing interventions associated with the accuracy of nursing diagnoses used for patients with liver cirrhosis

    PubMed Central

    Gimenes, Fernanda Raphael Escobar; Motta, Ana Paula Gobbo; da Silva, Patrícia Costa dos Santos; Gobbo, Ana Flora Fogaça; Atila, Elisabeth; de Carvalho, Emilia Campos

    2017-01-01

    Objective: to identify the nursing interventions associated with the most accurate and frequently used NANDA International, Inc. (NANDA-I) nursing diagnoses for patients with liver cirrhosis. Method: this is a descriptive, quantitative, cross-sectional study. Results: a total of 12 nursing diagnoses were evaluated, seven of which showed high accuracy (IVC ≥ 0.8); 70 interventions were identified and 23 (32.86%) were common to more than one diagnosis. Conclusion: in general, nurses often perform nursing interventions suggested in the NIC for the seven highly accurate nursing diagnoses identified in this study to care for patients with liver cirrhosis. Accurate and valid nursing diagnoses guide the selection of appropriate interventions that nurses can perform to enhance patient safety and thus improve patient health outcomes.

  6. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  7. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA.

    PubMed

    Wright, Imogen A; Travers, Simon A

    2014-07-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Method of measuring thermal conductivity of high performance insulation

    NASA Technical Reports Server (NTRS)

    Hyde, E. H.; Russell, L. D.

    1968-01-01

    Method accurately measures the thermal conductivity of high-performance sheet insulation as a discrete function of temperature. It permits measurements to be made at temperature drops of approximately 10 degrees F across the insulation and ensures measurement accuracy by minimizing longitudinal heat losses in the system.
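    The underlying relation is Fourier's law for steady one-dimensional conduction through a flat specimen. A minimal sketch (the function and variable names are ours, not from the report):

```python
def thermal_conductivity(heat_flow_w, thickness_m, area_m2, delta_t_k):
    """Fourier's law rearranged for a flat sheet: k = q * L / (A * dT).
    With the small temperature drops mentioned above (about 10 degrees F),
    accurate measurement of dT and control of longitudinal heat losses
    dominate the error budget."""
    return heat_flow_w * thickness_m / (area_m2 * delta_t_k)
```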

  9. Liquid Bismuth Feed System for Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Markusic, T. E.; Polzin, K. A.; Stanojev, B. J.

    2006-01-01

    Operation of Hall thrusters with bismuth propellant has been shown to be a promising path toward high-power, high-performance, long-lifetime electric propulsion for spaceflight missions. For example, the VHITAL project aims to accurately and experimentally assess the performance characteristics of 10 kW-class bismuth-fed Hall thrusters, in order to validate earlier results and resuscitate a promising technology that has been relatively dormant for about two decades. A critical element of these tests will be the precise metering of propellant to the thruster, since performance cannot be accurately assessed without an accurate accounting of mass flow rate. Earlier work used a pre/post-test propellant weighing scheme that did not provide any real-time measurement of mass flow rate while the thruster was firing, which made subsequent performance calculations difficult. The motivation of the present work was to develop a precision liquid bismuth Propellant Management System (PMS) that provides real-time propellant mass flow rate measurement and control, enabling accurate thruster performance measurements. Additionally, our approach emphasizes the development of new liquid metal flow control components and, hence, will establish a basis for the future development of components for application in spaceflight. The designs of various critical components of a bismuth PMS are described: reservoir, electromagnetic pump, hotspot flow sensor, and automated control system. Particular emphasis is given to material selection and high-temperature sealing techniques. Open-loop calibration test results are reported, which validate the system's capability to deliver bismuth at mass flow rates ranging from 10 to 100 mg/sec with an uncertainty of less than +/- 5%. Results of integrated vaporizer/liquid PMS tests demonstrate all of the necessary elements of a complete bismuth feed system for electric propulsion.

  10. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  11. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
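    The blending step can be sketched generically. The record does not give the weighting rule, so the following is only the structural idea: a convex combination of candidate fluxes whose weights, chosen by a local shock/contact detector, sum to one so that consistency is preserved.

```python
def blend_flux(f_robust, f_low_diss, f_contact, weights):
    """Convex combination of three candidate numerical fluxes. The weights
    (w1, w2, w3) would come from a local physical-situation detector; they
    must sum to 1 so the blended flux remains a consistent flux."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-12, "weights must sum to 1"
    return w1 * f_robust + w2 * f_low_diss + w3 * f_contact
```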

  12. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
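    The area- and mass-averaging techniques compared in this record reduce to different weightings of the same probe readings. A minimal sketch (function names are ours; work averaging, which weights by enthalpy flux, is omitted):

```python
def area_average(values, areas):
    """Weight each probe reading by the area it represents."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

def mass_average(values, mass_fluxes):
    """Weight each probe reading by the local mass flux through its area."""
    return sum(v * m for v, m in zip(values, mass_fluxes)) / sum(mass_fluxes)
```

With uniform weights the two coincide; they diverge when the mass-flux profile is non-uniform, which is why the associated uncertainties differ even when the averaged values nearly agree.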

  13. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
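    The TVD mechanism can be illustrated with the classic minmod slope limiter. The record does not say which limiter this code uses; minmod is shown only as the standard example of how TVD schemes suppress spurious oscillations at discontinuities.

```python
def minmod(a, b):
    """Classic minmod slope limiter: keep the smaller-magnitude one-sided
    slope when the two slopes agree in sign, otherwise flatten to zero.
    Zeroing the slope at extrema is what prevents new oscillations."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b
```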

  14. Experiences with Acquiring Highly Redundant Spatial Data to Support Driverless Vehicle Technologies

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2018-05-01

    As vehicle technology is moving towards higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, as accurate maps have great potential to increase safety. In particular, high definition 3D maps, including road topography and infrastructure, as well as city models along the transportation corridors, represent the necessary support for driverless vehicles. In this effort, a vehicle equipped with high-, medium- and low-resolution active and passive cameras acquired data in a typical traffic environment, represented here by the OSU campus, where GPS/GNSS data are available along with other navigation sensor data streams. The data streams can be used for two purposes. First, high-definition 3D maps can be created by integrating all the sensory data, and Data Analytics/Big Data methods can be tested for automatic object space reconstruction. Second, the data streams can support algorithmic research for driverless vehicle technologies, including object avoidance, navigation/positioning, detecting pedestrians and bicyclists, etc. Crucial cross-performance analyses of map database resolution and accuracy with respect to sensor performance metrics can be derived to achieve an economical solution for accurate driverless vehicle positioning. These, in turn, could provide essential information on optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies. The paper reviews the data acquisition and primary data processing challenges and performance results.

  15. Performance seeking control (PSC) for the F-15 highly integrated digital electronic control (HIDEC) aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. But, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increased thrust.

  16. Science, technology and mission design for LATOR experiment

    NASA Astrophysics Data System (ADS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.

    2017-11-01

    The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system: the close proximity to the Sun. By using independent time-series of highly accurate measurements of the Shapiro time-delay (laser ranging accurate to 1 cm) and interferometric astrometry (accurate to 0.1 picoradian), LATOR will measure the gravitational deflection of light by solar gravity with an accuracy of 1 part in a billion, a factor of 30,000 better than currently available. LATOR will perform a series of highly accurate tests of gravitation and cosmology in its search for cosmological remnants of the scalar field in the solar system. We present the science, technology and mission design for the LATOR mission.
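    The quantity being measured can be checked against the standard weak-field formula for gravitational light deflection, delta = 4GM/(c^2 b), which gives the familiar 1.75 arcsec at the solar limb. A worked sketch with rounded physical constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
R_SUN = 6.957e8    # solar radius, m

def deflection_arcsec(impact_parameter_m):
    """Weak-field gravitational light deflection delta = 4GM / (c^2 b),
    converted from radians to arcseconds."""
    delta_rad = 4.0 * G * M_SUN / (C ** 2 * impact_parameter_m)
    return math.degrees(delta_rad) * 3600.0
```

LATOR's quoted 1-part-in-a-billion accuracy refers to measuring this roughly 1.75 arcsec limb deflection.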

  17. Numerical Prediction of Pitch Damping Stability Derivatives for Finned Projectiles

    DTIC Science & Technology

    2013-11-01

    Supported in part by a grant of high-performance computing time from the U.S. DOD High Performance Computing Modernization Program (HPCMP) at the Army... Section 3.3.2: Time-Accurate Simulations

  18. Association of School-Based Physical Activity Opportunities, Socioeconomic Status, and Third-Grade Reading

    ERIC Educational Resources Information Center

    Kern, Ben D.; Graber, Kim C.; Shen, Sa; Hillman, Charles H.; McLoughlin, Gabriella

    2018-01-01

    Background: Socioeconomic status (SES) is the most accurate predictor of academic performance in US schools. Third-grade reading is highly predictive of high school graduation. Chronic physical activity (PA) is shown to improve cognition and academic performance. We hypothesized that school-based PA opportunities (recess and physical education)…

  19. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for the kernels studied show that the approach is accurate.
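    The reuse-distance notion behind this record can be sketched directly. The record describes a static (compile-time) analysis; the dynamic trace version below is only the textbook definition, in a naive O(n^2) form (production tools use balanced trees for O(n log n)):

```python
def reuse_distances(trace):
    """For each access, the LRU reuse distance: the number of distinct
    addresses touched since the previous access to the same address
    (infinity for a first access). An access hits in a fully associative
    LRU cache of capacity C exactly when its reuse distance is < C, which
    is how a reuse-distance histogram yields miss rates."""
    last_seen = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            out.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            out.append(float("inf"))
        last_seen[addr] = i
    return out
```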

  20. Antenna Controller Replacement Software

    NASA Technical Reports Server (NTRS)

    Chao, Roger Y.; Morgan, Scott C.; Strain, Martha M.; Rockwell, Stephen T.; Shimizu, Kenneth J.; Tehrani, Barzia J.; Kwok, Jaclyn H.; Tuazon-Wong, Michelle; Valtier, Henry; Nalbandi, Reza

    2010-01-01

    The Antenna Controller Replacement (ACR) software accurately points and monitors the Deep Space Network (DSN) 70-m and 34-m high-efficiency (HEF) ground-based antennas that are used to track primarily spacecraft and, periodically, celestial targets. To track a spacecraft, or other targets, the antenna must be accurately pointed at the spacecraft, which can be very far away with very weak signals. ACR's conical scanning capability collects the signal in a circular pattern around the target, calculates the location of the strongest signal, and adjusts the antenna pointing to point directly at the spacecraft. A real-time, closed-loop servo control algorithm performed every 0.02 second allows accurate positioning of the antenna in order to track these distant spacecraft. Additionally, this advanced servo control algorithm provides better antenna pointing performance in windy conditions. The ACR software provides high-level commands that provide a very easy user interface for the DSN operator. The operator only needs to enter two commands to start the antenna and subreflector, and Master Equatorial tracking. The most accurate antenna pointing is accomplished by aligning the antenna to the Master Equatorial, which, because of its small size and sheltered location, has the most stable pointing. The antenna has hundreds of digital and analog monitor points. The ACR software provides compact displays to summarize the status of the antenna, subreflector, and the Master Equatorial. The ACR software has two major functions. First, it performs all of the steps required to accurately point the antenna (and subreflector and Master Equatorial) at the spacecraft (or celestial target).
This involves controlling the antenna/ subreflector/Master-Equatorial hardware, initiating and monitoring the correct sequence of operations, calculating the position of the spacecraft relative to the antenna, executing the real-time servo control algorithm to maintain the correct position, and monitoring tracking performance.
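    The conical-scan step can be sketched as a first-harmonic fit: as the beam traces its circle, received power varies roughly as A + B*cos(theta - phi), and (B, phi) give the size and direction of the pointing error. This is a generic illustration, not the ACR flight code:

```python
import math

def conscan_offset(scan_angles_rad, signal):
    """Least-squares first-harmonic fit over one scan circle with uniformly
    spaced samples. Returns (error_amplitude, error_direction_rad); the
    pointing correction is applied opposite to... toward the fitted phase."""
    n = len(signal)
    bx = 2.0 / n * sum(s * math.cos(t) for s, t in zip(signal, scan_angles_rad))
    by = 2.0 / n * sum(s * math.sin(t) for s, t in zip(signal, scan_angles_rad))
    return math.hypot(bx, by), math.atan2(by, bx)
```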

  1. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to easily and accurately evaluate energy-efficient measures for K-5 schools, which would contribute to the…

  2. Differentiation of whole grain from refined wheat (T. aestivum) flour using lipid profile of wheat bran, germ, and endosperm with UHPLC-HRAM mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    A comprehensive analysis of wheat lipids from milling fractions of bran, germ, and endosperm was performed using ultra high-performance liquid chromatography high-resolution accurate-mass multi-stage mass spectrometry (UHPLC-HRAM-MSn) with electrospray ionization (ESI) and atmospheric pressure chem...

  3. Deliberation's blindsight: how cognitive load can improve judgments.

    PubMed

    Hoffmann, Janina A; von Helversen, Bettina; Rieskamp, Jörg

    2013-06-01

    Multitasking poses a major challenge in modern work environments by putting the worker under cognitive load. Performance decrements often occur when people are under high cognitive load because they switch to less demanding, and often less accurate, cognitive strategies. Although cognitive load disturbs performance over a wide range of tasks, it may also carry benefits. In the experiments reported here, we showed that judgment performance can increase under cognitive load. Participants solved a multiple-cue judgment task in which high performance could be achieved by using a similarity-based judgment strategy but not by using a more demanding rule-based judgment strategy. Accordingly, cognitive load induced a shift to a similarity-based judgment strategy, which consequently led to more accurate judgments. By contrast, shifting to a similarity-based strategy harmed judgments in a task best solved by using a rule-based strategy. These results show how important it is to consider the cognitive strategies people rely on to understand how people perform in demanding work environments.

  4. Ranking Reputation and Quality in Online Rating Systems

    PubMed Central

    Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng

    2014-01-01

    How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of spammers. In the literature, many well-performing iterative ranking methods have been proposed. These methods can effectively recognize unreliable users and reduce their weight in judging the quality of objects, and finally lead to a more accurate evaluation of the online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users, and two penalty factors make the algorithm resistant to malicious behaviors. Validation of our method is performed in both artificial and real user-object bipartite networks. PMID:24819119
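    A hedged miniature of such a reputation-weighted iteration follows. These are not the authors' exact update rules (their reputation redistribution and penalty factors are omitted); it only shows the alternating structure in which users who disagree with the consensus lose weight:

```python
def iterative_ranking(ratings, n_iter=50, eps=1e-6):
    """ratings: dict mapping (user, obj) -> rating. Alternate between
    (1) object quality = reputation-weighted mean of its ratings, and
    (2) user reputation = inverse mean squared error of that user's
    ratings against current quality estimates."""
    users = {u for u, _ in ratings}
    objs = {o for _, o in ratings}
    rep = {u: 1.0 for u in users}
    qual = {o: 0.0 for o in objs}
    for _ in range(n_iter):
        for o in objs:
            num = sum(rep[u] * r for (u, oo), r in ratings.items() if oo == o)
            den = sum(rep[u] for (u, oo), _ in ratings.items() if oo == o)
            qual[o] = num / den
        for u in users:
            errs = [(r - qual[o]) ** 2 for (uu, o), r in ratings.items() if uu == u]
            rep[u] = 1.0 / (sum(errs) / len(errs) + eps)
    return qual, rep
```

With two honest users and one spammer rating two objects, the honest consensus dominates after a few iterations and the spammer's reputation collapses.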

  5. The study of forensic toxicology should not be neglected in Japanese universities.

    PubMed

    Ishihara, Kenji; Yajima, Daisuke; Abe, Hiroko; Nagasawa, Sayaka; Nara, Akina; Iwase, Hirotaro

    2015-04-01

    Forensic toxicology is aimed at identifying the relationship between drugs or poisons and the cause of death or a crime. In their toxicology laboratory at Chiba University, the authors analyze almost every body for drugs and poisons. A simple inspection kit is used in an attempt to ascertain drug abuse, and a mass spectrometer is used to perform highly accurate screening. When a poison is detected, quantitative analyses are required. A recent topic of interest is new psychoactive substances (NPS). Although NPS-related deaths may be decreasing, NPS use as a cause of death is difficult to ascertain. Forensic institutes have recently begun to perform drug and poison tests on corpses. However, this approach presents several problems, as are discussed here. The hope is that highly accurate analyses of drugs and poisons will be performed throughout the country.

  6. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in in-house simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of the tested simulators handles trap-assisted generation-recombination (GR) noise accurately. In order to overcome the aforementioned problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the mathematical SCILAB software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  7. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data of PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient, most of these are transcription errors and some patients falsify their records. The PEF measurement itself is not effort independent, the data produced depending on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities a greater understanding was obtained of the limitations of current methods of measurement, and quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli, the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  9. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    DTIC Science & Technology

    2004-07-01

    steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate

  10. Comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on accurate mass databases combined with MS/MS spectra.

    PubMed

    Liu, Yan-Chun; Xiao, Sa; Yang, Kun; Ling, Li; Sun, Zhi-Liang; Liu, Zhao-Ying

    2017-06-01

    This study reports an applicable analytical strategy for comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QqTOF MS), based on the use of accurate mass databases combined with MS/MS spectra. The databases created included accurate masses and elemental compositions of 204 components from Gelsemium together with their structural data. The accurate MS and MS/MS spectra were acquired through a data-dependent auto MS/MS mode, followed by extraction of potential compounds from the LC-QqTOF MS raw data of the sample; these were then matched against the databases to search for targeted components. The structures of the detected components were tentatively characterized by manually interpreting the accurate MS/MS spectra for the first time. A total of 57 components were successfully detected and structurally characterized from the crude extracts of G. elegans, although some isomers could not be differentiated. This analytical strategy is generic and efficient, avoids isolation and purification procedures, enables a comprehensive structural characterization of target components of Gelsemium, and should be widely applicable to complicated mixtures derived from Gelsemium preparations. Copyright © 2017 John Wiley & Sons, Ltd.
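The database-matching step this abstract describes can be sketched as a tolerance search against a table of theoretical monoisotopic masses. The two alkaloid formulas, the two-entry database, and the 10 ppm window below are illustrative assumptions, not the actual 204-entry database or tolerances used in the study:

```python
# Sketch of matching measured m/z values against an accurate-mass database.
# The entries below are illustrative placeholders, not the study's database.

# Monoisotopic masses of the elements (u)
ELEMENTS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def monoisotopic_mass(formula):
    """formula given as a dict, e.g. {'C': 20, 'H': 22, 'N': 2, 'O': 2}."""
    return sum(ELEMENTS[el] * n for el, n in formula.items())

DATABASE = {
    "gelsemine": {"C": 20, "H": 22, "N": 2, "O": 2},
    "koumine":   {"C": 20, "H": 22, "N": 2, "O": 1},
}

def match(mz_measured, tol_ppm=10.0, proton=1.007276):
    """Return database entries whose [M+H]+ m/z lies within tol_ppm."""
    hits = []
    for name, formula in DATABASE.items():
        mz_theo = monoisotopic_mass(formula) + proton
        ppm = abs(mz_measured - mz_theo) / mz_theo * 1e6
        if ppm <= tol_ppm:
            hits.append((name, round(mz_theo, 4), round(ppm, 1)))
    return hits
```

In a real workflow the candidate list returned by such a search would be pruned further by MS/MS fragment matching, as the abstract describes.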

  11. pyQms enables universal and accurate quantification of mass spectrometry data.

    PubMed

    Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian

    2017-10-01

    Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software package for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching, which offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. Due to its universal design, pyQms is applicable to every research field, labeling strategy, and acquisition technique. This opens ultimate flexibility for researchers to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well in accurately quantifying partially labeled proteomes at large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
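A minimal sketch of the isotope-pattern-matching idea: a normalized dot product (cosine similarity) between a theoretical and a measured isotope envelope. The intensity vectors below are made-up values, and pyQms's actual scoring is more elaborate than this:

```python
import math

def pattern_score(theoretical, measured):
    """Cosine similarity between two isotope-envelope intensity vectors.

    Both inputs are intensity lists over the same peaks (M, M+1, M+2, ...).
    Returns a value in [0, 1]; 1.0 means identical relative patterns.
    """
    dot = sum(t * m for t, m in zip(theoretical, measured))
    norm = (math.sqrt(sum(t * t for t in theoretical))
            * math.sqrt(sum(m * m for m in measured)))
    return dot / norm if norm else 0.0

# Illustrative relative intensities (invented values)
theo = [1.00, 0.55, 0.18, 0.04]
good = [0.98, 0.57, 0.17, 0.05]   # well-matched measurement
bad  = [1.00, 0.10, 0.90, 0.02]   # wrong envelope, e.g. an interfering species
```

A score near 1 accepts the quantification; a low score flags interference, which is the kind of built-in quality assessment the abstract refers to.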

  12. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-07-01

    We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved space-times. In this paper, we assume the background space-time to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local time-stepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed space-times. Finally, we present performance and accuracy comparisons between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.

  14. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

    The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature, creating new instrumentation performance goals. Thermal response to power loading, and the importance of average and spot heating in the bulk regions, require technical advancements in real-time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in high-repetition-rate, high-average-power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications for both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics, their nonlinearities, and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  15. Opportunities to Intercalibrate Radiometric Sensors From International Space Station

    NASA Technical Reports Server (NTRS)

    Roithmayr, C. M.; Lukashin, C.; Speth, P. W.; Thome, K. J.; Young, D. F.; Wielicki, B. A.

    2012-01-01

    Highly accurate measurements of Earth's thermal infrared and reflected solar radiation are required for detecting and predicting long-term climate change. We consider the concept of using the International Space Station to test instruments and techniques that would eventually be used on a dedicated mission such as the Climate Absolute Radiance and Refractivity Observatory. In particular, a quantitative investigation is performed to determine whether it is possible to use measurements obtained with a highly accurate reflected solar radiation spectrometer to calibrate similar, less accurate instruments in other low Earth orbits. Estimates of numbers of samples useful for intercalibration are made with the aid of year-long simulations of orbital motion. We conclude that the International Space Station orbit is ideally suited for the purpose of intercalibration.

  16. High-speed engine/component performance assessment using exergy and thrust-based methods

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.

    1996-01-01

    This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on exergy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional exergy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Exergy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of exergy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based exergy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.

  17. High- and low-pressure pneumotachometers measure respiration rates accurately in adverse environments

    NASA Technical Reports Server (NTRS)

    Fagot, R. J.; Mc Donald, R. T.; Roman, J. A.

    1968-01-01

    Respiration-rate transducers in the form of pneumotachometers measure respiration rates of pilots operating high performance research aircraft. In each low-pressure or high-pressure oxygen system, a sensor is placed in series with the pilot's oxygen supply line to detect gas flow accompanying respiration.

  18. DNS of Low-Pressure Turbine Cascade Flows with Elevated Inflow Turbulence Using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.

    2016-01-01

    Recent progress towards developing a new computational capability for accurate and efficient high-fidelity direct numerical simulation (DNS) and large-eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy, and is implemented in a computationally efficient manner on a modern high performance computer architecture. An inflow turbulence generation procedure based on a linear forcing approach has been incorporated in this framework, and DNS conducted to study the effect of inflow turbulence on the suction-side separation bubble in low-pressure turbine (LPT) cascades. The T106 series of airfoil cascades in both lightly (T106A) and highly loaded (T106C) configurations at exit isentropic Reynolds numbers of 60,000 and 80,000, respectively, are considered. The numerical simulations are performed using 8th-order accurate spatial and 4th-order accurate temporal discretizations. The changes in separation bubble topology due to elevated inflow turbulence are captured by the present method, and the physical mechanisms leading to the changes are explained. The present results are in good agreement with prior numerical simulations, but some expected discrepancies with the experimental data for the T106C case are noted and discussed.

  19. Rapid separation and characterization of diterpenoid alkaloids in processed roots of Aconitum carmichaeli using ultra high performance liquid chromatography coupled with hybrid linear ion trap-Orbitrap tandem mass spectrometry.

    PubMed

    Xu, Wen; Zhang, Jing; Zhu, Dayuan; Huang, Juan; Huang, Zhihai; Bai, Junqi; Qiu, Xiaohui

    2014-10-01

    The lateral root of Aconitum carmichaeli, a popular traditional Chinese medicine, has been widely used to treat rheumatic diseases. For decades, diterpenoid alkaloids have dominated the phytochemical and biomedical research on this plant. In this study, a rapid and sensitive method based on ultra high performance liquid chromatography coupled with linear ion trap-Orbitrap tandem mass spectrometry was developed to characterize the diterpenoid alkaloids in Aconitum carmichaeli. Under an optimized chromatographic condition, more than 120 diterpenoid alkaloids were separated with good resolution. Using a systematic strategy that combines high resolution separation, highly accurate mass measurements, and a good understanding of the diagnostic fragment-based fragmentation patterns, these diterpenoid alkaloids were identified or tentatively identified. The identification of these chemicals provided essential data for further phytochemical studies and toxicity research on Aconitum carmichaeli. Moreover, the ultra high performance liquid chromatography with linear ion trap-Orbitrap mass spectrometry platform was an effective and accurate tool for rapid qualitative analysis of secondary metabolite products from natural resources. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures, and the performance impact of different HPC architecture choices, is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale; xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim also offers a highly accurate simulation mode for better tracking of injected MPI process failures; with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
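The overhead percentages quoted above follow the usual definition, (simulated wall time - native wall time) / native wall time; a trivial helper, with made-up timings:

```python
def overhead_pct(t_native, t_simulated):
    """Simulation overhead as a percentage of native runtime."""
    return (t_simulated - t_native) / t_native * 100.0

# Illustrative timings: a benchmark that natively takes 10 s and 112 s under
# simulation corresponds to the 1,020% figure quoted for CG before the fix.
```

Equivalently, an overhead of 1,020% means the simulated run takes 11.2 times as long as the native run.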

  1. Comparison Between Surf and Multi-Shock Forest Fire High Explosive Burn Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenfield, Nicholas Alexander

    PAGOSA has several different burn models used to model high explosive detonation. Two of these, Multi-Shock Forest Fire and Surf, are capable of modeling shock initiation. Accurately calculating shock initiation of a high explosive is important because it is a mechanism for detonation in many accident scenarios (e.g., fragment impact). Comparing the models to pop-plot data gives confidence that the models are accurately calculating detonation, or the lack thereof. To compare the performance of these models, pop-plots were created from simulations in which one two-centimeter block of PBX 9502 collides with another block of PBX 9502.
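For context, a pop-plot relates run distance to detonation and input shock pressure, and is approximately linear on log-log axes; the sketch below fits that line to synthetic points (the pressures and run distances are invented for illustration, not PBX 9502 data):

```python
import math

# Illustrative pop-plot fit: log10(x_run) = a + b * log10(P),
# with made-up synthetic points rather than real PBX 9502 data.
pressures = [8.0, 10.0, 12.0, 16.0]   # input shock pressure (GPa)
run_dists = [12.0, 7.2, 4.8, 2.6]     # run distance to detonation (mm)

lx = [math.log10(p) for p in pressures]
ly = [math.log10(x) for x in run_dists]

n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
a = my - b * mx

def predicted_run_distance(pressure):
    """Run distance (mm) predicted by the fitted pop-plot line."""
    return 10.0 ** (a + b * math.log10(pressure))
```

Comparing simulated (model-produced) points against such a fitted experimental line is the consistency check the abstract describes.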

  2. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show the excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations.
    All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we predict the scalability of the Spike algorithm up to 65,536 cores using our model. This paper extends the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
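The pseudo-analytical modeling approach, an analytical cost expression whose constants are parameterized from measured runtimes, can be sketched as follows. The three-term cost form T(n, p) = a*(n/p) + b*log2(p) + c and all sample timings are illustrative assumptions, not the paper's actual Spike model:

```python
import math

# Assumed cost form: T(n, p) = a*(n/p) + b*log2(p) + c
# (compute term, communication term, fixed overhead). Fit a, b, c by
# ordinary least squares on measured runtimes via the normal equations.

def fit_model(samples):
    """samples: list of ((n, p), runtime). Returns (a, b, c)."""
    # Rows of the design matrix [n/p, log2(p), 1].
    X = [[n / p, math.log2(p), 1.0] for (n, p), _ in samples]
    y = [t for _, t in samples]
    # Normal equations (X^T X) beta = X^T y, solved by Gaussian elimination.
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    A = [row[:] + [v] for row, v in zip(XtX, Xty)]
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (A[i][3] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(beta)

def predict(coeffs, n, p):
    """Predicted runtime for problem size n on p processors."""
    a, b, c = coeffs
    return a * (n / p) + b * math.log2(p) + c
```

Once the constants are fitted from runs on modest processor counts, the analytical form can be extrapolated to core counts far beyond what was measured, which is how such models support scalability predictions.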

  3. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750±0.0049 amu and 270.0786±0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098±0.0061 amu and 314.1153±0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
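The isomer point follows directly from the mass arithmetic: two compounds with the same empirical formula have identical theoretical monoisotopic masses, so accurate mass alone cannot separate them. A sketch, taking C14H21NO5S as the shared formula of the two ESAs (an assumption for illustration; the abstract does not state the formula):

```python
# Theoretical monoisotopic mass for an empirical formula. The formula
# C14H21NO5S for the ESA degradates is an assumption for illustration.
MASSES = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
          "O": 15.9949146221, "S": 31.97207069}

def mono_mass(formula):
    """Sum of monoisotopic element masses weighted by atom counts."""
    return sum(MASSES[el] * n for el, n in formula.items())

def neg_ion_mz(formula, proton=1.0072765):
    """m/z of the deprotonated pseudomolecular ion [M-H]-."""
    return mono_mass(formula) - proton

esa = {"C": 14, "H": 21, "N": 1, "O": 5, "S": 1}
# Both isomers map to this same formula, hence the same theoretical m/z;
# accurate mass alone cannot separate them.
```

The computed [M-H]- value lands close to the ~314.11 amu averages reported above, and is identical for both isomers by construction, which is exactly why the authors turn to positive-ion fragmentation to tell them apart.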

  5. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
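For context on why accurate E-values matter for screening, the standard Karlin-Altschul form for local alignment is E = K·m·n·e^(-λS); the parameter values below are illustrative gapped-BLOSUM62-style defaults, and the semi-global statistics used by GLOBAL differ in detail:

```python
import math

def evalue(score, m, n, K=0.041, lam=0.267):
    """Karlin-Altschul expected number of chance alignments scoring >= score.

    m, n: query and database lengths; K, lam: statistical parameters that
    depend on the scoring system (the defaults are illustrative values of
    the kind reported for gapped BLOSUM62 alignment, not GLOBAL's).
    """
    return K * m * n * math.exp(-lam * score)
```

When screening a large database, an alignment is reported only if its E-value falls below a chosen threshold, so systematically inaccurate E-values translate directly into missed domains or spurious hits.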

  6. Explicitly correlated benchmark calculations on C8H8 isomer energy separations: how accurate are DFT, double-hybrid, and composite ab initio procedures?

    NASA Astrophysics Data System (ADS)

    Karton, Amir; Martin, Jan M. L.

    2012-10-01

    Accurate isomerization energies are obtained for a set of 45 C8H8 isomers by means of the high-level, ab initio W1-F12 thermochemical protocol. The 45 isomers involve a range of hydrocarbon functional groups, including (linear and cyclic) polyacetylene, polyyne, and cumulene moieties, as well as aromatic, anti-aromatic, and highly-strained rings. Performance of a variety of DFT functionals for the isomerization energies is evaluated. This proves to be a challenging test: only six of the 56 tested functionals attain root mean square deviations (RMSDs) below 3 kcal mol-1 (the performance of MP2), namely: 2.9 (B972-D), 2.8 (PW6B95), 2.7 (B3PW91-D), 2.2 (PWPB95-D3), 2.1 (ωB97X-D), and 1.2 (DSD-PBEP86) kcal mol-1. Isomers involving highly-strained fused rings or long cumulenic chains provide a 'torture test' for most functionals. Finally, we evaluate the performance of composite procedures (e.g. G4, G4(MP2), CBS-QB3, and CBS-APNO), as well as that of standard ab initio procedures (e.g. MP2, SCS-MP2, MP4, CCSD, and SCS-CCSD). Both connected triples and post-MP4 singles and doubles are important for accurate results. SCS-MP2 actually outperforms MP4(SDQ) for this problem, while SCS-MP3 yields similar performance as CCSD and slightly bests MP4. All the tested empirical composite procedures show excellent performance with RMSDs below 1 kcal mol-1.
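The ranking metric here is the root-mean-square deviation of each method's isomerization energies from the W1-F12 benchmark values; a minimal sketch with invented error values:

```python
import math

def rmsd(errors):
    """Root-mean-square deviation of signed errors vs. a benchmark."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Illustrative signed errors (kcal/mol) for a hypothetical functional over
# five isomerization energies -- not actual data from the paper.
errs = [1.2, -0.8, 2.5, -1.9, 0.3]
```

A functional with an RMSD below about 3 kcal/mol would, on the paper's scale, be competitive with MP2.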

  7. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied these data with time series analysis; in this paper our local regression models perform a spatial analysis of the geophysical data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where the data depend on time.
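The Lowess idea, a locally weighted linear fit evaluated point by point with tricube weights, can be sketched as below; the neighborhood fraction and the absence of robustness iterations are simplifying assumptions of this sketch:

```python
def lowess_at(x0, xs, ys, frac=0.5):
    """Locally weighted linear regression (Lowess, degree 1) at point x0.

    Uses the frac nearest neighbours with tricube weights
    w = (1 - (d/dmax)^3)^3. Illustrative sketch without the robustness
    iterations of full Lowess.
    """
    k = max(2, int(frac * len(xs)))
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in order) or 1.0
    w = {i: (1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in order}
    sw = sum(w.values())
    mx = sum(w[i] * xs[i] for i in order) / sw
    my = sum(w[i] * ys[i] for i in order) / sw
    sxx = sum(w[i] * (xs[i] - mx) ** 2 for i in order)
    sxy = sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in order)
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x0 - mx)
```

Sweeping x0 over a grid traces out the smoothed curve; for spatial data the same construction applies with distances taken in two dimensions.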

  8. SU-F-P-19: Fetal Dose Estimate for a High-Dose Fluoroscopy Guided Intervention Using Modern Data Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moirano, J

    Purpose: An accurate dose estimate is necessary for effective patient management after a fetal exposure. In the case of a high-dose exposure, it is critical to use all resources available in order to make the most accurate assessment of the fetal dose. This work will demonstrate a methodology for accurate fetal dose estimation using tools that have recently become available in many clinics, and show examples of best practices for collecting data and performing the fetal dose calculation. Methods: A fetal dose estimate calculation was performed using modern data collection tools to determine parameters for the calculation. The reference point air kerma as displayed by the fluoroscopic system was checked for accuracy. A cumulative dose incidence map and DICOM header mining were used to determine the displayed reference point air kerma. Corrections for attenuation caused by the patient table and pad were measured and applied in order to determine the peak skin dose. The position and depth of the fetus were determined by ultrasound imaging and consultation with a radiologist. The data collected were used to determine a normalized uterus dose from Monte Carlo simulation data. Fetal dose values from this process were compared to other accepted calculation methods. Results: An accurate high-dose fetal dose estimate was made. Comparisons to accepted legacy methods were within 35% of estimated values. Conclusion: Modern data collection and reporting methods ease the process of estimating fetal dose from interventional fluoroscopy exposures. Many aspects of the calculation can now be quantified rather than estimated, which should allow for a more accurate estimation of fetal dose.
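The calculation chain described in the Methods section reduces to simple arithmetic: correct the displayed reference-point air kerma for table and pad attenuation, then apply a Monte-Carlo-derived normalized uterus dose. Every numeric value below is an illustrative placeholder, not data from this work:

```python
def fetal_dose_estimate(displayed_kerma_mgy, transmission=0.75, f_uterus=4e-4):
    """Sketch of the fetal-dose chain; all parameter values are illustrative.

    displayed_kerma_mgy: cumulative reference-point air kerma from the system
    transmission:        measured transmission through table and pad
    f_uterus:            Monte-Carlo normalized uterus dose per unit corrected
                         kerma, for the measured fetal depth
    """
    corrected_kerma = displayed_kerma_mgy * transmission  # table/pad attenuation
    return corrected_kerma * f_uterus  # estimated uterus (fetal) dose, mGy
```

In practice each factor would come from the measurements the abstract lists: the accuracy check of the displayed kerma, the measured table/pad transmission, and the ultrasound-determined fetal depth used to select the Monte Carlo conversion value.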

  9. Minority Group Status and Bias in College Admissions Criteria

    ERIC Educational Resources Information Center

    Silverman, Bernie I.; And Others

    1976-01-01

    Cleary's and Thorndike's definitions of bias in college admissions criteria (ACT scores and high school percentile rank) were examined for black, white, and Jewish students. Use of the admissions criteria tended to overpredict blacks' performance, accurately predict whites' performance, and underpredict that of Jews. In light of Cleary's…

  10. The Effects of Humor on Test Anxiety and Test Performance

    ERIC Educational Resources Information Center

    Tali, Glenda

    2017-01-01

    Testing in an academic setting provokes anxiety in all students in higher education, particularly nursing students. When students experience high levels of anxiety, the resulting decline in test performance often does not represent an accurate assessment of students' academic achievement. This quantitative, experimental study examined the effects…

  11. Adaptation and Fallibility in Experts' Judgments of Novice Performers

    ERIC Educational Resources Information Center

    Larson, Jeffrey S.; Billeter, Darron M.

    2017-01-01

    Competition judges are often selected for their expertise, under the belief that a high level of performance expertise should enable accurate judgments of the competitors. Contrary to this assumption, we find evidence that expertise can reduce judgment accuracy. Adaptation level theory proposes that discriminatory capacity decreases with greater…

  12. Workplace Learning of High Performance Sports Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Clifford J.; Tinning, Richard

    2010-01-01

    The Australian coaching workplace (to be referred to as the State Institute of Sport; SIS) under consideration in this study employs significant numbers of full-time performance sport coaches and can be accurately characterized as a genuine workplace. Through a consideration of the interaction between what the workplace (SIS) affords the…

  13. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    PubMed

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. A rapid enzymatic assay for high-throughput screening of adenosine-producing strains

    PubMed Central

    Dong, Huina; Zu, Xin; Zheng, Ping; Zhang, Dawei

    2015-01-01

    Adenosine is a major local regulator of tissue function and is industrially useful as a precursor for the production of medicinal nucleoside substances. High-throughput screening of adenosine overproducers is important for industrial microorganism breeding. An enzymatic assay of adenosine was developed by combining adenosine deaminase (ADA) with the indophenol method. ADA catalyzes the cleavage of adenosine to inosine and NH3; the latter can be accurately determined by the indophenol method. The assay system was optimized to deliver good performance and could tolerate the addition of inorganic salts and many nutrient components to the assay mixtures. Adenosine could be accurately determined by this assay using 96-well microplates. Spike-and-recovery tests showed that this assay can accurately and reproducibly determine increases in adenosine in fermentation broth without any pretreatment to remove proteins and potentially interfering low-molecular-weight molecules. The assay was also applied to high-throughput screening for high adenosine-producing strains. The high selectivity and accuracy of the ADA assay provide rapid, high-throughput analysis of adenosine in large numbers of samples. PMID:25580842

  15. Determination of dasatinib in the tablet dosage form by ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis.

    PubMed

    Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel

    2017-01-01

    Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Mechanical design and performance evaluation for plane grating monochromator in a soft X-ray microscopy beamline at SSRF.

    PubMed

    Gong, Xuepeng; Lu, Qipeng

    2015-01-01

    A new monochromator has been designed to develop a high-performance soft X-ray microscopy beamline at the Shanghai Synchrotron Radiation Facility (SSRF). Owing to the required high resolving power and highly accurate spectral output, the design presents many technical difficulties. In this paper, the theoretical energy resolution and photon flux of the beamline, the two primary design targets for the monochromator, are calculated. For the wavelength scanning mechanism, the primary factors affecting the rotary angle errors are presented; the measured errors are 0.15'' and 0.17'' for the plane mirror and plane grating, respectively, indicating that the mechanism can provide sufficient scanning precision for a specific wavelength. For the plane grating switching mechanism, the repeatabilities of the roll, yaw and pitch angles are 0.08'', 0.12'' and 0.05'', which effectively guarantee highly accurate switching of the plane grating. After debugging, the repeatability of the light spot drift reaches 0.7'', which further improves the performance of the monochromator. The commissioning results show that the energy resolving power is higher than 10000 at the Ar L-edge, the photon flux is higher than 1 × 10^8 photons/s/200 mA, and the spatial resolution is better than 30 nm, demonstrating that the monochromator performs very well and reaches its theoretical predictions.

  17. Star tracking method based on multiexposure imaging for intensified star trackers.

    PubMed

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of the existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. The accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristic of multiexposure imaging. The estimation of the complete motion parameters is utilized to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real starry night sky observation are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.

  18. Digital phase-lock loop

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1991-01-01

    An improved digital phase lock loop incorporates several distinctive features that attain better performance at high loop gain and better phase accuracy. These features include: phase feedback to a number-controlled oscillator in addition to phase rate; analytical tracking of phase (both integer and fractional cycles); an amplitude-insensitive phase extractor; a more accurate method for extracting measured phase; a method for changing loop gain during a track without loss of lock; and a method for avoiding loss of sampled data during computation delay, while maintaining excellent tracking performance. The advantages of using phase and phase-rate feedback are demonstrated by comparing performance with that of rate-only feedback. Extraction of phase by the method of modeling provides accurate phase measurements even when the number-controlled oscillator phase is discontinuously updated.
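    The phase-plus-phase-rate feedback described above can be illustrated with a toy second-order digital loop. This is a minimal sketch under assumed gains (kp, ki) and a synthetic input, not the patented design: the error extractor takes the angle of the complex baseband sample, which is amplitude-insensitive, and the number-controlled oscillator receives both a frequency (phase-rate) update and a direct phase correction.

```python
import cmath
import math

def dpll(samples, kp=0.3, ki=0.05):
    """Track the phase of a complex tone with phase + phase-rate feedback."""
    phase, freq = 0.0, 0.0  # NCO state
    errors = []
    for s in samples:
        # Amplitude-insensitive phase extractor: angle of input vs. NCO replica.
        err = cmath.phase(s * cmath.exp(-1j * phase))
        freq += ki * err              # phase-rate (frequency) feedback
        phase += freq + kp * err      # phase feedback on top of the NCO ramp
        errors.append(err)
    return errors

# Tone with an unknown frequency offset and non-unit amplitude (0.7).
w = 2 * math.pi * 0.02                # rad/sample
samples = [0.7 * cmath.exp(1j * w * n) for n in range(2000)]
errors = dpll(samples)
print(abs(errors[-1]))                # residual phase error after lock
```

    With these gains the linearized loop poles lie inside the unit circle (magnitude about 0.84), so a fixed frequency offset is acquired with zero steady-state phase error, the behavior one expects from phase-plus-rate feedback as opposed to rate-only feedback.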

  19. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    DTIC Science & Technology

    2012-01-01

    These model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of these aluminum alloys at different temperatures.

  20. Study on photoelectric parameter measurement method of high capacitance solar cell

    NASA Astrophysics Data System (ADS)

    Zhang, Junchao; Xiong, Limin; Meng, Haifeng; He, Yingwei; Cai, Chuan; Zhang, Bifeng; Li, Xiaohui; Wang, Changshi

    2018-01-01

    High-efficiency solar cells usually exhibit high capacitance, so measurement of their photoelectric performance usually requires a long pulse width and a long sweep time. The effects of irradiance non-uniformity, probe shielding, and spectral mismatch on I-V curve measurement are analyzed experimentally. A compensation method for the irradiance loss caused by probe shielding is proposed, realizing accurate measurement of the irradiance during the I-V curve measurement of a solar cell. Based on the sensitivity of a solar cell's open-circuit voltage to its junction temperature, an accurate method for measuring the cell temperature under continuous irradiation is proposed. Finally, a measurement method with high accuracy and a wide application range for high-capacitance solar cells is presented.

  1. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  2. Quality control tests of lab-reared Cydia pomonella and Cactoblastis cactorum field performance: Comparison of laboratory and field bioassays.

    USDA-ARS?s Scientific Manuscript database

    Research, operational, and commercial programs that rely on mass-reared insects of high quality and performance need accurate methods for monitoring quality degradation during each step of production, handling, and release. With continued interest in the use of the sterile insect technique (SIT) a...

  3. Accurate measurement of junctional conductance between electrically coupled cells with dual whole-cell voltage-clamp under conditions of high series resistance.

    PubMed

    Hartveit, Espen; Veruki, Margaret Lin

    2010-03-15

    Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s). Copyright (c) 2009 Elsevier B.V. All rights reserved.
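    The mathematical correction for nonzero R(s) and finite R(m) mentioned above can be sketched for the simplest steady-state equivalent circuit: two cells, each with a membrane resistance to ground, joined by G(j), with each pipette connected through a series resistance. The circuit values, holding potentials, and the specific correction formula below are illustrative assumptions, not the paper's full protocol:

```python
def pair_currents(V1p, V2p, Gj, Rs1, Rs2, Rm1, Rm2):
    """Steady-state pipette currents for two voltage-clamped, coupled cells."""
    # Nodal equations at the two membrane potentials V1, V2 (2x2 system).
    a11 = 1 / Rs1 + 1 / Rm1 + Gj
    a22 = 1 / Rs2 + 1 / Rm2 + Gj
    b1, b2 = V1p / Rs1, V2p / Rs2
    det = a11 * a22 - Gj * Gj
    V1 = (b1 * a22 + Gj * b2) / det
    V2 = (a11 * b2 + Gj * b1) / det
    return (V1p - V1) / Rs1, (V2p - V2) / Rs2

# Hypothetical circuit: Gj = 5 nS, Rs = 20 MOhm, Rm = 500 MOhm.
Gj, Rs, Rm = 5e-9, 20e6, 500e6
hold, step = -60e-3, -70e-3           # cell 1 stepped by -10 mV

I1a, I2a = pair_currents(hold, hold, Gj, Rs, Rs, Rm, Rm)
I1b, I2b = pair_currents(step, hold, Gj, Rs, Rs, Rm, Rm)
dI1, dI2, dV1p = I1b - I1a, I2b - I2a, step - hold

# Naive estimate ignoring Rs: biased low when Rs is large.
Gj_naive = -dI2 / dV1p

# Rs/Rm correction: recover the true membrane voltage changes first.
dV1 = dV1p - dI1 * Rs                 # cell 1 step after the Rs voltage drop
dV2 = -dI2 * Rs                       # cell 2 escapes its clamp through Rs
Gj_corrected = (dI2 - dV2 / Rm) / (dV2 - dV1)

print(Gj_naive, Gj_corrected)
```

    In this linear model the corrected estimate is exact, while the naive estimate underestimates G(j) by roughly 20% at these values, which is why accurate knowledge of R(s) (or a DSEVC amplifier that sidesteps it) matters at high series resistance.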

  4. Seasonal performance of a malaria rapid diagnosis test at community health clinics in a malaria-hyperendemic region of Burkina Faso

    PubMed Central

    2012-01-01

    Background Treatment of confirmed malaria patients with Artemisinin-based Combination Therapy (ACT) in remote areas is the goal of many anti-malaria programs. Introduction of an effective and affordable malaria Rapid Diagnosis Test (RDT) in remote areas could be an alternative tool for malaria case management. This study aimed to assess the performance of the OptiMAL dipstick for rapid malaria diagnosis in children under five. Methods Malaria symptomatic and asymptomatic children were recruited in a passive manner at two community clinics (CCs). Malaria diagnosis by microscopy and RDT was performed, and the performance of the tests was determined. Results RDT showed a similar ability (61.2%) to accurately diagnose malaria as microscopy (61.1%). OptiMAL showed a high level of sensitivity and specificity compared with microscopy during both transmission seasons (high and low), with a sensitivity of 92.9% vs. 74.9% and a specificity of 77.2% vs. 87.5%. Conclusion By improving the performance of the test through accurate and continuous quality control of the device in the field, OptiMAL could be suitable for use at CCs for the management and control of malaria. PMID:22647557
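    The sensitivity and specificity figures quoted above are the standard confusion-matrix ratios. As a reminder of how such figures are computed (the counts below are made up purely so the ratios reproduce the reported high-season percentages; they are not taken from the study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen so the ratios match the reported season values.
sens_high, spec_high = sensitivity_specificity(tp=92, fn=7, tn=88, fp=26)
print(round(sens_high * 100, 1), round(spec_high * 100, 1))  # 92.9 77.2
```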

  5. Learning to combine high variability with high precision: lack of transfer to a different task.

    PubMed

    Wu, Yen-Hsun; Truglio, Thomas S; Zatsiorsky, Vladimir M; Latash, Mark L

    2015-01-01

    The authors studied effects of practicing a 4-finger accurate force production task on multifinger coordination quantified within the uncontrolled manifold hypothesis. During practice, task instability was modified by changing visual feedback gain based on accuracy of performance. The authors also explored the retention of these effects, and their transfer to a prehensile task. Subjects practiced the force production task for 2 days. After the practice, total force variability decreased and performance became more accurate. In contrast, variance of finger forces showed a tendency to increase during the first practice session while in the space of finger modes (hypothetical commands to fingers) the increase was under the significance level. These effects were retained for 2 weeks. No transfer of these effects to the prehensile task was seen, suggesting high specificity of coordination changes. The retention of practice effects without transfer to a different task suggests that further studies on a more practical method of improving coordination are needed.

  6. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    PubMed

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292,050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
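    The headline result, recovering 945 of the top 1000 materials while flagging only 10% of the library, is a recall-at-fraction (enrichment) metric. Here is a hedged sketch of how such a metric is computed, on synthetic scores rather than real MOF descriptors or the paper's QSPR models; the library size, noise level, and score model are all assumptions:

```python
import random

random.seed(0)

N = 10_000                            # stand-in for the 292,050-MOF library
capacity = [random.gauss(0, 1) for _ in range(N)]        # "true" property
score = [c + random.gauss(0, 0.3) for c in capacity]     # noisy classifier score

# True top-100 materials by capacity; flag the top 10% by classifier score.
top_true = set(sorted(range(N), key=lambda i: capacity[i], reverse=True)[:100])
flagged = set(sorted(range(N), key=lambda i: score[i], reverse=True)[:N // 10])

recovered = len(top_true & flagged)
print(f"recovered {recovered}/100 while flagging 10% of the library")
```

    The design choice mirrors the paper's protocol: a cheap classifier prunes the library by an order of magnitude, and only the flagged candidates go on to compute-intensive simulation.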

  7. High sample throughput genotyping for estimating C-lineage introgression in the dark honeybee: an accurate and cost-effective SNP-based tool.

    PubMed

    Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Jonhston, J Spencer; McCormack, Grace P; Pinto, M Alice

    2018-06-04

    The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly widest-spread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.

  8. High Speed Civil Transport (HSCT) Isolated Nacelle Transonic Boattail Drag Study and Results Using Computational Fluid Dynamics (CFD)

    NASA Technical Reports Server (NTRS)

    Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori

    2005-01-01

    Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.

  9. High Speed Civil Transport (HSCT) Isolated Nacelle Transonic Boattail Drag Study and Results Using Computational Fluid Dynamics (CFD)

    NASA Technical Reports Server (NTRS)

    Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori

    1999-01-01

    Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25% of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.

  10. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite this initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This work plan has been fully achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry that has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization.
The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
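    The Poisson solvers and the PDD family of algorithms mentioned above ultimately reduce to solving (block-)tridiagonal systems; PDD partitions such a system across processors and patches the interfaces with a small reduced system. As background only, here is the sequential tridiagonal kernel (the Thomas algorithm) applied to a 1-D Poisson problem; this is a textbook sketch, not the project's parallel code:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):             # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n                     # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Discretize u'' = f on (0,1) with u(0) = u(1) = 0 and f = -pi^2 sin(pi x),
# whose exact solution is u = sin(pi x).
n = 99                                # interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]
a = [1.0] * n                         # sub-diagonal (a[0] unused)
b = [-2.0] * n                        # main diagonal
c = [1.0] * n                         # super-diagonal (c[-1] unused)
d = [h * h * (-math.pi ** 2) * math.sin(math.pi * xi) for xi in x]
u = thomas(a, b, c, d)
err = max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))
print(err)                            # O(h^2) discretization error
```

    The forward-elimination recurrence is inherently sequential, which is exactly the serial bottleneck that PDD-type algorithms break up by solving partition-local systems independently and coupling them through a much smaller interface system.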

  11. "Performance Of A Wafer Stepper With Automatic Intra-Die Registration Correction."

    NASA Astrophysics Data System (ADS)

    van den Brink, M. A.; Wittekoek, S.; Linders, H. F. D.; van Hout, F. J.; George, R. A.

    1987-01-01

    An evaluation of a wafer stepper with the new improved Philips/ASM-L phase grating alignment system is reported. It is shown that an accurate alignment system needs an accurate X-Y-θ wafer stage and an accurate reticle Z stage to realize optimum overlay accuracy. This follows from a discussion of the overlay budget and an alignment procedure model. The accurate wafer stage permits high overlay accuracy using global alignment only, thus eliminating the throughput penalty of align-by-field schemes. The accurate reticle Z stage enables intra-die magnification control with respect to the wafer scale. Various overlay data are reported, which have been measured with the automatic metrology program of the stepper. It is demonstrated that the new dual alignment system (with the external spatial filter) has improved the ability to align to weakly reflecting layers. The results are supported by a Fourier analysis of the alignment signal. Resolution data are given for the PAS 2500 projection lenses, which show that the high overlay accuracy of the system is properly matched with submicron linewidth control. The results of a recently introduced 20 mm i-line lens with a numerical aperture of 0.4 (Zeiss 10-78-58) are included.

  12. Surface knowledge and risks to landing and roving - The scale problem

    NASA Technical Reports Server (NTRS)

    Bourke, Roger D.

    1991-01-01

    The role of surface information in the performance of surface exploration missions is discussed. Accurate surface models based on direct measurements or inference are considered to be an important component in mission risk management. These models can be obtained using high resolution orbital photography or a combination of laser profiling, thermal inertia measurements, and/or radar. It is concluded that strategies for Martian exploration should use high confidence models to achieve maximum performance and low risk.

  13. Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry Vladimirovich; Yee, Helen M C.

    2013-01-01

    The performance of the filter scheme with an improved dissipation control parameter has been demonstrated for different flow types. The scheme with a locally determined parameter is shown to obtain more accurate results than its counterparts with a global or constant value. At the same time, no additional tuning is needed to achieve high accuracy when the local technique is used. However, further improvement of the method might be needed for even more complex and/or extreme flows.

  14. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworm gender, which is highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts which, once combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from male silkworm pupae. With current system performance, we achieve accuracy higher than 95% in identifying the gender of silkworm pupae, with an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  15. Highly accurate bound state calculations of the two-center molecular ions by using the universal variational expansion for three-body systems

    NASA Astrophysics Data System (ADS)

    Frolov, Alexei M.

    2018-03-01

    The universal variational expansion for the non-relativistic three-body systems is explicitly constructed. This universal expansion can be used to perform highly accurate numerical computations of the bound state spectra in various three-body systems, including Coulomb three-body systems with arbitrary particle masses and electric charges. Our main interest is related to the adiabatic three-body systems which contain one bound electron and two heavy nuclei of hydrogen isotopes: the protium p, deuterium d and tritium t. We also consider the analogous (model) hydrogen ion ∞H2+ with the two infinitely heavy nuclei.

  16. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  17. Positive Biases in Self-Assessment of Mathematics Competence, Achievement Goals, and Mathematics Performance

    ERIC Educational Resources Information Center

    Dupeyrat, Caroline; Escribe, Christian; Huet, Nathalie; Regner, Isabelle

    2011-01-01

    The study examined how biases in self-evaluations of math competence relate to achievement goals and progress in math achievement. It was expected that performance goals would be related to overestimation and mastery goals to accurate self-assessments. A sample of French high-school students completed a questionnaire measuring their math…

  18. A Comparison of Successful and Unsuccessful Strategies in Individual Sight-Singing Preparation and Performance

    ERIC Educational Resources Information Center

    Killian, Janice N.; Henry, Michele L.

    2005-01-01

    High school singers (N = 198) individually sang two melodies from notation, with and without a 30-second practice opportunity. Overall accuracy scores were significantly higher with preparation time. The less accurate singers, however, did not benefit from practice time. Analysis of videoed tests indicated that high scorers tonicized (vocally…

  19. High performance thin layer chromatography (HPTLC) and high performance liquid chromatography (HPLC) for the qualitative and quantitative analysis of Calendula officinalis-advantages and limitations.

    PubMed

    Loescher, Christine M; Morton, David W; Razic, Slavica; Agatonovic-Kustrin, Snezana

    2014-09-01

    Chromatography techniques such as HPTLC and HPLC are commonly used to produce a chemical fingerprint of a plant, allowing identification and quantification of the main constituents within the plant. The aims of this study were to compare HPTLC and HPLC for qualitative and quantitative analysis of the major constituents of Calendula officinalis, and to investigate the effect of different extraction techniques on the composition of C. officinalis extracts from different parts of the plant. HPTLC was found to be effective for qualitative analysis; however, HPLC was more accurate for quantitative analysis. A combination of the two methods may be useful in a quality-control setting, as it would allow rapid qualitative analysis of herbal material while maintaining accurate quantification of extract composition. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Injection-depth-locking axial motion guided handheld micro-injector using CP-SSOCT.

    PubMed

    Cheon, Gyeong Woo; Huang, Yong; Kwag, Hye Rin; Kim, Ki-Young; Taylor, Russell H; Gehlbach, Peter L; Kang, Jin U

    2014-01-01

    This paper presents a handheld micro-injector system that uses common-path swept source optical coherence tomography (CP-SSOCT) as a distal sensor with highly accurate injection-depth locking. To achieve real-time, highly precise, and intuitive freehand control, the system uses a graphics processing unit (GPU) to process the oversampled OCT signal with high throughput, together with a smart customized motion-monitoring control algorithm. A performance evaluation was conducted with 60 insertions and fluorescein dye injection tests to show how accurately the system can guide the needle and lock to the target depth. The evaluation tests show that the system can guide the injection needle to the desired depth with a 4.12 μm average deviation error while injecting 50 nl of fluorescein dye.

  1. Orthographic recognition in late adolescents: an assessment through event-related brain potentials.

    PubMed

    González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth

    2014-04-01

    Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.

  2. Coaxial-type water load for measuring high voltage, high current and short pulse of a compact Marx system for a high power microwave source

    NASA Astrophysics Data System (ADS)

    Han, Jaeeun; Kim, Jung-ho; Park, Sang-duck; Yoon, Moohyun; Park, Soo Yong; Choi, Do Won; Shin, Jin Woo; So, Joon Ho

    2009-11-01

    A coaxial-type water load was used to measure the voltage output from a Marx generator for a high power microwave source. This output had a rise time of 20 ns, a pulse duration of a few hundred ns, and an amplitude up to 500 kV. The design of the coaxial water load showed that it is an ideal resistive divider and can also accurately measure a short pulse. Experiments were performed to test the performance of the Marx generator with the calibrated coaxial water load.

  3. Using composite images to assess accuracy in personality attribution to faces.

    PubMed

    Little, Anthony C; Perrett, David I

    2007-02-01

Several studies have demonstrated some accuracy in personality attribution based on visual appearance alone. Using composite images of individuals scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others' personality, particularly for the traits conscientiousness and extraversion. The study also shows that attractiveness, masculinity, and age may all provide cues for assessing personality accurately, and that accuracy is affected by the sex of both the judge and the person being judged. Individuals do perform better than chance at guessing another's personality from facial information alone, providing some support for the popular belief that it is possible to accurately assess personality from faces.

  4. User's guide for a computer program for calculating the zero-lift wave drag of complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1983-01-01

A computer program was developed to extend the geometry input capabilities of previous versions of a supersonic zero-lift wave drag computer program. The arbitrary geometry input description is flexible enough to describe almost any complex aircraft concept, so highly accurate wave drag analysis can now be performed: complex geometries can be represented accurately and no longer have to be modified to meet the requirements of a restricted input format.

  5. Real-time haptic cutting of high-resolution soft tissues.

    PubMed

    Wu, Jun; Westermann, Rüdiger; Dick, Christian

    2014-01-01

We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain; (2) a composite finite element formulation, which substantially reduces the number of simulation degrees of freedom and thus enables a careful balance of simulation performance and accuracy; (3) a highly efficient geometric multigrid solver for the linear systems of equations arising from implicit time integration; (4) an efficient collision detection algorithm that effectively exploits the composition structure; and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.

  6. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    PubMed

    Kim, Scott Y H

    2014-04-01

The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model faces significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions (a figure no better than the known average accuracy of surrogates). Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, may be more efficient, accurate, and authoritative than is commonly assumed.

  7. Highly Accurate Quantitative Analysis Of Enantiomeric Mixtures from Spatially Frequency Encoded 1H NMR Spectra.

    PubMed

    Plainchont, Bertrand; Pitoux, Daisy; Cyrille, Mathieu; Giraud, Nicolas

    2018-02-06

We propose an original concept for accurately measuring enantiomeric excesses from proton NMR spectra, which combines high-resolution techniques based on spatial encoding of the sample with the use of optically active, weakly orienting solvents. We show that it is possible to accurately simulate dipolar-edited spectra of enantiomers dissolved in a chiral liquid crystalline phase, and to use these simulations to calibrate integrations measured on experimental data, in order to perform quantitative chiral analysis. This approach is demonstrated on a chemical intermediate for which optical purity is an essential criterion. We find a very good correlation between the experimental and calculated integration ratios extracted from G-SERF spectra, which paves the way to a general method for determining enantiomeric excesses based on the observation of 1H nuclei.
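The quantitative step described above reduces to simple arithmetic once calibrated peak integrations for the two enantiomers are available. A minimal sketch (the function name and example areas are illustrative, not from the paper):

```python
# Hypothetical illustration: computing an enantiomeric excess (ee) from
# the integrated peak areas of the two enantiomers, as measured on a
# dipolar-edited (e.g. G-SERF) spectrum.

def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """ee (%) = 100 * (A_major - A_minor) / (A_major + A_minor)."""
    total = area_major + area_minor
    if total <= 0:
        raise ValueError("peak areas must be positive")
    return 100.0 * (area_major - area_minor) / total

# An integration ratio of 95:5 corresponds to 90% ee.
print(enantiomeric_excess(95.0, 5.0))
```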

  8. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  9. Beta-band activity and connectivity in sensorimotor and parietal cortex are important for accurate motor performance.

    PubMed

    Chung, Jae W; Ofori, Edward; Misra, Gaurav; Hess, Christopher W; Vaillancourt, David E

    2017-01-01

Accurate motor performance may depend on the scaling of distinct oscillatory activity within the motor cortex and effective neural communication between the motor cortex and other brain areas. Oscillatory activity within the beta band (13-30 Hz) has been suggested to provide distinct functional roles for attention and sensorimotor control, yet it remains unclear how beta-band and other oscillatory activity within and between cortical regions is coordinated to enhance motor performance. We explore this open issue by simultaneously measuring high-density cortical activity and elbow flexor and extensor neuromuscular activity during ballistic movements, and by manipulating error using high and low visual gain across three target distances. Compared with low visual gain, high visual gain decreased movement errors at each distance. Group analyses in 3D source space revealed increased theta-, alpha-, and beta-band desynchronization of the contralateral motor cortex and medial parietal cortex in high visual gain conditions, and this corresponded to reduced movement error. Dynamic causal modeling was used to compute connectivity between the motor cortex and parietal cortex. Analyses revealed that gain affected directionally specific connectivity across broadband frequencies from parietal to sensorimotor cortex, but not from sensorimotor cortex to parietal cortex. These new findings support the interpretation that broadband oscillations in the theta, alpha, and beta frequency bands within sensorimotor and parietal cortex coordinate to facilitate accurate upper limb movement. Our findings establish a link between sensorimotor oscillations and online motor performance in common source space across subjects. Specifically, the distinct contributions of medial parietal-to-sensorimotor beta connectivity and local broadband activity combine, in a time- and frequency-specific manner, to assist ballistic movements. These findings can serve as a model for examining whether similar source-space EEG dynamics exhibit different time-frequency changes in individuals with neurological disorders that cause movement errors. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Performance optimisation of a new-generation orthogonal-acceleration quadrupole-time-of-flight mass spectrometer.

    PubMed

    Bristow, Tony; Constantine, Jill; Harrison, Mark; Cavoit, Fabien

    2008-04-01

Orthogonal-acceleration quadrupole time-of-flight (oa-QTOF) mass spectrometers, employed for accurate mass measurement, have been commercially available for well over a decade. A limitation of the early instruments of this type was the narrow ion abundance range over which accurate mass measurements could be made with a high degree of certainty. Recently, a new generation of oa-QTOF mass spectrometers has been developed that allows accurate mass measurements to be recorded over a much greater range of ion abundances. This development has resulted from new ion detection technology and improved electronic stability, or from accurate control of the number of ions reaching the detector. In this report we describe the results of experiments performed to evaluate the mass measurement performance of the Bruker micrOTOF-Q, a member of the new generation of oa-QTOFs. The relationship between mass accuracy and ion abundance was extensively evaluated, and mass measurement accuracy remained stable (±1.5 mm/z units) over approximately 3-4 orders of magnitude of ion abundance. The second feature of the Bruker micrOTOF-Q evaluated was the SigmaFit function of the software. This isotope pattern-matching algorithm provides an exact numerical comparison of the theoretical and measured isotope patterns as an identification tool complementary to accurate mass measurement: the smaller the value, the closer the match between the theoretical and measured isotope patterns. This information is then employed to reduce the number of potential elemental formulae produced from the mass measurements. A relationship between the SigmaFit value and ion abundance was established. The results of the study, for both mass accuracy and SigmaFit, were used to define the performance criteria for the micrOTOF-Q, providing increased confidence in the selection of elemental formulae resulting from accurate mass measurements.
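The two metrics in this record can be sketched in a few lines. The ppm mass error is standard; the pattern score below is only a generic root-mean-square comparison illustrating the idea behind an isotope-pattern-matching score such as SigmaFit, not Bruker's actual (proprietary) algorithm, and all m/z values are invented examples:

```python
# Sketch (assumptions noted above): ppm mass-measurement error, plus a
# simple RMS comparison of normalized theoretical vs. measured isotope
# abundance patterns (0 = perfect match).
import math

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Signed mass error in parts per million."""
    return 1e6 * (measured_mz - theoretical_mz) / theoretical_mz

def pattern_mismatch(theoretical, measured):
    """RMS difference between two abundance patterns after normalization."""
    t_sum, m_sum = sum(theoretical), sum(measured)
    return math.sqrt(sum((t / t_sum - m / m_sum) ** 2
                         for t, m in zip(theoretical, measured)) / len(theoretical))

print(round(ppm_error(256.1101, 256.1094), 2))          # error in ppm
print(pattern_mismatch([100, 45, 10], [100, 47, 10]))    # small mismatch score
```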

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair

CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical-axis and horizontal-axis wind turbines, and has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines had yet to be tested. The present study addresses this problem by comparing the performance curves predicted by CACTUS simulations of the U.S. Department of Energy's 1:6 scale reference model crossflow turbine to those derived from experimental tow-tank measurements of the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of the poor model prediction. A comparison of several NACA 0021 foil data sources, derived from both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are therefore advised to limit its application to higher tip speed ratios (lower AoA) and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
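The link between tip speed ratio and angle of attack quoted in the recommendation above follows from the standard blade kinematics of a Darrieus-type crossflow rotor (neglecting induced velocity): alpha(theta) = atan(sin(theta) / (lambda + cos(theta))), so the peak AoA over a revolution shrinks as the tip speed ratio lambda grows. A small sketch of that relation (not CACTUS code):

```python
# Why higher tip-speed ratios mean lower angles of attack on a crossflow
# turbine blade: sweep the azimuth angle over one revolution and take the
# maximum of the geometric AoA, alpha = atan2(sin(theta), tsr + cos(theta)).
import math

def max_aoa_deg(tsr: float, steps: int = 3600) -> float:
    """Maximum geometric angle of attack (degrees) over one revolution."""
    return max(abs(math.degrees(math.atan2(math.sin(th), tsr + math.cos(th))))
               for th in (2 * math.pi * i / steps for i in range(steps)))

for tsr in (1.5, 3.0, 5.0):
    print(tsr, round(max_aoa_deg(tsr), 1))  # AoA range narrows as tsr rises
```

For tsr > 1 the analytic maximum is asin(1/tsr), which the sampled sweep reproduces closely.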

  12. A simple video-based timing system for on-ice team testing in ice hockey: a technical report.

    PubMed

    Larson, David P; Noonan, Benjamin C

    2014-09-01

The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on-ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocells, and high-speed (240 frames per second) video. These results were then compared with filtered photocell timing, which was used as the "gold standard," to calculate the accuracy of the stopwatch and video methods. Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.
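The typical error and CV used to compare timing methods in this record follow the usual reliability definitions (typical error = SD of paired differences / sqrt(2), expressed as a percentage of the grand mean for the CV). A minimal sketch with invented sprint times, not the study's data:

```python
# Illustrative comparison of a candidate timing method against a criterion
# method using typical error and coefficient of variation (CV).
import math
import statistics

def typical_error(criterion, candidate):
    """Typical error = SD of the paired differences divided by sqrt(2)."""
    diffs = [c - x for c, x in zip(candidate, criterion)]
    return statistics.stdev(diffs) / math.sqrt(2)

def cv_percent(criterion, candidate):
    """Typical error expressed as a percentage of the grand mean."""
    te = typical_error(criterion, candidate)
    grand_mean = statistics.mean(list(criterion) + list(candidate))
    return 100.0 * te / grand_mean

photocell = [4.52, 4.61, 4.48, 4.70, 4.55, 4.66]   # hypothetical times (s)
video     = [4.53, 4.60, 4.49, 4.71, 4.55, 4.65]
print(round(cv_percent(photocell, video), 2))       # a sub-1% CV = close agreement
```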

  13. Thermomechanical simulations and experimental validation for high speed incremental forming

    NASA Astrophysics Data System (ADS)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

Incremental sheet forming (ISF) consists of deforming only a small region of the workpiece with a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim was to design a FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign was performed on a CNC lathe at high speed to test process feasibility. The first results show that the material exhibits the same performance as in conventional-speed ISF and, in some cases, better behavior due to the temperature increase. An accurate numerical simulation was performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  14. The effect of aircraft control forces on pilot performance during instrument landings in a flight simulator.

    PubMed

    Hewson, D J; McNair, P J; Marshall, R N

    2001-07-01

Pilots may have difficulty controlling aircraft at both high and low force levels due to larger variability in force production at these levels. The aim of this study was to measure the force variability and landing performance of pilots during an instrument landing in a flight simulator. Twelve pilots were tested while performing five instrument landings in a flight simulator, each of which required different control force inputs. Pilots can produce the least force when pushing the control column to the right; therefore, the force levels for the landings were set relative to each pilot's maximum aileron-right force. The force levels for the landings were 90%, 60%, and 30% of maximal aileron-right force, normal force, and 25% of normal force. Variables recorded included electromyographic activity (EMG), aircraft control forces, aircraft attitude, perceived exertion, and deviation from glide slope and heading. Multivariate analysis of variance was used to test for differences between landings. Pilots were least accurate in landing performance during the landing at 90% of maximal force (p < 0.05). There was also a trend toward decreased landing performance during the landing at 25% of normal force. Pilots were more variable in force production during the landings at 60% and 90% of maximal force (p < 0.05). Pilots are less accurate at performing instrument landings when control forces are high because of the increased variability of force production. The increase in variability at high force levels is most likely associated with motor unit recruitment rather than rate coding. Aircraft designers need to consider the reduction in pilot performance at high force levels, as well as pilot strength limits, when specifying new standards.

  15. Psychosis prediction and clinical utility in familial high-risk studies: Selective review, synthesis, and implications for early detection and intervention

    PubMed Central

    Shah, Jai L.; Tandon, Neeraj; Keshavan, Matcheri S.

    2016-01-01

Aim: Accurate prediction of which individuals will go on to develop psychosis would assist early intervention and prevention paradigms. We sought to review investigations of prospective psychosis prediction based on markers and variables examined in longitudinal familial high-risk (FHR) studies. Methods: We performed literature searches in MedLine, PubMed and PsycINFO for articles assessing performance characteristics of predictive clinical tests in FHR studies of psychosis. Studies were included if they reported one or more predictive variables in subjects at FHR for psychosis. We complemented this search strategy with references drawn from articles, reviews, book chapters and monographs. Results: Across generations of familial high-risk projects, predictive studies have investigated behavioral, cognitive, psychometric, clinical, neuroimaging, and other markers. Recent analyses have incorporated multivariate and multi-domain approaches to risk ascertainment, although still with generally modest results. Conclusions: While a broad range of risk factors has been identified, no individual marker or combination of markers can at this time enable accurate prospective prediction of emerging psychosis for individuals at FHR. We outline the complex and multi-level nature of psychotic illness, the myriad factors influencing its development, and methodological hurdles to accurate and reliable prediction. Prospects and challenges for future generations of FHR studies are discussed in the context of early detection and intervention strategies. PMID:23693118

  16. Genome-Wide Comparative Gene Family Classification

    PubMed Central

    Frech, Christian; Chen, Nansheng

    2010-01-01

    Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
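The comparative strategy described above (tune clustering parameters against a curated reference, then apply them to related genomes) can be illustrated with a toy parameter sweep. The sketch below stands in for MCL with a simple single-linkage threshold clustering and scores agreement with a reference classification using the Rand index; all gene names, similarities, and families are invented:

```python
# Toy sketch of comparative parameterization: sweep a clustering parameter
# (here a similarity threshold for single-linkage grouping via union-find)
# and keep the value that best reproduces a curated reference classification.
from itertools import combinations

def cluster(sim, genes, threshold):
    """Single-linkage clustering: merge genes whose similarity >= threshold."""
    parent = {g: g for g in genes}
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]  # path halving
            g = parent[g]
        return g
    for (a, b), s in sim.items():
        if s >= threshold:
            parent[find(a)] = find(b)
    return {g: find(g) for g in genes}

def rand_index(labels_a, labels_b, genes):
    """Fraction of gene pairs on which the two classifications agree."""
    agree = total = 0
    for x, y in combinations(genes, 2):
        total += 1
        agree += (labels_a[x] == labels_a[y]) == (labels_b[x] == labels_b[y])
    return agree / total

genes = ["g1", "g2", "g3", "g4"]
sim = {("g1", "g2"): 0.9, ("g2", "g3"): 0.4, ("g3", "g4"): 0.8}
reference = {"g1": "famA", "g2": "famA", "g3": "famB", "g4": "famB"}
best = max((0.3, 0.5, 0.7),
           key=lambda t: rand_index(cluster(sim, genes, t), reference, genes))
print(best)  # threshold that best matches the curated families
```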

  17. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  18. A macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Wang, Yulun

    1993-01-01

This paper describes an 8-degree-of-freedom macro-micro robot capable of performing tasks that require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks that need this capability. Currently these tasks are performed either manually or with dedicated machinery because of the lack of a flexible and cost-effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot are described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.

  19. Controlled-Root Approach To Digital Phase-Locked Loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A.; Thomas, J. Brooks

    1995-01-01

    Performance tailored more flexibly and directly to satisfy design requirements. Controlled-root approach improved method for analysis and design of digital phase-locked loops (DPLLs). Developed rigorously from first principles for fully digital loops, making DPLL theory and design simpler and more straightforward (particularly for third- or fourth-order DPLL) and controlling performance more accurately in case of high gain.

  20. Reflections on Conceptual Tempo: Relationship Between Cognitive Style and Performance as a Function of Task Characteristics

    ERIC Educational Resources Information Center

    Bush, Ellen S.; Dweck, Carol S.

    1975-01-01

    Children classified as high-anxious reflective in cognitive style were found to perform as well on speeded tasks as low-anxious reflective children and both groups were found to be faster and more accurate than impulsive children. This suggests redefining cognitive style to stress the strategies used rather than predispositions for particular…

  1. Advanced Mass Spectrometric Methods for the Rapid and Quantitative Characterization of Proteomes

    DOE PAGES

    Smith, Richard D.

    2002-01-01

Progress is reviewed towards the development of a global strategy that aims to extend the sensitivity, dynamic range, comprehensiveness, and throughput of proteomic measurements based upon the use of high-performance separations and mass spectrometry. The approach uses high-accuracy mass measurements from Fourier transform ion cyclotron resonance mass spectrometry (FTICR) to validate peptide 'accurate mass tags' (AMTs) produced by global protein enzymatic digestions for a specific organism, tissue, or cell type from 'potential mass tags' tentatively identified using conventional tandem mass spectrometry (MS/MS). This provides the basis for subsequent measurements without the need for MS/MS. High-resolution capillary liquid chromatography separations combined with high-sensitivity, high-resolution, accurate FTICR measurements are shown to be capable of characterizing peptide mixtures of more than 10^5 components. The strategy has been initially demonstrated using the microorganisms Saccharomyces cerevisiae and Deinococcus radiodurans. Advantages of the approach include the high confidence of protein identification, broad proteome coverage, high sensitivity, and the capability for stable-isotope labeling methods for precise relative protein abundance measurements. Abbreviations: LC, liquid chromatography; FTICR, Fourier transform ion cyclotron resonance; AMT, accurate mass tag; PMT, potential mass tag; MMA, mass measurement accuracy; MS, mass spectrometry; MS/MS, tandem mass spectrometry; ppm, parts per million.
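The AMT lookup at the heart of this strategy amounts to matching a measured peptide mass against a database of validated tags within a ppm tolerance. A minimal sketch of that step (the tag names, masses, and tolerance are invented for illustration):

```python
# Hedged sketch of an accurate-mass-tag (AMT) lookup: return all database
# tags whose mass lies within a ppm tolerance of the measured mass, using
# binary search on a mass-sorted tag list.
import bisect

def match_amt(measured_mass, tags, tol_ppm=2.0):
    """tags: list of (mass, name) tuples sorted by mass."""
    tol = measured_mass * tol_ppm * 1e-6
    masses = [m for m, _ in tags]
    lo = bisect.bisect_left(masses, measured_mass - tol)
    hi = bisect.bisect_right(masses, measured_mass + tol)
    return [tags[i] for i in range(lo, hi)]

tags = sorted([(1045.5634, "PEPTIDE_A"),
               (1045.5648, "PEPTIDE_B"),
               (1302.7120, "PEPTIDE_C")])
# At 1 ppm both nearby tags match, illustrating why sub-ppm accuracy
# matters for unambiguous identification.
print(match_amt(1045.5640, tags, tol_ppm=1.0))
```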

  2. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  3. In Vivo, High-Frequency Three-Dimensional Cardiac MR Elastography: Feasibility in Normal Volunteers

    PubMed Central

    Arani, Arvin; Glaser, Kevin L.; Arunachalam, Shivaram P.; Rossman, Phillip J.; Lake, David S.; Trzasko, Joshua D.; Manduca, Armando; McGee, Kiaran P.; Ehman, Richard L.; Araoz, Philip A.

    2016-01-01

Purpose: Noninvasive stiffness imaging techniques (elastography) can image myocardial tissue biomechanics in vivo. For cardiac MR elastography (MRE) techniques, the optimal vibration frequency for in vivo experiments is unknown. Furthermore, the accuracy of cardiac MRE has never been evaluated in a geometrically accurate phantom. The purpose of this study was therefore to determine the driving frequency necessary to obtain accurate three-dimensional (3D) cardiac MRE stiffness estimates in a geometrically accurate diastolic cardiac phantom, and to determine the optimal vibration frequency that can be introduced in healthy volunteers. Methods: 3D cardiac MRE was performed on eight healthy volunteers using 80 Hz, 100 Hz, 140 Hz, 180 Hz, and 220 Hz vibration frequencies. These frequencies were also tested in a geometrically accurate diastolic heart phantom and compared with dynamic mechanical analysis (DMA). Results: 3D cardiac MRE was shown to be feasible in volunteers at frequencies as high as 180 Hz. MRE and DMA agreed within 5% at frequencies greater than 180 Hz in the cardiac phantom. However, octahedral shear strain signal-to-noise ratios and myocardial coverage were highest at a frequency of 140 Hz across all subjects. Conclusion: This study motivates future evaluation of high-frequency 3D MRE in patient populations. PMID:26778442

  4. How do gender and anxiety affect students' self-assessment and actual performance on a high-stakes clinical skills examination?

    PubMed

    Colbert-Getz, Jorie M; Fleishman, Carol; Jung, Julianna; Shilkofski, Nicole

    2013-01-01

    Research suggests that medical students are not accurate in self-assessment, but it is not clear whether students over- or underestimate their skills or how certain characteristics correlate with accuracy in self-assessment. The goal of this study was to determine the effect of gender and anxiety on accuracy of students' self-assessment and on actual performance in the context of a high-stakes assessment. Prior to their fourth year of medical school, two classes of medical students at Johns Hopkins University School of Medicine completed a required clinical skills exam in fall 2010 and 2011, respectively. Two hundred two students rated their anxiety in anticipation of the exam and predicted their overall scores in the history taking and physical examination performance domains. A self-assessment deviation score was calculated by subtracting each student's predicted score from his or her score as rated by standardized patients. When students self-assessed their data gathering performance, there was a weak negative correlation between their predicted scores and their actual scores on the examination. Additionally, there was an interaction effect of anxiety and gender on both self-assessment deviation scores and actual performance. Specifically, females with high anxiety were more accurate in self-assessment and achieved higher actual scores compared with males with high anxiety. No differences by gender emerged for students with moderate or low anxiety. Educators should take into account not only gender but also the role of emotion, in this case anxiety, when planning interventions to help improve accuracy of students' self-assessment.

  5. ESTADIUS: A High Motion "One Arcsec" Daytime Attitude Estimation System for Stratospheric Applications

    NASA Astrophysics Data System (ADS)

    Montel, J.; Andre, Y.; Mirc, F.; Etcheto, P.; Evrard, J.; Bray, N.; Saccoccio, M.; Tomasini, L.; Perot, E.

    2015-09-01

    ESTADIUS is an autonomous, accurate, daytime attitude estimation system for stratospheric balloons that require a high level of attitude measurement and stability. The system has been developed by CNES. ESTADIUS is based on star sensor and gyrometer data fusion within an extended Kalman filter. The star sensor is composed of a 16-Mpixel visible CCD camera and a large-aperture camera lens (focal length of 135 mm, aperture f/1.8, 10°×15° field of view or FOV), which provides very accurate star measurements thanks to the very small angular pixel size. This also allows detecting stars against a bright sky background. The gyrometer is a 0.01°/h performance-class Fiber Optic Gyroscope (FOG). The system is adapted to work down to an altitude of ~25 km, even under highly dynamic conditions. Key elements of ESTADIUS are: daytime use (as well as night time), autonomy (automatic recognition of constellations), high angular-rate robustness (a few deg/s, thanks to high-performance attitude propagation), stray-light robustness (thanks to a high-performance baffle), and high accuracy (<1", 1σ). Four stratospheric qualification flights were performed very successfully in 2010/2011 and 2013/2014 in Kiruna (Sweden) and Timmins (Canada). ESTADIUS will allow long stratospheric flights with a single attitude estimation system, avoiding the restriction of night/day conditions at launch. The first operational flight of ESTADIUS will be in 2015 for the PILOT scientific mission (led by IRAP and CNES in France). Further balloon missions such as CIDRE will use the system. ESTADIUS is probably the first autonomous, large-FOV, daytime stellar attitude measurement system. This paper details the technical features and in-flight results.
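
    The gyro-propagate / star-correct cycle behind this kind of filter can be illustrated with a one-axis toy Kalman filter (the real system fuses full three-axis attitude in an extended Kalman filter; the noise variances `q` and `r` below are invented for illustration):

```python
import numpy as np

def fuse_attitude_1d(gyro_rates, star_fixes, dt,
                     q=1e-10, r=np.deg2rad(1.0 / 3600.0) ** 2):
    """One-axis toy analogue of gyro/star-sensor fusion: the gyro rate
    propagates the angle between star fixes, and each available fix pulls
    the estimate back toward an absolute reference. q and r are
    illustrative noise variances, not ESTADIUS values."""
    theta = star_fixes[0]             # initialise from the first star fix
    p = r                             # initial estimate variance
    history = []
    for w, z in zip(gyro_rates, star_fixes):
        theta += w * dt               # propagate with the gyro rate
        p += q                        # process noise inflates uncertainty
        if not np.isnan(z):           # a star fix is available
            k = p / (p + r)           # Kalman gain
            theta += k * (z - theta)  # measurement update
            p *= 1.0 - k
        history.append(theta)
    return np.array(history)
```

    Between star fixes the gyro carries the solution on its own, which is why a 0.01°/h-class FOG supports high angular rates without losing accuracy.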

  6. Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node

    NASA Astrophysics Data System (ADS)

    Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi

    2015-03-01

    In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D, which delivers extremely accurate overlay while maintaining throughput under various conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities for maintaining long-term overlay stability and an improved user interface, both enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance under various conditions similar to real production. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we performed several experiments under high-dose conditions to demonstrate the NSR's thermal aberration capabilities in achieving world-class overlay performance. Furthermore, we introduce our new software that enables long-term overlay stability.

  7. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons’ combat effectiveness. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min. PMID:26492249

  8. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min.

  9. High-speed separation and characterization of major constituents in Radix Paeoniae Rubra by fast high-performance liquid chromatography coupled with diode-array detection and time-of-flight mass spectrometry.

    PubMed

    Liu, E-Hu; Qi, Lian-Wen; Li, Bin; Peng, Yong-Bo; Li, Ping; Li, Chang-Yin; Cao, Jun

    2009-01-01

    A fast high-performance liquid chromatography (HPLC) method coupled with diode-array detection (DAD) and electrospray ionization time-of-flight mass spectrometry (ESI-TOFMS) has been developed for rapid separation and sensitive identification of major constituents in Radix Paeoniae Rubra (RPR). The total analysis time on a short column packed with 1.8-μm porous particles was about 20 min without a loss in resolution, six times faster than the performance of a conventional column analysis (115 min). The MS fragmentation behavior and structural characterization of major compounds in RPR were investigated here for the first time. The targets were rapidly screened from the RPR matrix using a narrow mass window of 0.01 Da to reconstruct extracted ion chromatograms. Accurate mass measurements (less than 5 ppm error) for both the deprotonated molecule and characteristic fragment ions represent reliable identification criteria for these compounds in complex matrices, with similar if not better performance compared with tandem mass spectrometry. A total of 26 components were screened and identified in RPR, including 11 monoterpene glycosides, 11 galloyl glucoses and 4 other phenolic compounds. In terms of time savings, resolving power, accurate mass measurement capability and full-spectral sensitivity, the established fast HPLC/DAD/TOFMS method turns out to be a highly useful technique for identifying constituents in complex herbal medicines. Copyright © 2008 John Wiley & Sons, Ltd.
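
    The narrow-window screening and ppm mass-accuracy criterion described above reduce to two small checks; a minimal sketch (the m/z values below are hypothetical, not measured masses from the paper):

```python
def ppm_error(measured_mz, theoretical_mz):
    # mass accuracy expressed in parts per million
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def in_extraction_window(measured_mz, target_mz, window_da=0.01):
    # narrow-window screen of the kind used to reconstruct
    # extracted ion chromatograms from full-scan TOF data
    return abs(measured_mz - target_mz) <= window_da

# hypothetical deprotonated molecule [M-H]- : illustrative values only
theoretical = 479.1553
measured = 479.1570
```

    A candidate peak is accepted only when it falls inside the extraction window and its ppm error stays under the identification threshold (5 ppm in the study).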

  10. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operations and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and by relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics that are used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions are discussed. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case, where the targeted output power is 140 W, while the input pumping power is 400 W. The 95% output coupler reflectivity showed good agreement with the slope efficiency, which is approximately 35%; this assures the robustness of the proposed model to accurately predict the design parameters of practical, high-power DPSSLs.

  11. Performance Characteristic Mems-Based IMUs for UAVs Navigation

    NASA Astrophysics Data System (ADS)

    Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without costly, high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such requirements demand navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) and other low-cost navigation sensors for various UAV applications is an important research topic. This work focuses on studying these performance characteristics under different manoeuvres using inertial measurements integrated with single-point positioning, Real-Time Kinematic (RTK) positioning, and additional aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outages.

  12. Identification of allocryptopine and protopine metabolites in rat liver S9 by high-performance liquid chromatography/quadrupole-time-of-flight mass spectrometry.

    PubMed

    Huang, Ya-Jun; Xiao, Sa; Sun, Zhi-Liang; Zeng, Jian-Guo; Liu, Yi-Song; Liu, Zhao-Ying

    2016-07-15

    Allocryptopine (AL) and protopine (PR) have been extensively studied because of their anti-parasitic, anti-arrhythmic, anti-thrombotic, anti-inflammatory and anti-bacterial activity. However, limited information on the pharmacokinetics and metabolism of AL and PR has been reported. Therefore, the purpose of the present study was to investigate the in vitro metabolism of AL and PR in rat liver S9 using a rapid and accurate high-performance liquid chromatography/quadrupole-time-of-flight mass spectrometry (HPLC/QqTOFMS) method. The incubation mixture was processed with 15% trichloroacetic acid (TCA). Multiple scans of AL and PR metabolites and accurate mass measurements were performed automatically and simultaneously through data-dependent acquisition in only a 30-min analysis. The structures of these metabolites were elucidated by comparing their accurate molecular masses and product ions with those of the precursor ion or a known metabolite. Eight metabolites of AL and five of PR were identified in rat liver S9; of these, seven AL metabolites and two PR metabolites were identified for the first time. Demethylenation of the 2,3-methylenedioxy group and demethylation of the 9,10-vicinal methoxyl group and the 2,3-methylenedioxy group were the main metabolic pathways of AL and PR in liver S9, respectively. In addition, cleavage of the methylenedioxy group followed by methylation or O-demethylation was a common metabolic pathway of both drugs in liver S9, and hydroxylation was a further metabolic pathway of AL. This was the first investigation of the in vitro metabolism of AL and PR in rat liver S9. The metabolic pathways of AL and PR in rat were tentatively proposed based on these characterized metabolites and earlier reports. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Comparative Performance of Four Single Extreme Outlier Discordancy Tests from Monte Carlo Simulations

    PubMed Central

    Díaz-González, Lorena; Quiroz-Ruiz, Alfredo

    2014-01-01

    Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8. PMID:24737992
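
    The simulation setup can be sketched with two representative statistics and a Monte Carlo estimate of their null critical values (simplified Grubbs- and Dixon-type forms, not necessarily the exact N2/N8 formulations, and with far fewer replications than the study's 20,000,000):

```python
import numpy as np

rng = np.random.default_rng(0)

def grubbs_stat(x):
    # Grubbs-type statistic: largest absolute deviation from the
    # sample mean, scaled by the sample standard deviation
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def dixon_stat(x):
    # Dixon-type ratio: gap of the most extreme value over the sample range
    s = np.sort(x)
    return max(s[1] - s[0], s[-1] - s[-2]) / (s[-1] - s[0])

def mc_critical_value(stat, n, alpha=0.05, reps=20_000):
    # (1 - alpha) null quantile estimated from uncontaminated normal
    # samples; the paper's 20,000,000 replications give far smaller
    # simulation error than this quick sketch
    vals = np.array([stat(rng.standard_normal(n)) for _ in range(reps)])
    return np.quantile(vals, 1 - alpha)
```

    Power is then estimated the same way, by drawing contaminated samples (one observation shifted by δ or scaled by ε) and counting how often the statistic exceeds its critical value.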

  14. Comparative performance of four single extreme outlier discordancy tests from Monte Carlo simulations.

    PubMed

    Verma, Surendra P; Díaz-González, Lorena; Rosales-Rivera, Mauricio; Quiroz-Ruiz, Alfredo

    2014-01-01

    Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.

  15. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators.

    PubMed

    Xu, Wenjun; Chen, Jie; Lau, Henry Y K; Ren, Hongliang

    2017-09-01

    Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in minimally invasive surgery because of its enhanced maneuverability in tortuous environments. The TSM, however, exhibits strong nonlinearities, and conventional analytical kinematic models are insufficient to achieve high accuracy. To account for the system nonlinearities, we applied a data-driven approach to encode the system's inverse kinematics. Three regression methods, extreme learning machine (ELM), Gaussian mixture regression (GMR) and K-nearest neighbors regression (KNNR), were implemented to learn a nonlinear mapping from the robot's 3D position states to the control inputs. The performance of the three algorithms was evaluated in both simulation and physical trajectory-tracking experiments. KNNR performed best in the tracking experiments, with the lowest RMSE of 2.1275 mm. The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model the tendon-driven flexible manipulator. Copyright © 2016 John Wiley & Sons, Ltd.
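
    The KNNR variant of this idea can be sketched on synthetic data, with an invented two-tendon forward model standing in for the real TSM kinematics:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(q):
    # Hypothetical smooth nonlinear "tendon displacement -> tip position"
    # map, a stand-in for the real TSM kinematics that the data-driven
    # approach treats as unknown and only observable through samples
    return np.stack([np.sin(q[:, 0]) + 0.3 * q[:, 1],
                     np.cos(q[:, 0]) * q[:, 1],
                     0.5 * q[:, 0] * q[:, 1]], axis=1)

def knn_inverse(train_x, train_q, query, k=5):
    # KNN regression: average the tendon inputs of the k training samples
    # whose recorded tip positions lie closest to the desired position
    d = np.linalg.norm(train_x - query, axis=1)
    idx = np.argsort(d)[:k]
    return train_q[idx].mean(axis=0)

# collect training data by sampling the actuator space
q_train = rng.uniform(-1.0, 1.0, size=(5000, 2))
x_train = forward_model(q_train)

q_true = np.array([[0.2, -0.4]])
x_goal = forward_model(q_true)[0]
q_est = knn_inverse(x_train, q_train, x_goal)
```

    Given a desired tip position, the learned inverse returns tendon inputs whose forward-mapped position lands close to the goal, without ever writing down an analytical inverse.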

  16. Use of Passive Samplers to Measure Dissolved Organic Contaminants in a Temperate Estuary

    EPA Science Inventory

    Measuring dissolved concentrations of organic contaminants can be challenging given their low solubilities and high particle association. However, to perform accurate risk assessments of these chemicals, knowing the dissolved concentration is critical since it is considered to b...

  17. Automatic analysis for neuron by confocal laser scanning microscope

    NASA Astrophysics Data System (ADS)

    Satou, Kouhei; Aoki, Yoshimitsu; Mataga, Nobuko; Hensch, Takao K.; Taki, Katuhiko

    2005-12-01

    The aim of this study is to develop a system that recognizes both the macro- and microscopic configurations of nerve cells and automatically performs the necessary 3-D measurements and functional classification of spines. The acquisition of 3-D images of cranial nerves has been enabled by the use of a confocal laser scanning microscope, although the highly accurate 3-D measurements of the microscopic structures of cranial nerves and their classification based on their configurations have not yet been accomplished. In this study, in order to obtain highly accurate measurements of the microscopic structures of cranial nerves, existing positions of spines were predicted by the 2-D image processing of tomographic images. Next, based on the positions that were predicted on the 2-D images, the positions and configurations of the spines were determined more accurately by 3-D image processing of the volume data. We report the successful construction of an automatic analysis system that uses a coarse-to-fine technique to analyze the microscopic structures of cranial nerves with high speed and accuracy by combining 2-D and 3-D image analyses.

  18. Fuzzy logic modeling of high performance rechargeable batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Fennie, C. Jr.; Reisner, D.E.

    1998-07-01

    Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
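
    A minimal sketch of fuzzy-logic SOC estimation, assuming invented impedance membership functions and rule consequents rather than the authors' calibrated Ni-MH or Li-ion models:

```python
def tri(x, a, b, c):
    # triangular membership function rising on [a, b], falling on [b, c]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_soc(impedance_mohm):
    # Zero-order Sugeno-style inference: impedance memberships ("low",
    # "medium", "high") weight crisp SOC consequents. Breakpoints and
    # consequents are invented for illustration only.
    mu = {
        "low":    tri(impedance_mohm, 0.0, 10.0, 20.0),
        "medium": tri(impedance_mohm, 10.0, 20.0, 30.0),
        "high":   tri(impedance_mohm, 20.0, 30.0, 40.0),
    }
    soc = {"low": 90.0, "medium": 50.0, "high": 10.0}  # % state of charge
    den = sum(mu.values())
    return sum(mu[k] * soc[k] for k in mu) / den if den else None
```

    The appeal of the approach is exactly this shape: expert knowledge about how impedance relates to SOC is encoded in a handful of overlapping rules rather than a precise electrochemical model.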

  19. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
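
    The initial-rates shelf-life estimate reduces to a linear fit and an extrapolation; a minimal sketch with invented degradant measurements (not data from the paper):

```python
import numpy as np

# Hypothetical degradant levels (% of label claim) quantitated by LC/MS
# during early storage at the intended condition; illustrative values only
t_weeks = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
degradant_pct = np.array([0.000, 0.021, 0.039, 0.062, 0.079])

# initial-rates analysis: fit the early, approximately linear region
rate, intercept = np.polyfit(t_weeks, degradant_pct, 1)

# shelf life = time for the degradant to reach an assumed specification limit
spec_limit_pct = 0.5
shelf_life_weeks = (spec_limit_pct - intercept) / rate
```

    Because degradant formation is measured directly at the storage condition, no Arrhenius or Eyring extrapolation from accelerated temperatures is needed, which is where the large prediction errors of the traditional approach arise.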

  1. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines. A real-gas model and film-cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, rendering it able to accurately evaluate the derivatives of the time-varying output functions. The complex Taylor series expansion (CTSE) technique is reviewed, and two formulations of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite-difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the nonlinear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
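
    The core of the CTSE is the complex-step derivative, which a short comparison against a forward finite difference makes concrete (a generic sketch, not the turbomachinery solver's implementation):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    # Complex-step (CTSE) first derivative: from the Taylor expansion
    #   f(x + ih) = f(x) + ih f'(x) - h^2 f''(x)/2 + ...,
    # Im(f(x + ih)) / h approximates f'(x) with no subtractive
    # cancellation, so h can be made essentially arbitrarily small.
    return np.imag(f(x + 1j * h)) / h

def forward_diff(f, x, h=1e-8):
    # forward finite difference, limited by cancellation error
    return (f(x + h) - f(x)) / h

f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))  # analytic f'(1)
```

    Because no subtraction of nearly equal values occurs, the step can be driven down to 1e-30 and the derivative comes out at machine precision, whereas the finite difference stalls at roughly 1e-8 error; this is why the CTSE is more accurate, at the cost of complex-valued (and therefore slower) function evaluations.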

  2. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors that are used in a market-available localization system. In-situ field measurement results indicated that the off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes the flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
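
    The selection-based fusion rule reduces to ranking candidate measurements by a per-sensor error model; a minimal sketch, assuming an invented linear noise-versus-range model:

```python
import numpy as np

def sensor_std(rng_m, a, b):
    # Hypothetical per-sensor error model: ranging noise (std, metres)
    # grows linearly with measured distance; a and b would come from
    # field characterization and are invented here for illustration
    return a + b * rng_m

def fuse_by_selection(ranges_m, model_params):
    # evaluate every sensor's reading against its own error model and
    # keep only the reading predicted to be most accurate (selection,
    # not averaging), mirroring the genie-style choice the paper evaluates
    stds = np.array([sensor_std(r, a, b)
                     for r, (a, b) in zip(ranges_m, model_params)])
    best = int(np.argmin(stds))
    return ranges_m[best], best
```

    Selection rather than averaging is what keeps the complexity low: one model lookup per sensor, no joint estimation over all measurements.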

  3. A Spatial Frequency Account of the Detriment that Local Processing of Navon Letters Has on Face Recognition

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.

    2009-01-01

    Five minutes of processing the local features of a Navon letter causes a detriment in subsequent face-recognition performance (Macrae & Lewis, 2002). We hypothesize a perceptual aftereffect explanation of this effect, in which face recognition is less accurate after adapting to high spatial frequencies at high contrasts. Five experiments were…

  4. Sensor for performance monitoring of advanced gas turbines

    NASA Astrophysics Data System (ADS)

    Latvakoski, Harri M.; Markham, James R.; Harrington, James A.; Haan, David J.

    1999-01-01

    Advanced thermal coating materials are being developed for use in the combustor section of high-performance turbine engines to allow for higher combustion temperatures. To optimize the use of these thermal barrier coatings (TBCs), accurate surface temperature measurements are required to understand their response to changes in the combustion environment. Present temperature sensors, which are based on the measurement of emitted radiation, are not well suited to coated turbine blades, since their operational wavelengths are not optimized for the radiative properties of the TBC. This work is concerned with developing an instrument to provide accurate, real-time measurements of the temperature of TBC-coated blades in an advanced turbine engine. The instrument will determine the temperature from a measurement of the radiation emitted at the optimum wavelength, where the TBC radiates as a near-blackbody. The operational wavelength minimizes interference from the high-temperature, high-pressure environment. A hollow waveguide is used to transfer the radiation from the engine cavity to a high-speed detector and data acquisition system. A prototype of this system was successfully tested at an atmospheric burner test facility, and an on-engine version is undergoing testing for installation on a high-pressure rig.
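
    Under the near-blackbody assumption, surface temperature follows from inverting Planck's law at the operational wavelength; a generic sketch (not the instrument's actual calibration chain; the 4 µm test wavelength is arbitrary):

```python
import math

C1L = 1.191042972e-16  # W*m^2/sr, first radiation constant (spectral radiance)
C2 = 1.438776877e-2    # m*K, second radiation constant

def planck_radiance(lam_m, temp_k):
    # blackbody spectral radiance at wavelength lam_m (m), temperature temp_k (K)
    return C1L / (lam_m ** 5 * (math.exp(C2 / (lam_m * temp_k)) - 1.0))

def brightness_temperature(lam_m, radiance):
    # invert Planck's law for temperature; exact for an ideal blackbody,
    # and a good approximation where the TBC emits as a near-blackbody
    return C2 / (lam_m * math.log(1.0 + C1L / (lam_m ** 5 * radiance)))
```

    Choosing the wavelength where the coating's emissivity is closest to unity makes this inversion insensitive to the unknown emissivity, which is the design point of the sensor.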

  5. Quantification of sulphur amino acids by ultra-high performance liquid chromatography in aquatic invertebrates.

    PubMed

    Thera, Jennifer C; Kidd, Karen A; Dodge-Lynch, M Elaine; Bertolo, Robert F

    2017-12-15

    We examined the performance of an ultra-high performance liquid chromatography method to quantify protein-bound sulphur amino acids in zooplankton. Both cysteic acid and methionine sulfone were linear from 5 to 250 pmol (r² = 0.99), with method detection limits of 13 and 9 pmol, respectively. Although there was no matrix effect on linearity, adjacent peaks and co-eluting noise from the invertebrate proteins increased the detection limits when compared to common standards. Overall, performance characteristics were reproducible and accurate, and provide a means for quantifying sulphur amino acids in aquatic invertebrates, an understudied group. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. CF6 High Pressure Compressor and Turbine Clearance Evaluations

    NASA Technical Reports Server (NTRS)

    Radomski, M. A.; Cline, L. D.

    1981-01-01

    In the CF6 Jet Engine Diagnostics Program the causes of performance degradation were determined for each component of revenue service engines. It was found that a significant contribution to performance degradation was caused by increased airfoil tip radial clearances in the high pressure compressor and turbine areas. Since the influence of these clearances on engine performance and fuel consumption is significant, it is important to accurately establish these relationships. It is equally important to understand the causes of clearance deterioration so that they can be reduced or eliminated. The results of factory engine tests run to enhance the understanding of the high pressure compressor and turbine clearance effects on performance are described. The causes of clearance deterioration are indicated and potential improvements in clearance control are discussed.

  7. Children's perception of their synthetically corrected speech production.

    PubMed

    Strömbergsson, Sofia; Wengelin, Asa; House, David

    2014-06-01

    We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.

  8. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    NASA Technical Reports Server (NTRS)

    Wallace, D. B.

    1980-01-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  9. Countercurrent chromatography separation of saponins by skeleton type from Ampelozizyphus amazonicus for off-line ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry analysis and characterisation.

    PubMed

    de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda

    2017-01-20

    Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub, native to the Amazonian region, with jujubogenin glycoside saponins as main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high resolution accurate mass spectrometry) and MSⁿ. Group 1 presented mainly oleane type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane type triterpene saponins and saponins with C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. [Screening and confirmation of 24 hormones in cosmetics by ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry].

    PubMed

    Li, Zhaoyong; Wang, Fengmei; Niu, Zengyuan; Luo, Xin; Zhang, Gang; Chen, Junhui

    2014-05-01

    A method of ultra high performance liquid chromatography-linear ion trap/orbitrap high resolution mass spectrometry (UPLC-LTQ/Orbitrap MS) was established to screen and confirm 24 hormones in cosmetics. Various cosmetic samples were extracted with methanol. The extract was loaded onto a Waters ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 μm) using a gradient elution of acetonitrile/water containing 0.1% (v/v) formic acid for the separation. The accurate mass of quasi-molecular ion was acquired by full scanning of electrostatic field orbitrap. The rapid screening was carried out by the accurate mass of quasi-molecular ion. The confirmation analysis for targeted compounds was performed with the retention time and qualitative fragments obtained by data dependent scan mode. Under the optimal conditions, the 24 hormones were routinely detected with mass accuracy error below 3 × 10⁻⁶ (3 ppm), and good linearities were obtained in their respective linear ranges with correlation coefficients higher than 0.99. The LODs (S/N = 3) of the 24 compounds were ≤ 10 μg/kg, which can meet the requirements for the actual screening of cosmetic samples. The developed method was applied to screen the hormones in 50 cosmetic samples. The results demonstrate that the method is a useful tool for the rapid screening and identification of the hormones in cosmetics.
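The 3 ppm screening criterion above reduces to a one-line relative-error calculation. A minimal sketch, using illustrative m/z values rather than figures from the paper:

```python
def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Relative mass error in parts per million (ppm)."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical example: a quasi-molecular ion with theoretical m/z 289.2162
# observed at m/z 289.2170 would pass a 3 ppm screening window.
error = ppm_error(289.2170, 289.2162)   # about 2.8 ppm
```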

  11. Nanopore sequencing technology and tools for genome assembly: computational analysis of the current state, bottlenecks and future directions.

    PubMed

    Senol Cali, Damla; Kim, Jeremie S; Ghose, Saugata; Alkan, Can; Mutlu, Onur

    2018-04-02

    Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of the bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology.
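The pipeline the authors analyze (overlap finding with Minimap, fast draft assembly with Miniasm, consensus polishing with Racon) can be sketched as a sequence of command lines. A minimal sketch: the file names are hypothetical, most command-line flags are omitted, and the commands are only constructed here, not executed.

```python
def nanopore_assembly_commands(reads="reads.fastq"):
    """Build (but do not run) the command line for each stage of a
    Minimap -> Miniasm -> Racon nanopore assembly pipeline."""
    return [
        ["minimap", reads, reads],                        # read-to-read overlap finding
        ["miniasm", "-f", reads, "overlaps.paf"],         # fast (less accurate) draft assembly
        ["minimap", "draft.fasta", reads],                # map reads back onto the draft
        ["racon", reads, "mappings.paf", "draft.fasta"],  # consensus polishing
    ]
```

In practice each stage's output would be redirected into the next stage's input file, and a slower polisher such as Nanopolish, or a second Racon pass, could replace the final step at a cost in runtime.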

  12. Accurate analysis of parabens in human urine using isotope-dilution ultrahigh-performance liquid chromatography-high resolution mass spectrometry.

    PubMed

    Zhou, Hui-Ting; Chen, Hsin-Chang; Ding, Wang-Hsien

    2018-02-20

    An analytical method that utilizes isotope-dilution ultrahigh-performance liquid chromatography coupled with hybrid quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS, also called UHPLC-HRMS) was developed and validated to be highly precise and accurate for the detection of nine parabens (methyl-, ethyl-, propyl-, isopropyl-, butyl-, isobutyl-, pentyl-, hexyl-, and benzyl-parabens) in human urine samples. After sample preparation by ultrasound-assisted emulsification microextraction (USAEME), the extract was directly injected into UHPLC-HRMS. By using negative electrospray ionization in the multiple reaction monitoring (MRM) mode and measuring the peak area ratios of both the natural and the labeled analogues in the samples and calibration standards, the target analytes could be accurately identified and quantified. Another use for the labeled analogues was to correct for systematic errors associated with the analysis, such as the matrix effect and other variations. The limits of quantitation (LOQs) ranged from 0.3 to 0.6 ng/mL. High precision was obtained for both repeatability and reproducibility, ranging from 1 to 8%. High trueness (mean extraction recovery, also called accuracy) ranged from 93 to 107% at two concentration levels. According to preliminary results, the total concentrations of the four most frequently detected parabens (methyl-, ethyl-, propyl- and butyl-) ranged from 0.5 to 79.1 ng/mL in male urine samples, and from 17 to 237 ng/mL in female urine samples. Interestingly, two infrequently detected parabens, pentyl- and hexyl-, were found in one of the male samples in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
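Quantification by the natural-to-labeled peak-area ratio amounts to fitting a calibration line of ratio against standard concentration and inverting it for each sample. A minimal sketch with made-up numbers, not the paper's calibration data:

```python
def fit_calibration(concs, ratios):
    """Ordinary least-squares fit of the line ratio = slope * conc + intercept."""
    n = len(concs)
    mean_x = sum(concs) / n
    mean_y = sum(ratios) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, ratios)) \
        / sum((x - mean_x) ** 2 for x in concs)
    return slope, mean_y - slope * mean_x

def quantify(area_analyte, area_labeled, slope, intercept):
    """Concentration from the natural/labeled peak-area ratio of one sample."""
    return (area_analyte / area_labeled - intercept) / slope

# Hypothetical standards (1-50 ng/mL), each spiked with a fixed amount of label.
slope, intercept = fit_calibration([1, 5, 10, 25, 50],
                                   [0.02, 0.10, 0.20, 0.50, 1.00])
conc = quantify(area_analyte=4200, area_labeled=10500,
                slope=slope, intercept=intercept)   # ratio 0.4 -> 20 ng/mL
```

Because the labeled analogue experiences the same matrix effects as the analyte, the ratio-based calibration cancels much of the signal suppression, which is the systematic-error correction the abstract refers to.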

  13. Verification of Data Accuracy in Japan Congenital Cardiovascular Surgery Database Including Its Postprocedural Complication Reports.

    PubMed

    Takahashi, Arata; Kumamaru, Hiraku; Tomotaki, Ai; Matsumura, Goki; Fukuchi, Eriko; Hirata, Yasutaka; Murakami, Arata; Hashimoto, Hideki; Ono, Minoru; Miyata, Hiroaki

    2018-03-01

    Japan Congenital Cardiovascular Surgical Database (JCCVSD) is a nationwide registry whose data are used for health quality assessment and clinical research in Japan. We evaluated the completeness of case registration and the accuracy of recorded data components including postprocedural mortality and complications in the database via on-site data adjudication. We validated the records from JCCVSD 2010 to 2012 containing congenital cardiovascular surgery data performed in 111 facilities throughout Japan. We randomly chose nine facilities for site visit by the auditor team and conducted on-site data adjudication. We assessed whether the records in JCCVSD matched the data in the source materials. We identified 1,928 cases of eligible surgeries performed at the facilities, of which 1,910 were registered (99.1% completeness), with 6 cases of duplication and 1 inappropriate case registration. Data components including gender, age, and surgery time (hours) were highly accurate with 98% to 100% concordance. Mortality at discharge and at 30 and 90 postoperative days was 100% accurate. Among the five complications studied, reoperation was the most frequently observed, with 16 and 21 cases recorded in the database and source materials, respectively, having a sensitivity of 0.67 and a specificity of 0.99. Validation of the JCCVSD database showed high registration completeness and high accuracy especially in the categorical data components. Adjudicated mortality was 100% accurate. While limited in numbers, the recorded cases of postoperative complications all had high specificities but lower sensitivity (0.67-1.00). Continued activities for data quality improvement and assessment are necessary for optimizing the utility of these registries.
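The sensitivity and specificity figures above come from a two-by-two comparison of registry records against the source materials. A minimal sketch with hypothetical counts (the abstract reports only the derived rates, not the full confusion table):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of events in the source materials also recorded in the registry."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-events correctly recorded as absent in the registry."""
    return true_neg / (true_neg + false_pos)

# Illustrative numbers: if 14 of 21 source-documented reoperations were
# captured in the registry, sensitivity would be about 0.67.
sens = sensitivity(true_pos=14, false_neg=7)
spec = specificity(true_neg=99, false_pos=1)
```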

  14. A validated method for measurement of serum total, serum free, and salivary cortisol, using high-performance liquid chromatography coupled with high-resolution ESI-TOF mass spectrometry.

    PubMed

    Montskó, Gergely; Tarjányi, Zita; Mezősi, Emese; Kovács, Gábor L

    2014-04-01

    Blood cortisol level is routinely analysed in laboratory medicine, but the immunoassays in widespread use have the disadvantage of cross-reactivity with some commonly used steroid drugs. Mass spectrometry has become a method of increasing importance for cortisol estimation. However, current methods do not offer the option of accurate mass identification. Our objective was to develop a mass spectrometry method to analyse salivary, serum total, and serum free cortisol via accurate mass identification. The analysis was performed on a Bruker micrOTOF high-resolution mass spectrometer. Sample preparation involved protein precipitation, serum ultrafiltration, and solid-phase extraction. Limit of quantification was 12.5 nmol L⁻¹ for total cortisol, 440 pmol L⁻¹ for serum ultrafiltrate, and 600 pmol L⁻¹ for saliva. Average intra-assay variation was 4.7%, and inter-assay variation was 6.6%. Serum total cortisol levels were in the range 35.6-1088 nmol L⁻¹, and serum free cortisol levels were in the range 0.5-12.4 nmol L⁻¹. Salivary cortisol levels were in the range 0.7-10.4 nmol L⁻¹. Mass accuracy was at or below 2.5 ppm, corresponding to a mass error of less than 1 mDa and thus providing high specificity. We did not observe any interference with routinely used steroidal drugs. The method is capable of specific cortisol quantification in different matrices on the basis of accurate mass identification.

  15. Screening for non-alcoholic fatty liver disease in children: do guidelines provide enough guidance?

    PubMed

    Koot, B G P; Nobili, V

    2017-09-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common chronic liver disease in children in the industrialized world. Its high prevalence and important health risks make NAFLD highly suitable for screening. In practice, screening is widely, albeit not consistently, performed. To review the recommendations on screening for NAFLD in children, recommendations on screening were reviewed from major paediatric obesity guidelines and NAFLD guidelines, and a literature overview is provided on open questions and controversies. Screening for NAFLD is advocated in all obesity and most NAFLD guidelines. Guidelines are not uniform in whom to screen, and most guidelines do not specify how screening should be performed in practice. Screening for NAFLD remains controversial, due to lack of a highly accurate screening tool, limited knowledge to predict the natural course of NAFLD and limited data on its cost effectiveness. Guidelines provide little guidance on how screening should be performed. Screening for NAFLD remains controversial because not all conditions for screening are fully met. Consensus is needed on the optimal use of currently available screening tools. Research should focus on new, accurate screening tools, the natural history of NAFLD and the cost effectiveness of different screening strategies in children. © 2017 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of World Obesity Federation.

  16. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations, optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, reproducibility as well as the clinical sensitivity of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400

  17. Note: long range and accurate measurement of deep trench microstructures by a specialized scanning tunneling microscope.

    PubMed

    Ju, Bing-Feng; Chen, Yuan-Liu; Zhang, Wei; Zhu, Wule; Jin, Chao; Fang, F Z

    2012-05-01

    A compact but practical scanning tunneling microscope (STM) with high aspect ratio and high depth capability has been specially developed. A long-range scanning mechanism with a tilt-adjustment stage is adopted to adjust the probe-sample relative angle and compensate for non-parallel effects. A periodical trench microstructure with a pitch of 10 μm has been successfully imaged over a long scanning range of up to 2.0 mm. More notably, a deep trench with a depth and step height of 23.0 μm has also been successfully measured, with a sidewall slope angle of approximately 67°. The probe can continuously climb the high step and explore the trench bottom without tip crashing. The new STM can perform long-range measurements of deep trench and high step surfaces without image distortion. It enables accurate measurement and quality control of periodical trench microstructures.

  18. High-rate dead-time corrections in a general purpose digital pulse processing system

    PubMed Central

    Abbene, Leonardo; Gerardi, Gaetano

    2015-01-01

    Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work the abilities on dead-time correction of a real-time digital pulse processing (DPP) system for high-rate high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula of a series of two types of dead-times is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error on ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
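For reference, the two classical dead-time models behind such corrections can each be inverted in a few lines. A minimal sketch of the generic textbook formulas (not the authors' multi-parameter DPP procedure), with tau the dead time in seconds and rates in counts per second:

```python
import math

def true_rate_nonparalyzable(m, tau):
    """Invert m = n / (1 + n*tau), giving n = m / (1 - m*tau)."""
    return m / (1.0 - m * tau)

def true_rate_paralyzable(m, tau):
    """Invert m = n * exp(-n*tau) on its rising branch (n < 1/tau) by bisection."""
    lo, hi = 0.0, 1.0 / tau          # observed rate peaks at n = 1/tau
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(-mid * tau) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip with a 1 microsecond dead time and a 200 kcps input rate.
tau, n = 1e-6, 2.0e5
m_np = n / (1.0 + n * tau)           # rate a non-paralyzable system would observe
m_p = n * math.exp(-n * tau)         # rate a paralyzable system would observe
```

The inversion is only unique below the paralyzable model's throughput maximum, which is one reason accurate high-rate corrections benefit from the time-interval distributions the authors measure rather than the observed rate alone.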

  19. Detection of Free Polyamines in Plants Subjected to Abiotic Stresses by High-Performance Liquid Chromatography (HPLC).

    PubMed

    Gong, Xiaoqing; Liu, Ji-Hong

    2017-01-01

    High-performance liquid chromatography (HPLC) is a sensitive, rapid, and accurate technique to detect and characterize various metabolites from plants. The metabolites are extracted with different solvents and eluted with appropriate mobile phases in a designed HPLC program. Polyamines are known to accumulate under abiotic stress conditions in various plant species and thought to provide protection against oxidative stress by scavenging reactive oxygen species. Here, we describe a common method to detect the free polyamines in plant tissues both qualitatively and quantitatively.

  20. On-road black carbon instrument intercomparison and aerosol characteristics by driving environment

    EPA Science Inventory

    Large spatial variations of black carbon (BC) concentrations in the on-road and near-road environments necessitate measurements with high spatial resolution to assess exposure accurately. A series of measurements was made comparing the performance of several different BC instrume...

  1. The bench scientist's guide to RNA-Seq analysis

    USDA-ARS?s Scientific Manuscript database

    RNA sequencing (RNA-Seq) is emerging as a highly accurate method to quantify transcript abundance. However, analyses of the large data sets obtained by sequencing the entire transcriptome of organisms have generally been performed by bioinformatic specialists. Here we outline a methods strategy desi...

  2. Quantum Tunneling Affects Engine Performance.

    PubMed

    Som, Sibendu; Liu, Wei; Zhou, Dingyu D Y; Magnotti, Gina M; Sivaramakrishnan, Raghu; Longman, Douglas E; Skodje, Rex T; Davis, Michael J

    2013-06-20

    We study the role of individual reaction rates on engine performance, with an emphasis on the contribution of quantum tunneling. It is demonstrated that the effect of quantum tunneling corrections for the reaction HO2 + HO2 = H2O2 + O2 can have a noticeable impact on the performance of a high-fidelity model of a compression-ignition (e.g., diesel) engine, and that an accurate prediction of ignition delay time for the engine model requires an accurate estimation of the tunneling correction for this reaction. The three-dimensional model includes detailed descriptions of the chemistry of a surrogate for a biodiesel fuel, as well as all the features of the engine, such as the liquid fuel spray and turbulence. This study is part of a larger investigation of how the features of the dynamics and potential energy surfaces of key reactions, as well as their reaction rate uncertainties, affect engine performance, and results in these directions are also presented here.
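As a point of reference for the size of such corrections, the lowest-order Wigner formula multiplies a transition-state-theory rate constant by kappa = 1 + (1/24)(h-bar * omega / kB * T)^2, where omega is the imaginary barrier frequency. This is only the simplest standard correction, shown here for illustration; it is not the detailed treatment used in the study.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
KB = 1.380649e-23           # Boltzmann constant, J/K
C_CM = 2.99792458e10        # speed of light, cm/s

def wigner_correction(imag_wavenumber_cm, temperature_k):
    """Lowest-order Wigner tunneling factor kappa = 1 + (1/24)*(hbar*omega/(kB*T))**2,
    with the imaginary barrier frequency given as a wavenumber in cm^-1."""
    omega = 2.0 * math.pi * C_CM * imag_wavenumber_cm   # angular frequency, rad/s
    x = HBAR * omega / (KB * temperature_k)
    return 1.0 + x * x / 24.0

# Illustrative values: a 1000 cm^-1 barrier frequency at 1000 K gives a
# correction of several percent; the factor grows rapidly as T drops.
kappa = wigner_correction(1000.0, 1000.0)
```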

  3. Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Scott; Schopf, Jennifer M.; Miller, Kenneth

    2016-07-01

    Today's science collaborations depend on reliable, high performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets. The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends. This work explores low-cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. Finally, we present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.

  4. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE PAGES

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong; ...

    2017-07-11

    Transient thermo-mechanical simulation of a stainless plate laser welding process was performed by a highly efficient and accurate approach: a hybrid iterative substructure and adaptive mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on residual stress and welding distortion of stainless thin sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effect of strain hardening, annealing and melting on residual stress prediction was clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among residual stresses obtained by the numerical method, XRD and the contour method was found. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  6. Social Collaborative Filtering by Trust.

    PubMed

    Yang, Bo; Lei, Yu; Liu, Jiming; Li, Wenjie

    2017-08-01

    Recommender systems are used to accurately and actively provide users with potentially interesting information or services. Collaborative filtering is a widely adopted approach to recommendation, but sparse data and cold-start users are often barriers to providing high quality recommendations. To address these issues, we propose a novel method that improves the performance of collaborative filtering recommendations by integrating sparse rating data given by users with the sparse social trust network among those same users. This model-based method adopts a matrix factorization technique that maps users into low-dimensional latent feature spaces in terms of their trust relationships, aiming to more accurately reflect users' reciprocal influence on the formation of their own opinions and to learn better preferential patterns of users for high-quality recommendations. We use four large-scale datasets to show that the proposed method performs much better, especially for cold-start users, than state-of-the-art recommendation algorithms for social collaborative filtering based on trust.
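The general idea, trust-regularized matrix factorization, can be sketched in a few lines of numpy: observed ratings are fit by a low-rank product while each user's latent vector is pulled toward the average of the users they trust. The objective, parameter names and update rule below are illustrative, not the authors' exact formulation:

```python
import numpy as np

def train_social_mf(R, T, k=5, lr=0.01, reg=0.1, beta=0.5, epochs=200, seed=0):
    """Gradient-descent sketch of trust-regularized matrix factorization.
    R: user-item rating matrix (0 = missing); T: user-user trust adjacency."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    mask = R > 0
    # Row-normalize trust so T_norm @ U averages each user's trusted neighbours.
    T_norm = T / np.maximum(T.sum(axis=1, keepdims=True), 1)
    for _ in range(epochs):
        E = mask * (R - U @ V.T)                  # residuals on observed ratings only
        U += lr * (E @ V - reg * U - beta * (U - T_norm @ U))
        V += lr * (E.T @ U - reg * V)
    return U, V
```

The trust term beta * (U - T_norm @ U) is what lets a cold-start user with few ratings inherit preferences from trusted neighbours instead of staying near the random initialization.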

  7. Separation and quantitation of polyethylene glycols 400 and 3350 from human urine by high-performance liquid chromatography.

    PubMed

    Ryan, C M; Yarmush, M L; Tompkins, R G

    1992-04-01

    Polyethylene glycol 3350 (PEG 3350) is useful as an orally administered probe to measure in vivo intestinal permeability to macromolecules. Previous methods to detect polyethylene glycol (PEG) excreted in the urine have been hampered by inherent inaccuracies associated with liquid-liquid extraction and turbidimetric analysis. For accurate quantitation by previous methods, radioactive labels were required. This paper describes a method to separate and quantitate PEG 3350 and PEG 400 in human urine that is independent of radioactive labels and is accurate in clinical practice. The method uses sized regenerated cellulose membranes and mixed ion-exchange resin for sample preparation and high-performance liquid chromatography with refractive index detection for analysis. The 24-h excretion for normal individuals after an oral dose of 40 g of PEG 3350 and 5 g of PEG 400 was 0.12 ± 0.04% of the original dose of PEG 3350 and 26.3 ± 5.1% of the original dose of PEG 400.

  8. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors also present Synergia's design principles and its performance on HPC platforms.

  10. Human matching performance of genuine crime scene latent fingerprints.

    PubMed

    Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J

    2014-02-01

    There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints, per se, but a result of their ability to identify the highly similar, but nonmatching fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees, despite their lack of qualification and an average of 3.5 years' experience, performed about as accurately as qualified experts, who had an average of 17.5 years' experience. New trainees, despite their 5-week, full-time training course or their 6 months of experience, were not any better than novices at discriminating matching and similar nonmatching prints; they were simply more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance. The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in court, but what the findings mean for reasoning about expert performance in the wild is an open, empirical, and epistemological question.
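The signal detection paradigm used above is conventionally summarized by sensitivity (d') and a response criterion (c, positive for a conservative bias). A minimal sketch follows; the log-linear +0.5 correction is one common convention, not necessarily the paper's:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from a 2x2
    signal-detection table. Counts are corrected by +0.5 (log-linear
    rule) so rates of exactly 0 or 1 never reach the z-transform."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

A positive criterion corresponds to the conservative pattern reported for the experts: fewer false incriminations at the cost of more missed identifications.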

  11. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method globally estimates the alignment accuracy by measuring the goodness of the log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using the Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high-resolution cryo-electron tomography.
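The core idea can be sketched in a few lines: under Beer-Lambert attenuation the log of the transmitted intensity should be linear in 1/cos(tilt angle), so the R-squared of that fit is a global quality score. This is an illustration of the principle, not the tomoAlignEval implementation:

```python
import numpy as np

def alignment_quality(tilt_deg, intensities):
    """Goodness (R^2) of the Beer-Lambert log-linear relation.

    For a slab of thickness d and mean free path L, the intensity at
    tilt angle theta follows I = I0 * exp(-(d/L) / cos(theta)), so
    ln(I) is linear in 1/cos(theta) for a well-behaved tilt series.
    """
    x = 1.0 / np.cos(np.radians(np.asarray(tilt_deg, dtype=float)))
    y = np.log(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = float((residuals ** 2).sum())
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```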

  12. A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law

    PubMed Central

    Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen

    2015-01-01

    In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method globally estimates the alignment accuracy by measuring the goodness of the log-linear relationship of the beam intensity attenuations at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using the Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high-resolution cryo-electron tomography. PMID:26455556

  13. Higher-order differencing method with a multigrid approach for the solution of the incompressible flow equations at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tzanos, Constantine P.

    1992-10-01

    A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Re numbers. Flow in a square cavity with a moving lid is used as a test problem. A multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower-upper (ILU) solver demonstrated good performance for the whole range of Re numbers under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach is an effective technique for obtaining accurate solutions of the Navier-Stokes equations at high Re numbers.
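The multigrid idea itself can be illustrated with a minimal two-grid correction cycle for the 1D Poisson problem; this is a toy analogue of the coarse-grid acceleration described above, not the paper's additive-correction scheme or ILU smoother:

```python
import numpy as np

def poisson_1d(n, h):
    """Matrix for -u'' on n interior points, spacing h, Dirichlet BCs."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def weighted_jacobi(A, u, f, sweeps=3, omega=2 / 3):
    """Damped Jacobi smoothing: efficient on high-frequency error."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def two_grid_cycle(u, f, n, h):
    """One V-cycle: pre-smooth, restrict the residual (full weighting),
    solve the coarse problem exactly, interpolate the correction
    linearly, post-smooth."""
    A = poisson_1d(n, h)
    u = weighted_jacobi(A, u, f)
    r = f - A @ u
    nc = (n - 1) // 2                                   # coarse interior points
    rc = 0.25 * (r[:-2:2] + 2 * r[1:-1:2] + r[2::2])    # restriction
    ec = np.linalg.solve(poisson_1d(nc, 2 * h), rc)     # exact coarse solve
    e = np.zeros(n)
    e[1:-1:2] = ec                                      # coarse points
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]             # boundary midpoints
    e[2:-2:2] = 0.5 * (ec[:-1] + ec[1:])                # interior midpoints
    return weighted_jacobi(A, u + e, f)
```

The coarse-grid correction removes exactly the smooth error components that Jacobi sweeps alone leave almost untouched, which is why the combination converges where a plain iterative solver stalls.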

  14. Predicting the Plate Dent Test Output in Order to Assess the Performance of Condensed High Explosives

    NASA Astrophysics Data System (ADS)

    Frem, Dany

    2017-01-01

    In the present study, a relationship is proposed that is capable of predicting the output of the plate dent test. It is shown that the initial density; the condensed-phase heat of formation; the numbers of carbon (C), nitrogen (N), and oxygen (O) atoms; and the composition molecular weight (MW) are the most important parameters needed to accurately predict the absolute dent depth produced on 1018 cold-rolled steel by a detonating organic explosive. The estimated dent depths can be used to predict the detonation pressure (P) of high explosives; furthermore, we show that a correlation exists between the dent depth and the Gurney velocity parameter. The new correlation is used to accurately estimate the Gurney velocity for several C-H-N-O explosive compositions.
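Fitting a relation of this kind is ordinary multiple linear regression on the composition descriptors. The sketch below is a purely synthetic illustration: the coefficient values, descriptor ranges, and noise level are invented for the demonstration and are not taken from the paper:

```python
import numpy as np

# Columns: initial density, heat of formation, C, N, O atom counts, MW.
# All numbers here are hypothetical, chosen only to exercise the fit.
rng = np.random.default_rng(1)
lo = [1.4, -100.0, 2, 2, 4, 200.0]
hi = [1.9, 300.0, 8, 8, 12, 400.0]
X = rng.uniform(lo, hi, size=(40, 6))
w_true = np.array([4.0, 0.002, -0.1, 0.15, 0.2, -0.003])
depth = X @ w_true + 0.6 + 0.01 * rng.standard_normal(40)

# Least-squares fit with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
w_fit, *_ = np.linalg.lstsq(A, depth, rcond=None)
```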

  15. A Near-Infrared Spectrometer to Measure Zodiacal Light Absorption Spectrum

    NASA Technical Reports Server (NTRS)

    Kutyrev, A. S.; Arendt, R.; Dwek, E.; Kimble, R.; Moseley, S. H.; Rapchun, D.; Silverberg, R. F.

    2010-01-01

    We have developed a high-throughput infrared spectrometer for zodiacal light Fraunhofer line measurements. The instrument is based on a cryogenic dual silicon Fabry-Perot etalon designed to achieve high signal-to-noise Fraunhofer line profile measurements. Very large aperture silicon Fabry-Perot etalons and fast camera optics make these measurements possible. The results of the absorption line profile measurements will provide a model-free measure of the zodiacal light intensity in the near infrared. Knowledge of the zodiacal light brightness is crucial for accurate subtraction of the zodiacal light foreground, and hence for an accurate measure of the extragalactic background light. We present the final design of the instrument and the first results of its performance.

  16. Diagnosing and alleviating the impact of performance pressure on mathematical problem solving.

    PubMed

    DeCaro, Marci S; Rotar, Kristin E; Kendra, Matthew S; Beilock, Sian L

    2010-08-01

    High-pressure academic testing situations can lead people to perform below their actual ability levels by co-opting working memory (WM) resources needed for the task at hand (Beilock, 2008). In the current work we examine how performance pressure impacts WM and design an intervention to alleviate pressure's negative impact. Specifically, we explore the hypothesis that high-pressure situations trigger distracting thoughts and worries that rely heavily on verbal WM. Individuals performed verbally based and spatially based mathematics problems in a low-pressure or high-pressure testing situation. Results demonstrated that performance on problems that rely heavily on verbal WM resources was less accurate under high-pressure than under low-pressure tests. Performance on spatially based problems that do not rely heavily on verbal WM was not affected by pressure. Moreover, the more people reported worrying during test performance, the worse they performed on the verbally based (but not spatially based) maths problems. Asking some individuals to focus on the problem steps by talking aloud helped to keep pressure-induced worries at bay and eliminated pressure's negative impact on performance.

  17. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
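The accuracy figures reported above all derive from a 2x2 diagnostic table. A small helper makes the arithmetic explicit (this is standard diagnostic-test algebra, not code from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from the counts of true
    positives, false positives, false negatives, and true negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # of those with disease, fraction detected
        "specificity": tn / (tn + fp),   # of those without, fraction cleared
        "ppv": tp / (tp + fp),           # positive test -> disease probability
        "npv": tn / (tn + fn),           # negative test -> no-disease probability
    }
```

For example, counts chosen to reproduce the study's 54% sensitivity and 81% specificity for acute cholecystitis also fix the PPV and NPV once the disease prevalence is set.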

  18. Comparison of Self-Report Versus Sensor-Based Methods for Measuring the Amount of Upper Limb Activity Outside the Clinic.

    PubMed

    Waddell, Kimberly J; Lang, Catherine E

    2018-03-10

    To compare self-reported with sensor-measured upper limb (UL) performance in daily life for individuals with chronic (≥6mo) UL paresis poststroke. Secondary analysis of participants enrolled in a phase II randomized, parallel, dose-response UL movement trial. This analysis compared the accuracy and consistency between self-reported UL performance and sensor-measured UL performance at baseline and immediately after an 8-week intensive UL task-specific intervention. Outpatient rehabilitation. Community-dwelling individuals with chronic (≥6mo) UL paresis poststroke (N=64). Not applicable. Motor Activity Log amount of use scale and the sensor-derived use ratio from wrist-worn accelerometers. There was a high degree of variability between self-reported UL performance and the sensor-derived use ratio. Using sensor-based values as a reference, 3 distinct categories were identified: accurate reporters (reporting difference ±0.1), overreporters (difference >0.1), and underreporters (difference <-0.1). Five of 64 participants accurately self-reported UL performance at baseline and postintervention. Over half of participants (52%) switched categories from pre- to postintervention (eg, moved from underreporting preintervention to overreporting postintervention). For the consistent reporters, no participant characteristics were found to influence whether someone over- or underreported performance compared with sensor-based assessment. Participants did not consistently or accurately self-report UL performance when compared with the sensor-derived use ratio. Although self-report and sensor-based assessments are moderately associated and appear similar conceptually, these results suggest self-reported UL performance is often not consistent with sensor-measured performance, and the measures cannot be used interchangeably.
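The three-way categorization described above reduces to a thresholded difference; a direct sketch (the ±0.1 tolerance is the one stated in the record):

```python
def categorize_reporter(self_report, sensor_use_ratio, tol=0.1):
    """Label a participant by the difference between self-reported and
    sensor-derived upper-limb use: within +/- tol is an accurate
    reporter, above is an overreporter, below an underreporter."""
    diff = self_report - sensor_use_ratio
    if diff > tol:
        return "overreporter"
    if diff < -tol:
        return "underreporter"
    return "accurate"
```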

  19. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method was proposed that exploits the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets against two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across the various cancer datasets than those conventional techniques.
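A toy analogue of "sparse linear solution plus bootstrap aggregating" can be built from L1-regularized least squares (solved by ISTA) bagged with a majority vote. This illustrates the combination of techniques named in the abstract, not the authors' implementation:

```python
import numpy as np

def lasso_ista(X, y, lam=1.0, iters=300):
    """Sparse linear classifier: minimize 0.5*||Xw - y||^2 + lam*||w||_1
    by ISTA (gradient step followed by soft-thresholding). The L1 term
    zeroes most weights, so feature selection and classification
    happen in one step."""
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= lr * (X.T @ (X @ w - y))
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

def bagged_sparse_predict(X_tr, y_tr, X_te, n_models=15, seed=0):
    """Bootstrap-aggregated sparse linear classifiers, majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_te))
    for _ in range(n_models):
        idx = rng.integers(0, len(y_tr), len(y_tr))   # bootstrap resample
        w = lasso_ista(X_tr[idx], y_tr[idx])
        votes += np.sign(X_te @ w)
    return np.sign(votes)
```

Bagging stabilizes the variable selection: features picked by chance on one resample tend to be dropped on others, while consistently informative features survive the vote.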

  20. On the critical temperature, normal boiling point, and vapor pressure of ionic liquids.

    PubMed

    Rebelo, Luis P N; Canongia Lopes, José N; Esperança, José M S S; Filipe, Eduardo

    2005-04-07

    One-stage, reduced-pressure distillations at moderate temperature of 1-decyl- and 1-dodecyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide ([Ntf(2)](-)) ionic liquids (ILs) have been performed. These liquid-vapor equilibria can be understood in light of predictions for the normal boiling points of ILs. The predictions are based on experimental surface tension and density data, which are used to estimate the critical points of several ILs and their corresponding normal boiling temperatures. In contrast to the situation for ILs that are relatively unstable at high temperature, such as those containing [BF(4)](-) or [PF(6)](-) anions, [Ntf(2)](-)-based ILs constitute a promising class in which reliable, accurate vapor pressure measurements can in principle be performed. This property is paramount for assisting in the development and testing of accurate molecular models.
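Estimating a critical point from surface tension and density data can be done with the classic Eotvos rule, gamma * Vm^(2/3) = k * (Tc - T): the left-hand side extrapolates linearly to zero at the critical temperature. The sketch below illustrates this route (one of the empirical relations such predictions rely on, not necessarily the exact correlation used in the paper):

```python
import numpy as np

def critical_temperature_eotvos(T, gamma, molar_volume):
    """Estimate Tc by extrapolating the Eotvos relation
    gamma * Vm**(2/3) = k * (Tc - T) to zero.

    T in K, gamma in N/m, molar_volume in m^3/mol (scalar or array).
    The normal boiling point is then often taken as roughly 0.6 * Tc.
    """
    y = np.asarray(gamma, float) * np.asarray(molar_volume, float) ** (2 / 3)
    slope, intercept = np.polyfit(np.asarray(T, float), y, 1)
    return -intercept / slope        # temperature where y extrapolates to 0
```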

  1. Feature-Based Correlation and Topological Similarity for Interbeat Interval Estimation Using Ultrawideband Radar.

    PubMed

    Sakamoto, Takuya; Imasaka, Ryohei; Taki, Hirofumi; Sato, Toru; Yoshioka, Mototaka; Inoue, Kenichi; Fukuda, Takeshi; Sakai, Hiroyuki

    2016-04-01

    The objectives of this paper are to propose a method that can accurately estimate the human heart rate (HR) using an ultrawideband (UWB) radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the HR efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous HRs in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus, to suppress inaccurate HR estimates. Measurements were taken using UWB radar, while simultaneously performing electrocardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this study for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of HR variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.
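The headline figure of merit above is the root-mean-square error of the interbeat intervals. Computing it from two beat-time sequences is straightforward (a generic evaluation helper, not the authors' code):

```python
import numpy as np

def ibi_rmse(est_beat_times, ref_beat_times):
    """RMSE (same unit as the inputs) between interbeat intervals
    derived from estimated and reference beat times; assumes both
    sequences contain the same beats in order."""
    est_ibi = np.diff(np.asarray(est_beat_times, float))
    ref_ibi = np.diff(np.asarray(ref_beat_times, float))
    return float(np.sqrt(np.mean((est_ibi - ref_ibi) ** 2)))
```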

  2. Testing Integrity Symposium: Issues and Recommendations for Best Practice

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2013

    2013-01-01

    Educators, parents, and the public depend on accurate, valid, reliable, and timely information about student academic performance. Testing irregularities--breaches of test security or improper administration of academic testing--undermine efforts to use those data to improve student achievement. Unfortunately, there have been high-profile and…

  3. Detection of food intake from swallowing sequences by supervised and unsupervised methods.

    PubMed

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward

    2010-08-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake under free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows, thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for the previously used time slices, and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means), with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible with an unsupervised system that relies on the information provided by swallowing alone.
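The unsupervised, self-adapting route described above rests on K-means clustering of per-swallow feature vectors. A minimal Lloyd's-algorithm sketch (generic clustering, not the paper's feature set or pipeline):

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal K-means (Lloyd's algorithm): alternate assigning points
    to the nearest center and moving each center to its cluster mean."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):           # keep empty clusters in place
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

With k=2, the two clusters would correspond to intake vs non-intake swallows, adapted per individual rather than learned from group labels.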

  4. Detection of Food Intake from Swallowing Sequences by Supervised and Unsupervised Methods

    PubMed Central

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L.; Neuman, Michael R.; Sazonov, Edward

    2010-01-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake under free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows, thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for the previously used time slices, and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means), with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible with an unsupervised system that relies on the information provided by swallowing alone. PMID:20352335

  5. Atmospheric Models for Over-Ocean Propagation Loss

    DTIC Science & Technology

    2015-05-15

    Atmospheric Models For Over-Ocean Propagation Loss Bruce McGuffin1 MIT Lincoln Laboratory Introduction Air-to-surface radio links differ from...from radiosonde profiles collected along the Atlantic coast of the United States, in order to accurately estimate high-reliability SHF/EHF air-to...predict required link performance to achieve high reliability at different locations and times of year. Data Acquisition Radiosonde balloons are

  6. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  7. Development and validation of high-performance liquid chromatography and high-performance thin-layer chromatography methods for the quantification of khellin in Ammi visnaga seed

    PubMed Central

    Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar

    2015-01-01

    Objective: The present study was designed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both methods, with concentration ranges of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The % relative standard deviation values for method precision were 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy of the methods was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and formulations containing it as an ingredient. PMID:26681890
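The precision and accuracy figures above come from two standard formulas, percent relative standard deviation and percent recovery; stated explicitly (generic method-validation arithmetic, not code from the study):

```python
def percent_rsd(values):
    """Percent relative standard deviation: sample standard deviation
    divided by the mean, times 100 (the precision metric reported)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (variance ** 0.5) / mean * 100

def percent_recovery(measured, added):
    """Spike-recovery in percent, as used in the accuracy study."""
    return measured / added * 100
```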

  8. Evaluation of Head Orientation and Neck Muscle EMG Signals as Command Inputs to a Human-Computer Interface for Individuals with High Tetraplegia

    PubMed Central

    Williams, Matthew R.; Kirsch, Robert F.

    2013-01-01

    We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, EMG from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2D, center-out, Fitts’ Law style task, and performance was evaluated using several measures. Overall, head-orientation-commanded motion resembled mouse-commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG-commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction, and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in the two directions. While the relative performance of each user interface differs, each has specific advantages depending on the application. PMID:18990652
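Fitts' Law tasks are typically scored by an index of difficulty and a throughput. The Shannon formulation below is the most common convention; whether this study used exactly this variant is an assumption:

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D/W + 1) for target distance D and target width W."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s for one center-out movement."""
    return fitts_index_of_difficulty(distance, width) / movement_time_s
```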

  9. Monitoring the metering performance of an electronic voltage transformer on-line based on cyber-physics correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang

    2017-10-01

    Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuation from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. Experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT on-line, without requiring a standard voltage transformer.
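The separation idea can be sketched with a rank-1 PCA decomposition: in a symmetric three-phase system, the primary-side fluctuation is common to all three phase channels and dominates the first principal component, so per-phase residual energy flags a drifting transformer. This is an illustration of the principle, not the paper's algorithm or statistics:

```python
import numpy as np

def evt_anomaly_score(V):
    """Per-phase residual energy after removing the rank-1 common mode.

    V : (n_samples, 3) array of phase-voltage magnitudes. The first
    principal component captures the shared primary-side fluctuation;
    a phase whose EVT is drifting shows excess residual energy.
    """
    Vc = V - V.mean(0)
    U, s, Wt = np.linalg.svd(Vc, full_matrices=False)
    common = np.outer(U[:, 0] * s[0], Wt[0])   # rank-1 common-mode part
    residual = Vc - common
    return (residual ** 2).mean(0)
```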

  10. A quality assurance phantom for the performance evaluation of volumetric micro-CT systems

    NASA Astrophysics Data System (ADS)

    Du, Louise Y.; Umoh, Joseph; Nikolov, Hristo N.; Pollmann, Steven I.; Lee, Ting-Yim; Holdsworth, David W.

    2007-12-01

    Small-animal imaging has recently become an area of increased interest because more human diseases can be modeled in transgenic and knockout rodents. As a result, micro-computed tomography (micro-CT) systems are becoming more common in research laboratories, due to their ability to achieve spatial resolution as high as 10 µm, giving highly detailed anatomical information. Most recently, a volumetric cone-beam micro-CT system using a flat-panel detector (eXplore Ultra, GE Healthcare, London, ON) has been developed that combines the high resolution of micro-CT and the fast scanning speed of clinical CT, so that dynamic perfusion imaging can be performed in mice and rats, providing functional physiological information in addition to anatomical information. This and other commercially available micro-CT systems all promise to deliver precise and accurate high-resolution measurements in small animals. However, no comprehensive quality assurance phantom has been developed to evaluate the performance of these micro-CT systems on a routine basis. We have designed and fabricated a single comprehensive device for the purpose of performance evaluation of micro-CT systems. This quality assurance phantom was applied to assess multiple image-quality parameters of a current flat-panel cone-beam micro-CT system accurately and quantitatively, in terms of spatial resolution, geometric accuracy, CT number accuracy, linearity, noise and image uniformity. Our investigations show that 3D images can be obtained with a limiting spatial resolution of 2.5 mm⁻¹ and noise of ±35 HU, using an acquisition interval of 8 s at an entrance dose of 6.4 cGy.
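Two of the phantom metrics named above, noise and image uniformity, are typically measured as ROI statistics. A minimal sketch in the usual CT-QA style (the ROI placement and definitions are generic conventions, not the paper's exact protocol):

```python
import numpy as np

def roi_noise_uniformity(image, centers, half=5):
    """CT-style QA metrics on a uniform-field slice.

    noise      = standard deviation of the first (central) ROI.
    uniformity = maximum deviation of the peripheral ROI means from
                 the central ROI mean (all in image units, e.g. HU).
    centers    = [(row, col), ...]; first entry is the central ROI.
    """
    means, stds = [], []
    for r, c in centers:
        roi = image[r - half:r + half, c - half:c + half]
        means.append(float(roi.mean()))
        stds.append(float(roi.std()))
    uniformity = max(abs(m - means[0]) for m in means[1:])
    return stds[0], uniformity
```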

  11. DNA double strand break repair in human bladder cancer is error prone and involves microhomology-associated end-joining

    PubMed Central

    Bentley, Johanne; Diggle, Christine P.; Harnden, Patricia; Knowles, Margaret A.; Kiltie, Anne E.

    2004-01-01

    In human cells DNA double strand breaks (DSBs) can be repaired by the non-homologous end-joining (NHEJ) pathway. In a background of NHEJ deficiency, DSBs with mismatched ends can be joined by an error-prone mechanism involving joining between regions of nucleotide microhomology. The majority of joins formed from a DSB with partially incompatible 3′ overhangs by cell-free extracts from human glioblastoma (MO59K) and urothelial (NHU) cell lines were accurate and produced by the overlap/fill-in of mismatched termini by NHEJ. However, repair of DSBs by extracts using tissue from four high-grade bladder carcinomas resulted in no accurate join formation. Junctions were formed by the non-random deletion of terminal nucleotides and showed a preference for annealing at a microhomology of 8 nt buried within the DNA substrate; this process was not dependent on functional Ku70, DNA-PK or XRCC4. Junctions were repaired in the same manner in MO59K extracts in which accurate NHEJ was inactivated by inhibition of Ku70 or DNA-PKcs. These data indicate that bladder tumour extracts are unable to perform accurate NHEJ such that error-prone joining predominates. Therefore, in high-grade tumours mismatched DSBs are repaired by a highly mutagenic, microhomology-mediated, alternative end-joining pathway, a process that may contribute to genomic instability observed in bladder cancer. PMID:15466592
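Locating a candidate microhomology like the 8-nt one described above is a longest-common-substring search between the sequences flanking the break. A simple sketch (a generic string algorithm, not the assay's analysis code):

```python
def longest_microhomology(end_a, end_b, min_len=2):
    """Longest common substring between two sequences flanking a
    double-strand break - a region an alternative end-joining pathway
    could anneal at. Returns '' if nothing reaches min_len."""
    best = ""
    for i in range(len(end_a)):
        for j in range(len(end_b)):
            k = 0
            while (i + k < len(end_a) and j + k < len(end_b)
                   and end_a[i + k] == end_b[j + k]):
                k += 1
            if k > len(best):
                best = end_a[i:i + k]
    return best if len(best) >= min_len else ""
```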

  12. In vivo, high-frequency three-dimensional cardiac MR elastography: Feasibility in normal volunteers.

    PubMed

    Arani, Arvin; Glaser, Kevin L; Arunachalam, Shivaram P; Rossman, Phillip J; Lake, David S; Trzasko, Joshua D; Manduca, Armando; McGee, Kiaran P; Ehman, Richard L; Araoz, Philip A

    2017-01-01

    Noninvasive stiffness imaging techniques (elastography) can image myocardial tissue biomechanics in vivo. For cardiac MR elastography (MRE) techniques, the optimal vibration frequency for in vivo experiments is unknown. Furthermore, the accuracy of cardiac MRE has never been evaluated in a geometrically accurate phantom. Therefore, the purpose of this study was to determine the necessary driving frequency to obtain accurate three-dimensional (3D) cardiac MRE stiffness estimates in a geometrically accurate diastolic cardiac phantom and to determine the optimal vibration frequency that can be introduced in healthy volunteers. 3D cardiac MRE was performed on eight healthy volunteers using 80 Hz, 100 Hz, 140 Hz, 180 Hz, and 220 Hz vibration frequencies. These frequencies were tested in a geometrically accurate diastolic heart phantom and compared with dynamic mechanical analysis (DMA). 3D cardiac MRE was shown to be feasible in volunteers at frequencies as high as 180 Hz. MRE and DMA agreed within 5% at frequencies greater than 180 Hz in the cardiac phantom. However, octahedral shear strain signal-to-noise ratios and myocardial coverage were highest at a frequency of 140 Hz across all subjects. This study motivates future evaluation of high-frequency 3D MRE in patient populations. Magn Reson Med 77:351-360, 2017. © 2016 Wiley Periodicals, Inc.

  13. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 1: Cell embedding method and performance

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Kiesel, Peter; Sommer, Lars Wilko; Schwartz, Julian; Lochbaum, Alexander; Hegyi, Alex; Schuh, Andreas; Arakaki, Kyle; Saha, Bhaskar; Ganguli, Anurag; Kim, Kyung Ho; Kim, ChaeAh; Hah, Hoe Jin; Kim, SeokKoo; Hwang, Gyu-Ok; Chung, Geun-Chang; Choi, Bokkyu; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS), which rely on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance, usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring: fiber-optic sensors. High-performance large-format pouch cells with embedded fiber-optic sensors were fabricated. The first of this two-part paper focuses on the embedding method details and performance of these cells. The seal integrity, capacity retention, cycle life, compatibility with existing module designs, and mass, volume, and cost estimates indicate their suitability for xEV and other advanced battery applications. The second part of the paper focuses on the internal strain and temperature signals obtained from these sensors under various conditions and their utility for high-accuracy cell state estimation algorithms.

  14. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
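The event-driven selective-trace idea behind ITA can be illustrated on a toy linear resistive network: a node is re-solved only when one of its fan-in nodes actually changed. This is a sketch only; the real ITA is an SOR-Newton iteration on nonlinear device equations, and all names and the example network here are illustrative:

```python
def relax(values, neighbors, fanout, tol=1e-9):
    """Gauss-Seidel relaxation with event-driven selective trace:
    re-solve a node only when one of its inputs changed by more than tol."""
    queue = list(neighbors)                  # free (non-fixed) nodes
    while queue:
        n = queue.pop(0)
        # node equation for a resistive network with equal conductances:
        # node voltage = average of neighbor voltages
        new = sum(values[m] for m in neighbors[n]) / len(neighbors[n])
        if abs(new - values[n]) > tol:
            values[n] = new
            queue.extend(k for k in fanout.get(n, []) if k in neighbors)
    return values

# voltage-divider chain s(0 V) - a - b - t(1 V); exact solution: a = 1/3, b = 2/3
values = {"s": 0.0, "t": 1.0, "a": 0.0, "b": 0.0}
neighbors = {"a": ["s", "b"], "b": ["a", "t"]}
fanout = {"a": ["b"], "b": ["a"]}
result = relax(values, neighbors, fanout)
```

The queue only ever holds nodes whose inputs changed, which is the "temporal sparsity" the dissertation exploits: in a large circuit most nodes are latent at any given time and are never re-evaluated.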

  15. Electrophysiological distinctions between recognition memory with and without awareness

    PubMed Central

    Ko, Philip C.; Duda, Bryant; Hussey, Erin P.; Ally, Brandon A.

    2013-01-01

    The influence of implicit memory representations on explicit recognition may help to explain cases of accurate recognition decisions made with high uncertainty. During a recognition task, implicit memory may enhance the fluency of a test item, biasing decision processes to endorse it as “old”. This model may help explain recognition-without-identification, a remarkable phenomenon in which participants make highly accurate recognition decisions despite the inability to identify the test item. The current study investigated whether recognition-without-identification for pictures elicits a similar pattern of neural activity as other types of accurate recognition decisions made with uncertainty. Further, this study also examined whether recognition-without-identification for pictures could be attained by the use of perceptual and conceptual information from memory. To accomplish this, participants studied pictures and then performed a recognition task under difficult viewing conditions while event-related potentials (ERPs) were recorded. Behavioral results showed that recognition was highly accurate even when test items could not be identified, demonstrating recognition-without-identification. The behavioral performance also indicated that recognition-without-identification was mediated by both perceptual and conceptual information, independently of one another. The ERP results showed dramatically different memory-related activity during the early 300 to 500 ms epoch for identified items that were studied compared to unidentified items that were studied. Similar to previous work highlighting accurate recognition without retrieval awareness, test items that were not identified, but correctly endorsed as “old,” elicited a negative posterior old/new effect (i.e., N300). In contrast, test items that were identified and correctly endorsed as “old,” elicited the classic positive frontal old/new effect (i.e., FN400). Importantly, both of these effects were elicited under conditions when participants used perceptual information to make recognition decisions. Conceptual information elicited very different ERPs than perceptual information, showing that the informational wealth of pictures can evoke multiple routes to recognition even without awareness of memory retrieval. These results are discussed within the context of current theories regarding the N300 and the FN400. PMID:23287567

  16. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t, and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used, and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
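The difference between the two balancing algorithms can be contrasted with a toy daily energy budget in which consumption must cover growth energy plus losses. This is illustrative only; the Wisconsin model's respiration, egestion and excretion terms are far more detailed, and all numbers below are arbitrary:

```python
def consumption(weights, energy_density, losses, use_next_day_density):
    """Toy daily budget: C_t = growth energy + losses.
    Growth energy values the weight gain at the energy density of day t
    (algorithm 1) or day t+1 (algorithm 2)."""
    out = []
    for t in range(len(weights) - 1):
        e = energy_density[t + 1] if use_next_day_density else energy_density[t]
        growth_energy = (weights[t + 1] - weights[t]) * e
        out.append(growth_energy + losses[t])
    return out

w = [100.0, 110.0, 121.0]     # body weight, g
ed = [5.0, 5.5, 6.0]          # energy density, kJ/g (rising over time)
losses = [200.0, 220.0]       # kJ/day lost to respiration, egestion, etc.
alg1 = consumption(w, ed, losses, use_next_day_density=False)
alg2 = consumption(w, ed, losses, use_next_day_density=True)
```

When energy density is rising, algorithm 1 values the day's growth at the stale density and so yields systematically lower consumption estimates, mirroring the underestimation the study reports.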

  17. Online monitoring of dynamic tip clearance of turbine blades in high temperature environments

    NASA Astrophysics Data System (ADS)

    Han, Yu; Zhong, Chong; Zhu, Xiaoliang; Zhe, Jiang

    2018-04-01

    Minimized tip clearance reduces the gas leakage over turbine blade tips and improves the thrust and efficiency of turbomachinery. An accurate tip clearance sensor, measuring the dynamic clearances between blade tips and the turbine case, is a critical component for tip clearance control. This paper presents a robust inductive tip clearance sensor capable of monitoring dynamic tip clearances of turbine machines in high-temperature environments and at high rotational speeds. The sensor can also self-sense the temperature at a blade tip in situ, such that the temperature effect on tip clearance measurement can be estimated and compensated. To evaluate the sensor's performance, the sensor was tested for measuring the tip clearances of turbine blades under working temperatures ranging from 700 K to 1300 K and at turbine rotational speeds ranging from 3000 to 10 000 rpm. The blade tip clearance was varied from 50 to 2000 µm. The experimental results showed that the sensor can accurately measure the blade tip clearances with a resolution of 10 µm. The capability of accurately measuring tip clearances at high temperatures (~1300 K) and high turbine rotation speeds (~30 000 rpm), along with its compact size, makes it promising for online monitoring and active control of blade tip clearances of high-temperature turbomachinery.

  18. High performance direct absorption spectroscopy of pure and binary mixture hydrocarbon gases in the 6-11 μm range

    NASA Astrophysics Data System (ADS)

    Heinrich, Robert; Popescu, Alexandru; Hangauer, Andreas; Strzoda, Rainer; Höfling, Sven

    2017-08-01

    The availability of accurate and fast hydrocarbon analyzers, capable of real-time operation while enabling feedback loops, would lead to a paradigm change in the petrochemical industry. Today, the composition of hydrocarbon process streams is measured primarily by gas chromatographs. Due to sophisticated gas sampling, these analyzers are limited in response time. As hydrocarbons absorb in the mid-infrared spectral range, the employment of fast spectroscopic systems is highly attractive due to significantly reduced maintenance costs and the capability to set up real-time process control. New developments in mid-infrared laser systems pave the way for the development of high-performance analyzers, provided that accurate spectral models are available for multi-species detection. In order to overcome current deficiencies in the availability of spectroscopic data, we developed a laser-based setup covering the 6-11 μm wavelength range. The presented system is designated as a laboratory reference system. Its spectral accuracy is at least 6.6 × 10^{-3} cm^{-1}, with a precision of 3 × 10^{-3} cm^{-1}. With a "per point" minimum detectable absorption of 1.3 × 10^{-3} cm^{-1} Hz^{-1/2}, it allows us to perform systematic measurements of hydrocarbon spectra of the first seven alkanes under conditions which are not tabulated in spectroscopic databases. We exemplify the system performance with measured direct absorption spectra of methane, propane, iso-butane, and a mixture of methane and propane.

  19. High Performance Parallel Processing (HPPP) Finite Element Simulation of Fluid Structure Interactions Final Report CRADA No. TC-0824-94-A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couch, R.; Ziegler, D. P.

    This project was a multi-partner CRADA, a partnership between Alcoa and LLNL. Alcoa developed a system of numerical simulation modules that provided accurate and efficient three-dimensional modeling of combined fluid dynamics and structural response.

  20. Quantum Electrodynamical Shifts in Multivalent Heavy Ions.

    PubMed

    Tupitsyn, I I; Kozlov, M G; Safronova, M S; Shabaev, V M; Dzuba, V A

    2016-12-16

    The quantum electrodynamics (QED) corrections are directly incorporated into the most accurate treatment of the correlation corrections for ions with complex electronic structure of interest to metrology and tests of fundamental physics. We compared the performance of four different QED potentials for various systems to assess the accuracy of QED calculations and to predict highly charged ion properties urgently needed for planning future experiments. We find that all four potentials give consistent and reliable results for the ions of interest. For the strongly bound electrons, the nonlocal potentials are more accurate than the local potential.

  1. Gender identification from high-pass filtered vowel segments: the use of high-frequency energy.

    PubMed

    Donai, Jeremy J; Lass, Norman J

    2015-10-01

    The purpose of this study was to examine the use of high-frequency information for making gender identity judgments from high-pass filtered vowel segments produced by adult speakers. Specifically, the effect of removing lower-frequency spectral detail (i.e., F3 and below) from vowel segments via high-pass filtering was evaluated. Thirty listeners (ages 18-35) with normal hearing participated in the experiment. A within-subjects design was used to measure gender identification for six 250-ms vowel segments (/æ/, /ɪ /, /ɝ/, /ʌ/, /ɔ/, and /u/), produced by ten male and ten female speakers. The results of this experiment demonstrated that despite the removal of low-frequency spectral detail, the listeners were accurate in identifying speaker gender from the vowel segments, and did so with performance significantly above chance. The removal of low-frequency spectral detail reduced gender identification by approximately 16 % relative to unfiltered vowel segments. Classification results using linear discriminant function analyses followed the perceptual data, using spectral and temporal representations derived from the high-pass filtered segments. Cumulatively, these findings indicate that normal-hearing listeners are able to make accurate perceptual judgments regarding speaker gender from vowel segments with low-frequency spectral detail removed via high-pass filtering. Therefore, it is reasonable to suggest the presence of perceptual cues related to gender identity in the high-frequency region of naturally produced vowel signals. Implications of these findings and possible mechanisms for performing the gender identification task from high-pass filtered stimuli are discussed.

  2. Analysis of the Effect of UT1-UTC on High Precision Orbit

    NASA Astrophysics Data System (ADS)

    Shin, Dongseok; Kwak, Sunghee; Kim, Tag-Gon

    1999-12-01

    As the spatial resolution of remote sensing satellites becomes higher, very accurate determination of the position of a LEO (Low Earth Orbit) satellite is in greater demand than ever. Non-symmetric Earth gravity is the major perturbation force for LEO satellites. Since the orbit propagation is performed in the celestial frame while Earth gravity is defined in the terrestrial frame, it is required to convert the coordinates of the satellite from one frame to the other accurately. Unless the coordinate conversion between the two frames is performed accurately, the orbit propagation calculates an incorrect Earth gravitational force at a specific time instant and, hence, causes errors in orbit prediction. The coordinate conversion between the two frames involves precession, nutation, Earth rotation and polar motion. Among these factors, the unpredictability and uncertainty of Earth rotation, expressed as UT1-UTC, is the largest error source. In this paper, the effect of UT1-UTC on the accuracy of LEO propagation is introduced, tested and analyzed. Considering the maximum unpredictability of UT1-UTC, 0.9 seconds, the meaningful order of the non-spherical Earth harmonic functions is derived.
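The scale of this error source can be checked with a back-of-the-envelope calculation: an unmodeled UT1-UTC offset mis-rotates the terrestrial frame (and hence the gravity field) by the Earth rotation rate times the offset. A sketch using standard constants; the 0.9 s value is the worst-case |UT1-UTC| before a leap second is inserted:

```python
import math

OMEGA_EARTH = 7.2921159e-5   # Earth rotation rate, rad/s
R_EQUATOR = 6378.137e3       # equatorial radius, m

def frame_misalignment(dut1_seconds, radius_m=R_EQUATOR):
    """Rotation-angle error and the corresponding displacement of the
    terrestrial frame at a given radius, for an unmodeled UT1-UTC offset."""
    angle = OMEGA_EARTH * dut1_seconds    # rad
    return angle, angle * radius_m        # (rad, metres)

angle, shift = frame_misalignment(0.9)
# a 0.9 s offset misplaces the gravity field by roughly 420 m at the equator,
# which is why UT1-UTC dominates the frame-conversion error budget
```

This displacement of the evaluated gravity field relative to the satellite is what corrupts the computed perturbing force and, integrated over time, the predicted orbit.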

  3. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, new integrated facial images are recovered iteratively and put into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  4. Determining the end of a musical turn: Effects of tonal cues.

    PubMed

    Hadley, Lauren V; Sturt, Patrick; Moran, Nikki; Pickering, Martin J

    2018-01-01

    Successful duetting requires that musicians coordinate their performance with their partners. In the case of turn-taking in improvised performance they need to be able to predict their partner's turn-end in order to accurately time their own entries. Here we investigate the cues used for accurate turn-end prediction in musical improvisations, focusing on the role of tonal structure. In a response-time task, participants more accurately determined the endings of (tonal) jazz than (non-tonal) free improvisation turns. Moreover, for the jazz improvisations, removing low frequency information (<2100 Hz) - and hence obscuring the pitch relationships conveying tonality - reduced response accuracy, but removing high frequency information (>2100 Hz) had no effect. Neither form of filtering affected response accuracy in the free improvisation condition. We therefore argue that tonal cues aided prediction accuracy for the jazz improvisations compared to the free improvisations. We compare our results with those from related speech research (De Ruiter et al., 2006), to draw comparisons between the structural function of tonality and linguistic syntax. Copyright © 2017. Published by Elsevier B.V.

  5. Combined fabrication technique for high-precision aspheric optical windows

    NASA Astrophysics Data System (ADS)

    Hu, Hao; Song, Ci; Xie, Xuhui

    2016-07-01

    Specifications for optical components are becoming more and more stringent with the performance improvement of modern optical systems. These strict requirements involve not only low-spatial-frequency surface accuracy and mid- and high-spatial-frequency surface errors, but also surface smoothness. This presentation mainly focuses on the fabrication process for a square aspheric window, which combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects after accurate grinding, the deterministic polishing method MRF, with its high convergence and stable material removal rate, is applied. Then the SP technology with a pseudo-random path is adopted to eliminate the mid- and high-spatial-frequency surface ripples and high-slope errors that MRF leaves behind. Additionally, the coordinate measurement method and interferometry are combined in different phases. An acid-etched method and ion beam figuring (IBF) are also investigated for observing and reducing the subsurface defects. The actual fabrication result indicates that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision and high-quality optical aspheric windows.

  6. A probabilistic and adaptive approach to modeling performance of pavement infrastructure

    DOT National Transportation Integrated Search

    2007-08-01

    Accurate prediction of pavement performance is critical to pavement management agencies. Reliable and accurate predictions of pavement infrastructure performance can save significant amounts of money for pavement infrastructure management agencies th...

  7. Performance Evaluation of a High Bandwidth Liquid Fuel Modulation Valve for Active Combustion Control

    NASA Technical Reports Server (NTRS)

    Saus, Joseph R.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.

    2012-01-01

    At the NASA Glenn Research Center, a characterization rig was designed and constructed for the purpose of evaluating high bandwidth liquid fuel modulation devices to determine their suitability for active combustion control research. Incorporated into the rig's design are features that approximate conditions similar to those that would be encountered by a candidate device if it were installed on an actual combustion research rig. The characterized dynamic performance measures obtained through testing in the rig are planned to be accurate indicators of expected performance in an actual combustion testing environment. To evaluate how well the characterization rig predicts fuel modulator dynamic performance, characterization rig data was compared with performance data for a fuel modulator candidate when the candidate was in operation during combustion testing. Specifically, the nominal and off-nominal performance data for a magnetostrictive-actuated proportional fuel modulation valve is described. Valve performance data were collected with the characterization rig configured to emulate two different combustion rig fuel feed systems. Fuel mass flows and pressures, fuel feed line lengths, and fuel injector orifice sizes were approximated in the characterization rig. Valve performance data were also collected with the valve modulating the fuel into the two combustor rigs. Comparison of the predicted and actual valve performance data shows that when the valve is operated near its design condition the characterization rig can appropriately predict the installed performance of the valve. Improvements to the characterization rig and accompanying modeling activities are underway to more accurately predict performance, especially for the devices under development to modulate fuel into the much smaller fuel injectors anticipated in future lean-burning low-emissions aircraft engine combustors.

  8. Computer Analysis Of High-Speed Roller Bearings

    NASA Technical Reports Server (NTRS)

    Coe, H.

    1988-01-01

    High-speed cylindrical roller-bearing analysis program (CYBEAN) developed to compute behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With program, accurate assessment of geometry-induced roller preload possible for variety of outer-ring and housing configurations and loading conditions. Enables detailed examination of bearing performance and permits exploration of causes and consequences of bearing skew. Provides general capability for assessment of designs of bearings supporting main shafts of engines. Written in FORTRAN IV.

  9. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' (10 W either side of average power) in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.
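The '20 watt zone' statistic is simply the fraction of power samples falling within ±10 W of the ride's mean power. A sketch with synthetic power data; the sampling rate, means and spreads are illustrative, not the study's:

```python
import numpy as np

def time_in_zone(power_w, half_width=10.0):
    """Fraction of samples within ±half_width of the ride's mean power."""
    power = np.asarray(power_w, dtype=float)
    return float(np.mean(np.abs(power - power.mean()) <= half_width))

rng = np.random.default_rng(1)
steady = rng.normal(250, 8, 1000)     # tightly paced effort, small power variation
variable = rng.normal(250, 30, 1000)  # more variable pacing
```

A tightly paced effort scores a much higher zone fraction than a variable one, which is the sense in which the fastest TT above was ridden with "a small variation in power output".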

  10. Broad screening of illicit ingredients in cosmetics using ultra-high-performance liquid chromatography-hybrid quadrupole-Orbitrap mass spectrometry with customized accurate-mass database and mass spectral library.

    PubMed

    Meng, Xianshuang; Bai, Hua; Guo, Teng; Niu, Zengyuan; Ma, Qiang

    2017-12-15

    Comprehensive identification and quantitation of 100 multi-class regulated ingredients in cosmetics was achieved using ultra-high-performance liquid chromatography (UHPLC) coupled with hybrid quadrupole-Orbitrap high-resolution mass spectrometry (Q-Orbitrap HRMS). A simple, efficient, and inexpensive sample pretreatment protocol was developed using ultrasound-assisted extraction (UAE), followed by dispersive solid-phase extraction (dSPE). The cosmetic samples were analyzed by UHPLC-Q-Orbitrap HRMS under synchronous full-scan MS and data-dependent MS/MS (full-scan MS1/dd-MS2) acquisition mode. The mass resolution was set to 70,000 FWHM (full width at half maximum) for the full-scan MS1 stage and 17,500 FWHM for the dd-MS2 stage, with experimentally measured mass deviations of less than 2 ppm (parts per million) for quasi-molecular ions and 5 ppm for characteristic fragment ions for each individual analyte. An accurate-mass database and a mass spectral library were built in house for searching the 100 target compounds. Broad screening was conducted by comparing the experimentally measured exact mass of precursor and fragment ions, retention time, isotopic pattern, and ionic ratio with the accurate-mass database and by matching the acquired MS/MS spectra against the mass spectral library. The developed methodology was evaluated and validated in terms of limits of detection (LODs), limits of quantitation (LOQs), linearity, stability, accuracy, and matrix effect. The UHPLC-Q-Orbitrap HRMS approach was applied for the analysis of the 100 target illicit ingredients in 123 genuine cosmetic samples, and exhibited great potential for high-throughput, sensitive, and reliable screening of multi-class illicit compounds in cosmetics. Copyright © 2017 Elsevier B.V. All rights reserved.
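The ppm-tolerance matching step at the heart of such accurate-mass screening can be sketched as follows. The database entries below are illustrative monoisotopic m/z values for two regulated cosmetic ingredients, not values taken from the study's in-house database:

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass deviation in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def screen(measured_mz, database, tol_ppm=2.0):
    """Return database entries whose theoretical m/z lies within tol_ppm
    of the measured precursor m/z (the 2 ppm quasi-molecular-ion tolerance)."""
    return [name for name, mz in database.items()
            if abs(ppm_error(measured_mz, mz)) <= tol_ppm]

# illustrative [M+H]+ monoisotopic masses
db = {"hydroquinone [M+H]+": 111.0441,
      "tretinoin [M+H]+": 301.2162}
hits = screen(111.0443, db)
```

A real workflow would apply the same tolerance test to fragment ions (at 5 ppm), then confirm hits against retention time, isotopic pattern and the spectral library before reporting.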

  11. High-throughput migration modelling for estimating exposure to chemicals in food packaging in screening and prioritization tools.

    PubMed

    Ernstoff, Alexi S; Fantke, Peter; Huang, Lei; Jolliet, Olivier

    2017-11-01

    Specialty software and simplified models are often used to estimate migration of potentially toxic chemicals from packaging into food. Current models, however, are not suitable for emerging applications in decision-support tools, e.g. in Life Cycle Assessment and risk-based screening and prioritization, which require rapid computation of accurate estimates for diverse scenarios. To fulfil this need, we develop an accurate and rapid (high-throughput) model that estimates the fraction of organic chemicals migrating from polymeric packaging materials into foods. Several hundred step-wise simulations optimised the model coefficients to cover a range of user-defined scenarios (e.g. temperature). The developed model, operationalised in a spreadsheet for future dissemination, nearly instantaneously estimates chemical migration, and has improved performance over commonly used model simplifications. When using measured diffusion coefficients the model accurately predicted (R^2 = 0.9, standard error (S_e) = 0.5) hundreds of empirical data points for various scenarios. Diffusion coefficient modelling, which determines the speed of chemical transfer from package to food, was a major contributor to uncertainty and dramatically decreased model performance (R^2 = 0.4, S_e = 1). In all, this study provides a rapid migration modelling approach to estimate exposure to chemicals in food packaging for emerging screening and prioritization approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
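For orientation, a classical short-time approximation for the fraction migrated from a polymer film of thickness L into a well-mixed food is m(t)/m_inf ≈ (2/L)·sqrt(D·t/π). The sketch below uses that textbook diffusion result, not the study's optimised model, and the example diffusion coefficient, contact time and film thickness are arbitrary:

```python
import math

def migrated_fraction(D_cm2_s, t_s, thickness_cm):
    """Short-time 1-D diffusion estimate of the fraction of a chemical that has
    migrated out of a polymer film of given thickness (capped at 1)."""
    frac = (2.0 / thickness_cm) * math.sqrt(D_cm2_s * t_s / math.pi)
    return min(frac, 1.0)

# e.g. D = 1e-12 cm^2/s, 10 days of contact, 50 µm film
f = migrated_fraction(1e-12, 10 * 86400, 0.005)
```

The square-root dependence on D·t is exactly why the paper finds diffusion-coefficient modelling to be the dominant source of uncertainty: an order-of-magnitude error in D shifts the migrated fraction by a factor of about three.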

  12. Quantification of methionine and selenomethionine in biological samples using multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS).

    PubMed

    Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel

    2018-05-01

    Quantification of selenated amino-acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino-acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of selenium/sulfur substitution rate in Met. Moreover the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.

  13. The circuit parameters measurement of the SABALAN-I plasma focus facility and comparison with Lee Model

    NASA Astrophysics Data System (ADS)

    Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.

    The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil was built and used to measure the time derivative of the discharge current (dI/dt). A high-pressure test was performed in this work as an alternative to the short-circuit test for determining the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters were calculated by two methods; the relative errors of the parameters determined by method I are much lower than those of method II, so method I produces the more accurate results. The high-pressure test assumes that there is no plasma motion, so that the circuit parameters may be estimated from R-L-C theory given that C0 is known. However, for a plasma focus there is significant plasma motion even at the highest permissible pressure, so the circuit parameters estimated this way are not accurate. The Lee Model code was therefore used in short-circuit mode to generate a computed current trace for fitting to the current waveform integrated from the current-derivative signal taken with the Rogowski coil. In this way the plasma dynamics are accounted for in the estimation, and the static bank parameters are determined accurately.
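For the motion-free case, the R-L-C estimate follows from standard underdamped-discharge theory: the ringing period fixes the inductance and the decay of successive current peaks fixes the resistance. A minimal sketch, assuming C0 is known and the discharge is underdamped; the numerical values are illustrative, not SABALAN-I measurements:

```python
import math

def bank_parameters(C0, T, peak_ratio):
    """Estimate inductance L0 and resistance r0 of an underdamped R-L-C
    discharge from the ringing period T and the ratio of two successive
    current peaks one period apart.

    alpha = ln(peak_ratio)/T              (damping rate, r0/(2*L0))
    w_d   = 2*pi/T                        (damped angular frequency)
    L0    = 1/(C0*(w_d**2 + alpha**2))    since w_0**2 = 1/(L0*C0) = w_d**2 + alpha**2
    """
    alpha = math.log(peak_ratio) / T
    w_d = 2.0 * math.pi / T
    L0 = 1.0 / (C0 * (w_d**2 + alpha**2))
    r0 = 2.0 * alpha * L0
    return L0, r0

# Self-check against a synthetic bank: L0 = 100 nH, r0 = 10 mOhm, C0 = 10 uF.
C0, L_true, r_true = 10e-6, 100e-9, 10e-3
alpha = r_true / (2 * L_true)
w_d = math.sqrt(1.0 / (L_true * C0) - alpha**2)
T = 2 * math.pi / w_d
L_est, r_est = bank_parameters(C0, T, math.exp(alpha * T))
```

The abstract's point is that plasma motion violates the no-motion assumption behind these formulas, which is why the Lee Model fit is needed instead.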

  14. Feedback linearization for control of air breathing engines

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen; Mattern, Duane

    1991-01-01

    The method of feedback linearization for control of the nonlinear nozzle and compressor components of an air breathing engine is presented. This method overcomes the need for a large number of scheduling variables and operating points to accurately model highly nonlinear plants. Feedback linearization also results in linear closed loop system performance simplifying subsequent control design. Feedback linearization is used for the nonlinear partial engine model and performance is verified through simulation.
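The idea can be illustrated on a toy scalar plant (not the engine model itself): for dx/dt = -x^3 + u, the control u = x^3 + v cancels the nonlinearity exactly, leaving linear closed-loop dynamics to which ordinary linear design applies. A hedged sketch:

```python
def simulate(x0, x_ref, k=5.0, dt=1e-3, steps=2000):
    """Feedback linearization of the toy plant  dx/dt = -x**3 + u.
    Choosing u = x**3 + v with v = -k*(x - x_ref) cancels the cubic term,
    so the closed loop is the linear error system de/dt = -k*e.
    Forward-Euler simulation."""
    x = x0
    for _ in range(steps):
        v = -k * (x - x_ref)   # linear outer-loop control
        u = x**3 + v           # cancel the plant nonlinearity
        x += dt * (-x**3 + u)  # plant dynamics
    return x

x_final = simulate(x0=2.0, x_ref=0.5)
```

Because the closed loop is exactly linear, one gain k suffices over the whole operating range, which is the scheduling-variable saving the abstract refers to.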

  15. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of...needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware

  16. On-the-fly Locata/inertial navigation system integration for precise maritime application

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Li, Yong; Rizos, Chris

    2013-10-01

    The application of Global Navigation Satellite System (GNSS) technology has meant that marine navigators have greater access to a more consistent and accurate positioning capability than ever before. However, GNSS may not be able to meet all emerging navigation performance requirements for maritime applications with respect to service robustness, accuracy, integrity and availability. In particular, applications in port areas (for example automated docking) and in constricted waterways, have very stringent performance requirements. Even when an integrated inertial navigation system (INS)/GNSS device is used there may still be performance gaps. GNSS signals are easily blocked or interfered with, and sometimes the satellite geometry may not be good enough for high accuracy and high reliability applications. Furthermore, the INS accuracy degrades rapidly during GNSS outages. This paper investigates the use of a portable ground-based positioning system, known as ‘Locata’, which was integrated with an INS, to provide accurate navigation in a marine environment without reliance on GNSS signals. An ‘on-the-fly’ Locata resolution algorithm that takes advantage of geometry change via an extended Kalman filter is proposed in this paper. Single-differenced Locata carrier phase measurements are utilized to achieve accurate and reliable solutions. A ‘loosely coupled’ decentralized Locata/INS integration architecture based on the Kalman filter is used for data processing. In order to evaluate the system performance, a field trial was conducted on Sydney Harbour. A Locata network consisting of eight Locata transmitters was set up near the Sydney Harbour Bridge. The experiment demonstrated that the Locata on-the-fly (OTF) algorithm is effective and can improve the system accuracy in comparison with the conventional ‘known point initialization’ (KPI) method. After the OTF and KPI comparison, the OTF Locata/INS integration is then assessed further and its performance improvement on both stand-alone OTF Locata and INS is shown. The Locata/INS integration can achieve centimetre-level accuracy for position solutions, and centimetre-per-second accuracy for velocity determination.
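A loosely coupled integration of this kind feeds position fixes from one system into a Kalman filter that corrects the other. A minimal 1-D constant-velocity Kalman filter sketch (pure Python, generic; this is not the paper's Locata/INS filter, and the noise values are assumptions):

```python
def kalman_track(zs, dt=1.0, q=1e-4, r=0.25):
    """1-D constant-velocity Kalman filter with position-only measurements zs.
    State [pos, vel]; returns the final state estimate."""
    x = [0.0, 0.0]                        # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial covariance
    for z in zs:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]):
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]                      # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# Measurements of a target moving at 1 unit/step: the filter should
# converge toward the true position and velocity.
est = kalman_track([float(t) for t in range(1, 31)])
```

In the decentralized architecture each subsystem runs its own solution and only these state-level corrections are exchanged, which is what makes the integration "loose".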

  17. Detonation Propagation in Slabs and Axisymmetric Rate Sticks

    NASA Astrophysics Data System (ADS)

    Romick, Christopher; Aslam, Tariq

    Insensitive high explosives (IHE) have many benefits; however, these IHEs exhibit longer reaction zones than more conventional high explosives (HE). This makes IHEs less ideal explosives and more susceptible to edge effects as well as other performance degradation issues, with a resulting reduction in the detonation speed within the explosive. Many HE computational models, e.g., WSD, SURF, CREST, have shock-dependent reaction rates. This dependency places a high value on having an accurate shock speed. In the common practice of shock-capturing, there is ambiguity in the shock-state due to smoothing of the shock-front. Moreover, obtaining an accurate shock speed with shock-capturing becomes prohibitively computationally expensive in multiple dimensions. The use of shock-fitting removes the ambiguity of the shock-state, as the shock is one of the boundaries. As such, the required resolution for a given error in the detonation speed is less than with shock-capturing. This allows for further insight into performance degradation. A two-dimensional shock-fitting scheme has been developed for unconfined slabs and rate sticks of HE. The HE modeling is accomplished with the Euler equations, utilizing several models with single-step irreversible kinetics in slab and rate-stick geometries. Department of Energy - LANL.

  18. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X.

    PubMed

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A; Sondhi, S L

    2017-12-13

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. © 2017 The Author(s).

  19. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.

    2017-10-01

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  20. Algorithmic Classification of Five Characteristic Types of Paraphasias.

    PubMed

    Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven

    2016-12-01

    This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
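The cascade the authors describe (a lexicality check, then a phonological-similarity check) can be sketched with a toy frequency lexicon and edit distance standing in for SUBTLEXus and the phonological-similarity algorithm; the semantic (word2vec) stage is omitted. The word lists and threshold below are hypothetical:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Toy stand-in for the SUBTLEXus frequency norms: any word present counts
# as a real word (a real system would threshold on frequency).
LEXICON = {"cat", "cap", "dog", "table", "chair"}

def classify(target, response):
    """Coarse paraphasia triage: nonword vs. phonologically related real
    word ('formal') vs. other real word (which would need the semantic
    word2vec stage to split semantic from unrelated)."""
    if response not in LEXICON:
        return "neologistic/nonword"
    if edit_distance(target, response) <= 2:
        return "formal"
    return "semantic-or-unrelated"
```

For example, `classify("cat", "cap")` is treated as a formal error, while an out-of-lexicon response is routed to the nonword branch.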

  1. Determination of rivaroxaban in patient's plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS).

    PubMed

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo Dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

    Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure rivaroxaban plasma concentration. Our method used a simple sample preparation, protein precipitation, and a fast chromatographic run. A precise and accurate method was developed, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity) and both presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric or lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interferences. The chromogenic and HPLC-MS/MS methods were highly correlated and can be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, which allowed an accurate determination of rivaroxaban at peak and trough levels.

  2. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with similar reflectance to water extents. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in operational land imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and also the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods represented different performances in various regions. Among AI methods, PSO had the best performance with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations to accurately and stably extract water extents in Landsat imagery.
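The role PSO plays here, searching for coefficients that minimise a cost, can be sketched on a generic test function; the swarm settings are illustrative, and in the study the cost would instead score a band combination against the training pixels:

```python
import random

def pso(cost, dim=2, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimisation: each particle tracks its personal
    best, the swarm tracks the global best, and velocities blend inertia with
    attraction toward both bests."""
    random.seed(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Quadratic cost with a shifted optimum at (1, -2):
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2)
```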

  3. The Effects of Transient Emotional State and Workload on Size Scaling in Perspective Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuan Q. Tran; Kimberly R. Raddatz

    2006-10-01

    Previous research has been devoted to the study of perceptual (e.g., number of depth cues) and cognitive (e.g., instructional set) factors that influence veridical size perception in perspective displays. However, considering that perspective displays have utility in high workload environments that often induce high arousal (e.g., aircraft cockpits), the present study sought to examine the effect of observers’ emotional state on the ability to perceive and judge veridical size. Within a dual-task paradigm, observers’ ability to make accurate size judgments was examined under conditions of induced emotional state (positive, negative, neutral) and high and low workload. Results showed that participants in both positive and negative induced emotional states were slower to make accurate size judgments than those not under induced emotional arousal. Results suggest that emotional state is an important factor that influences visual performance on perspective displays and is worthy of further study.

  4. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  5. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
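The high-order Gauss quadrature ingredient can be illustrated in isolation: an n-point Gauss-Legendre rule integrates smooth integrands with spectral accuracy. A generic NumPy sketch of that building block (not the paper's hybrid Jacobi-recurrence scheme for singular kernels):

```python
import numpy as np

def gauss_integrate(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule,
    mapping the nodes from the reference interval [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)   # affine map of nodes
    return 0.5 * (b - a) * np.sum(w * f(xm))

# Spectral accuracy on a smooth integrand: 8 nodes already give
# near machine precision for the integral of exp(x) on [0, 1].
approx = gauss_integrate(np.exp, 0.0, 1.0, 8)
exact = np.e - 1.0
```

For the singular fractional kernel this plain rule degrades, which is precisely why the paper combines it with Jacobi-polynomial recurrences tailored to the singularity.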

  6. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  7. Note: Evaluation of microfracture strength of diamond materials using nano-polycrystalline diamond spherical indenter

    NASA Astrophysics Data System (ADS)

    Sumiya, H.; Hamaki, K.; Harano, K.

    2018-05-01

    Ultra-hard and high-strength spherical indenters with high precision and sphericity were successfully prepared from nanopolycrystalline diamond (NPD) synthesized by direct conversion sintering from graphite under high pressure and high temperature. It was shown that highly accurate and stable microfracture strength tests can be performed on various super-hard diamond materials by using the NPD spherical indenters. It was also verified that this technique enables quantitative evaluation of the strength characteristics of single crystal diamonds and NPDs which have been quite difficult to evaluate.

  8. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  9. Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography

    PubMed Central

    Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila

    2016-01-01

    Objective: Benzodiazepines are frequently screened drugs in emergency toxicology, drugs of abuse testing, and in forensic cases. Because variations in benzodiazepine concentrations in biological samples during bleeding, postmortem changes, and redistribution can bias forensic medicine examinations, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of these main drug categories. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid–liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out using liquid–liquid extraction with n-hexane: ethyl acetate and subsequent detection by high-performance liquid chromatography method coupled to diode array detector. This method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. Linear curve for each drug was obtained within the range of 30–3000 ng/mL with coefficient of correlation higher than 0.99. Results: The limit of detection and quantitation were 30 and 100 ng/mL respectively for the four drugs. The method showed an appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepines recoveries were estimated to be over 80%. The method showed high selectivity; no additional peak due to interfering substances in samples was observed. Conclusion: The present method was selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in the forensic toxicology laboratory. PMID:27635251

  10. ICSH recommendations for assessing automated high-performance liquid chromatography and capillary electrophoresis equipment for the quantitation of HbA2.

    PubMed

    Stephens, A D; Colah, R; Fucharoen, S; Hoyer, J; Keren, D; McFarlane, A; Perrett, D; Wild, B J

    2015-10-01

    Automated high performance liquid chromatography and capillary electrophoresis are used to quantitate the proportion of Hemoglobin A2 (HbA2) in blood samples in order to enable screening and diagnosis of carriers of β-thalassemia. Since there is only a very small difference in HbA2 levels between people who are carriers and people who are not, such analyses need to be both precise and accurate. This paper examines the different parameters of such equipment and discusses how they should be assessed. © 2015 John Wiley & Sons Ltd.

  11. FlavonQ: An Automated Data Processing Tool for Profiling Flavone/flavonol Glycosides Using Ultra High-performance Liquid Chromatography Diode Array Detection and High-Resolution Accurate-Mass Mass Spectrometry (UHPLC HRAM-MS)

    USDA-ARS?s Scientific Manuscript database

    Flavonoids are well-known for their health benefits and can be found in nearly every plant. There are more than 5,000 known flavonoids existing in foods. Profiling flavonoids in natural products poses great challenges due to the diversity of flavonoids, the lack of commercially available standards, ...

  12. An optimized method for neurotransmitters and their metabolites analysis in mouse hypothalamus by high performance liquid chromatography-Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry.

    PubMed

    Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying

    2016-02-15

    Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, there are many difficulties in the detection of NTs together with their metabolites in biological samples. A new method for the detection of NTs and their metabolites by high performance liquid chromatography coupled with Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. This method represents a significant advance in the application of Q Exactive MS to quantitative analysis. It enabled rapid quantification of ten compounds within 18 min. Good linearity was obtained with correlation coefficients above 0.99. The limit of detection (LOD) and limit of quantitation (LOQ) ranges were 0.0008-0.05 nmol/mL and 0.002-25.0 nmol/mL, respectively. Precisions (relative standard deviation, RSD) were 0.36-12.70%. Recoveries ranged between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were determined with this method using Q Exactive LC-MS technology. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Machine learning bandgaps of double perovskites

    PubMed Central

    Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.

    2016-01-01

    The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
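The feature-based learning workflow can be sketched generically: assemble a feature matrix, fit a regularised regression, and check the prediction error. A toy ridge-regression sketch on synthetic data (NumPy only; the real study evaluated over 1.2 million features with far richer models, so everything below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in dataset: 200 "compounds", 5 features (playing the role of
# electronegativities, Kohn-Sham levels, etc.), with the bandgap a noisy
# linear function of the features.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -0.8, 0.3, 0.0, 2.0])
y = X @ true_w + 0.05 * rng.normal(size=200)

# Ridge regression via the regularised normal equations:
#   w = (X^T X + lam I)^{-1} X^T y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The recovered weights also hint at which features matter most, mirroring the paper's identification of Kohn-Sham levels and electronegativities as the crucial predictors.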

  14. Parametric Investigation of a High-Lift Airfoil at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Lin, John C.; Dominik, Chet J.

    1997-01-01

    A new two-dimensional, three-element, advanced high-lift research airfoil has been tested in the NASA Langley Research Center's Low-Turbulence Pressure Tunnel at a chord Reynolds number up to 1.6 x 10^7. The components of this high-lift airfoil were designed using an incompressible computational code (INS2D). The design goal was to provide high maximum-lift values while maintaining attached flow on the single-segment flap at landing conditions. The performance of the new NASA research airfoil is compared to a similar reference high-lift airfoil. On the new high-lift airfoil the effects of Reynolds number on slat and flap rigging have been studied experimentally, as well as the Mach number effects. The performance trend of the high-lift design is comparable to that predicted by INS2D over much of the angle-of-attack range. However, the code did not accurately predict the airfoil performance or the configuration-based trends near maximum lift, where the compressibility effect could play a major role.

  15. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    NASA Technical Reports Server (NTRS)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  16. Development and Validation of a Simultaneous RP-HPLCUV/DAD Method for Determination of Polyphenols in Gels Containing S. terebinthifolius Raddi (Anacardiaceae)

    PubMed Central

    Carvalho, Melina G.; Aragão, Cícero F. S; Raffin, Fernanda N.; de L. Moura, Túlio F. A.

    2017-01-01

    Topical gels containing extracts of Schinus terebinthifolius have been used to treat bacterial vaginosis. It has been reported that this species has antimicrobial, anti-inflammatory and anti-ulcerogenic properties, which can be attributed to the presence of phenolic compounds. In this work, a sensitive and selective reversed-phase HPLC-UV/DAD method for the simultaneous assay of six polyphenols that could be present in S. terebinthifolius was developed. The method was shown to be accurate and precise. Peak purity and similarity index both exceeded 0.99. Calibration curves were linear over the concentration range studied, with correlation coefficients between 0.9931 and 0.9974. This method was used to determine the polyphenol content of a hydroalcoholic extract and pharmacy-compounded vaginal gel. Although the method is useful to assess the 6 phenolic compounds, some compounds could not be detected in the products. SUMMARY A sensitive, selective, accurate and precise reversed-phase HPLC-UV/DAD method for the simultaneous assay of six polyphenols in S. terebinthifolius Raddi Abbreviations used: RP-HPLC-UV/DAD: Reverse Phase High Performance Liquid Chromatograph with Ultraviolet and Diode Array Detector, HPLC: High Performance Liquid Chromatograph, HPLC-UV: High Performance Liquid Chromatograph with Ultraviolet Detector, ANVISA: Brazilian National Health Surveillance Agency, LOD: Limit of detection, LOQ: Limit of quantitation PMID:28539726

  17. Exploratory data analysis of acceleration signals to select light-weight and accurate features for real-time activity recognition on smartphones.

    PubMed

    Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won

    2013-09-27

    Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
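
The modeling step described above, fitting an autoregressive (AR) model to a short accelerometer window (3 s at 20 Hz is only 60 samples) and using its coefficients as light-weight features, can be sketched as follows. This is a minimal illustration, assuming Yule-Walker estimation and an AR order of 4; neither detail is taken from the paper.

```python
import numpy as np

def ar_coefficients(window, order=4):
    """Estimate autoregressive (AR) coefficients for one accelerometer
    window via the Yule-Walker equations; the coefficient vector serves
    as a compact feature for activity classification."""
    x = np.asarray(window, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0], ..., r[order]
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)]) / n
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

# A 3 s data-window at a 20 Hz sampling rate is only 60 samples:
t = np.arange(60) / 20.0
window = np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.default_rng(0).standard_normal(60)
features = ar_coefficients(window, order=4)
```

The appeal of AR features in this setting is that the feature vector stays the same small size regardless of how long the window is, keeping on-device computation cheap.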

  18. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 2: Internal cell signals and utility for state estimation

    NASA Astrophysics Data System (ADS)

    Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
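
The state-estimation idea in this entry, fusing a model-based SOC prediction with an internal sensor reading via Kalman filtering, can be illustrated with a minimal scalar filter. Everything below is an illustrative sketch: the coulomb-counting process model, the linear strain-to-SOC mapping `k_strain`, and the noise variances are placeholder assumptions, and the paper's dynamic time warping step is omitted entirely.

```python
def kalman_soc(soc0, currents, strain_meas, dt=1.0, capacity_as=3600.0,
               k_strain=1.0, q=1e-6, r=1e-3):
    """Scalar Kalman filter: predict SOC by coulomb counting, then correct
    it with a strain-derived measurement z ~= k_strain * SOC. All model
    parameters here are illustrative placeholders, not from the paper."""
    soc, p = soc0, 1e-2
    estimates = []
    for i_t, z in zip(currents, strain_meas):
        # Predict: discharge current i_t (A) over dt (s), capacity in A*s
        soc -= i_t * dt / capacity_as
        p += q
        # Update with the internal (e.g. fiber-optic strain) measurement
        gain = p * k_strain / (k_strain ** 2 * p + r)
        soc += gain * (z - k_strain * soc)
        p *= (1.0 - gain * k_strain)
        estimates.append(soc)
    return estimates
```

The point of the internal measurement is the update step: when the current-integration model drifts, the strain-derived correction pulls the estimate back, which external voltage/current monitoring alone cannot do as directly.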

  19. Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic

    NASA Astrophysics Data System (ADS)

    Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.

    2017-10-01

    Ash-laden hot air from clinker in the cement industry is used to reduce the water content of coal; however, it may still contain a large amount of ash even after treatment by a preduster. This study investigated the performance of a preduster acting as a cyclone separator in the cement industry using the Computational Fluid Dynamics method. In general, the best-performing cyclone has relatively high collection efficiency with a low pressure drop. An accurate yet simple turbulence model, the Reynolds-Averaged Navier-Stokes (RANS) standard k-ε model, combined with a Lagrangian model for particle tracking, was used to solve the problem. The quantities evaluated in the simulation are the flow pattern in the cyclone, the outlet pressure, and the collection efficiency of the preduster. The applied model predicted well when compared with the most accurate empirical model and with the outlet pressure from experimental measurement.

  20. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing at the questions: (i) How well do models capture binary's late-inspiral where they lack a-priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.

  1. ³²P testing for posterior segment lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, R.S.; Howerton, E.E. Jr.

    ³²P testing, introduced to ophthalmology by Thomas et al. in 1952, has gained wide acceptance as a test for determining the benign or malignant nature of ocular lesions. With experience gained during the first decade, the test was generally thought to be accurate for larger anterior lesions but unreliable in testing smaller posterior lesions. Over the last ten years new instruments utilizing modern technologic advances have been developed. Greater understanding of the basic properties of ³²P and its behavior in benign and malignant tissue has been obtained. Accurate localization, improvements in instrument design, and newer surgical techniques have been employed. All of these factors have transformed ³²P testing into a highly accurate and reliable procedure. If done properly, the test is accurate not only for large anterior lesions but also for smaller posterior lesions. This series will verify the reliability of ³²P testing if properly performed and correctly interpreted. It will also point out the limitations and pitfalls in the procedure.

  2. Detection of malondialdehyde in processed meat products without interference from the ingredients.

    PubMed

    Jung, Samooel; Nam, Ki Chang; Jo, Cheorun

    2016-10-15

    Our aim was to develop a method for accurate quantification of malondialdehyde (MDA) in meat products. MDA content of uncured ground pork (Control); ground pork cured with sodium nitrite (Nitrite); and ground pork cured with sodium nitrite, sodium chloride, sodium pyrophosphate, maltodextrin, and a sausage seasoning (Mix) was measured by the 2-thiobarbituric acid (TBA) assay with MDA extraction by trichloroacetic acid (method A) and two high-performance liquid chromatography (HPLC) methods: i) HPLC separation of the MDA-dinitrophenyl hydrazine adduct (method B) and ii) HPLC separation of MDA (method C) after MDA extraction with acetonitrile. Methods A and B could not quantify MDA accurately in groups Nitrite and Mix. Nevertheless, MDA in groups Control, Nitrite, and Mix was accurately quantified by method C with good recovery. Therefore, direct MDA quantification by HPLC after MDA extraction with acetonitrile (method C) is useful for accurate measurement of MDA content in processed meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  4. High-definition fiber tractography of the human brain: neuroanatomical validation and neurosurgical applications.

    PubMed

    Fernandez-Miranda, Juan C; Pathak, Sudhir; Engh, Johnathan; Jarbo, Kevin; Verstynen, Timothy; Yeh, Fang-Cheng; Wang, Yibao; Mintz, Arlan; Boada, Fernando; Schneider, Walter; Friedlander, Robert

    2012-08-01

    High-definition fiber tracking (HDFT) is a novel combination of processing, reconstruction, and tractography methods that can track white matter fibers from cortex, through complex fiber crossings, to cortical and subcortical targets with subvoxel resolution. To perform neuroanatomical validation of HDFT and to investigate its neurosurgical applications. Six neurologically healthy adults and 36 patients with brain lesions were studied. Diffusion spectrum imaging data were reconstructed with a Generalized Q-Ball Imaging approach. Fiber dissection studies were performed in 20 human brains, and selected dissection results were compared with tractography. HDFT provides accurate replication of known neuroanatomical features such as the gyral and sulcal folding patterns, the characteristic shape of the claustrum, the segmentation of the thalamic nuclei, the decussation of the superior cerebellar peduncle, the multiple fiber crossing at the centrum semiovale, the complex angulation of the optic radiations, the terminal arborization of the arcuate tract, and the cortical segmentation of the dorsal Broca area. From a clinical perspective, we show that HDFT provides accurate structural connectivity studies in patients with intracerebral lesions, allowing qualitative and quantitative white matter damage assessment, aiding in understanding lesional patterns of white matter structural injury, and facilitating innovative neurosurgical applications. High-grade gliomas produce significant disruption of fibers, and low-grade gliomas cause fiber displacement. Cavernomas cause both displacement and disruption of fibers. Our HDFT approach provides an accurate reconstruction of white matter fiber tracts with unprecedented detail in both the normal and pathological human brain. Further studies to validate the clinical findings are needed.

  5. Experimental and Numerical Examination of the Thermal Transmittance of High Performance Window Frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustavsen Ph.D., Arild; Goudey, Howdy; Kohler, Christian

    2010-06-17

    While window frames typically represent 20-30 percent of the overall window area, their impact on the total window heat transfer rates may be much larger. This effect is even greater in low-conductance (highly insulating) windows which incorporate very low conductance glazings. Developing low-conductance window frames requires accurate simulation tools for product research and development. The Passivhaus Institute in Germany states that windows (glazing and frames, combined) should have U-values not exceeding 0.80 W/(m²·K). This has created a niche market for highly insulating frames, with frame U-values typically around 0.7-1.0 W/(m²·K). The U-values reported are often based on numerical simulations according to international simulation standards. It is prudent to check the accuracy of these calculation standards, especially for high performance products, before more manufacturers begin to use them to improve other product offerings. In this paper the thermal transmittance of five highly insulating window frames (three wooden frames, one aluminum frame and one PVC frame), found from numerical simulations and experiments, is compared. Hot box calorimeter results are compared with numerical simulations according to ISO 10077-2 and ISO 15099. In addition, CFD simulations have been carried out in order to use the most accurate tool available to investigate the convection and radiation effects inside the frame cavities. Our results show that available tools commonly used to evaluate window performance, based on ISO standards, give good overall agreement, but specific areas need improvement.
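
The frame U-values discussed above follow from the basic definition of thermal transmittance, which a hot box calorimeter evaluates directly: heat flow divided by area times temperature difference. A minimal sketch (the numbers in the example are illustrative, not measurement data from the paper):

```python
def frame_u_value(q_watts, area_m2, t_inside, t_outside):
    """Thermal transmittance U = Q / (A * dT) in W/(m^2*K), where Q is the
    steady-state heat flow through a specimen of area A under a
    temperature difference dT, as evaluated in a hot box calorimeter."""
    return q_watts / (area_m2 * (t_inside - t_outside))

# Illustrative numbers only: 4.2 W through 0.3 m^2 of frame at dT = 20 K
u = frame_u_value(4.2, 0.3, 20.0, 0.0)  # 0.7 W/(m^2*K)
```

A frame measured this way at 0.7 W/(m²·K) would sit at the better end of the 0.7-1.0 W/(m²·K) range quoted for highly insulating frames.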

  6. Examining Accuracy of Self-Assessment of In-Training Examination Performance in a Context of Guided Self-Assessment.

    PubMed

    Babenko, Oksana; Campbell-Scherer, Denise; Schipper, Shirley; Chmelicek, John; Barber, Tanya; Duerksen, Kimberley; Ross, Shelley

    2017-06-01

    In our family medicine residency program, we have established a culture of guided self-assessment through a systematic approach of direct observation of residents and documentation of formative feedback. We have observed that our residents have become more accurate in self-assessing their clinical performance. The objective of this study was to examine whether this improved accuracy extended to residents' self-assessment of their medical knowledge and clinical reasoning on the In-Training Examination (ITE). In November each year, residents in their first (PGY1) and second (PGY2) years of residency take the ITE (240 multiple-choice questions). Immediately before and right after taking the ITE, residents complete a questionnaire, self-assessing their knowledge and predicting their performances, overall and in eight high-level domains. Consented data from residents who took the ITE in 2009-2015 (n=380, 60% participation rate) were used in the Generalized Estimating Equations analyses. PGY2 residents outperformed PGY1 residents; Canadian medical graduates consistently outperformed international medical graduates; urban and rural residents performed similarly overall. Residents' pre-post self-assessments were in line with residents' actual performance on the overall examination and in the domains of Adult Medicine and Care of Surgical Patients. The underperforming residents in this study accurately predicted both pre- and post-ITE that they would perform poorly. Our findings suggest that the ITE operates well in our program. There was a tendency among residents in this study to appropriately adjust their self-assessment of their overall performance after completing the ITE. Irrespective of the residency year, resident self-assessment was less accurate on individual domains.

  7. An autoregressive model-based particle filtering algorithms for extraction of respiratory rates as high as 90 breaths per minute from pulse oximeter.

    PubMed

    Lee, Jinseok; Chon, Ki H

    2010-09-01

    We present particle filtering (PF) algorithms for accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
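
The core mapping used above, from the angle of the highest-magnitude AR pole to a respiratory frequency, can be sketched as follows. The sampling rate and AR coefficients in the example are hypothetical, and the paper's OPS criterion and particle-filtering refinements are omitted; this shows only the baseline pole-angle estimate.

```python
import numpy as np

def resp_rate_from_ar(a, fs):
    """Given AR coefficients a (model x[n] = sum_k a[k] * x[n-k]), locate
    the complex pole with the largest magnitude and convert its angle to
    a rate in breaths per minute. Assumes the dominant pole is complex."""
    poles = np.roots(np.concatenate(([1.0], -np.asarray(a, dtype=float))))
    poles = poles[poles.imag > 0]        # keep one of each conjugate pair
    p = poles[np.argmax(np.abs(poles))]  # highest-magnitude pole
    return np.angle(p) * fs / (2.0 * np.pi) * 60.0

# Hypothetical AR(2) with a pole pair at radius 0.95 and angle pi/4,
# on a respiration-related waveform resampled at fs = 4 Hz:
rate = resp_rate_from_ar([2 * 0.95 * np.cos(np.pi / 4), -0.95 ** 2], fs=4.0)
```

At low SNR several poles can have comparable magnitudes, which is exactly the ambiguity the paper's probabilistic OPS-PF approach is designed to resolve.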

  8. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  9. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  10. Characterizing the constitutive response and energy absorption of rigid polymeric foams subjected to intermediate-velocity impact

    DOE PAGES

    Koohbor, Behrad; Kidane, Addis; Lu, Wei-Yang

    2016-06-27

    As an optimum energy-absorbing material system, polymeric foams are needed to dissipate the kinetic energy of an impact, while maintaining the impact force transferred to the protected object at a low level. As a result, it is crucial to accurately characterize the load bearing and energy dissipation performance of foams at high strain rate loading conditions. There are certain challenges faced in the accurate measurement of the deformation response of foams due to their low mechanical impedance. In the present work, a non-parametric method is successfully implemented to enable the accurate assessment of the compressive constitutive response of rigid polymeric foams subjected to impact loading conditions. The method is based on stereovision high speed photography in conjunction with 3D digital image correlation, and allows for accurate evaluation of inertia stresses developed within the specimen during deformation time. Full-field distributions of stress, strain and strain rate are used to extract the local constitutive response of the material at any given location along the specimen axis. In addition, the effective energy absorbed by the material is calculated. Finally, results obtained from the proposed non-parametric analysis are compared with data obtained from conventional test procedures.

  11. Denaturing high-performance liquid chromatography for mutation detection and genotyping.

    PubMed

    Fackenthal, Donna Lee; Chen, Pei Xian; Howe, Ted; Das, Soma

    2013-01-01

    Denaturing high-performance liquid chromatography (DHPLC) is an accurate and efficient screening technique used for detecting DNA sequence changes by heteroduplex analysis. It can also be used for genotyping of single nucleotide polymorphisms (SNPs). The high sensitivity of DHPLC has made this technique one of the most reliable approaches to mutation analysis and, therefore, used in various areas of genetics, both in the research and clinical arena. This chapter describes the methods used for mutation detection analysis and the genotyping of SNPs by DHPLC on the WAVE™ system from Transgenomic Inc. ("WAVE" and "DNASep" are registered trademarks, and "Navigator" is a trademark, of Transgenomic, used with permission. All other trademarks are property of the respective owners).

  12. Design of a new high-performance pointing controller for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1993-01-01

    A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.

  13. SU-F-T-32: Evaluation of the Performance of a Multiple-Array-Diode Detector for Quality Assurance Tests in High-Dose-Rate Brachytherapy with Ir-192 Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpool, K; De La Fuente Herman, T; Ahmad, S

    Purpose: To evaluate the performance of a two-dimensional (2D) array-diode detector for geometric and dosimetric quality assurance (QA) tests of high-dose-rate (HDR) brachytherapy with an Ir-192 source. Methods: A phantom setup was designed that encapsulated a 2D array-diode detector (MapCheck2) and a catheter for the HDR brachytherapy Ir-192 source. This setup was used to perform both geometric and dosimetric quality assurance for the HDR Ir-192 source. The geometric tests included: (a) measurement of the position of the source and (b) spacing between different dwell positions. The dosimetric tests included: (a) linearity of output with time, (b) end effect, and (c) relative dose verification. The 2D dose distribution measured with MapCheck2 was used to perform these tests. The results of MapCheck2 were compared with the corresponding quality assurance tests performed with Gafchromic film and a well ionization chamber. Results: The position of the source and the spacing between different dwell positions were reproducible within 1 mm accuracy by measuring the position of maximal dose using MapCheck2, in contrast to the film, which showed a blurred image of the dwell positions due to limited film sensitivity to irradiation. The linearity of the dose with dwell times measured with MapCheck2 was superior to the linearity measured with the ionization chamber due to the higher signal-to-noise ratio of the diode readings. MapCheck2 provided a more accurate measurement of the end effect, with uncertainty < 1.5%, in comparison with the ionization chamber uncertainty of 3%. Although MapCheck2 does not provide absolute calibration dosimetry for the activity of the source, it provides an accurate tool for relative dose verification in HDR brachytherapy. Conclusion: The 2D array-diode detector provides a practical, compact and accurate tool to perform quality assurance for HDR brachytherapy with an Ir-192 source. The diodes in MapCheck2 have high radiation sensitivity and linearity that is superior to the Gafchromic films and ionization chamber used for geometric and dosimetric QA in HDR brachytherapy, respectively.

  14. Engaging the Workforce - 12347

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaden, Michael D.; Wastren Advantage Inc.

    2012-07-01

    Likert, Covey, and a number of others studying and researching highly effective organizations have found that performing functions such as problem-solving, decision-making, safety analysis, planning, and continuous improvement as close to the working floor level as possible results in greater buy-in, feelings of ownership by the workers, and more effective use of resources. Empowering the workforce does several things: 1) people put more effort and thought into work for which they feel ownership, 2) the information they use for planning, analysis, problem-solving, and decision-making is more accurate, 3) these functions are performed in a more timely manner, and 4) the results of these functions have more credibility with those who must implement them. This act of delegation and empowerment also allows management more time to perform functions they are uniquely trained and qualified to perform, such as strategic planning, staff development, succession planning, and organizational improvement. To achieve this state in an organization, however, requires a very open, transparent culture in which accurate, timely, relevant, candid, and inoffensive communication flourishes, a situation that does not currently exist in a majority of organizations. (authors)

  15. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to that of conventional CBR while incurring significantly lower computational cost.
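
The cluster-then-retrieve idea described above can be sketched as follows. This is not the paper's CM-CBR algorithm (its cluster-merging step is omitted); it is a minimal illustration in which a tiny k-means partitions the case base and retrieval scans only the cluster nearest the target, instead of the whole case base.

```python
import numpy as np

def cluster_cases(cases, k=2, iters=20):
    """Tiny k-means (first-k initialization) partitioning the case base.
    A stand-in for the clustering step only."""
    centroids = cases[:k].astype(float).copy()
    labels = np.zeros(len(cases), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(cases[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = cases[labels == j].mean(axis=0)
    return centroids, labels

def retrieve(target, cases, centroids, labels, n_neighbors=3):
    """Scan only the cluster whose centroid is nearest the target,
    cutting retrieval time relative to a full case-base scan."""
    c = int(np.argmin(np.linalg.norm(centroids - target, axis=1)))
    group = np.where(labels == c)[0]
    d = np.linalg.norm(cases[group] - target, axis=1)
    return group[np.argsort(d)[:n_neighbors]]
```

The accuracy risk the paper addresses is visible here: a target near a cluster boundary may have true nearest neighbors in an adjacent cluster that this retrieval never examines.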

  16. Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach

    PubMed Central

    Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over four taxa; hence quartet-based supertree methods combine many four-taxon unrooted trees into a single, coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474

  17. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    PubMed Central

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-01-01

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. PMID:18502191
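
The matching step at the heart of the AMT tag approach, identifying a feature when its accurate mass and elution time both fall within tolerances of a database entry, can be sketched as below. The tolerance values and the database entries in the example are illustrative assumptions, not taken from the paper.

```python
def match_amt(feature, database, ppm_tol=5.0, time_tol=0.5):
    """Match one LC-MS feature (monoisotopic mass, elution time) against an
    AMT tag database: a hit must agree in mass within ppm_tol parts per
    million and in elution time within time_tol time units."""
    mass, time = feature
    return [name for name, db_mass, db_time in database
            if abs(mass - db_mass) / db_mass * 1e6 <= ppm_tol
            and abs(time - db_time) <= time_tol]

# Hypothetical database entries (name, mass, time), not from the paper:
db = [("lipid A", 759.5778, 22.4), ("lipid B", 743.5465, 21.1)]
hits = match_amt((759.5781, 22.3), db)
```

Because matching needs only an accurate mass and a time, the expensive MS/MS identification is done once to populate the database, after which high-throughput LC-MS runs can be identified by lookup alone.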

  18. High-throughput analysis of sulfatides in cerebrospinal fluid using automated extraction and UPLC-MS/MS.

    PubMed

    Blomqvist, Maria; Borén, Jan; Zetterberg, Henrik; Blennow, Kaj; Månsson, Jan-Eric; Ståhlman, Marcus

    2017-07-01

    Sulfatides (STs) are a group of glycosphingolipids that are highly expressed in brain. Due to their importance for normal brain function and their potential involvement in neurological diseases, development of accurate and sensitive methods for their determination is needed. Here we describe a high-throughput oriented and quantitative method for the determination of STs in cerebrospinal fluid (CSF). The STs were extracted using a fully automated liquid/liquid extraction method and quantified using ultra-performance liquid chromatography coupled to tandem mass spectrometry. With the high sensitivity of the developed method, quantification of 20 ST species from only 100 μl of CSF was performed. Validation of the method showed that the STs were extracted with high recovery (90%) and could be determined with low inter- and intra-day variation. Our method was applied to a patient cohort of subjects with an Alzheimer's disease biomarker profile. Although the total ST levels were unaltered compared with an age-matched control group, we show that the ratio of hydroxylated/nonhydroxylated STs was increased in the patient cohort. In conclusion, we believe that the fast, sensitive, and accurate method described in this study is a powerful new tool for the determination of STs in clinical as well as preclinical settings. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.

  19. A hybrid method combining the surface integral equation method and ray tracing for the numerical simulation of high frequency diffraction involved in ultrasonic NDT

    NASA Astrophysics Data System (ADS)

    Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.

    2018-05-01

    Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. By exploiting surface measurements of the echoes of ultrasonic incident waves after their propagation through the medium, it can detect potential defects (cracks and inhomogeneities) and characterize the medium. The understanding and interpretation of these experimental measurements are performed with the help of numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances but do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulating diffraction phenomena in acoustic or elastodynamic media. We describe its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments are discussed.

  20. Evaluation of a new disposable silicon limbal relaxing incision knife by experienced users.

    PubMed

    Albanese, John; Dugue, Geoffrey; Parvu, Valentin; Bajart, Ann M; Lee, Edwin

    2009-12-21

    Previous research has suggested that the silicon BD Atomic Edge knife has superior performance characteristics when compared to a metal knife and performance similar to a diamond knife when making various incisions. This study was designed to determine whether a silicon accurate depth knife has equivalent performance characteristics when compared to a diamond limbal relaxing incision (LRI) knife and superior performance characteristics when compared to a steel accurate depth knife when creating limbal relaxing incisions. Sixty-five ophthalmic surgeons with limbal relaxing incision experience created limbal relaxing incisions in ex-vivo porcine eyes with silicon and steel accurate depth knives and diamond LRI knives. The ophthalmic surgeons rated multiple performance characteristics of the knives on Visual Analog Scales. The observed differences between the silicon knife and the diamond knife were found to be insignificant; the mean ratio between the performance of the silicon knife and the diamond knife was shown to be greater than 90% (with 95% confidence). The silicon knife's mean performance was significantly higher than that of the steel knife for all characteristics (p < .05). For experienced users, the silicon accurate depth knife was found to be equivalent in performance to the diamond LRI knife and superior to the steel accurate depth knife when making limbal relaxing incisions in ex vivo porcine eyes. Disposable silicon LRI knives may be an alternative to diamond LRI knives.

  1. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and of the material properties of thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  2. Atomic force microscopy characterization of cellulose nanocrystals

    Treesearch

    Roya R. Lahiji; Xin Xu; Ronald Reifenberger; Arvind Raman; Alan Rudie; Robert J. Moon

    2010-01-01

    Cellulose nanocrystals (CNCs) are gaining interest as a “green” nanomaterial with superior mechanical and chemical properties for high-performance nanocomposite materials; however, there is a lack of accurate material property characterization of individual CNCs. Here, a detailed study of the topography, elastic and adhesive properties of individual wood-derived CNCs...

  3. A Comparison of Self versus Tutor Assessment among Hungarian Undergraduate Business Students

    ERIC Educational Resources Information Center

    Kun, András István

    2016-01-01

    This study analyses the self-assessment behaviour and efficiency of 163 undergraduate business students from Hungary. Using various statistical methods, the results support the hypothesis that high-achieving students are more accurate in their pre- and post-examination self-assessments, and also less likely to overestimate their performance, and,…

  4. Art Supports Reading Comprehension

    ERIC Educational Resources Information Center

    Wurst, Douglas; Jones, Dana; Moore, Jim

    2005-01-01

    State-mandated, high-stakes testing is the primary means by which schools are judged. Whether this is a fair and accurate way of judging the performance of schools may remain in debate for a long time. Some school districts have gone so far as reducing or eliminating "special" classes--in particular art and music. Art teachers can help prepare…

  5. Three-Dimensional Innervation Zone Imaging from Multi-Channel Surface EMG Recordings.

    PubMed

    Liu, Yang; Ning, Yong; Li, Sheng; Zhou, Ping; Rymer, William Z; Zhang, Yingchun

    2015-09-01

    There is an unmet need to accurately identify the locations of innervation zones (IZs) of spastic muscles, so as to guide botulinum toxin (BTX) injections for the best clinical outcome. A novel 3D IZ imaging (3DIZI) approach was developed by combining the bioelectrical source imaging and surface electromyogram (EMG) decomposition methods to image the 3D distribution of IZs in the target muscles. Surface IZ locations of motor units (MUs), identified from the bipolar map of their MU action potentials (MUAPs), were employed as prior knowledge in the 3DIZI approach to improve its imaging accuracy. The performance of the 3DIZI approach was first optimized and evaluated via a series of designed computer simulations, and then validated with the intramuscular EMG data, together with simultaneously recorded 128-channel surface EMG data from the biceps of two subjects. Both simulation and experimental validation results demonstrate the high performance of the 3DIZI approach in accurately reconstructing the distributions of IZs and the dynamic propagation of internal muscle activities in the biceps from high-density surface EMG recordings.

  6. Long, elliptically bent, active X-ray mirrors with slope errors <200 nrad.

    PubMed

    Nistea, Ioana T; Alcock, Simon G; Kristiansen, Paw; Young, Adam

    2017-05-01

    Actively bent X-ray mirrors are important components of many synchrotron and X-ray free-electron laser beamlines. A high-quality optical surface and good bending performance are essential to ensure that the X-ray beam is accurately focused. Two elliptically bent X-ray mirror systems from FMB Oxford were characterized in the optical metrology laboratory at Diamond Light Source. A comparison of Diamond-NOM slope profilometry and finite-element analysis is presented to investigate how the 900 mm-long mirrors sag under gravity, and how this deformation can be adequately compensated using a single, spring-loaded compensator. It is shown that two independent mechanical actuators can accurately bend the trapezoidal substrates to a range of elliptical profiles. State-of-the-art residual slope errors of <200 nrad r.m.s. are achieved over the entire elliptical bending range. High levels of bending repeatability (ΔR/R = 0.085% and 0.156% r.m.s. for the two bending directions) and stability over 24 h (ΔR/R = 0.07% r.m.s.) provide reliable beamline performance.

  7. THREE-DIMENSIONAL INNERVATION ZONE IMAGING FROM MULTI-CHANNEL SURFACE EMG RECORDINGS

    PubMed Central

    LIU, YANG; NING, YONG; LI, SHENG; ZHOU, PING; RYMER, WILLIAM Z.; ZHANG, YINGCHUN

    2017-01-01

    There is an unmet need to accurately identify the locations of innervation zones (IZs) of spastic muscles, so as to guide botulinum toxin (BTX) injections for the best clinical outcome. A novel 3-dimensional IZ imaging (3DIZI) approach was developed by combining the bioelectrical source imaging and surface electromyogram (EMG) decomposition methods to image the 3D distribution of IZs in the target muscles. Surface IZ locations of motor units (MUs), identified from the bipolar map of their motor unit action potentials (MUAPs), were employed as prior knowledge in the 3DIZI approach to improve its imaging accuracy. The performance of the 3DIZI approach was first optimized and evaluated via a series of designed computer simulations, and then validated with the intramuscular EMG data, together with simultaneously recorded 128-channel surface EMG data from the biceps of two subjects. Both simulation and experimental validation results demonstrate the high performance of the 3DIZI approach in accurately reconstructing the distributions of IZs and the dynamic propagation of internal muscle activities in the biceps from high-density surface EMG recordings. PMID:26160432

  8. In vitro metabolism study of Strychnos alkaloids using high-performance liquid chromatography combined with hybrid ion trap/time-of-flight mass spectrometry.

    PubMed

    Tian, Ji-Xin; Peng, Can; Xu, Lei; Tian, Yuan; Zhang, Zun-Jian

    2013-06-01

    In this report, the in vitro metabolism of Strychnos alkaloids was investigated using liquid chromatography/high-resolution mass spectrometry for the first time. Strychnine and brucine were selected as model compounds to determine the universal biotransformations of the Strychnos alkaloids in rat liver microsomes. The incubation mixtures were separated on a bidentate-C18 column and then analyzed by on-line ion trap/time-of-flight mass spectrometry. With the assistance of the mass defect filtering technique, full-scan accurate mass datasets were processed for the discovery of the related metabolites. The structural elucidation of these metabolites was achieved by comparing the changes in accurate molecular masses, calculating chemical compositions using Formula Predictor software, and defining sites of biotransformation based upon accurate MS(n) spectral information. As a result, 31 metabolites were identified, of which 26 were reported for the first time. These biotransformations included hydroxylation, N-oxidation, epoxidation, methylation, dehydrogenation, de-methoxylation, and O-demethylation, as well as hydrolysis reactions. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Mapping Sub-Saharan African Agriculture in High-Resolution Satellite Imagery with Computer Vision & Machine Learning

    NASA Astrophysics Data System (ADS)

    Debats, Stephanie Renee

    Smallholder farms dominate in many parts of the world, including Sub-Saharan Africa. These systems are characterized by small, heterogeneous, and often indistinct field patterns, requiring a specialized methodology to map agricultural landcover. In this thesis, we developed a benchmark labeled data set of high-resolution satellite imagery of agricultural fields in South Africa. We presented a new approach to mapping agricultural fields, based on efficient extraction of a vast set of simple, highly correlated, and interdependent features, followed by a random forest classifier. The algorithm achieved similarly high performance across agricultural types, including spectrally indistinct smallholder fields, and demonstrated the ability to generalize across large geographic areas. In sensitivity analyses, we determined that multi-temporal images provided greater performance gains than the addition of multi-spectral bands. We also demonstrated how active learning can be incorporated in the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved similar performance to an algorithm trained with randomly selected data, but with 62% fewer data samples. This thesis furthers the goal of providing accurate agricultural landcover maps at a scale that is relevant for the dominant smallholder class. Accurate maps are crucial for monitoring and promoting agricultural production. Furthermore, improved agricultural landcover maps will aid a host of other applications, including landcover change assessments, cadastral surveys to strengthen smallholder land rights, and constraints for crop modeling and famine prediction.
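    The pipeline described in this record (per-pixel features fed to a random forest, then uncertainty-driven active learning to choose which samples to hand-label) can be sketched in miniature. The data, feature count, and query budget below are synthetic stand-ins, not the thesis's actual NAIP features or crowdsourcing grid:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel image features (e.g. spectral bands and
# texture statistics); label 1 = agricultural field, 0 = background.
n = 2000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_pool, y_train, y_pool = train_test_split(
    X, y, test_size=0.8, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Active-learning step: rank the unlabeled pool by classifier uncertainty
# (smallest gap between the two class probabilities) and queue the most
# ambiguous samples for hand-labeling, instead of labeling at random.
proba = clf.predict_proba(X_pool)
margin = np.abs(proba[:, 1] - proba[:, 0])
query_idx = np.argsort(margin)[:50]       # 50 most uncertain samples to label next

print("pool accuracy:", round(clf.score(X_pool, y_pool), 3))
```

Labeling only the low-margin samples is what lets an active-learning loop match random sampling with far fewer labels, as the record reports.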

  10. WE-F-16A-04: Micro-Irradiator Treatment Verification with High-Resolution 3D-Printed Rodent-Morphic Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, S; Belley, M; Benning, R

    2014-06-15

    Purpose: Pre-clinical micro-radiation therapy studies often utilize very small beams (∼0.5-5 mm) and require accurate dose delivery in order to effectively investigate treatment efficacy. Here we present a novel high-resolution absolute 3D dosimetry procedure, capable of ∼100-micron isotropic dosimetry in anatomically accurate rodent-morphic phantoms. Methods: Anatomically accurate rat-shaped 3D dosimeters were made using 3D printing techniques from outer body contours and spinal contours outlined on CT. The dosimeters were made from a radiochromic plastic material, PRESAGE, and incorporated high-Z PRESAGE inserts mimicking the spine. A simulated 180-degree spinal arc treatment was delivered through a 2-step process: (i) cone-beam-CT image-guided positioning was performed to precisely position the rat dosimeter for treatment on the XRad225 small animal irradiator, then (ii) a simulated spine treatment was delivered with a 180-degree arc and a 20 mm x 10 mm cone at 225 kVp. The dose distribution was determined from the optical density change using a high-resolution in-house optical-CT system. Absolute dosimetry was enabled through calibration against a novel nano-particle scintillation detector positioned in a channel in the center of the distribution. Results: Sufficient contrast between regular PRESAGE (tissue equivalent) and high-Z PRESAGE (spinal insert) was observed to enable highly accurate image-guided alignment and targeting. The PRESAGE was found to have a linear optical density (OD) change with respect to dose (R² = 0.9993). The absolute dose for a 360-second irradiation at isocenter was found to be 9.21 Gy when measured with OD change and 9.4 Gy with the nano-particle detector, an agreement within 2%. The 3D dose distribution was measured at 500-micron resolution. Conclusion: This work demonstrates, for the first time, the feasibility of accurate absolute 3D dose measurement in anatomically accurate rat phantoms containing variable-density PRESAGE material (tissue equivalent and bone equivalent). This method enables precise treatment verification of micro-radiation therapies and enhances the robustness of tumor radio-response studies. This work was supported by NIH R01CA100835.

  11. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRFs), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  12. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
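    The core damped least-squares iteration behind Levenberg-Marquardt fitters such as GPU-LMFit can be sketched on the CPU with NumPy. The Gaussian-peak model below is an illustrative stand-in for a localization-microscopy per-pixel fit, not GPU-LMFit's actual API or kernel:

```python
import numpy as np

def levenberg_marquardt(residual_fn, jac_fn, p0, n_iter=50, lam=1e-3):
    """Minimal damped least-squares (Levenberg-Marquardt) iteration."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(p)
        J = jac_fn(p)
        A = J.T @ J
        g = J.T @ r
        # Marquardt scaling: damp the normal equations along their diagonal.
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        if np.sum(residual_fn(p - step) ** 2) < np.sum(r ** 2):
            p, lam = p - step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                     # reject step, damp harder
    return p

# Fit a 1-D Gaussian peak, model(x) = a*exp(-(x-mu)**2/(2*s**2)), the kind of
# small per-pixel model fitted en masse in localization microscopy.
x = np.linspace(-3.0, 3.0, 61)
a0, mu0, s0 = 2.0, 0.4, 0.8
y = a0 * np.exp(-(x - mu0) ** 2 / (2 * s0 ** 2))   # noiseless synthetic data

def resid(p):
    a, mu, s = p
    return a * np.exp(-(x - mu) ** 2 / (2 * s ** 2)) - y

def jac(p):
    a, mu, s = p
    e = np.exp(-(x - mu) ** 2 / (2 * s ** 2))
    return np.column_stack([e,
                            a * e * (x - mu) / s ** 2,
                            a * e * (x - mu) ** 2 / s ** 3])

p_fit = levenberg_marquardt(resid, jac, p0=[1.0, 0.0, 1.0])
```

A GPU implementation parallelizes exactly this loop across thousands of independent pixel fits; the per-fit arithmetic is unchanged.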

  13. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 1

    DTIC Science & Technology

    2010-01-01

    Researchers in AHPCRC Technical Area 4 focus on improving processes for developing scalable, accurate parallel programs that are easily ported from one… Virtual levels in Sequoia represent an abstract memory hierarchy without specifying data transfer mechanisms, giving the…

  14. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-01-01

    Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix is significantly reduced, usually seen as substantial losses in matrix-dominated properties such as compression and shear strength. Voids in composite materials are areas devoid of the composite components: matrix and fibers. Accurately determining the characteristics of voids is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, which has required manually performing the segmentation based on the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved able to overcome the difficulty of differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.

  15. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.

  16. Nonlinear stability and control study of highly maneuverable high performance aircraft

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.

    1993-01-01

    This project is intended to research and develop new nonlinear methodologies for the control and stability analysis of high-performance, high angle-of-attack aircraft such as HARV (F18). Past research (reported in our Phase 1, 2, and 3 progress reports) is summarized, and more details of the final Phase 3 research are provided. While the research emphasis is on nonlinear control, other tasks such as associated model development, system identification, stability analysis, and simulation are performed in some detail as well. An overview is provided of various models that were investigated for different purposes, such as an approximate model reference for control adaptation, as well as another model for accurate rigid-body longitudinal motion. Only a very cursory analysis was made relative to type 8 (flexible body dynamics). Standard nonlinear longitudinal airframe dynamics (type 7) with the available modified F18 stability derivatives, thrust vectoring, actuator dynamics, and control constraints are utilized for simulated flight evaluation of derived controller performance in all cases studied.

  17. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations, with the free surface tracked by the volume of fluid method and turbulence closed with the Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
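    The bubble-dynamics model referenced in this record can be illustrated by integrating the classical Rayleigh-Plesset equation for a single gas bubble in water. This is a sketch of the standard equation, not the paper's modified formulation, and the constants and initial condition below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Rayleigh-Plesset equation for a spherical gas bubble in a liquid:
#   R*R'' + 1.5*R'**2 = (p_B(R) - p_inf)/rho - 4*mu*R'/(rho*R) - 2*sigma/(rho*R)
# All constants below are illustrative values for water at ~20 C.
rho, sigma, mu = 998.0, 0.0725, 1.0e-3     # density, surface tension, viscosity
p_inf, p_v = 101325.0, 2339.0              # ambient and vapour pressure [Pa]
R0, kappa = 50e-6, 1.4                     # equilibrium radius, polytropic exponent
p_g0 = p_inf + 2 * sigma / R0 - p_v        # gas pressure balancing the bubble at R0

def rhs(t, y):
    R, Rdot = y
    p_B = p_v + p_g0 * (R0 / R) ** (3 * kappa)   # pressure at the bubble wall
    Rddot = ((p_B - p_inf) / rho
             - 4 * mu * Rdot / (rho * R)
             - 2 * sigma / (rho * R)
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

# Start slightly compressed: the bubble should oscillate about R0.
sol = solve_ivp(rhs, (0.0, 50e-6), [0.9 * R0, 0.0], rtol=1e-8, atol=1e-12)
```

In a CFD solver this equation is evaluated per cell to drive the vapour-fraction source terms; here it is integrated standalone to show the free oscillation of the bubble radius about its equilibrium.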

  18. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  19. Testing approximations for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
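    The Zel'dovich approximation amounts to moving particles ballistically along their initial displacement field, x(q, D) = q + D s(q). A minimal 1-D sketch (single-mode initial perturbation assumed, not from the paper) shows its agreement with linear theory at small growth factor and the density blow-up at shell crossing:

```python
import numpy as np

# 1-D Zel'dovich approximation: particles move along their initial
# displacement field, x(q, D) = q + D*s(q), where ds/dq = -delta_0(q)
# encodes the linear initial density contrast delta_0.
A, k = 0.5, 2 * np.pi                     # single-mode perturbation (assumed)
q = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = -(A / k) * np.sin(k * q)              # so that -ds/dq = A*cos(k*q) = delta_0

def density_contrast(D):
    # Mass conservation: 1 + delta = |dx/dq|**(-1) = 1/|1 - D*A*cos(k*q)|,
    # evaluated at each particle's Lagrangian coordinate q.
    return 1.0 / np.abs(1.0 - D * A * np.cos(k * q)) - 1.0

# Small growth factor: Zel'dovich reduces to linear theory, delta = D*delta_0.
D = 0.1
delta_zel = density_contrast(D)
delta_lin = D * A * np.cos(k * q)
print("max |Zel'dovich - linear|:", np.max(np.abs(delta_zel - delta_lin)))

# As D*A -> 1 the map x(q) develops a caustic (shell crossing) and the
# density at the densest particle diverges; truncating highly nonlinear
# modes, as in the paper, delays exactly this breakdown.
print("peak 1+delta at D = 1.9:", 1.0 + density_contrast(1.9).max())
```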

  20. 42 CFR 493.1254 - Standard: Maintenance and function checks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ensures equipment, instrument, and test system performance that is necessary for accurate and reliable... equipment, instrument, and test system performance that is necessary for accurate and reliable test results...

  1. An optimized method for the accurate determination of patulin in apple products by isotope dilution-liquid chromatography/mass spectrometry.

    PubMed

    Seo, Miyeong; Kim, Byungjoo; Baek, Song-Yee

    2015-07-01

    Patulin, a mycotoxin produced by several molds in fruits, has been frequently detected in apple products. Therefore, regulatory bodies have established recommended maximum permitted patulin concentrations for each type of apple product. Although several analytical methods have been adopted to determine patulin in food, quality control of patulin analysis is not easy, as reliable certified reference materials (CRMs) are not available. In this study, as a part of a project for developing CRMs for patulin analysis, we developed isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC/MS/MS) as a higher-order reference method for the accurate value-assignment of CRMs. (13)C7-patulin was used as internal standard. Samples were extracted with ethyl acetate to improve recovery. For further sample cleanup with solid-phase extraction (SPE), the HLB SPE cartridge was chosen after comparing with several other types of SPE cartridges. High-performance liquid chromatography was performed on a multimode column for proper retention and separation of highly polar and water-soluble patulin from sample interferences. Sample extracts were analyzed by LC/MS/MS with electrospray ionization in negative ion mode with selected reaction monitoring of patulin and (13)C7-patulin at m/z 153→m/z 109 and m/z 160→m/z 115, respectively. The validity of the method was tested by measuring gravimetrically fortified samples of various apple products. In addition, the repeatability and the reproducibility of the method were tested to evaluate the performance of the method. The method was shown to provide accurate measurements in the 3-40 μg/kg range with a relative expanded uncertainty of around 1%.
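    The isotope-dilution principle behind the method is ratio-based quantification against the (13)C7-labelled internal standard spiked at a known mass. A minimal sketch with hypothetical peak areas and spike mass (not the paper's data) shows the arithmetic:

```python
# Single-point isotope dilution: the analyte amount follows from the measured
# area ratio against an isotopically labelled internal standard spiked at a
# known mass. All numeric values below are hypothetical illustrations.
def id_ms_concentration(area_analyte, area_istd, mass_istd_ng, sample_mass_g,
                        response_factor=1.0):
    """Return analyte concentration in ug/kg (equivalently ng/g)."""
    mass_analyte_ng = response_factor * (area_analyte / area_istd) * mass_istd_ng
    return mass_analyte_ng / sample_mass_g

# A 2:1 analyte/internal-standard area ratio against a 50 ng spike in 5 g
# of sample corresponds to 100 ng of analyte, i.e. 20 ug/kg.
c = id_ms_concentration(area_analyte=8.2e5, area_istd=4.1e5,
                        mass_istd_ng=50.0, sample_mass_g=5.0)
print(f"{c:.1f} ug/kg")   # 20.0 ug/kg
```

Because the (13)C7 standard co-elutes and co-ionizes with patulin, losses during extraction and matrix suppression cancel in the ratio, which is what makes the method suitable for value-assigning reference materials.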

  2. SU-F-T-486: A Simple Approach to Performing Light Versus Radiation Field Coincidence Quality Assurance Using An Electronic Portal Imaging Device (EPID)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herchko, S; Ding, G

    2016-06-15

    Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software package, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis of the measured radiation field using the tolerances recommended in the AAPM Task Group #142 report. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured with the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Because of the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.

  3. Evaluating Thermodynamic Integration Performance of the New Amber Molecular Dynamics Package and Assess Potential Halogen Bonds of Enoyl-ACP Reductase (FabI) Benzimidazole Inhibitors

    PubMed Central

    Su, Pin-Chih; Johnson, Michael E.

    2015-01-01

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In an effort to develop an efficient and accurate TI protocol for a FabI inhibitor lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, numbers of intermediate states and transformation steps, and Lennard-Jones and Coulomb Softcore potential parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and to compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA, from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger-scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights and suggest that the para-halogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen-bond-favoring enzyme. PMID:26666582

  4. Evaluating thermodynamic integration performance of the new amber molecular dynamics package and assess potential halogen bonds of enoyl-ACP reductase (FabI) benzimidazole inhibitors.

    PubMed

    Su, Pin-Chih; Johnson, Michael E

    2016-04-05

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In an effort to develop an efficient and accurate TI protocol for a FabI inhibitor lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, numbers of intermediate states and transformation steps, and Lennard-Jones and Coulomb Softcore potential parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and to compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA, from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger-scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights and suggest that the para-halogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen-bond-favoring enzyme. © 2015 Wiley Periodicals, Inc.
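    As background to the protocol comparison above: TI estimates a free energy difference by integrating the ensemble average ⟨∂U/∂λ⟩ over the coupling parameter λ from 0 to 1. A minimal numeric sketch with a trapezoid rule over six λ states; the ⟨∂U/∂λ⟩ values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def ti_free_energy(lambdas, dudl_means):
    """Trapezoid-rule estimate of ΔG = ∫₀¹ ⟨∂U/∂λ⟩ dλ from per-window averages."""
    l, d = np.asarray(lambdas), np.asarray(dudl_means)
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(l)))

# Hypothetical ⟨∂U/∂λ⟩ averages (kcal/mol) at 6 λ states, as in a one-step TI run
lmb = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
dudl = np.array([2.1, 1.4, 0.9, 0.3, -0.5, -1.2])
print(round(ti_free_energy(lmb, dudl), 3))  # 0.51
```

    Production codes integrate the same quantity, with the Softcore potentials ensuring ⟨∂U/∂λ⟩ stays finite at the endpoints.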

  5. Real-time assessment of mental workload using psychophysiological measures and artificial neural networks.

    PubMed

    Wilson, Glenn F; Russell, Christopher A

    The functional state of the human operator is critical to optimal system performance. Degraded states of operator functioning can lead to errors and overall suboptimal system performance. Accurate assessment of operator functional state is crucial to the successful implementation of an adaptive aiding system. One method of determining operators' functional state is by monitoring their physiology. In the present study, artificial neural networks using physiological signals were used to continuously monitor, in real time, the functional state of 7 participants while they performed the Multi-Attribute Task Battery with two levels of task difficulty. Six channels of brain electrical activity and eye, heart and respiration measures were evaluated online. The accuracy of the classifier was determined to test its utility as an online measure of operator state. The mean classification accuracies were 85%, 82%, and 86% for the baseline, low task difficulty, and high task difficulty conditions, respectively. The high levels of accuracy suggest that these procedures can provide accurate estimates of operator functional state that can be used to drive adaptive aiding. The relative contribution of each of the 43 psychophysiological features was also determined. Actual or potential applications of this research include test and evaluation and adaptive aiding implementation.
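    The study's actual network architecture and 43-feature input are not specified in the abstract, so the sketch below trains a minimal logistic-regression "network" on synthetic two-class feature data purely to illustrate the workflow of deriving a workload classifier and reporting its accuracy. The feature distributions are invented stand-ins for quantities like EEG band powers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for psychophysiological features: two workload classes
# drawn from shifted Gaussians -- illustrative only, not the study's data.
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(1.5, 1.0, (200, 4))])
y = np.repeat([0, 1], 200)

# Minimal logistic-regression "network" trained by gradient descent
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid output
    w -= 0.1 * X.T @ (p - y) / len(y)          # gradient step on weights
    b -= 0.1 * np.mean(p - y)                  # gradient step on bias

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

    A real-time system would apply the trained weights to a sliding window of features and feed the predicted state to the adaptive aiding logic.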

  6. Accuracy of the Velotron ergometer and SRM power meter.

    PubMed

    Abbiss, C R; Quod, M J; Levin, G; Martin, D T; Laursen, P B

    2009-02-01

    The purpose of this study was to determine the accuracy of the Velotron cycle ergometer and the SRM power meter using a dynamic calibration rig over a range of exercise protocols commonly applied in laboratory settings. These trials included two sustained constant power trials (250 W and 414 W), two incremental power trials and three high-intensity interval power trials. To further compare the two systems, 15 subjects performed three dynamic 30 km performance time trials. The Velotron and SRM displayed accurate measurements of power during both constant power trials (<1% error). However, during high-intensity interval trials the Velotron and SRM were found to be less accurate (3.0%, CI = 1.6-4.5% and -2.6%, CI = -3.2 to -2.0% error, respectively). During the dynamic 30 km time trials, power measured by the Velotron was 3.7 ± 1.9% (CI = 2.9-4.8%) greater than that measured by the SRM. In conclusion, the accuracy of the Velotron cycle ergometer and the SRM power meter appears to be dependent on the type of test being performed. Furthermore, as each power monitoring system measures power at a different position (i.e. bottom bracket vs. rear wheel), caution should be taken when comparing power across the two systems, particularly when power is variable.
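    The error figures quoted above are straightforward percent deviations of a measured power from the rig's reference power; a one-line sketch (the 259.3 W reading is a made-up example, not a value from the paper):

```python
def percent_error(measured, reference):
    """Percent error of a power reading against a reference (e.g. a calibration rig)."""
    return 100.0 * (measured - reference) / reference

# e.g. an ergometer reads 259.3 W while the dynamic rig applies 250 W
print(round(percent_error(259.3, 250.0), 1))  # 3.7
```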

  7. Geometrically confined ultrasmall gadolinium oxide nanoparticles boost the T1 contrast ability

    NASA Astrophysics Data System (ADS)

    Ni, Kaiyuan; Zhao, Zhenghuan; Zhang, Zongjun; Zhou, Zijian; Yang, Li; Wang, Lirong; Ai, Hua; Gao, Jinhao

    2016-02-01

    High-performance magnetic resonance imaging (MRI) contrast agents and novel contrast enhancement strategies are urgently needed for sensitive and accurate diagnosis. Here we report a strategy to construct a new T1 contrast agent based on the Solomon-Bloembergen-Morgan (SBM) theory. We loaded the ultrasmall gadolinium oxide nanoparticles into worm-like interior channels of mesoporous silica nanospheres (Gd2O3@MSN nanocomposites). This unique structure endows the nanocomposites with geometrical confinement, high molecular tumbling time, and a large coordinated number of water molecules, which results in a significant enhancement of the T1 contrast with longitudinal proton relaxivity (r1) as high as 45.08 mM-1 s-1. Such a high r1 value of Gd2O3@MSN, compared to those of ultrasmall Gd2O3 nanoparticles and gadolinium-based clinical contrast agents, is mainly attributed to the strong geometrical confinement effect. This strategy provides new guidance for developing various high-performance T1 contrast agents for sensitive imaging and disease diagnosis.

  8. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  9. Corot telescope (COROTEL)

    NASA Astrophysics Data System (ADS)

    Viard, Thierry; Mathieu, Jean-Claude; Fer, Yann; Bouzou, Nathalie; Spalinger, Etienne; Chataigner, Bruno; Bodin, Pierre; Magnan, Alain; Baglin, Annie

    2017-11-01

    COROTEL is the telescope of the COROT satellite, which aims at measuring stellar flux variations very accurately. To perform this mission, COROTEL has to be very well protected against straylight (from the Sun and Earth) and must be very stable with time. Thanks to its extensive experience in this field, Alcatel Alenia Space has proposed, manufactured and tested an original telescope concept with high baffling performance. Since its delivery to LAM (Laboratoire d'Astrophysique de Marseille, CNRS), the telescope has successfully passed the qualification tests at instrument level performed by CNES. Now the instrument is mounted on a Proteus platform and should be launched at the end of 2006. The satellite should provide the scientific community, for the first time, with precious data on stars and their possible companions.

  10. Ultra-stable sub-meV monochromator for hard X-rays

    DOE PAGES

    Toellner, T. S.; Collins, J.; Goetze, K.; ...

    2015-07-17

    A high-resolution silicon monochromator suitable for 21.541 keV synchrotron radiation is presented that produces a bandwidth of 0.27 meV. The operating energy corresponds to a nuclear transition in 151Eu. The first-of-its-kind, fully cryogenic design achieves an energy-alignment stability of 0.017 meV r.m.s. per day, or a 100-fold improvement over other meV-monochromators, and can tolerate higher X-ray power loads than room-temperature designs of comparable resolution. This offers the potential for significantly more accurate measurements of lattice excitation energies using nuclear resonant vibrational spectroscopy if combined with accurate energy calibration using, for example, high-speed Doppler shifting. The design of the monochromator along with its performance and impact on transmitted beam properties are presented.

  11. Machine learning bandgaps of double perovskites

    DOE PAGES

    Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; ...

    2016-01-19

    The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. As a result, the developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
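    The learning framework itself is not detailed in the abstract; as an illustration of the general informatics approach (numeric descriptors in, regularized regression out), here is a minimal ridge-regression sketch on synthetic descriptor data. The descriptors and "bandgaps" below are invented, not drawn from the paper's 1.2-million-feature set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical descriptors per compound (e.g. electronegativities, Kohn-Sham levels)
X = rng.normal(size=(300, 5))
true_w = np.array([1.2, -0.7, 0.0, 0.4, 0.0])
y = X @ true_w + 0.05 * rng.normal(size=300)     # synthetic "bandgaps" (eV)

# Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y
alpha = 1e-2
w = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
print(f"fit RMSE: {rmse:.3f}")
```

    Feature selection as described in the abstract would precede a fit like this, winnowing millions of candidate descriptors down to the few that carry predictive power.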

  12. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  13. Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-10-01

    Microscopic 3-D shape measurement can supply accurate metrology of the delicate and complex MEMS components of final devices to ensure their proper performance. Fringe projection profilometry (FPP) has the advantages of being non-contact and highly accurate, making it widely used in 3-D measurement. Recently, the tremendous advance of electronics has pushed 3-D measurements to become more accurate and faster. However, research on real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. A slight defocusing of our proposed binary patterns considerably alleviates the measurement error in phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result of a vibrating earphone is presented.
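    For context on the phase-shifting FPP step mentioned above: with four fringe images shifted by π/2, the wrapped phase follows from a standard arctangent formula. A sketch on synthetic fringes (not the authors' optimized PWM patterns):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes: I_k = A + B*cos(phi + k*pi/2), with a linear phase ramp
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
A, B = 0.5, 0.4
imgs = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*imgs)
print(np.max(np.abs(phi_rec - phi)))  # ~0: phase recovered to machine precision
```

    In a real system the wrapped phase would then be unwrapped (here, by the paper's number-theoretical method) and converted to height via calibration.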

  14. Turbine Performance Optimization Task Status

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Turner, James E. (Technical Monitor)

    2001-01-01

    Capability to optimize for turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.

  15. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The newly developed smartphone application in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
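    The app's exact pipeline is not given in the record, but the core of any vision-based displacement measurement is locating the target between frames. A minimal 1-D FFT cross-correlation sketch for integer-pixel shifts (synthetic target profile; real systems add sub-pixel refinement and a pixel-to-length scale):

```python
import numpy as np

def shift_by_xcorr(ref, cur):
    """Integer-pixel displacement of `cur` relative to `ref` via FFT cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref)))
    lag = int(np.argmax(np.abs(corr)))
    n = len(ref)
    return lag if lag <= n // 2 else lag - n   # wrap to a signed shift

# A bright "target" profile and the same profile displaced by 7 pixels
n = 128
ref = np.exp(-0.5 * ((np.arange(n) - 40) / 3.0) ** 2)
cur = np.roll(ref, 7)
print(shift_by_xcorr(ref, cur))  # 7
```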

  16. The solution of the point kinetics equations via converged accelerated Taylor series (CATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.; Picca, P.; Previti, A.

    This paper deals with finding accurate solutions of the point kinetics equations including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)
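    The Wynn-epsilon acceleration named above is a standard sequence transformation that extrapolates a slowly converging sequence of partial sums toward its limit. A self-contained sketch, demonstrated on the alternating harmonic series rather than the point kinetics equations:

```python
import math

def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm: accelerate a sequence of partial sums.

    Builds the epsilon table column by column and returns the entry in the
    highest even column (even columns carry the limit estimates).
    """
    n = len(partial_sums)
    eps_prev = [0.0] * (n + 1)          # epsilon_{-1} column (all zeros)
    eps_curr = list(partial_sums)       # epsilon_0 column (the partial sums)
    for _ in range(n - 1):
        eps_next = [eps_prev[i + 1] + 1.0 / (eps_curr[i + 1] - eps_curr[i])
                    for i in range(len(eps_curr) - 1)]
        eps_prev, eps_curr = eps_curr, eps_next
    col = eps_curr if (n - 1) % 2 == 0 else eps_prev
    return col[0]

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
sums, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    sums.append(s)
print(abs(wynn_epsilon(sums) - math.log(2)))  # far below the raw truncation error (~0.04)
```

    Eleven raw terms leave an error of about 4%, while the accelerated value is accurate to many digits, which is the kind of gain CATS exploits when evaluating its Taylor expansions.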

  17. Visual search performance in the autism spectrum II: the radial frequency search task with additional segmentation cues.

    PubMed

    Almeida, Renita A; Dickinson, J Edwin; Maybery, Murray T; Badcock, Johanna C; Badcock, David R

    2010-12-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial frequency (RF) patterns with controllable amounts of target/distracter overlap on which high AQ participants showed more efficient search than low AQ observers. The current study extended the design of this search task by adding two lines which traverse the display on random paths sometimes intersecting target/distracters, other times passing between them. As with the EFT, these lines segment and group the display in ways that are task irrelevant. We tested two new groups of observers and found that while RF search was slowed by the addition of segmenting lines for both groups, the high AQ group retained a consistent search advantage (reflected in a shallower gradient for reaction time as a function of set size) over the low AQ group. Further, the high AQ group were significantly faster and more accurate on the EFT compared to the low AQ group. That is, the results from the present RF search task demonstrate that segmentation and grouping created by intersecting lines does not further differentiate the groups and is therefore unlikely to be a critical factor underlying the EFT performance difference. However, once again, we found that superior EFT performance was associated with shallower gradients on the RF search task. Copyright © 2010 Elsevier Ltd. All rights reserved.
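    The "gradient" referred to above is the slope of reaction time against display set size. A minimal sketch with invented group means (not the study's data) shows how a shallower slope quantifies more efficient search:

```python
import numpy as np

# Hypothetical mean reaction times (ms) at three display set sizes
set_size = np.array([5, 9, 13])
rt_low_aq = np.array([820.0, 1020.0, 1220.0])   # illustrative low-AQ group
rt_high_aq = np.array([780.0, 900.0, 1020.0])   # illustrative high-AQ group

# Search gradient = slope of the RT vs. set-size line (ms per item)
slope_low = np.polyfit(set_size, rt_low_aq, 1)[0]
slope_high = np.polyfit(set_size, rt_high_aq, 1)[0]
print(round(slope_low, 1), round(slope_high, 1))  # 50.0 30.0
```

    The shallower 30 ms/item gradient in the hypothetical high-AQ group corresponds to the more efficient search the study reports.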

  18. An Engine Research Program Focused on Low Pressure Turbine Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Castner, Raymond; Wyzykowski, John; Chiapetta, Santo; Adamczyk, John

    2002-01-01

    A comprehensive test program was performed in the Propulsion Systems Laboratory at the NASA Glenn Research Center, Cleveland, Ohio, using a highly instrumented Pratt and Whitney Canada PW 545 turbofan engine. A key objective of this program was the development of a high-altitude database on small, high-bypass-ratio engine performance and operability. In particular, the program documents the impact of altitude (Reynolds number) on the aero-performance of the low-pressure turbine (fan turbine). A second objective was to assess the ability of a state-of-the-art CFD code to predict the effect of Reynolds number on the efficiency of the low-pressure turbine. CFD simulations performed prior to and after the engine tests will be presented and discussed. A key finding is the ability of a state-of-the-art CFD code to accurately predict the impact of Reynolds number on the efficiency and flow capacity of the low-pressure turbine. In addition, the CFD simulations showed the turbulence intensity exiting the low-pressure turbine to be high (9%), a level consistent with measurements taken within an engine.

  19. Development of components for IFOG-based inertial measurement units using polymer waveguide fabrication technologies

    NASA Astrophysics Data System (ADS)

    Ashley, P. R.; Temmen, M. G.; Diffey, W. M.; Sanghadasa, M.; Bramson, M. D.

    2007-10-01

    Active and passive polymer materials have been successfully used in the development of highly accurate, compact and low cost guided-wave components: an optical transceiver and a phase modulator, for inertial measurement units (IMUs) based on the interferometric fibre optic gyroscope (IFOG) technology for precision guidance in navigation systems. High performance and low noise transceivers with high optical power and good spectral quality were fabricated using a silicon-bench architecture. Low loss phase modulators with low halfwave drive voltage (Vπ) have been fabricated with a backscatter compensated design using polarizing waveguides consisting of CLD- and FTC-type high performance electro-optic (E-O) chromophores. Gyro bias stability of less than 0.02° h-1 has been demonstrated with these guided-wave components.

  20. Robust state preparation in quantum simulations of Dirac dynamics

    NASA Astrophysics Data System (ADS)

    Song, Xue-Ke; Deng, Fu-Guo; Lamata, Lucas; Muga, J. G.

    2017-02-01

    A nonrelativistic system such as an ultracold trapped ion may perform a quantum simulation of Dirac equation dynamics under specific conditions. The resulting Hamiltonian and dynamics are highly controllable, but the coupling between momentum and internal levels makes it difficult to manipulate the internal states of wave packets accurately. We use invariants of motion to inverse engineer robust population inversion processes with a homogeneous, time-dependent simulated electric field. This exemplifies the usefulness of inverse-engineering techniques to improve the performance of quantum simulation protocols.

  1. Military engine computational structures technology

    NASA Technical Reports Server (NTRS)

    Thomson, Daniel E.

    1992-01-01

    Integrated High Performance Turbine Engine Technology Initiative (IHPTET) goals require a strong analytical base. Effective analysis of composite materials is critical to life analysis and structural optimization. Accurate life prediction for all material systems is critical, and user-friendly systems are also desirable. Post-processing of results is very important. The IHPTET goal is to double turbine engine propulsion capability by the year 2003. Fifty percent of the goal will come from advanced materials and structures; the other 50 percent will come from increasing performance. Computer programs are listed.

  2. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  3. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these algorithms can partially overcome irregularity defects. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT)-based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy but are at least half the size, which can help provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when CT is performed on the PFK domain data, enhancing the interpolation accuracy. The performance of POCS-based algorithms using complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method can achieve a better interpolation result, and it can be easily extended into higher dimensions.
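    The POCS iteration described above alternates a sparsity projection with reinsertion of the observed data. A minimal 1-D sketch using plain Fourier thresholding in place of the curvelet transform, on synthetic data; the linearly decreasing threshold schedule is a common choice, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(2)

# A signal that is sparse in the Fourier domain (stand-in for curvelet sparsity)
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 12 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)

mask = rng.random(n) < 0.7          # ~30% of samples ("traces") randomly missing
observed = signal * mask

# POCS: alternate a sparsity projection (Fourier thresholding) with data reinsertion
x = observed.copy()
tmax = np.abs(np.fft.fft(observed)).max()
for k in range(100):
    X = np.fft.fft(x)
    tau = tmax * (1 - k / 100)       # linearly decreasing threshold schedule
    X[np.abs(X) < tau] = 0           # keep only the strong (coherent) coefficients
    x = np.real(np.fft.ifft(X))
    x = observed + (~mask) * x       # keep observed samples, fill only the gaps

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {err:.3f}")
```

    The paper's method applies the same alternation with a curvelet transform on PFK-domain data, where the missing traces appear as incoherent noise that thresholding suppresses.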

  4. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  5. Using contrasting cases to improve self-assessment in physics learning

    NASA Astrophysics Data System (ADS)

    Jax, Jared Michael

    Accurate self-assessment (SA) is widely regarded as a valuable tool for conducting scientific work, although there is growing concern that students have difficulty accurately assessing their own learning. For students, the challenge of accurately self-assessing their work prevents them from effectively critiquing their own knowledge and skills and making corrections when necessary to improve their performance. An overwhelming majority of researchers have acknowledged the importance of developing and practicing the reflective skills necessary for SA in science, yet it is rarely a focus of daily instruction, so students typically overestimate their abilities. In an effort to provide a pragmatic approach to overcoming these deficiencies, this study demonstrates the effect of using positive and negative examples of solutions (contrasting cases) on performance and accuracy of SA, compared to students who are shown only positive examples of solutions. The work described here sought, first, to establish the areas of flawed SA that introductory high school physics students experience when studying circuitry and, second, to examine how giving students content knowledge in addition to positive and negative examples focused on helping them self-assess might help overcome these deficiencies. In doing so, this work highlights the positive impact that these types of support have in significantly increasing student performance, SA accuracy, and the ability to evaluate solutions in physics education.

  6. Development and Preliminary Performance of a Risk Factor Screen to Predict Posttraumatic Psychological Disorder After Trauma Exposure

    PubMed Central

    Carlson, Eve B.; Palmieri, Patrick A.; Spain, David A.

    2017-01-01

    Objective We examined data from a prospective study of risk factors that increase vulnerability or resilience, exacerbate distress, or foster recovery to determine whether risk factors accurately predict which individuals will later have high posttraumatic (PT) symptom levels and whether brief measures of risk factors also accurately predict later symptom elevations. Method Using data from 129 adults exposed to traumatic injury of self or a loved one, we conducted receiver operating characteristic (ROC) analyses of 14 risk factors assessed by full-length measures, determined optimal cutoff scores and calculated predictive performance for the nine that were most predictive. For five risk factors, we identified sets of items that accounted for 90% of variance in total scores and calculated predictive performance for sets of brief risk measures. Results A set of nine risk factors assessed by full measures identified 89% of those who later had elevated PT symptoms (sensitivity) and 78% of those who did not (specificity). A set of four brief risk factor measures assessed soon after injury identified 86% of those who later had elevated PT symptoms and 72% of those who did not. Conclusions Use of sets of brief risk factor measures shows promise of accurate prediction of PT psychological disorder and probable PTSD or depression. Replication of predictive accuracy is needed in a new and larger sample. PMID:28622811
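
    The cutoff evaluation described above can be sketched in a few lines: given one risk-factor score per participant and a binary outcome, classify scores at or above a candidate cutoff as "predicted elevated" and compute sensitivity and specificity. The scores, outcomes, and cutoff below are synthetic illustrations, not the study's data.

```python
def sens_spec(scores, outcomes, cutoff):
    """Return (sensitivity, specificity) for predicting outcome=1 when score >= cutoff."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example: 8 participants, 3 of whom later had elevated PT symptoms.
scores = [2, 5, 7, 9, 3, 8, 6, 1]
outcomes = [0, 0, 1, 1, 0, 1, 0, 0]
sens, spec = sens_spec(scores, outcomes, cutoff=6)  # sens 1.0, spec 0.8
```

    An ROC analysis repeats this over all candidate cutoffs and selects the one with the best sensitivity/specificity trade-off.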

  7. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE PAGES

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.; ...

    2018-02-16

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. However, genomes can have radically different GC content, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  8. Commonalities and Differences in Word Identification Skills among Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Wang, Min; Koda, Keiko

    2005-01-01

    This study examined word identification skills among Chinese and Korean college students learning to read English as a second language in a naming experiment and an auditory category judgment task. Both groups demonstrated faster and more accurate naming performance on high-frequency words than low-frequency words and on regular words than…

  9. A hardware-oriented algorithm for floating-point function generation

    NASA Technical Reports Server (NTRS)

    O'Grady, E. Pearse; Young, Baek-Kyu

    1991-01-01

    An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
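
    In software, the two operations the abstract identifies (breakpoint-interval identification followed by evaluation within the interval) look roughly like the sketch below. The non-uniform breakpoints and linear segments are illustrative assumptions; the paper's hardware implementation uses read/write memories rather than this bisection loop.

```python
import bisect

def piecewise_eval(breakpoints, values, x):
    """Locate the breakpoint interval containing x, then interpolate linearly."""
    i = bisect.bisect_right(breakpoints, x) - 1      # rapid interval identification
    i = max(0, min(i, len(breakpoints) - 2))         # clamp to a valid interval
    x0, x1 = breakpoints[i], breakpoints[i + 1]
    y0, y1 = values[i], values[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

bp = [0.0, 1.0, 4.0, 10.0]          # arbitrary (non-uniform) breakpoints
vals = [0.0, 2.0, 8.0, 20.0]        # function values at the breakpoints
y = piecewise_eval(bp, vals, 2.5)   # 2.5 lies in [1.0, 4.0) -> 5.0
```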

  10. Applications of the Peng-Robinson Equation of State Using MATLAB[R]

    ERIC Educational Resources Information Center

    Nasri, Zakia; Binous, Housam

    2009-01-01

    A single equation of state (EOS) such as the Peng-Robinson (PR) EOS can accurately describe both the liquid and vapor phase. We present several applications of this equation of state, including estimation of pure component properties and computation of the vapor-liquid equilibrium (VLE) diagram for binary mixtures. We perform high-pressure…
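
    As one concrete instance of the computations listed above, the vapor-phase compressibility factor can be obtained from the PR EOS by solving its cubic form in Z. The article works in MATLAB; the numpy sketch below uses textbook critical constants for methane and is only an illustration of the method, not the article's code.

```python
import numpy as np

R = 83.14  # gas constant, cm^3·bar/(mol·K)

def pr_Z(T, P, Tc, Pc, omega):
    """Vapor-phase compressibility factor from the Peng-Robinson EOS."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # Cubic form: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    roots = np.roots([1, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)])
    real = roots[np.isreal(roots)].real
    return real.max()                 # largest real root = vapor phase

Z = pr_Z(T=300.0, P=1.0, Tc=190.6, Pc=45.99, omega=0.012)  # methane, near-ideal
```

    At the liquid end one would take the smallest real root instead; the VLE computations mentioned in the abstract iterate this together with fugacity matching.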

  11. The Effects of Self-Monitoring and Performance Feedback on the Treatment Integrity of Behavior Intervention Plan Implementation and Generalization

    ERIC Educational Resources Information Center

    Mouzakitis, Angela; Codding, Robin S.; Tryon, Georgiana

    2015-01-01

    Accurate implementation of individualized behavior intervention plans (BIPs) is a critical aspect of evidence-based practice. Research demonstrates that neither training nor consultation is sufficient to improve and maintain high rates of treatment integrity (TI). Therefore, evaluation of ongoing support strategies is needed. The purpose of this…

  12. SAE for the prediction of road traffic status from taxicab operating data and bus smart card data

    NASA Astrophysics Data System (ADS)

    Zhengfeng, Huang; Pengjun, Zheng; Wenjun, Xu; Gang, Ren

    Road traffic status is significant for trip decisions and traffic management, and thus should be predicted accurately. A contribution of this work is the use of multi-modal data for traffic status prediction, rather than single-source data alone. With the substantial data from the Ningbo Passenger Transport Management Sector (NPTMS), we wished to determine whether it was possible to develop Stacked Autoencoders (SAEs) for accurately predicting road traffic status from taxicab operating data and bus smart card data. We show that the SAE performed better than a linear regression model and a Back Propagation (BP) neural network for determining the relationship between road traffic status and those factors. In a 26-month data experiment using the SAE, we show that it is possible to develop highly accurate predictions (91% test accuracy) of road traffic status from daily taxicab operating data and bus smart card data.
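
    The study's SAE stacks several nonlinear autoencoder layers trained on the NPTMS data. As a minimal stand-in, the numpy sketch below trains a single linear autoencoder on synthetic low-rank data, showing the reconstruction objective that each layer of an SAE optimizes; the data, dimensions, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 6, 2                                     # samples, input dim, code dim
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))   # synthetic rank-k "traffic features"

W1 = 0.1 * rng.normal(size=(d, k))    # encoder weights
W2 = 0.1 * rng.normal(size=(k, d))    # decoder weights
lr, losses = 0.02, []
for _ in range(1000):
    H = X @ W1                        # encode
    Xhat = H @ W2                     # decode (reconstruct the input)
    err = Xhat - X
    losses.append((err ** 2).sum() / n)
    G = 2 * err / n                   # gradient of per-sample reconstruction error
    dW2 = H.T @ G
    dW1 = X.T @ (G @ W2.T)
    W1 -= lr * dW1
    W2 -= lr * dW2
```

    A real SAE would train such layers greedily with nonlinear activations, then fine-tune with a supervised head for the traffic-status labels.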

  13. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
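
    The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation from step and gradient-difference pairs, avoiding explicit second derivatives. For a quadratic function the update reproduces the exact Hessian after n linearly independent steps; the toy numpy sketch below, with an arbitrary 3x3 Hessian, illustrates that property.

```python
import numpy as np

def sr1_update(B, s, y):
    """One symmetric rank-one update of the Hessian approximation B."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < 1e-12:            # standard safeguard: skip ill-defined updates
        return B
    return B + np.outer(r, r) / denom

A = np.array([[4., 1., 0.],           # true Hessian of f(x) = 0.5 x^T A x
              [1., 3., 1.],
              [0., 1., 2.]])
B = np.eye(3)                         # initial approximation
for s in np.eye(3):                   # three linearly independent steps
    y = A @ s                         # exact gradient difference for a quadratic
    B = sr1_update(B, s, y)
# After the loop, B equals A exactly (SR1 hereditary property for quadratics).
```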

  14. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants.

  15. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    PubMed Central

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-01-01

    A percolation theory based on variation of the conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails to properly analyze the materials under large deformation, since its assumptions may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson’s ratio, and strain-dependent percolation threshold. The digital image correlation (DIC) method was used to determine actual Poisson’s ratios at various strain levels, which were used to accurately estimate the variation of conductive filler volume fraction under deformation. We also adopted a strain-dependent percolation threshold caused by filler re-location with deformation. When all three key factors were considered, the electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856
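
    The geometric core of the correction above is that uniaxial stretching changes the composite volume by a factor (1 + ε)(1 − ν(ε)ε)², so the volume fraction of a rigid filler must be rescaled accordingly. A minimal sketch, with a hypothetical Poisson's ratio standing in for the DIC-measured value:

```python
def filler_fraction(phi0, strain, poisson):
    """Filler volume fraction at a given uniaxial strain (rigid filler assumed)."""
    volume_ratio = (1 + strain) * (1 - poisson * strain) ** 2
    return phi0 / volume_ratio

# Illustrative values: 20% filler at rest, 50% strain, Poisson's ratio 0.4.
phi = filler_fraction(phi0=0.20, strain=0.5, poisson=0.4)
```

    In the paper both ν and the percolation threshold are themselves strain-dependent, so this rescaling is evaluated with measured ν(ε) at each strain level rather than a constant.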

  16. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
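
    The traditional MC approach that the DPP method is compared against can be caricatured in a few lines: sample the uncertain inputs, propagate each sample, and collect the impact-time distribution. The one-parameter "model" below is a deliberately trivial stand-in for the six degrees-of-freedom propagation.

```python
import random
import statistics

random.seed(0)

def impact_time(ballistic_coeff):
    # Hypothetical monotone model, for illustration only: the real study
    # propagates a six degrees-of-freedom state through the atmosphere.
    return 100.0 * ballistic_coeff    # hours

# Sample the uncertain input and build the impact-time distribution.
samples = [impact_time(random.gauss(1.0, 0.05)) for _ in range(10000)]
mean_t = statistics.mean(samples)
spread = statistics.stdev(samples)
```

    The DPP alternative would instead propagate the probability density itself, avoiding the many repeated trajectory integrations at the cost of handling a scattered multi-dimensional PDF dataset.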

  17. Evaluation of a numerical model's ability to predict bed load transport observed in braided river experiments

    NASA Astrophysics Data System (ADS)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2018-05-01

    New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation was conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load, as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
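
    The evaluation metric described above reduces to a simple map comparison: flag cells where modelled bed shear stress exceeds the critical value, and score them against the observed bed-mobility map. The tiny arrays below are synthetic stand-ins for the Delft3D output and flume observations.

```python
import numpy as np

tau = np.array([[0.1, 0.5, 0.3],
                [0.2, 0.7, 0.3]])                       # modelled bed shear stress (Pa)
tau_crit = 0.4                                          # critical (Shields) shear stress (Pa)
observed = np.array([[0, 1, 1],
                     [0, 1, 0]], dtype=bool)            # observed active (mobile) cells

predicted = tau > tau_crit                              # critical shear stress exceedance
agreement = (predicted & observed).sum() / observed.sum()  # fraction of active area captured
```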

  18. Innovative GOCI algorithm to derive turbidity in highly turbid waters: a case study in the Zhejiang coastal area.

    PubMed

    Qiu, Zhongfeng; Zheng, Lufei; Zhou, Yan; Sun, Deyong; Wang, Shengqiang; Wu, Wei

    2015-09-21

    An innovative algorithm is developed and validated to estimate the turbidity in Zhejiang coastal area (highly turbid waters) using data from the Geostationary Ocean Color Imager (GOCI). First, satellite-ground synchronous data (n = 850) was collected from 2014 to 2015 using 11 buoys equipped with a Yellow Spring Instrument (YSI) multi-parameter sonde capable of taking hourly turbidity measurements. The GOCI data-derived Rayleigh-corrected reflectance (R(rc)) was used in place of the widely used remote sensing reflectance (R(rs)) to model turbidity. Various band characteristics, including single band, band ratio, band subtraction, and selected band combinations, were analyzed to identify correlations with turbidity. The results indicated that band 6 had the closest relationship to turbidity; however, the combined bands 3 and 6 model simulated turbidity most accurately (R(2) = 0.821, p<0.0001), while the model based on band 6 alone performed almost as well (R(2) = 0.749, p<0.0001). An independent validation data set was used to evaluate the performances of both models, and the mean relative error values of 42.5% and 51.2% were obtained for the combined model and the band 6 model, respectively. The accurate performances of the proposed models indicated that the use of R(rc) to model turbidity in highly turbid coastal waters is feasible. As an example, the developed model was applied to 8 hourly GOCI images on 30 December 2014. Three cross sections were selected to identify the spatiotemporal variation of turbidity in the study area. Turbidity generally decreased from near-shore to offshore and from morning to afternoon. Overall, the findings of this study provide a simple and practical method, based on GOCI data, to estimate turbidity in highly turbid coastal waters at high temporal resolutions.
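
    The model-fitting step above amounts to regressing buoy turbidity against a reflectance band (or band combination) and scoring the fit with R². A sketch on synthetic data; the real fit uses GOCI R(rc) band 6, or bands 3 and 6 combined, against YSI buoy turbidity.

```python
import numpy as np

rng = np.random.default_rng(1)
rrc_band6 = rng.uniform(0.01, 0.1, 200)                 # Rayleigh-corrected reflectance
turbidity = 600 * rrc_band6 + rng.normal(0, 2, 200)     # NTU, synthetic linear relation

slope, intercept = np.polyfit(rrc_band6, turbidity, 1)  # least-squares band model
pred = slope * rrc_band6 + intercept
r2 = 1 - np.sum((turbidity - pred) ** 2) / np.sum((turbidity - turbidity.mean()) ** 2)
```

    Validation against an independent data set, as in the paper, would then report the mean relative error of the predictions rather than the calibration R².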

  19. Reliable enumeration of malaria parasites in thick blood films using digital image analysis.

    PubMed

    Frean, John A

    2009-09-23

    Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements of a digital microscope camera, personal computer and good quality staining of slides are potentially reasonably easy to meet.
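
    The size-frequency idea above, using the particle-size distribution to reject debris and clumps before counting, can be sketched as a simple robust filter. The areas and the median-based bounds are illustrative assumptions, not the programme's actual rules.

```python
import statistics

# Candidate particle areas (pixels^2) from a thresholded image: one debris
# speck (3) and one clump of parasites (180) among genuine parasites.
areas = [3, 52, 48, 55, 47, 50, 49, 51, 53, 46, 180]

med = statistics.median(areas)
kept = [a for a in areas if 0.5 * med <= a <= 2 * med]  # reject outliers by size
count = len(kept)                                        # debris and clump excluded
```

    A median-based window is used here because a mean/standard-deviation window is itself distorted by the very outliers it is meant to reject.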

  20. Mapping detailed 3D information onto high resolution SAR signatures

    NASA Astrophysics Data System (ADS)

    Anglberger, H.; Speck, R.

    2017-05-01

    Due to challenges in the visual interpretation of radar signatures or in the subsequent information extraction, a fusion with other data sources can be beneficial. The most accurate basis for a fusion of any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range-based coordinate system of imaging radars. Due to this mapping, all the contributing geometrical parts of one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics card hardware. The described approach builds a perfect basis for sophisticated methods to extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information from whole cities will be available in the near future. The performance of the developed methods will be demonstrated with high resolution radar data acquired by the space-borne SAR-sensor TerraSAR-X.
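
    The essence of the mapping is computing each 3D scene point's slant range and binning points into range cells, where layover appears as several different heights falling into one cell. A toy numpy sketch with an arbitrary sensor position and range resolution:

```python
import numpy as np

sensor = np.array([0.0, 0.0, 5000.0])        # radar position (m), illustrative
points = np.array([[1000.0, 0.0,  0.0],      # ground point
                   [1000.0, 0.0, 50.0],      # rooftop directly above it
                   [2000.0, 0.0,  0.0]])     # farther ground point

slant_range = np.linalg.norm(points - sensor, axis=1)
range_res = 100.0                            # slant-range cell size (m), illustrative
cells = np.floor(slant_range / range_res).astype(int)
layover = len(cells) != len(set(cells))      # distinct heights sharing one range cell
```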

  1. An Image-Based Algorithm for Precise and Accurate High Throughput Assessment of Drug Activity against the Human Parasite Trypanosoma cruzi

    PubMed Central

    Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.

    2014-01-01

    We present a customized high content (image-based) and high throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells and the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows for quantification of both compound activity against parasites and compound cytotoxicity, thus eliminating the need for an additional toxicity assay and thereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi: Benznidazole and Nifurtimox. We also verified the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity. PMID:24503652

  2. Novel real-time tumor-contouring method using deep learning to prevent mistracking in X-ray fluoroscopy.

    PubMed

    Terunuma, Toshiyuki; Tokui, Aoi; Sakae, Takeji

    2018-03-01

    Robustness to obstacles is the most important factor necessary to achieve accurate tumor tracking without fiducial markers. Some high-density structures, such as bone, are enhanced on X-ray fluoroscopic images, which cause tumor mistracking. Tumor tracking should be performed by controlling "importance recognition": the understanding that soft-tissue is an important tracking feature and bone structure is unimportant. We propose a new real-time tumor-contouring method that uses deep learning with importance recognition control. The novelty of the proposed method is the combination of the devised random overlay method and supervised deep learning to induce the recognition of structures in tumor contouring as important or unimportant. This method can be used for tumor contouring because it uses deep learning to perform image segmentation. Our results from a simulated fluoroscopy model showed accurate tracking of a low-visibility tumor with an error of approximately 1 mm, even if enhanced bone structure acted as an obstacle. A high similarity of approximately 0.95 on the Jaccard index was observed between the segmented and ground truth tumor regions. A short processing time of 25 ms was achieved. The results of this simulated fluoroscopy model support the feasibility of robust real-time tumor contouring with fluoroscopy. Further studies using clinical fluoroscopy are highly anticipated.
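
    The similarity figure quoted above is the Jaccard index, |A ∩ B| / |A ∪ B| of the segmented and ground-truth tumor regions. A minimal sketch on synthetic boolean masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index of two boolean masks: intersection over union."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Synthetic 10x10 masks: ground truth is a 6x6 square; the "segmentation"
# misses the top row of it.
gt = np.zeros((10, 10), dtype=bool);   gt[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 2:8] = True
j = jaccard(pred, gt)                  # 30 shared pixels / 36 in the union
```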

  3. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    NASA Astrophysics Data System (ADS)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
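
    The storage saving that makes CP-VCC feasible is easy to see in miniature: a rank-R CP tensor keeps only one n x R factor matrix per mode instead of the full array, and the full tensor is the sum of R outer products of factor columns. The dimensions below are toy values, far smaller than real VCC amplitude tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 20, 3                                            # mode dimension, CP rank
factors = [rng.normal(size=(n, R)) for _ in range(4)]   # one factor matrix per mode

# Reconstruct the full 4-mode tensor from the CP factors (for checking only;
# a CP algorithm never needs to form this array).
full = np.einsum('ir,jr,kr,lr->ijkl', *factors)

cp_storage = sum(f.size for f in factors)   # 4 * 20 * 3 = 240 numbers
full_storage = full.size                    # 20**4 = 160,000 numbers
```

    Rank-reduction (recompression) after each contraction, as discussed in the abstract, is what keeps R from growing uncontrollably while the amplitude equations are summed.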

  4. Determination of rivaroxaban in patient’s plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS)

    PubMed Central

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

    Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure rivaroxaban plasma concentration. Our method used a simple sample preparation (protein precipitation) and a fast chromatographic run. A precise and accurate method was developed, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity), and both presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric and lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interferences. The chromogenic and HPLC-MS/MS methods were highly correlated and should be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, which allowed accurate determination of rivaroxaban at peak and trough levels. PMID:28170419

  5. Stability of mixing layers

    NASA Technical Reports Server (NTRS)

    Tam, Christopher; Krothapalli, A

    1993-01-01

    The research program for the first year of this project (see the original research proposal) consists of developing an explicit marching scheme for solving the parabolized stability equations (PSE). Implicit in this task are mathematical analysis of the computational algorithm, including numerical stability analysis, and determination of the proper boundary conditions needed at the boundary of the computational domain. Before one can solve the parabolized stability equations for high-speed mixing layers, the mean flow must first be found. In the past, instability analysis of high-speed mixing layers has mostly been performed on mean flow profiles calculated from the boundary layer equations. In carrying out this project, it is believed that the boundary layer equations might not give an accurate enough nonparallel, nonlinear mean flow for parabolized stability analysis. A more accurate mean flow can, however, be found by solving the parabolized Navier-Stokes equations. The advantage of the parabolized Navier-Stokes equations is that their accuracy is consistent with the PSE method. Furthermore, the method of solution is similar. Hence, the major part of this year's effort has been devoted to the development of an explicit numerical marching scheme for the solution of the parabolized Navier-Stokes equations as applied to the high-speed mixing layer problem.

  6. High Stability Engine Control (HISTEC) Flight Test Results

    NASA Technical Reports Server (NTRS)

    Southwick, Robert D.; Gallops, George W.; Kerr, Laura J.; Kielb, Robert P.; Welsh, Mark G.; DeLaat, John C.; Orme, John S.

    1998-01-01

    The High Stability Engine Control (HISTEC) Program, managed and funded by the NASA Lewis Research Center, is a cooperative effort between NASA and Pratt & Whitney (P&W). The program objective is to develop and flight demonstrate an advanced high stability integrated engine control system that uses real-time, measurement-based estimation of inlet pressure distortion to enhance engine stability. Flight testing was performed using the NASA Advanced Controls Technologies for Integrated Vehicles (ACTIVE) F-15 aircraft at the NASA Dryden Flight Research Center. The flight test configuration, details of the research objectives, and the flight test matrix to achieve those objectives are presented. Flight test results are discussed that show the design approach can accurately estimate distortion and perform real-time control actions for engine accommodation.

  7. 12-GHz thin-film transistors on transferrable silicon nanomembranes for high-performance flexible electronics.

    PubMed

    Sun, Lei; Qin, Guoxuan; Seo, Jung-Hun; Celler, George K; Zhou, Weidong; Ma, Zhenqiang

    2010-11-22

    Multigigahertz flexible electronics are attractive and have broad applications. A gate-after-source/drain fabrication process using preselectively doped single-crystal silicon nanomembranes (SiNMs) is an effective approach to realizing high device speed. However, further downscaling with this approach has become difficult owing to lithography alignment. In this full paper, a local alignment scheme, in combination with more accurate SiNM transfer measures for minimizing alignment errors, is reported. By realizing 1 μm channel alignment for the SiNMs on a soft plastic substrate, thin-film transistors with a record speed of 12 GHz maximum oscillation frequency are demonstrated. These results indicate the great potential of properly processed SiNMs for high-performance flexible electronics.

  8. Applying High-Speed Vision Sensing to an Industrial Robot for High-Performance Position Regulation under Uncertainties

    PubMed Central

    Huang, Shouren; Bergström, Niklas; Yamakawa, Yuji; Senoo, Taku; Ishikawa, Masatoshi

    2016-01-01

    It is traditionally difficult to implement fast and accurate position regulation on an industrial robot in the presence of uncertainties. The uncertain factors can be attributed either to the industrial robot itself (e.g., a mismatch of dynamics, mechanical defects such as backlash, etc.) or to the external environment (e.g., calibration errors, misalignment or perturbations of a workpiece, etc.). This paper proposes a systematic approach to implement high-performance position regulation under uncertainties on a general industrial robot (referred to as the main robot) with minimal or no manual teaching. The method is based on a coarse-to-fine strategy that involves configuring an add-on module for the main robot’s end effector. The add-on module consists of a 1000 Hz vision sensor and a high-speed actuator to compensate for accumulated uncertainties. The main robot only focuses on fast and coarse motion, with its trajectories automatically planned by image information from a static low-cost camera. Fast and accurate peg-and-hole alignment in one dimension was implemented as an application scenario by using a commercial parallel-link robot and an add-on compensation module with one degree of freedom (DoF). Experimental results yielded an almost 100% success rate for fast peg-in-hole manipulation (with regulation accuracy at about 0.1 mm) when the workpiece was randomly placed. PMID:27483274

  9. High-Accuracy Decoupling Estimation of the Systematic Coordinate Errors of an INS and Intensified High Dynamic Star Tracker Based on the Constrained Least Squares Method

    PubMed Central

    Jiang, Jie; Yu, Wenbo; Zhang, Guangjun

    2017-01-01

    Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. To address this, a high-accuracy decoupling estimation method for these systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate the errors accurately. After error compensation, the attitude accuracy of the INS can be assessed accurately based on the IHDST. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
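
    The CLS step can be illustrated generically: an equality-constrained least squares problem solved through its KKT system. The toy line-fitting problem below is invented and unrelated to the INS/IHDST error model; it only shows the mechanics of constrained least squares.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system
    [[2A^T A, C^T], [C, 0]] [x; lam] = [2A^T b; d]."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Toy check: fit y = p0 + p1*t, forcing the fit to pass through (0, 1).
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])
b = 1.0 + 2.0 * t + 0.01 * np.sin(7 * t)   # near-linear synthetic data
C = np.array([[1.0, 0.0]])                 # constraint: p0 = 1 exactly
d = np.array([1.0])
p = constrained_lstsq(A, b, C, d)
assert abs(p[0] - 1.0) < 1e-9              # constraint satisfied exactly
```

The constraint is enforced exactly (up to solver precision) rather than approximately, which is the property that makes CLS attractive for error models with hard structural relations.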

  10. Putative identification of new p-coumaroyl glycoside flavonoids in grape by ultra-high performance liquid chromatography/high-resolution mass spectrometry.

    PubMed

    Panighel, Annarita; De Rosso, Mirko; Dalla Vedova, Antonio; Flamini, Riccardo

    2015-02-28

    Grape polyphenols are antioxidant compounds, markers in vine chemotaxonomy, and involved in color stabilization of red wines. Sugar acylation usually confers higher stability on glycoside derivatives and this effect is enhanced by an aromatic substituent such as p-coumaric acid. Until now, only p-coumaroyl anthocyanins have been found in grape. A method of 'suspect screening analysis' by ultra-high-performance liquid chromatography/high-resolution mass spectrometry (UHPLC/QTOFMS) has recently been developed to study grape metabolomics. In the present study, this approach was used to identify new polyphenols in grape by accurate mass measurement, MS/MS fragmentation, and study of correlations between fragments observed and putative structures. Three putative p-coumaroyl flavonoids were identified in Raboso Piave grape extract: a dihydrokaempferide-3-O-p-coumaroylhexoside-like flavanone, isorhamnetin-3-O-p-coumaroylglucoside, and a chrysoeriol-p-coumaroylhexoside-like flavone. Accurate MS provided structural characterization of functional groups, and literature data indicate their probable positions in the molecule. A fragmentation scheme is proposed for each compound. Compounds were identified by overlapping various analytical methods according to recommendations in the MS-based metabolomics literature. Stereochemistry and the definitive position of substituents in the molecule can only be confirmed by isolation and characterization or synthesis of each compound. These findings suggest extending research on acylated polyphenol glycosides to other grape varieties. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Unstructured mesh adaptivity for urban flooding modelling

    NASA Astrophysics Data System (ADS)

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and wetting and drying fronts while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes are placed around buildings and steep regions when the flood water reaches them. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.

  12. Extrapolation-Based References Improve Motion and Eddy-Current Correction of High B-Value DWI Data: Application in Parkinson's Disease Dementia.

    PubMed

    Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar

    2015-01-01

    Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD), and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
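
    The extrapolation idea can be illustrated for a single voxel: fit a signal-decay model to the low b-value acquisitions and predict the expected signal at the high b-value, giving a noise-free reference to register against. A monoexponential model is assumed here purely for illustration; the actual extrapolation scheme in the paper may use a different model.

```python
import numpy as np

def extrapolated_reference(bvals_low, signals_low, b_high):
    """Fit S(b) = S0 * exp(-b * ADC) to low-b data in log space and
    predict the expected signal at a high b-value."""
    slope, log_s0 = np.polyfit(bvals_low, np.log(signals_low), 1)
    adc = -slope
    return np.exp(log_s0 - b_high * adc)

# Synthetic voxel: S0 = 1000, ADC = 1e-3 mm^2/s, b in s/mm^2
b_low = np.array([0.0, 250.0, 500.0])
s_low = 1000.0 * np.exp(-b_low * 1e-3)
ref = extrapolated_reference(b_low, s_low, 2750.0)
assert abs(ref - 1000.0 * np.exp(-2.75)) < 1e-3
```

Because the reference is predicted rather than measured, it retains the contrast of the high b-value volume without its noise floor, which is what makes registration against it better conditioned.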

  13. Development of low-shock pyrotechnic separation nuts. [design performance of flight type nuts

    NASA Technical Reports Server (NTRS)

    Bement, L. J.; Neubert, V. H.

    1973-01-01

    Performance demonstrations and comparisons were made on six flight type pyrotechnic separation nut designs, two of which are standard designs in current use, and four of which were designed to produce low shock on actuation. Although the shock performances of the four low shock designs are considerably lower than the standard designs, some penalties may be incurred in increased volume, weight, or complexity. These nuts, and how they are installed, can significantly influence the pyrotechnic shock created in spacecraft structures. A high response monitoring system has been developed and demonstrated to provide accurate performance comparisons for pyrotechnic separation nuts.

  14. Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition.

    PubMed

    Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C

    2015-06-08

    Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. 
We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.

  15. Validation of morphing wing methodologies on an unmanned aerial system and a wind tunnel technology demonstrator

    NASA Astrophysics Data System (ADS)

    Gabor, Oliviu Sugar

    To increase the aerodynamic efficiency of aircraft, in order to reduce the fuel consumption, a novel morphing wing concept has been developed. It consists of replacing a part of the wing upper and lower surfaces with a flexible skin whose shape can be modified using an actuation system placed inside the wing structure. Numerical studies in two and three dimensions were performed in order to determine the gains the morphing system achieves for the case of an Unmanned Aerial System and for a morphing technology demonstrator based on the wing tip of a transport aircraft. To obtain the optimal wing skin shapes as a function of the flight condition, different global optimization algorithms were implemented, such as the Genetic Algorithm and the Artificial Bee Colony Algorithm. To reduce calculation times, a hybrid method was created by coupling the population-based algorithm with a fast, gradient-based local search method. Validations were performed with commercial state-of-the-art optimization tools and demonstrated the efficiency of the proposed methods. For accurately determining the aerodynamic characteristics of the morphing wing, two new methods were developed, a nonlinear lifting line method and a nonlinear vortex lattice method. Both use strip analysis of the span-wise wing section to account for the airfoil shape modifications induced by the flexible skin, and can provide accurate results for the wing drag coefficient. The methods do not require the generation of a complex mesh around the wing and are suitable for coupling with optimization algorithms due to the computational time several orders of magnitude smaller than traditional three-dimensional Computational Fluid Dynamics methods. Two-dimensional and three-dimensional optimizations of the Unmanned Aerial System wing equipped with the morphing skin were performed, with the objective of improving its performance for an extended range of flight conditions. 
The chordwise positions of the internal actuators, the spanwise number of actuation stations as well as the displacement limits were established. The performance improvements obtained and the limitations of the morphing wing concept were studied. To verify the optimization results, high-fidelity Computational Fluid Dynamics simulations were also performed, giving very accurate indications of the obtained gains. For the morphing model based on an aircraft wing tip, the skin shapes were optimized in order to control laminar flow on the upper surface. An automated structured mesh generation procedure was developed and implemented. To accurately capture the shape of the skin, a precision scanning procedure was done and its results were included in the numerical model. High-fidelity simulations were performed to determine the upper surface transition region and the numerical results were validated using experimental wind tunnel data.

  16. Gaze Behavior of Gymnastics Judges: Where Do Experienced Judges and Gymnasts Look While Judging?

    PubMed

    Pizzera, Alexandra; Möller, Carsten; Plessner, Henning

    2018-03-01

    Gymnastics judges and former gymnasts have been shown to be quite accurate in detecting errors and accurately judging performance. The purpose of the current study was to examine if this superior judging performance is reflected in judges' gaze behavior. Thirty-five judges were asked to judge 21 gymnasts who performed a skill on the vault in a video-based test. The sample was classified on two different criteria: judging performance and gaze behavior were compared between judges with a higher license level and judges with a lower license level, and between judges who were able to perform the skill themselves (specific motor experience [SME]) and those who were not. The results revealed better judging performance among judges with a higher license level compared with judges with a lower license level and more fixations on the gymnast during the whole skill and the landing phase, specifically on the head and arms of the gymnast. Specific motor experience did not result in any differences in judging performance; however, judges with SME showed similar gaze patterns to those of judges with a high license level, with one difference: an increased focus on the gymnasts' feet. Superior judging performance seems to be reflected in a specific gaze behavior. This gaze behavior appears to partly stem from judges' own sensorimotor experiences for this skill and reflects the gymnasts' perspective onto the skill.

  17. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in the clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  18. Engineering the Charge Transport of Ag Nanocrystals for Highly Accurate, Wearable Temperature Sensors through All-Solution Processes.

    PubMed

    Joh, Hyungmok; Lee, Seung-Wook; Seong, Mingi; Lee, Woo Seok; Oh, Soong Ju

    2017-06-01

    All-nanocrystal (NC)-based and all-solution-processed wearable resistance temperature detectors (RTDs) are introduced. The charge transport mechanisms of Ag NC thin films are engineered through various ligand treatments to design high performance RTDs. Highly conductive Ag NC thin films exhibiting metallic transport behavior with high positive temperature coefficients of resistance (TCRs) are achieved through tetrabutylammonium bromide treatment. Ag NC thin films showing hopping transport with high negative TCRs are created through organic ligand treatment. All-solution-based, one-step photolithography techniques that integrate two distinct opposite-sign TCR Ag NC thin films into an ultrathin single device are developed to decouple the mechanical effects such as human motion. The unconventional materials design and strategy enables highly accurate, sensitive, wearable and motion-free RTDs, demonstrated by experiments on moving or curved objects such as human skin, and simulation results based on charge transport analysis. This strategy provides a low cost and simple method to design wearable multifunctional sensors with high sensitivity which could be utilized in various fields such as biointegrated sensors or electronic skin. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
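
    One plausible reading of how two opposite-sign TCR films can yield a motion-free temperature reading is that a multiplicative strain effect common to both films cancels in their resistance ratio. The sketch below uses invented film parameters and a simple linear TCR model; it is an illustration of the decoupling principle, not the authors' calibration.

```python
def resistance(r0, alpha, temp, t0=25.0, strain_factor=1.0):
    """Hypothetical film resistance: linear TCR model, with a multiplicative
    strain term that bending is assumed to apply equally to both films."""
    return strain_factor * r0 * (1.0 + alpha * (temp - t0))

def temperature_from_ratio(ratio, r01, a1, r02, a2, t0=25.0):
    """Invert R1/R2 = r01(1 + a1*dT) / (r02(1 + a2*dT)) for dT;
    the common strain factor has already cancelled in the ratio."""
    dT = (ratio * r02 - r01) / (r01 * a1 - ratio * r02 * a2)
    return t0 + dT

r01, a1 = 100.0, +0.004   # positive-TCR film (metallic transport)
r02, a2 = 200.0, -0.003   # negative-TCR film (hopping transport)
for strain in (1.0, 1.05):  # flat vs. bent: same recovered temperature
    r1 = resistance(r01, a1, 40.0, strain_factor=strain)
    r2 = resistance(r02, a2, 40.0, strain_factor=strain)
    t = temperature_from_ratio(r1 / r2, r01, a1, r02, a2)
    assert abs(t - 40.0) < 1e-9
```

The opposite signs of the two TCRs also make the ratio change faster with temperature than either film alone, which is consistent with the high sensitivity the abstract reports.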

  19. Nonlinear analysis and performance evaluation of the Annular Suspension and Pointing System (ASPS)

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1978-01-01

    The Annular Suspension and Pointing System (ASPS) can provide highly accurate fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during space shuttle orbital missions. In this report, a detailed nonlinear mathematical model is developed for the ASPS/Space Shuttle system. The equations are augmented with nonlinear models of components such as magnetic actuators and gimbal torquers. Control systems and payload attitude state estimators are designed in order to obtain satisfactory pointing performance, and statistical pointing performance is predicted in the presence of measurement noise and disturbances.

  20. Gymnastic judges benefit from their own motor experience as gymnasts.

    PubMed

    Pizzera, Alexandra

    2012-12-01

    Gymnastic judges have the difficult task of evaluating highly complex skills. My purpose in the current study was to examine evidence that judges use their sensorimotor experiences to enhance their perceptual judgments. In a video test, 58 judges rated 31 gymnasts performing a balance beam skill. I compared decision quality between judges who could perform the skill themselves on the balance beam (specific motor experience = SME) and those who could not. Those with SME showed better performance than those without SME. These data suggest that judges use their personal experiences as information to accurately assess complex gymnastic skills.

  1. VCSEL-based fiber optic link for avionics: implementation and performance analyses

    NASA Astrophysics Data System (ADS)

    Shi, Jieqin; Zhang, Chunxi; Duan, Jingyuan; Wen, Huaitao

    2006-11-01

    A Gb/s fiber optic link with built-in test (BIT) capability, based on vertical-cavity surface-emitting laser (VCSEL) sources, is presented for next-generation military avionics buses. To accurately predict link performance, statistical methods and Bit Error Rate (BER) measurements have been examined. The results show that the 1 Gb/s fiber optic link meets the BER requirement and that the link margin can reach up to 13 dB. Analysis shows that the suggested photonic network may provide a high-performance, low-cost interconnection alternative for future military avionics.
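
    As a sketch of how a link margin figure like the one above is derived, a simple optical power budget subtracts the accumulated losses and the receiver sensitivity from the launched power. All numbers below are hypothetical and are not the measured values from the paper.

```python
# Illustrative optical link budget (hypothetical values, dB/dBm).

def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, losses_db):
    """Margin = launched power - sum of path losses - receiver sensitivity."""
    return tx_power_dbm - sum(losses_db) - rx_sensitivity_dbm

# fiber attenuation, two connectors, aging/temperature allowance
losses = [1.5, 0.5 * 2, 3.0]
margin = link_margin_db(-3.0, -21.0, losses)
assert abs(margin - 12.5) < 1e-9   # dB of headroom above the BER threshold
```

A positive margin of this size means the received power sits comfortably above the sensitivity needed for the target BER, leaving headroom for component aging and added connectors.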

  2. DOE FES FY2017 Joint Research Target Fourth Quarter Milestone Report for theNational Spherical Torus Experiment Upgrade.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soukhanovskii, V. A.

    2017-09-13

    A successful high-performance plasma operation with a radiative divertor has been demonstrated on many tokamak devices; however, significant uncertainty remains in accurately modeling detachment thresholds, and in how detachment depends on divertor geometry. Whereas it was originally planned to perform dedicated divertor experiments on the National Spherical Torus Experiment Upgrade (NSTX-U) to address critical detachment and divertor geometry questions for this milestone, the experiments were deferred due to technical difficulties. Instead, existing NSTX divertor data was summarized and re-analyzed where applicable, and additional simulations were performed.

  3. [Research and Design of a System for Detecting Automated External Defbrillator Performance Parameters].

    PubMed

    Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai

    2017-09-30

    In order to regularly detect the performance parameters of automated external defibrillators (AEDs) and ensure an instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on the characteristics of these performance parameters, the system combines the stability and high speed of the STM32 with PWM modulation control to produce a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software were designed and a prototype was built. The system can accurately detect an AED's discharge energy, synchronous defibrillation time, charging time, and other key performance parameters.

  4. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
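
    The SBP property this method relies on (Q + Q^T equal to a boundary matrix B, which yields discrete energy estimates mirroring the continuous ones) can be demonstrated with the classic second-order SBP first-derivative operator. The paper's operators are sixth order accurate in the interior, so the construction below is only a minimal illustration of the same structure.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP operator D = H^{-1} Q on n grid points with
    spacing h, where H is the diagonal norm and Q + Q^T = B."""
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    Q = np.zeros((n, n))
    Q[0, 0], Q[0, 1] = -0.5, 0.5          # one-sided boundary closure
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    for i in range(1, n - 1):             # central differences inside
        Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
    return H, Q, np.linalg.solve(H, Q)

n, h = 11, 0.1
H, Q, D = sbp_first_derivative(n, h)
x = np.linspace(0.0, 1.0, n)

# SBP property: Q + Q^T equals the boundary matrix B = diag(-1, 0, ..., 0, 1)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)
# Exact differentiation of linear data (first-derivative consistency)
assert np.allclose(D @ x, np.ones(n))
```

The Q + Q^T = B identity is exactly what lets the semi-discrete energy rate reduce to boundary terms, mimicking integration by parts and giving the provable stability the abstract claims.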

  5. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  6. A miniature Hopkinson experiment device based on multistage reluctance coil electromagnetic launch.

    PubMed

    Huang, Wenkai; Huan, Shi; Xiao, Ying

    2017-09-01

    A seven-stage reluctance-coil miniaturized Hopkinson bar electromagnetic launcher has been developed in this paper. With the characteristics of high precision, small size, and little noise pollution, the device complies with the requirements of a miniaturized Hopkinson bar for high strain rates. The launcher is a seven-stage accelerating device reaching up to 65.5 m/s. A high performance microcontroller is used to accurately control the discharge of the capacitor sets, by means of which the outlet velocity of the projectile can be controlled within a certain velocity range.

  7. A miniature Hopkinson experiment device based on multistage reluctance coil electromagnetic launch

    NASA Astrophysics Data System (ADS)

    Huang, Wenkai; Huan, Shi; Xiao, Ying

    2017-09-01

    A seven-stage reluctance-coil miniaturized Hopkinson bar electromagnetic launcher has been developed in this paper. With the characteristics of high precision, small size, and little noise pollution, the device complies with the requirements of a miniaturized Hopkinson bar for high strain rates. The launcher is a seven-stage accelerating device reaching up to 65.5 m/s. A high performance microcontroller is used to accurately control the discharge of the capacitor sets, by means of which the outlet velocity of the projectile can be controlled within a certain velocity range.

  8. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme, comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants of high Degrees of Freedom (7-DOFs).
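
    The feedback-error-learning principle underlying such architectures (the feedback controller's output serving as the training signal for an adaptive feedforward term) can be sketched on a scalar static plant. The plant gain, learning rate, and targets below are invented for illustration; this is not the LWPR/cerebellar architecture itself, only the error-feedback training rule it builds on.

```python
def fel_trial(w, y_d, g=2.0, K=1.0):
    """One trial on the static plant y = g*u with u = u_ff + u_fb,
    u_ff = w*y_d and u_fb = K*(y_d - y); y is solved in closed form."""
    y = g * (w + K) * y_d / (1.0 + g * K)
    u_fb = K * (y_d - y)
    return y, u_fb

w, eta = 0.0, 0.1
targets = [1.0, -0.5, 1.5, 0.8] * 50      # repeated training targets
fb_effort = []
for y_d in targets:
    y, u_fb = fel_trial(w, y_d)
    w += eta * u_fb * y_d                 # feedback output trains the feedforward
    fb_effort.append(abs(u_fb))

# The feedforward converges to the inverse plant gain 1/g = 0.5,
# so the feedback effort (the "corrective term") vanishes.
assert abs(w - 0.5) < 1e-3
assert fb_effort[-1] < 0.01 * fb_effort[0]
```

As learning proceeds the feedback term shrinks toward zero, which is why such schemes can run with low feedback gains, the property the abstract highlights for compliant control.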

  9. Using sensors to measure activity in people with stroke.

    PubMed

    Fulk, George D; Sazonov, Edward

    2011-01-01

    The purpose of this study was to determine the ability of a novel shoe-based sensor that uses accelerometers, pressure sensors, and pattern recognition with a support vector machine (SVM) to accurately identify sitting, standing, and walking postures in people with stroke. Subjects with stroke wore the shoe-based sensor while randomly assuming 3 main postures: sitting, standing, and walking. A SVM classifier was used to train and validate the data to develop individual and group models, which were tested for accuracy, recall, and precision. Eight subjects participated. Both individual and group models were able to accurately identify the different postures (99.1% to 100% individual models and 76.9% to 100% group models). Recall and precision were also high for both individual (0.99 to 1.00) and group (0.82 to 0.99) models. The unique combination of accelerometer and pressure sensors built into the shoe was able to accurately identify postures. This shoe sensor could be used to provide accurate information on community performance of activities in people with stroke as well as provide behavioral enhancing feedback as part of a telerehabilitation intervention.
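
    The classification step can be sketched with two of the feature types the sensor provides. The study used a support vector machine; a nearest-centroid classifier stands in here so the example stays self-contained, and all feature values are invented for illustration.

```python
import math

# Invented (mean heel pressure [a.u.], accelerometer variance [a.u.]) samples
TRAINING = {
    "sitting":  [(0.10, 0.01), (0.15, 0.02), (0.12, 0.01)],
    "standing": [(0.90, 0.02), (0.85, 0.03), (0.95, 0.01)],
    "walking":  [(0.60, 0.60), (0.55, 0.70), (0.65, 0.55)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(sample):
    """Assign the posture whose training-feature centroid is closest."""
    return min(CENTROIDS, key=lambda lab: math.dist(sample, CENTROIDS[lab]))

assert classify((0.13, 0.015)) == "sitting"   # low pressure, low motion
assert classify((0.88, 0.020)) == "standing"  # high pressure, low motion
assert classify((0.58, 0.650)) == "walking"   # cyclic motion dominates
```

The intuition matches the sensor design: pressure separates sitting from standing, while accelerometer variance separates walking from both static postures; an SVM draws the same kind of boundaries with better margins.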

  10. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference between the reflections at the launcher plate and at the material interface is used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
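
The level computation implied by the TDR principle can be sketched as follows, assuming the pulse travels at roughly the vacuum speed of light along the air-exposed section of the guide wire (the numbers are illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def material_level(guide_length_m, delta_t_s):
    """Material level on a guide wire from the time difference between
    the launcher-plate reflection and the material-interface reflection.
    The pulse makes a round trip to the interface, hence the factor 1/2."""
    distance_to_interface = C * delta_t_s / 2.0
    return guide_length_m - distance_to_interface

# A 2 m dipstick; a 10 ns reflection delay puts the interface ~1.5 m down,
# leaving ~0.5 m of material at the bottom.
print(round(material_level(2.0, 10e-9), 3))
```

The patent's quartz-referenced time base and CFDs exist precisely to make `delta_t_s` accurate and independent of the material's dielectric constant.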

  11. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference between the reflections at the launcher plate and at the material interface is used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.

  12. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to compare the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under the low count rates of dynamic studies with short scan times.
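
The record does not define PDS-OSEM itself; as a hedged sketch of the underlying expectation-maximization update shared by this family of methods, here is plain MLEM (equivalently, OSEM with a single subset) on a tiny two-pixel, two-bin toy system:

```python
def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).
    A: system matrix (rows: detector bins, cols: pixels); y: measured counts."""
    n_pix = len(A[0])
    x = [1.0] * n_pix                                        # uniform start
    sens = [sum(row[j] for row in A) for j in range(n_pix)]  # sensitivity image
    for _ in range(n_iter):
        proj = [sum(a * xi for a, xi in zip(row, x)) for row in A]   # forward
        ratio = [yi / pi for yi, pi in zip(y, proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(len(A)))       # backward
                for j in range(n_pix)]
        x = [xi * bj / sj for xi, bj, sj in zip(x, back, sens)]
    return x

A = [[1.0, 0.5],
     [0.5, 1.0]]
true_img = [4.0, 2.0]
y = [sum(a * t for a, t in zip(row, true_img)) for row in A]  # noiseless data
print([round(v, 2) for v in mlem(A, y)])  # converges toward [4.0, 2.0]
```

OSEM accelerates this by cycling over subsets of the data per iteration; whatever PDS-OSEM adds for low-count dynamic frames is beyond what this record states.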

  13. Alumina ceramic based high-temperature performance of wireless passive pressure sensor

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Wu, Guozhu; Guo, Tao; Tan, Qiulin

    2016-12-01

    A wireless passive pressure sensor, equivalent to an inductive-capacitive (LC) resonance circuit and based on alumina ceramic, is fabricated using high-temperature ceramic sintering and post-fire metallization processes. A cylindrical copper spiral reader antenna and an insulation layer are designed to realize wireless measurement for the sensor in high-temperature environments. The high-temperature performance of the sensor is analyzed and discussed by studying the phase-frequency and amplitude-frequency characteristics of the reader antenna. The average frequency change of the sensor is 0.68 kHz/°C as the temperature rises from 27°C to 700°C, and the relative change between two repeated measurements is 2.12%, indicating good repeatability. The study of the temperature-drift characteristic of the pressure sensor in high-temperature environments lays a good basis for temperature compensation methods and ensures that the pressure signal is read out accurately.
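
The readout principle behind such LC sensors is the shift of the tank's resonant frequency, f = 1/(2π√(LC)), as pressure or temperature changes the capacitance. A minimal sketch with purely illustrative component values (none are from the paper):

```python
import math

def resonant_freq(l_henry, c_farad):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L0 = 2e-6     # 2 uH inductance (illustrative)
C0 = 10e-12   # 10 pF capacitance at the reference condition
f0 = resonant_freq(L0, C0)
f1 = resonant_freq(L0, C0 * 1.02)  # +2 % capacitance drift

print(round((f0 - f1) / f0 * 100, 2), "% downward frequency shift")
```

The reader antenna detects this shift contactlessly as a dip in its phase-frequency or amplitude-frequency response, which is what the paper tracks against temperature.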

  14. High-performance time-resolved fluorescence by direct waveform recording.

    PubMed

    Muretta, Joseph M; Kyrychenko, Alexander; Ladokhin, Alexey S; Kast, David J; Gillispie, Gregory D; Thomas, David D

    2010-10-01

    We describe a high-performance time-resolved fluorescence (HPTRF) spectrometer that dramatically increases the rate at which precise and accurate subnanosecond-resolved fluorescence emission waveforms can be acquired in response to pulsed excitation. The key features of this instrument are an intense (1 μJ/pulse), high-repetition-rate (10 kHz), short-pulse (1 ns full width at half maximum) laser excitation source and a transient digitizer (0.125 ns per time point) that records a complete and accurate fluorescence decay curve for every laser pulse. For a typical fluorescent sample containing a few nanomoles of dye, a waveform with a signal/noise of about 100 can be acquired in response to a single laser pulse every 0.1 ms, at least 10^5 times faster than the conventional method of time-correlated single photon counting, with equal accuracy and precision in lifetime determination for lifetimes as short as 100 ps. Using standard single-lifetime samples, the detected signals are extremely reproducible, with waveform precision and linearity to within 1% error for single-pulse experiments. Waveforms acquired in 0.1 s (1000 pulses) with the HPTRF instrument were of sufficient precision to analyze two samples having different lifetimes, resolving minor components with high accuracy with respect to both lifetime and mole fraction. The instrument makes possible a new class of high-throughput time-resolved fluorescence experiments that should be especially powerful for biological applications, including transient kinetics, multidimensional fluorescence, and microplate formats.
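
For a single-lifetime sample, the decay I(t) = I0·exp(-t/τ) is linear in log space, so τ can be recovered by least squares on ln I. This sketch (not the instrument's actual analysis pipeline) uses noiseless toy data at the digitizer's 0.125 ns spacing:

```python
import math

def fit_lifetime(times_ns, intensities):
    """Least-squares fit of ln(I) = ln(I0) - t/tau; returns tau in ns."""
    logs = [math.log(v) for v in intensities]
    n = len(times_ns)
    mt = sum(times_ns) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times_ns, logs))
             / sum((t - mt) ** 2 for t in times_ns))
    return -1.0 / slope

tau_true = 4.0  # ns
ts = [i * 0.125 for i in range(80)]                 # 10 ns window
decay = [100.0 * math.exp(-t / tau_true) for t in ts]
print(round(fit_lifetime(ts, decay), 3))  # recovers 4.0 ns exactly
```

Real waveforms carry Poisson-like noise and possibly multiple components, for which nonlinear least squares or maximum-likelihood fitting is preferred over this log-linear shortcut.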

  15. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    PubMed

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and for the quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison of laboratories' competencies in peptide purity mass fraction assignment. More than 65 impurities were identified, confirmed, and accurately quantified using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.

  16. Characterization of in-flight performance of ion propulsion systems

    NASA Astrophysics Data System (ADS)

    Sovey, James S.; Rawlin, Vincent K.

    1993-06-01

    In-flight measurements of ion propulsion performance, ground test calibrations, and diagnostic performance measurements were reviewed. It was found that accelerometers provided the most accurate in-flight thrust measurements compared with four other methods that were surveyed. An experiment has also demonstrated that pre-flight alignment of the thrust vector was sufficiently accurate so that gimbal adjustments and use of attitude control thrusters were not required to counter disturbance torques caused by thrust vector misalignment. The effects of facility background pressure, facility enhanced charge-exchange reactions, and contamination on ground-based performance measurements are also discussed. Vacuum facility pressures for inert-gas ion thruster life tests and flight qualification tests will have to be less than 2 mPa to ensure accurate performance measurements.

  17. Characterization of in-flight performance of ion propulsion systems

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Rawlin, Vincent K.

    1993-01-01

    In-flight measurements of ion propulsion performance, ground test calibrations, and diagnostic performance measurements were reviewed. It was found that accelerometers provided the most accurate in-flight thrust measurements compared with four other methods that were surveyed. An experiment has also demonstrated that pre-flight alignment of the thrust vector was sufficiently accurate so that gimbal adjustments and use of attitude control thrusters were not required to counter disturbance torques caused by thrust vector misalignment. The effects of facility background pressure, facility enhanced charge-exchange reactions, and contamination on ground-based performance measurements are also discussed. Vacuum facility pressures for inert-gas ion thruster life tests and flight qualification tests will have to be less than 2 mPa to ensure accurate performance measurements.

  18. Highly sensitive piezo-resistive graphite nanoplatelet-carbon nanotube hybrids/polydimethylsilicone composites with improved conductive network construction.

    PubMed

    Zhao, Hang; Bai, Jinbo

    2015-05-13

    The construction of the internal conductive network depends on the microstructure of the conductive fillers, which determines the electrical performance of the composite. Here, we present advanced graphite nanoplatelet-carbon nanotube hybrids/polydimethylsilicone (GCHs/PDMS) composites with high piezo-resistive performance. GCH particles were synthesized by the catalyst chemical vapor deposition approach. The synthesized GCHs can be well dispersed in the matrix through a mechanical blending process. Due to the coupled structure of exfoliated GNPs and aligned CNTs, the flexible composite shows an ultralow percolation threshold (0.64 vol %) and high piezo-resistive sensitivity (gauge factor ∼10^3 and pressure sensitivity ∼0.6 kPa^-1). Slight finger motions can be detected and distinguished accurately using the composite film as a typical wearable sensor. These results indicate that designing the internal conductive network is a reasonable strategy for improving the piezo-resistive performance of composites.
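
The two quoted figures of merit are straightforward ratios: the gauge factor is the fractional resistance change per unit strain, and the pressure sensitivity is the fractional resistance change per unit pressure. A sketch with purely illustrative resistance readings (not data from the paper):

```python
def gauge_factor(r0_ohm, r_ohm, strain):
    """GF = (dR/R0) / strain -- the figure of merit quoted as ~10^3."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

def pressure_sensitivity(r0_ohm, r_ohm, dp_kpa):
    """S = (dR/R0) / dP in kPa^-1 -- quoted as ~0.6 kPa^-1."""
    return ((r_ohm - r0_ohm) / r0_ohm) / dp_kpa

# Illustrative readings: 1 % strain doubling the resistance gives GF = 100.
print(gauge_factor(1000.0, 2000.0, 0.01))                    # 100.0
print(round(pressure_sensitivity(1000.0, 1300.0, 0.5), 2))   # 0.6
```

A gauge factor near 10^3 thus means the relative resistance change is about a thousand times the applied strain, which is what makes slight finger motions resolvable.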

  19. Simultaneous determination of the HIV nucleoside analogue reverse transcriptase inhibitors lamivudine, didanosine, stavudine, zidovudine and abacavir in human plasma by reversed phase high performance liquid chromatography.

    PubMed

    Verweij-van Wissen, C P W G M; Aarnoutse, R E; Burger, D M

    2005-02-25

    A reversed phase high performance liquid chromatography method was developed for the simultaneous quantitative determination of the nucleoside reverse transcriptase inhibitors (NRTIs) lamivudine, didanosine, stavudine, zidovudine and abacavir in plasma. The method involved solid-phase extraction with Oasis MAX cartridges from plasma, followed by high performance liquid chromatography with a SymmetryShield RP 18 column and ultraviolet detection set at a wavelength of 260 nm. The assay was validated over the concentration range of 0.015-5 mg/l for all five NRTIs. The average accuracies for the assay were 92-102%, inter- and intra-day coefficients of variation (CV) were <2.5% and extraction recoveries were higher than 97%. This method proved to be simple, accurate and precise, and is currently in use in our laboratory for the quantitative analysis of NRTIs in plasma.

  20. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect networks in high-performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors, and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy-efficient, and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
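
An AWGR's cyclic wavelength routing is often summarized as output = (input + wavelength) mod N, so choosing the transmit wavelength selects the destination port, which is how contention moves into the wavelength domain. A sketch of that rule (one common convention; real devices differ in sign and channel offset):

```python
def awgr_output(input_port, wavelength_index, n_ports):
    """Cyclic AWGR routing rule: output = (input + wavelength) mod N.
    (One common convention; actual devices vary in sign and offset.)"""
    return (input_port + wavelength_index) % n_ports

N = 8
# From input port 3, each of the 8 wavelengths reaches a distinct output:
print([awgr_output(3, w, N) for w in range(N)])  # [3, 4, 5, 6, 7, 0, 1, 2]
```

Because every (input, wavelength) pair maps to a unique output, any input can reach any output without reconfiguring the passive device, only by tuning its transmitter.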

  1. The FCI on board MTG : optical design and performances

    NASA Astrophysics Data System (ADS)

    Ouaknine, J.; Viard, T.; Napierala, B.; Foerster, U.; Fray, S.; Hallibert, P.; Durand, Y.; Imperiali, S.; Pelouas, P.; Rodolfo, J.; Riguet, F.; Carel, J.-L.

    2017-11-01

    Meteosat Third Generation (MTG) is the next ESA Earth observation programme, dedicated to providing Europe with an operational satellite system able to support accurate prediction of meteorological phenomena until the late 2030s. The satellites will operate from geostationary orbit on a 3-axis stabilized platform. The main instrument, the Flexible Combined Imager (FCI), is currently under development by Thales Alenia Space France. It will continue the successful operation of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on Meteosat Second Generation (MSG) with improved performance. The instrument will provide full images of the Earth every 10 minutes in 16 spectral channels between 0.44 and 13.3 μm, with a ground resolution ranging from 0.5 km to 2 km. The FCI is composed of a telescope developed by Kayser-Threde, which includes a scan mirror for full Earth coverage, and a calibration mechanism with an embedded black body dedicated to accurate in-flight IR radiometric calibration. The image produced by the telescope is split into several spectral groups by a spectral separation assembly (SSA) using dichroic beamsplitters. The output beams are collimated to ease instrument integration before reaching the cryostat, inside which the cold optics (CO-I) focus the beams onto the IR detectors. The cold optics and IR detectors are accurately positioned inside a common cold plate to improve registration between spectral channels, and spectral filters are integrated on top of the detectors to achieve the required spectral selection. This article describes the FCI optical design and performance, focusing on the image quality needs, the high line-of-sight stability required, the spectral transmittance performance, and the stray-light rejection. The FCI will exhibit a significant improvement in performance with respect to MSG.

  2. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  3. A Modified Isotropic-Kinematic Hardening Model to Predict the Defects in Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Jin, Kai; Guo, Qun; Tao, Jie; Guo, Xun-zhong

    2017-11-01

    Numerical simulations of the tube hydroforming process for hollow crankshafts were conducted using the finite element method. A modified model integrating an isotropic-kinematic hardening model with a ductile criterion model was used to more accurately optimize process parameters such as internal pressure, feed distance, and friction coefficient. Subsequently, hydroforming experiments were performed based on the simulation results. The comparison between experimental and simulation results indicated that the prediction of tube deformation, cracking, and wrinkling was quite accurate for the tube hydroforming process. Finally, hollow crankshafts with high thickness uniformity were obtained, and the thickness distributions from the numerical and experimental results were in good agreement.

  4. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
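
The simplest of the calibration approaches mentioned, external calibration against pure standards, reduces to fitting a response line and inverting it for the unknown. A sketch with entirely illustrative concentrations and count rates:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical external calibration: standard concentrations (ug/L)
# versus instrument response (counts per second).
conc = [0.0, 1.0, 5.0, 10.0]
cps = [12.0, 1010.0, 5008.0, 10011.0]
a, b = fit_line(conc, cps)

sample_cps = 2500.0
print(round((sample_cps - b) / a, 2))  # concentration of the unknown, ug/L
```

Matrix-matched standards, internal standards, and isotope dilution refine this same idea to correct for matrix effects and drift, which is what establishes the traceability the review discusses.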

  5. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5), for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks: Infiniband, GigaBit Ethernet, Fast Ethernet, and a nearly uniform memory architecture in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512, and 2048 lipid molecules were used as test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  6. Recognizing Dynamic Faces in Malaysian Chinese Participants.

    PubMed

    Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D

    2016-03-01

    The high performance levels attained in face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter facial structure vital to the recognition of unfamiliar faces. Because of these inconsistencies in recognition performance, the current study developed stimuli that closely represent natural social situations to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth and then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differed from a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies.

  7. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
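
The time integrator is a 4th-order explicit Runge-Kutta scheme; the classic (non-low-storage) variant below illustrates the expected 4th-order convergence on a scalar test problem, which is the property that matters for the accuracy claim:

```python
import math

def rk4_step(f, t, y, dt):
    """One classic 4th-order Runge-Kutta step (the paper uses a
    low-storage variant; the order of accuracy is the same)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(n_steps):
    """Integrate y' = -y from y(0) = 1 to t = 1."""
    dt = 1.0 / n_steps
    y = 1.0
    for i in range(n_steps):
        y = rk4_step(lambda t, y: -y, i * dt, y, dt)
    return y

err_coarse = abs(solve(10) - math.exp(-1.0))
err_fine = abs(solve(20) - math.exp(-1.0))
print(round(err_coarse / err_fine, 1))  # close to 2**4 = 16
```

Halving the step size cuts the error by roughly 2^4, confirming 4th-order accuracy; low-storage variants achieve the same order while keeping only two registers per unknown, which matters for large 3D wave-propagation grids.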

  8. Evaluation in Appalachian pasture systems of the 1996 (update 2000) National Research Council model for weaning cattle.

    PubMed

    Whetsell, M S; Rayburn, E B; Osborne, P I

    2006-05-01

    This study was conducted to evaluate the accuracy of the National Research Council's (2000) Nutrient Requirements of Beef Cattle computer model when used to predict calf performance during on-farm pasture or dry-lot weaning and backgrounding. Calf performance was measured on 22 farms in 2002 and 8 farms in 2003 that participated in West Virginia Beef Quality Assurance Sale marketing pools. Calves were weaned on pasture (25 farms) or dry-lot (5 farms) and fed supplemental hay, haylage, ground shell corn, soybean hulls, or a commercial concentrate. Concentrates were fed at a rate of 0.0 to 1.5% of BW. The National Research Council (2000) model was used to predict ADG of each group of calves observed on each farm. The model error was measured by calculating residuals (the difference between predicted ADG minus observed ADG). Predicted animal performance was determined using level 1 of the model. Results show that, when using normal on-farm pasture sampling and forage analysis methods, the model error for ADG is high and did not accurately predict the performance of steers or heifers fed high-forage pasture-based diets; the predicted ADG was lower (P < 0.05) than the observed ADG. The estimated intake of low-producing animals was similar to the expected DMI, but for the greater-producing animals it was not. The NRC (2000) beef model may more accurately predict on-farm animal performance in pastured situations if feed analysis values reflect the energy value of the feed, account for selective grazing, and relate empty BW and shrunk BW to NDF.

  9. Integration of Geomatics Techniques for Digitizing Highly Relevant Geological and Cultural Heritage Sites: the Case of San Leo (italy)

    NASA Astrophysics Data System (ADS)

    Girelli, V. A.; Borgatti, L.; Dellapasqua, M.; Mandanici, E.; Spreafico, M. C.; Tini, M. A.; Bitelli, G.

    2017-08-01

    The research activities described in this contribution were carried out at San Leo (Italy). The town is located on top of a quadrangular rock slab affected by a complex system of fractures and has a wealth of cultural heritage, as evidenced by its UNESCO nomination. The management of this fragile setting requires a comprehensive system of geometrical information to analyse and preserve all the geological and cultural features. In this perspective, the latest Geomatics techniques were used to perform detailed surveys and to manage the great amount of geometrical knowledge acquired about both the natural (the cliff) and the historical heritage. All the data were also georeferenced in a unique reference system. In particular, highly accurate terrestrial laser scanner surveys were performed for the whole cliff, in order to obtain a dense point cloud useful for a large number of geological studies, among them the analysis of the last rockslide by comparing pre- and post-event data. Moreover, the geometrical representation of the historical centre was performed using different approaches, in order to generate an accurate DTM and DSM of the site. For these purposes, a large-scale numerical map was used, integrating the data with GNSS and laser surveys of the area. Finally, many surveys were performed with different approaches on some of the most relevant monuments of the town: terrestrial laser scanning, structured-light scanning, and photogrammetry, the last mainly applied with the Structure from Motion approach.

  10. PASTA: splice junction identification from RNA-Sequencing data

    PubMed Central

    2013-01-01

    Background Next generation transcriptome sequencing (RNA-Seq) is emerging as a powerful experimental tool for the study of alternative splicing and its regulation, but requires ad-hoc analysis methods and tools. PASTA (Patterned Alignments for Splicing and Transcriptome Analysis) is a splice junction detection algorithm specifically designed for RNA-Seq data, relying on a highly accurate alignment strategy and on a combination of heuristic and statistical methods to identify exon-intron junctions with high accuracy. Results Comparisons against TopHat and other splice junction prediction software on real and simulated datasets show that PASTA exhibits high specificity and sensitivity, especially at lower coverage levels. Moreover, PASTA is highly configurable and flexible, and can therefore be applied in a wide range of analysis scenarios: it is able to handle both single-end and paired-end reads, it does not rely on the presence of canonical splicing signals, and it uses organism-specific regression models to accurately identify junctions. Conclusions PASTA is a highly efficient and sensitive tool to identify splicing junctions from RNA-Seq data. Compared to similar programs, it has the ability to identify a higher number of real splicing junctions, and provides highly annotated output files containing detailed information about their location and characteristics. Accurate junction data in turn facilitates the reconstruction of the splicing isoforms and the analysis of their expression levels, which will be performed by the remaining modules of the PASTA pipeline, still under development. Use of PASTA can therefore enable the large-scale investigation of transcription and alternative splicing. PMID:23557086

  11. Compact and Hybrid Feature Description for Building Extraction

    NASA Astrophysics Data System (ADS)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

    Building extraction from aerial orthophotos is crucial for various applications. Deep learning has been shown to address building extraction with high accuracy and robustness; however, a large number of samples is required to train a deep learning classifier. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we propose a compact and hybrid feature description method that guarantees desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of the color channels, this descriptor is not only computationally frugal but also more accurate than SURF for building extraction.
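
A descriptor built from binary intensity tests can be sketched BRIEF-style: one bit per point-pair comparison, with Hamming distance for matching. This toy version does not reproduce the paper's four test sets or its use of color channels:

```python
def binary_descriptor(patch, test_pairs):
    """One bit per test: 1 if patch[p] < patch[q], else 0.
    patch: 2D list of intensities; test_pairs: [((r1,c1),(r2,c2)), ...]."""
    bits = 0
    for (r1, c1), (r2, c2) in test_pairs:
        bits = (bits << 1) | (1 if patch[r1][c1] < patch[r2][c2] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two descriptors."""
    return bin(a ^ b).count("1")

pairs = [((0, 0), (1, 1)), ((0, 1), (1, 0)), ((1, 1), (0, 0))]
bright_corner = [[200, 90], [80, 70]]
dark_corner = [[10, 90], [80, 70]]
d1 = binary_descriptor(bright_corner, pairs)
d2 = binary_descriptor(dark_corner, pairs)
print(d1, d2, hamming(d1, d2))  # 1 4 2
```

Because the description is a bit string, both construction and matching reduce to comparisons and XOR/popcount, which is what makes such descriptors "computationally frugal".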

  12. A new head phantom with realistic shape and spatially varying skull resistivity distribution.

    PubMed

    Li, Jian-Bo; Tang, Chi; Dai, Meng; Liu, Geng; Shi, Xue-Tao; Yang, Bin; Xu, Can-Hua; Fu, Feng; You, Fu-Sheng; Tang, Meng-Xing; Dong, Xiu-Zhen

    2014-02-01

    Brain electrical impedance tomography (EIT) is an emerging method for monitoring brain injuries. To effectively evaluate brain EIT systems and reconstruction algorithms, we have developed a novel head phantom that features realistic anatomy and spatially varying skull resistivity. The head phantom was created with three layers, representing scalp, skull, and brain tissues. The fabrication process entailed 3-D printing of the anatomical geometry for mold creation followed by casting to ensure high geometrical precision and accuracy of the resistivity distribution. We evaluated the accuracy and stability of the phantom. Results showed that the head phantom achieved high geometric accuracy, accurate skull resistivity values, and good stability over time and in the frequency domain. Experimental impedance reconstructions performed using the head phantom and computer simulations were found to be consistent for the same perturbation object. In conclusion, this new phantom could provide a more accurate test platform for brain EIT research.

  13. snpAD: An ancient DNA genotype caller.

    PubMed

    Prüfer, Kay

    2018-06-21

The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
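snpAD's actual model additionally handles position-dependent ancient-DNA damage; purely as a rough sketch of the iterative idea in this record (alternating between genotype-frequency and error-rate estimates), assuming biallelic sites at uniform coverage and a single error rate:

```python
import random

def em_genotypes(alt_counts, coverage, n_iter=100):
    """Jointly estimate genotype frequencies (hom-ref, het, hom-alt) and a
    single base-error rate from per-site alt-allele counts via EM."""
    f = [1.0 / 3] * 3   # genotype frequency estimates
    e = 0.01            # error-rate estimate
    for _ in range(n_iter):
        post = []
        for k in alt_counts:                     # E-step: genotype posteriors
            p_alt = [e, 0.5, 1.0 - e]            # P(alt base | genotype)
            lik = [f[g] * p_alt[g] ** k * (1 - p_alt[g]) ** (coverage - k)
                   for g in range(3)]
            s = sum(lik)
            post.append([l / s for l in lik])
        # M-step: re-estimate frequencies and the error rate.
        f = [sum(p[g] for p in post) / len(post) for g in range(3)]
        mismatches = sum(p[0] * k + p[2] * (coverage - k)
                         for p, k in zip(post, alt_counts))
        hom_reads = sum((p[0] + p[2]) * coverage for p in post)
        e = max(mismatches / hom_reads, 1e-6)
    return f, e
```

Errors are only informative at (posterior) homozygous sites in this toy model, which is why the M-step for `e` counts alt reads at hom-ref sites and ref reads at hom-alt sites.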

  14. Versatile mid-infrared frequency-comb referenced sub-Doppler spectrometer

    NASA Astrophysics Data System (ADS)

    Gambetta, A.; Vicentini, E.; Coluccelli, N.; Wang, Y.; Fernandez, T. T.; Maddaloni, P.; De Natale, P.; Castrillo, A.; Gianfrani, L.; Laporta, P.; Galzerano, G.

    2018-04-01

We present a mid-IR high-precision spectrometer capable of performing accurate Doppler-free measurements with absolute calibration of the optical axis and high signal-to-noise ratio. The system is based on a widely tunable mid-IR offset-free frequency comb and a Quantum-Cascade-Laser (QCL). The QCL emission frequency is offset-locked to one of the comb teeth to provide absolute-frequency calibration, spectral narrowing, and accurate fine frequency tuning. Both the comb repetition frequency and the QCL-comb offset frequency can be modulated to provide, respectively, slow- and fast-frequency-calibrated scanning capabilities. The characterisation of the spectrometer is demonstrated by recording sub-Doppler saturated absorption features of the CHF3 molecule at around 8.6 μm with a maximum signal-to-noise ratio of ~7 × 10^3 in a 10 s integration time, a frequency resolution of 160 kHz, and an accuracy of better than 10 kHz.

  15. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  16. Optimal Design of Experiments by Combining Coarse and Fine Measurements

    NASA Astrophysics Data System (ADS)

    Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.

    2017-11-01

    In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
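The strategy in the record above can be caricatured in a few lines (a sketch under simplified assumptions, not the authors' exact estimator): the many coarse, thresholded labels rank candidate features, and the few fine, real-valued measurements fit weights for the top-ranked ones.

```python
import numpy as np

def coarse_fine_fit(X_coarse, y_binary, X_fine, y_fine, n_keep=5, lam=1e-2):
    """Stage 1: rank features by |correlation| with the coarse (thresholded)
    labels.  Stage 2: ridge-fit weights for the kept features on the small
    set of fine (real-valued) measurements."""
    yc = y_binary - y_binary.mean()
    score = np.abs(X_coarse.T @ yc) / (np.linalg.norm(X_coarse, axis=0) + 1e-12)
    keep = np.argsort(score)[-n_keep:]           # salient features from coarse data
    A = X_fine[:, keep]
    w = np.linalg.solve(A.T @ A + lam * np.eye(n_keep), A.T @ y_fine)
    return keep, w
```

The point of the two stages is sample efficiency: the cheap binary labels absorb the feature-selection burden, so the expensive measurements only need to resolve a handful of weights.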

  17. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, with semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  18. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, with semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
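Setting the full co-kriging machinery aside, the two-level idea in the two records above (a cheap surrogate plus a learned discrepancy, in the spirit of the Kennedy-O'Hagan construction) can be sketched with kernel ridge regression standing in for the Gaussian processes; the class name, kernel, and parameter values are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

class TwoFidelityModel:
    """y_hi(x) ~ rho * f_lo(x) + delta(x), each level fit by kernel ridge."""
    def __init__(self, ell=1.0, lam=1e-6):
        self.ell, self.lam = ell, lam

    def fit(self, X_lo, y_lo, X_hi, y_hi):
        self.X_lo, self.X_hi = X_lo, X_hi
        K = rbf(X_lo, X_lo, self.ell) + self.lam * np.eye(len(X_lo))
        self.a_lo = np.linalg.solve(K, y_lo)
        f_lo = rbf(X_hi, X_lo, self.ell) @ self.a_lo   # cheap model at expensive points
        self.rho = float(f_lo @ y_hi) / float(f_lo @ f_lo)  # scale between fidelities
        K2 = rbf(X_hi, X_hi, self.ell) + self.lam * np.eye(len(X_hi))
        self.a_hi = np.linalg.solve(K2, y_hi - self.rho * f_lo)  # discrepancy fit

    def predict(self, X):
        return (self.rho * rbf(X, self.X_lo, self.ell) @ self.a_lo
                + rbf(X, self.X_hi, self.ell) @ self.a_hi)
```

The many low-fidelity samples pin down the overall shape, while the few high-fidelity samples only need to correct the (typically smoother) discrepancy; a full co-kriging treatment would additionally propagate predictive uncertainty.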

  19. Behavioral and fMRI evidence of the differing cognitive load of domain-specific assessments.

    PubMed

    Howard, S J; Burianová, H; Ehrich, J; Kervin, L; Calleia, A; Barkus, E; Carmody, J; Humphry, S

    2015-06-25

Standards-referenced educational reform has increased the prevalence of standardized testing; however, whether these tests accurately measure students' competencies has been questioned. This may be due to domain-specific assessments placing a differing domain-general cognitive load on test-takers. To investigate this possibility, functional magnetic resonance imaging (fMRI) was used to identify and quantify the neural correlates of performance on current, international standardized methods of spelling assessment. Out-of-scanner testing was used to further examine differences in assessment results. Results provide converging evidence that: (a) the spelling assessments differed in the cognitive load placed on test-takers; (b) performance decreased with increasing cognitive load of the assessment; and (c) brain regions associated with working memory were more highly activated during performance of assessments that were higher in cognitive load. These findings suggest that assessment design should optimize the cognitive load placed on test-takers, to ensure students' results are an accurate reflection of their true levels of competency. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. An Efficient and Effective Design of InP Nanowires for Maximal Solar Energy Harvesting.

    PubMed

    Wu, Dan; Tang, Xiaohong; Wang, Kai; He, Zhubing; Li, Xianqiang

    2017-11-25

Solar cells based on subwavelength-dimension semiconductor nanowire (NW) arrays promise comparable or better performance than their planar counterparts by taking advantage of strong light coupling and light trapping. In this paper, we present an accurate and time-saving analytical design for optimal geometrical parameters of vertically aligned InP NWs for maximal solar energy absorption. Short-circuit current densities are calculated for each NW array with different geometrical dimensions under solar illumination. Optimal geometrical dimensions are presented quantitatively for single, double, and multiple diameters of the NW arrays, arranged both squarely and hexagonally, achieving a maximal short-circuit current density of 33.13 mA/cm2. At the same time, intensive finite-difference time-domain numerical simulations are performed to investigate the same NW arrays for the highest light absorption. Compared with time-consuming simulations and experimental results, the predicted maximal short-circuit current densities have tolerances below 2.2% for all cases. These results unambiguously demonstrate that this analytical method provides a fast and accurate route to guide high-performance InP NW-based solar cell design.

  1. An Efficient and Effective Design of InP Nanowires for Maximal Solar Energy Harvesting

    NASA Astrophysics Data System (ADS)

    Wu, Dan; Tang, Xiaohong; Wang, Kai; He, Zhubing; Li, Xianqiang

    2017-11-01

Solar cells based on subwavelength-dimension semiconductor nanowire (NW) arrays promise comparable or better performance than their planar counterparts by taking advantage of strong light coupling and light trapping. In this paper, we present an accurate and time-saving analytical design for optimal geometrical parameters of vertically aligned InP NWs for maximal solar energy absorption. Short-circuit current densities are calculated for each NW array with different geometrical dimensions under solar illumination. Optimal geometrical dimensions are presented quantitatively for single, double, and multiple diameters of the NW arrays, arranged both squarely and hexagonally, achieving a maximal short-circuit current density of 33.13 mA/cm2. At the same time, intensive finite-difference time-domain numerical simulations are performed to investigate the same NW arrays for the highest light absorption. Compared with time-consuming simulations and experimental results, the predicted maximal short-circuit current densities have tolerances below 2.2% for all cases. These results unambiguously demonstrate that this analytical method provides a fast and accurate route to guide high-performance InP NW-based solar cell design.
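The short-circuit current densities quoted in the two records above follow from weighting the computed absorptance by the incident photon flux, J_sc = q ∫ A(λ) Φ(λ) dλ. A generic numerical sketch (the spectrum values in the test are flat placeholders, not AM1.5 data):

```python
Q_E = 1.602176634e-19  # elementary charge, C

def short_circuit_current(wavelength_nm, absorptance, photon_flux):
    """J_sc = q * integral of A(lambda) * Phi(lambda) d(lambda), by the
    trapezoidal rule.  photon_flux is in photons m^-2 s^-1 nm^-1 and the
    result is in A m^-2 (divide by 10 for mA/cm^2)."""
    j = 0.0
    for i in range(len(wavelength_nm) - 1):
        dlam = wavelength_nm[i + 1] - wavelength_nm[i]
        avg = 0.5 * (absorptance[i] * photon_flux[i]
                     + absorptance[i + 1] * photon_flux[i + 1])
        j += Q_E * avg * dlam
    return j
```

In practice A(λ) would come from the FDTD or analytical model of the NW array and Φ(λ) from the tabulated AM1.5G spectrum, integrated up to the InP band edge.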

  2. Performance of the upgraded LTP-II at the ALS Optical Metrology Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Advanced Light Source; Yashchuk, Valeriy V; Kirschman, Jonathan L.

    2008-07-14

The next generation of synchrotron and free electron laser facilities requires x-ray optical systems with extremely high performance, generally of diffraction-limited quality. Fabrication and use of such optics require adequate, highly accurate metrology and dedicated instrumentation. Previously, we suggested ways to improve the performance of the Long Trace Profiler (LTP), a slope-measuring instrument widely used to characterize x-ray optics at long spatial wavelengths. The main improvement is the use of a CCD detector and a corresponding technique for calibration of photo-response non-uniformity [J. L. Kirschman, et al., Proceedings of SPIE 6704, 67040J (2007)]. The present work focuses on the performance and characteristics of the upgraded LTP-II at the ALS Optical Metrology Laboratory. This includes a review of the overall aspects of the design, the control system, the movement and measurement regimes for the stage, and an analysis of the performance by a slope measurement of a highly curved super-quality substrate with less than 0.3 microradian (rms) slope variation.

  3. Hybrid Network Architectures for the Next Generation NAS

    NASA Technical Reports Server (NTRS)

    Madubata, Christian

    2003-01-01

To meet the needs of the 21st Century NAS, an integrated, network-centric infrastructure is essential, characterized by secure, high-bandwidth digital communication systems that support precision navigation capable of reducing position errors for all aircraft to within a few meters. This system will also require precision surveillance systems capable of accurately locating all aircraft and automatically detecting any deviation from an approved path within seconds, and it must deliver high-resolution weather forecasts, critical for creating 4-dimensional (space and time) profiles for up to 6 hours for all atmospheric conditions affecting aviation, including wake vortices. The 21st Century NAS will be characterized by highly accurate digital databases depicting terrain, obstacle, and airport information no matter what visibility conditions exist. This research task will be to perform a high-level requirements analysis of the applications, information, and services required by the next generation National Airspace System. The investigation and analysis is expected to lead to the development and design of several national network-centric communications architectures capable of supporting the Next Generation NAS.

  4. Advanced Space Propulsion System Flowfield Modeling

    NASA Technical Reports Server (NTRS)

    Smith, Sheldon

    1998-01-01

Solar thermal upper stage propulsion systems currently under development utilize small, low-chamber-pressure/high-area-ratio nozzles. Consequently, the resulting flow in the nozzle is highly viscous, with the boundary layer flow comprising a significant fraction of the total nozzle flow area. Conventional uncoupled flow methods, which treat the nozzle boundary layer and inviscid flowfield separately and combine the two calculations via the influence of the boundary layer displacement thickness on the inviscid flowfield, are not accurate enough to adequately treat highly viscous nozzles. Navier-Stokes models such as VNAP2 can treat these flowfields but cannot perform a vacuum plume expansion for applications where the exhaust plume produces induced environments on adjacent structures. This study built upon recently developed artificial intelligence methods and user interface methodologies to couple the VNAP2 model for treating viscous nozzle flowfields with a vacuum plume flowfield model (RAMP2) that is currently a part of the Plume Environment Prediction (PEP) Model. This study integrated the VNAP2 code into the PEP model to produce an accurate, practical, and user-friendly tool for calculating highly viscous nozzle and exhaust plume flowfields.

  5. Sensing and Active Flow Control for Advanced BWB Propulsion-Airframe Integration Concepts

    NASA Technical Reports Server (NTRS)

    Fleming, John; Anderson, Jason; Ng, Wing; Harrison, Neal

    2005-01-01

    In order to realize the substantial performance benefits of serpentine boundary layer ingesting diffusers, this study investigated the use of enabling flow control methods to reduce engine-face flow distortion. Computational methods and novel flow control modeling techniques were utilized that allowed for rapid, accurate analysis of flow control geometries. Results were validated experimentally using the Techsburg Ejector-based wind tunnel facility; this facility is capable of simulating the high-altitude, high subsonic Mach number conditions representative of BWB cruise conditions.

  6. Modeling the behavior of an earthquake base-isolated building.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coveney, V. A.; Jamil, S.; Johnson, D. E.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  7. Combinatorial FSK modulation for power-efficient high-rate communications

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Budinger, James M.; Vanderaar, Mark J.

    1991-01-01

    Deep-space and satellite communications systems must be capable of conveying high-rate data accurately with low transmitter power, often through dispersive channels. A class of noncoherent Combinatorial Frequency Shift Keying (CFSK) modulation schemes is investigated which address these needs. The bit error rate performance of this class of modulation formats is analyzed and compared to the more traditional modulation types. Candidate modulator, demodulator, and digital signal processing (DSP) hardware structures are examined in detail. System-level issues are also discussed.

  8. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  9. Progeny testing: proceedings of servicewide genetics workshop

    Treesearch

    Dick Miller

    1984-01-01

The primary objective of this workshop was to discuss in detail the state-of-the-art of progeny testing. All aspects, from setting objectives through data collection and analysis, were covered. We all know progeny testing is a highly technical phase of our tree improvement programs. Each task is critical and must be performed accurately and within a prescribed time...

  10. Characterization of beryllium deformation using in-situ x-ray diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnuson, Eric Alan; Brown, Donald William; Clausen, Bjorn

    2015-08-24

Beryllium’s unique mechanical properties are extremely important in a number of high performance applications. Consequently, accurate models for the mechanical behavior of beryllium are required. However, current models are not sufficiently microstructure-aware to accurately predict the performance of beryllium under a range of processing and loading conditions. Previous experiments conducted using the SMARTS and HIPPO instruments at the Lujan Center (LANL) have studied the relationship between strain rate and texture development, but due to the limitations of neutron diffraction studies, it was not possible to measure the response of the material in real time. In-situ diffraction experiments conducted at the Advanced Photon Source have allowed the real-time measurement of the mechanical response of compressed beryllium. Samples of pre-strained beryllium were reloaded orthogonal to their original load path to show the reorientation of already twinned grains. Additionally, the in-situ experiments allowed the real-time tracking of twin evolution in beryllium strained at high rates. The data gathered during these experiments will be used in the development and validation of a new, microstructure-aware model of the constitutive behavior of beryllium.

  11. Shape Sensing Techniques for Continuum Robots in Minimally Invasive Surgery: A Survey.

    PubMed

    Shi, Chaoyang; Luo, Xiongbiao; Qi, Peng; Li, Tianliang; Song, Shuang; Najdovski, Zoran; Fukuda, Toshio; Ren, Hongliang

    2017-08-01

Continuum robots provide inherent structural compliance with high dexterity to access surgical target sites along tortuous anatomical paths under constrained environments, and enable complex and delicate operations to be performed through small incisions in minimally invasive surgery. These advantages enable their broad application with minimal trauma and make challenging clinical procedures possible with miniaturized instrumentation and high curvilinear access capabilities. However, their inherently deformable designs make it difficult to realize 3-D intraoperative real-time shape sensing to accurately model their shape. Solutions to this limitation would in turn advance the closely associated techniques of closed-loop control, path planning, human-robot interaction, and surgical manipulation safety in minimally invasive surgery. Although extensive model-based research that relies on kinematics and mechanics has been performed, accurate shape sensing of continuum robots remains challenging, particularly in cases of unknown and dynamic payloads. This survey investigates recent advances in alternative emerging techniques for 3-D shape sensing in this field and focuses on the following categories: fiber-optic-sensor-based, electromagnetic-tracking-based, and intraoperative-imaging-modality-based shape-reconstruction methods. The limitations of existing technologies and prospects of new technologies are also discussed.

  12. Nanouric acid or nanocalcium phosphate as central nidus to induce calcium oxalate stone formation: a high-resolution transmission electron microscopy study on urinary nanocrystallites

    PubMed Central

    Gao, Jie; Xue, Jun-Fa; Xu, Meng; Gui, Bao-Song; Wang, Feng-Xin; Ouyang, Jian-Ming

    2014-01-01

Purpose This study aimed to accurately analyze the relationship between calcium oxalate (CaOx) stone formation and the components of urinary nanocrystallites. Methods High-resolution transmission electron microscopy (HRTEM), selected area electron diffraction, fast Fourier transformation of HRTEM, and energy dispersive X-ray spectroscopy were performed to analyze the components of these nanocrystallites. Results The main components of CaOx stones are calcium oxalate monohydrate and a small amount of dihydrate, while those of urinary nanocrystallites are calcium oxalate monohydrate, uric acid, and calcium phosphate. The mechanism of formation of CaOx stones was discussed based on the components of urinary nanocrystallites. Conclusion The formation of CaOx stones is closely related both to the properties of urinary nanocrystallites and to the urinary components. The combination of HRTEM, fast Fourier transformation, selected area electron diffraction, and energy dispersive X-ray spectroscopy enables accurate analysis of the components of single urinary nanocrystallites. This result provides evidence for nanouric acid and/or nanocalcium phosphate crystallites as the central nidus inducing CaOx stone formation. PMID:25258530

  13. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.

  14. NPAC-Nozzle Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, which permits fast and accurate calculations of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under-expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
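One ingredient of any such quasi-1-D gross-thrust methodology is inverting the isentropic area-Mach relation for the exit state, which the thrust calculation then couples with continuity and momentum. A minimal sketch (bisection for the supersonic branch; the function name and γ = 1.4 default are assumptions, not NPAC's actual routine):

```python
def exit_mach(area_ratio, gamma=1.4):
    """Supersonic root of the isentropic area-Mach relation
    A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)]^((g+1)/(2(g-1))),
    found by bisection."""
    def f(m):
        t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * m * m)
        return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / m - area_ratio
    lo, hi = 1.0001, 50.0   # bracket the supersonic branch
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With the exit Mach number in hand, gross thrust follows from mdot * V_e + (p_e - p_a) * A_e once the energy and state relations supply the exit velocity and pressure.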

  15. Simulation, measurement, and emulation of photovoltaic modules using high frequency and high power density power electronic circuits

    NASA Astrophysics Data System (ADS)

    Erkaya, Yunus

    The number of solar photovoltaic (PV) installations is growing exponentially, and to improve the energy yield and the efficiency of PV systems, it is necessary to have correct methods for simulation, measurement, and emulation. PV systems can be simulated using PV models for different configurations and technologies of PV modules. Additionally, different environmental conditions of solar irradiance, temperature, and partial shading can be incorporated in the model to accurately simulate PV systems for any given condition. The electrical measurement of PV systems both prior to and after making electrical connections is important for attaining high efficiency and reliability. Measuring PV modules using a current-voltage (I-V) curve tracer allows the installer to know whether the PV modules are 100% operational. The installed modules can be properly matched to maximize performance. Once installed, the whole system needs to be characterized similarly to detect mismatches, partial shading, or installation damage before energizing the system. This will prevent any reliability issues from the onset and ensure the system efficiency will remain high. A capacitive load is implemented in making I-V curve measurements with the goal of minimizing the curve tracer volume and cost. Additionally, the increase of measurement resolution and accuracy is possible via the use of accurate voltage and current measurement methods and accurate PV models to translate the curves to standard testing conditions. A move from mechanical relays to solid-state MOSFETs improved system reliability while significantly reducing device volume and costs. Finally, emulating PV modules is necessary for testing electrical components of a PV system. PV emulation simplifies and standardizes the tests allowing for different irradiance, temperature and partial shading levels to be easily tested. 
Proper emulation of PV modules requires an accurate and mathematically simple PV model that incorporates all known system variables so that any PV module can be emulated as the design requires. A non-synchronous buck converter is proposed for the emulation of a single, high-power PV module using traditional silicon devices. With the proof-of-concept working and improvements in efficiency, power density and steady-state errors made, dynamic tests were performed using an inverter connected to the PV emulator. In order to improve the dynamic characteristics, a synchronous buck converter topology is proposed along with the use of advanced GaNFET devices which resulted in very high power efficiency and improved dynamic response characteristics when emulating PV modules.
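An emulator of the kind described above needs a reference I-V model for its converter to track; a common choice is the implicit single-diode equation, sketched here with bisection and illustrative (assumed) module parameters rather than the dissertation's actual model:

```python
import math

def pv_current(v, iph=8.0, i0=1e-9, n=1.3, ns=60, rs=0.2, rsh=300.0,
               vt_cell=0.025693):
    """Solve i = iph - i0*(exp((v + i*rs)/vt) - 1) - (v + i*rs)/rsh for the
    module current at terminal voltage v (single-diode model, 60 cells;
    all parameter values are placeholders)."""
    vt = n * ns * vt_cell          # module-level thermal voltage
    def f(i):
        vd = v + i * rs            # internal junction voltage
        return iph - i0 * (math.exp(vd / vt) - 1.0) - vd / rsh - i
    lo, hi = -iph, iph + 1.0       # current bracket; f is decreasing in i
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Sweeping v through this function generates the I-V curve the buck converter's control loop would regulate toward, and irradiance or partial-shading changes map onto changes in iph per substring.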

  16. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. 
The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
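    As a toy illustration of the detector-effects portion of such a sensor model, the sketch below applies shot noise, read noise, and quantization to an ideal photon image (hypothetical parameter values; the other listed effects, such as blooming and fixed-pattern noise, are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(photons, read_noise_e=5.0, gain_e_per_adu=2.0, full_well_adu=4095):
    """Simplified detector chain: Poisson shot noise, Gaussian read noise,
    then quantization and clipping to digital numbers (ADU)."""
    electrons = rng.poisson(photons).astype(float)               # shot noise
    electrons += rng.normal(0.0, read_noise_e, photons.shape)    # read noise
    adu = np.clip(np.round(electrons / gain_e_per_adu), 0, full_well_adu)
    return adu.astype(int)
```

    For a uniform 1000-photon frame the output mean lands near 500 ADU with roughly 16 ADU of per-pixel scatter, dominated by shot noise.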

  17. High-resolution, detailed simulations of low foot and high foot implosion experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Clark, Daniel

    2015-11-01

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3-D) character of the flow, accurately modeling NIF implosions remains at the edge of current radiation hydrodynamics simulation capabilities. This talk describes the current state of progress of 3-D, high-resolution, capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. Most importantly, it is found that a single, standard simulation methodology appears adequate to model both implosion types and gives confidence that such a model can be used to guide future implosion designs toward ignition. This work performed under the auspices of the U.S. 
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Sonic Thermometer for High-Altitude Balloons

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    The sonic thermometer is a specialized application of well-known sonic anemometer technology. Adaptations have been made to the circuit, including the addition of supporting sensors, which enable its use in the high-altitude environment and in non-air gas mixtures. There is a need to measure gas temperatures inside and outside of superpressure balloons that are flown at high altitudes. These measurements will allow the performance of the balloon to be modeled more accurately, leading to better flight performance. Small thermistors (solid-state temperature sensors) have been used for this general purpose, and for temperature measurements on radiosondes. A disadvantage of thermistors and other physical (as distinct from sonic) temperature sensors is that they are subject to solar heating errors when exposed to the Sun, which leads to issues with their use in a very high-altitude environment.
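    The principle fits in a few lines: for an ideal gas the sound speed satisfies c = sqrt(gamma * R * T / M), so a measured speed of sound gives the gas temperature directly. The constants below are for dry air; for non-air mixtures, substitute gamma and the molar mass.

```python
def sonic_temperature(c_m_per_s, gamma=1.4, molar_mass_kg=0.028964):
    """Invert c = sqrt(gamma * R * T / M) for the gas temperature in kelvin.
    Defaults approximate dry air."""
    R = 8.314462618  # universal gas constant, J/(mol*K)
    return c_m_per_s ** 2 * molar_mass_kg / (gamma * R)
```

    A measured sound speed of about 343 m/s maps to roughly 293 K, i.e. room temperature.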

  19. Post Launch Calibration and Testing of the Advanced Baseline Imager on the GOES-R Satellite

    NASA Technical Reports Server (NTRS)

    Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.

    2016-01-01

    The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States' National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible through infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout mission life. A series of tests has been planned to establish the post launch performance and determine the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel required to provide the needed temporal coverage presents unique challenges for accurately calibrating ABI and minimizing striping. This paper discusses the planned tests to be performed on ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.

  20. Post Launch Calibration and Testing of the Advanced Baseline Imager on the GOES-R Satellite

    NASA Technical Reports Server (NTRS)

    Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.

    2016-01-01

    The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible through infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout mission life. A series of tests has been planned to establish the post launch performance and determine the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel required to provide the needed temporal coverage presents unique challenges for accurately calibrating ABI and minimizing striping. This paper discusses the planned tests to be performed on ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.

  1. Post launch calibration and testing of the Advanced Baseline Imager on the GOES-R satellite

    NASA Astrophysics Data System (ADS)

    Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.

    2016-05-01

    The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States' National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible through infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout mission life. A series of tests has been planned to establish the post launch performance and determine the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel required to provide the needed temporal coverage presents unique challenges for accurately calibrating ABI and minimizing striping. This paper discusses the planned tests to be performed on ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.

  2. An improved method for characterizing photoresist lithographic and defectivity performance for sub-20nm node lithography

    NASA Astrophysics Data System (ADS)

    Amblard, Gilles; Purdy, Sara; Cooper, Ryan; Hockaday, Marjory

    2016-03-01

    The overall quality and processing capability of lithographic materials are critical for ensuring high device yield and performance at sub-20nm technology nodes in a high volume manufacturing environment. Insufficient process margin and high line width roughness (LWR) cause poor manufacturing control, while high defectivity causes product failures. In this paper, we focus on the most critical layer of a sub-20nm technology node LSI device, and present an improved method for characterizing both lithographic and post-patterning defectivity performance of state-of-the-art immersion photoresists. Multiple formulations from different suppliers were used and compared. Photoresists were tested under various process conditions, and multiple lithographic metrics were investigated (depth of focus, exposure dose latitude, line width roughness, etc.). Results were analyzed and combined using an innovative approach based on advanced software, providing clearer results than previously available. This increased detail enables more accurate performance comparisons among the different photoresists. Post-patterning defectivity was also quantified, with defects reviewed and classified using state-of-the-art inspection tools. Correlations were established between the lithographic and post-patterning defectivity performances for each material, and overall ranking was established among the photoresists, enabling the selection of the best performer for implementation in a high volume manufacturing environment.
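    One of the metrics above, line width roughness, is conventionally reported as three standard deviations of the local critical-dimension (CD) samples taken along a line; a minimal sketch:

```python
import numpy as np

def line_width_roughness(widths_nm):
    """LWR as 3-sigma of local line-width (CD) measurements, the usual
    reporting convention in lithography metrology."""
    return 3.0 * np.std(np.asarray(widths_nm, dtype=float), ddof=1)
```

    A perfectly uniform line gives zero LWR; any local CD variation scales the metric linearly with its standard deviation.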

  3. Performance analysis and optimization of high capacity pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Ghahremani, Amir R.; Saidi, M. H.; Jahanbakhshi, R.; Roshanghalb, F.

    The high capacity pulse tube refrigerator (HCPTR) is a new generation of cryocooler tailored to provide more than 250 W of cooling power at cryogenic temperatures. The most important characteristics of the HCPTR compared to other types of pulse tube refrigerators are a powerful pressure wave generator and an accurate design. In this paper, the influence of geometrical and operating parameters on the performance of a double inlet pulse tube refrigerator (DIPTR) is studied. The model is validated with the existing experimental data. As a result of this optimization, a new configuration of HCPTR is proposed. This configuration provides 335 W at 80 K cold end temperature with a frequency of 50 Hz and a COP of 0.05.
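    The reported figures can be checked against the definition COP = Q_cold / W_input and the Carnot limit T_c / (T_h - T_c); the 300 K ambient below is an assumed value.

```python
def cop(q_cold_w, w_input_w):
    """Coefficient of performance of a refrigerator."""
    return q_cold_w / w_input_w

def carnot_cop(t_cold_k, t_hot_k):
    """Thermodynamic upper bound on refrigerator COP."""
    return t_cold_k / (t_hot_k - t_cold_k)

# Reported: 335 W at 80 K with COP 0.05 -> about 6.7 kW of input power,
# i.e. roughly 14% of the Carnot COP assuming a 300 K ambient.
w_input = 335.0 / 0.05
fraction_of_carnot = 0.05 / carnot_cop(80.0, 300.0)
```
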

  4. Evaluation of wet tantalum capacitors after exposure to extended periods of ripple current, volume 1

    NASA Technical Reports Server (NTRS)

    Watson, G. W.; Lasharr, J. C.; Shumaker, M. J.

    1974-01-01

    The application of tantalum capacitors in the Viking Lander includes both dc voltage and ripple current electrical stress, high temperature during nonoperating times (sterilization), and high vibration and shock loads. The capacitors must survive these severe environments without any degradation if reliable performance is to be achieved. A test program was established to evaluate both wet-slug tantalum and wet-foil capacitors under conditions accurately duplicating actual Viking applications. Test results of the electrical performance characteristics during extended periods of ripple current, the characteristics of the internal silver migration as a function of extended periods of ripple current, and the existence of any memory characteristics are presented.

  5. Evaluation of wet tantalum capacitors after exposure to extended periods of ripple current, volume 2

    NASA Technical Reports Server (NTRS)

    Ward, C. M.

    1975-01-01

    The application of tantalum capacitors in the Viking Lander includes dc voltage and ripple current electrical stress, high temperature during nonoperating times (sterilization), and high vibration and shock loads. The capacitors must survive these severe environments without any degradation if reliable performance is to be achieved. A test program was established to evaluate both wet-slug tantalum and wet-foil capacitors under conditions accurately duplicating actual Viking applications. Test results of the electrical performance characteristics during extended periods of ripple current, the characteristics of the internal silver migration as a function of extended periods of ripple current, and the existence of any memory characteristics are presented.

  6. Simple and sensitive method for quantification of fluorescent enzymatic mature and senescent crosslinks of collagen in bone hydrolysate using single-column high performance liquid chromatography.

    PubMed

    Viguet-Carrin, S; Gineyts, E; Bertholon, C; Delmas, P D

    2009-01-01

    A rapid high performance liquid chromatographic method, including an internal standard, was developed for the measurement of mature and senescent crosslink concentrations in non-demineralized bone hydrolysates. To avoid demineralization, which is a tedious step, we developed a method based on a solid-phase extraction procedure to clean up the samples. This resulted in sensitive and accurate measurements, with detection limits as low as 0.2 pmol for the pyridinium crosslinks and 0.02 pmol for pentosidine. The inter- and intra-assay coefficients of variation were as low as 5% and 2%, respectively, for all crosslinks.

  7. [High performance liquid chromatography (HPLC) determination of adenosine phosphates in rat myocardium].

    PubMed

    Miao, Yu; Wang, Cheng-long; Yin, Hui-jun; Shi, Da-zhuo; Chen, Ke-ji

    2005-04-18

    To establish a method for the quantitative determination of adenosine phosphates in rat myocardium by optimized high performance liquid chromatography (HPLC). An ODS HYPERSIL C(18) column and a mobile phase of 50 mmol/L tribasic potassium phosphate buffer solution (pH 6.5), with UV detection at 254 nm, were used. The average recovery rates of myocardial adenosine triphosphate (ATP), adenosine diphosphate (ADP) and adenosine monophosphate (AMP) were 99%-107%, 96%-104% and 95%-119%, respectively; the within-day and between-day relative standard deviations (RSDs) were less than 1.5% and 5.1%, respectively. The method is simple, rapid and accurate, and can be used to analyse the adenosine phosphates in myocardium.
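    The recovery and precision figures quoted in such validations follow standard definitions, sketched here with made-up numbers:

```python
import numpy as np

def recovery_percent(measured, spiked):
    """Percent recovery of a spiked (known) amount."""
    return 100.0 * np.asarray(measured, dtype=float) / np.asarray(spiked, dtype=float)

def rsd_percent(replicates):
    """Relative standard deviation (coefficient of variation) in percent."""
    v = np.asarray(replicates, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```
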

  8. [Determination of glycyrrhizinic acid in biotransformation system by reversed-phase high performance liquid chromatography].

    PubMed

    Li, Hui; Lu, Dingqiang; Liu, Weimin

    2004-05-01

    A method for determining glycyrrhizinic acid in the biotransformation system by reversed-phase high performance liquid chromatography (RP-HPLC) was developed. The HPLC conditions were as follows: Hypersil C18 column (4.6 mm i.d. x 250 mm, 5 microm) with a mixture of methanol-water-acetic acid (70:30:1, v/v) as the mobile phase; flow rate at 1.0 mL/min; and UV detection at 254 nm. The linear range of glycyrrhizinic acid was 0.2-20 microg. The recoveries were 98%-103% with relative standard deviations between 0.16% and 1.58% (n = 3). The method is simple, rapid and accurate for determining glycyrrhizinic acid.

  9. Near-field acoustical holography of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Wall, Alan T.; Gee, Kent L.; Neilsen, Tracianne; Krueger, David W.; Sommerfeldt, Scott D.; James, Michael M.

    2010-10-01

    Noise radiated from high-performance military jet aircraft poses a hearing-loss risk to personnel. Accurate characterization of jet noise can assist in noise prediction and noise reduction techniques. In this work, sound pressure measurements were made in the near field of an F-22 Raptor. With more than 6000 measurement points, this is the most extensive near-field measurement of a high-performance jet to date. A technique called near-field acoustical holography has been used to propagate the complex pressure from a two-dimensional plane to a three-dimensional region in the jet vicinity. Results will be shown and what they reveal about jet noise characteristics will be discussed.
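    For planar geometries, the propagation step at the heart of near-field acoustical holography is the angular-spectrum method; a minimal forward-propagation sketch (back-propagation amplifies evanescent components and in practice requires regularization, which is omitted here):

```python
import numpy as np

def propagate_pressure(p_plane, dx, z, k):
    """Propagate a 2-D complex pressure field a distance z along the plane
    normal via the angular-spectrum method. Components with
    kx^2 + ky^2 > k^2 are evanescent and decay exponentially."""
    ny, nx = p_plane.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dx)
    kxg, kyg = np.meshgrid(kx, ky)
    kz = np.sqrt((k ** 2 - kxg ** 2 - kyg ** 2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(p_plane) * np.exp(1j * kz * z))
```

    A uniform field (a normally incident plane wave) simply picks up the phase exp(i*k*z), which makes a convenient sanity check.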

  10. Accurate on-chip measurement of the Seebeck coefficient of high mobility small molecule organic semiconductors

    NASA Astrophysics Data System (ADS)

    Warwick, C. N.; Venkateshvaran, D.; Sirringhaus, H.

    2015-09-01

    We present measurements of the Seebeck coefficient in two high mobility organic small molecules, 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-didecyl-dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (C10-DNTT). The measurements are performed in a field effect transistor structure with high field effect mobilities of approximately 3 cm2/V s. This allows us to observe both the charge concentration and temperature dependence of the Seebeck coefficient. We find a strong logarithmic dependence upon charge concentration and a temperature dependence within the measurement uncertainty. Despite performing the measurements on highly polycrystalline evaporated films, we see an agreement in the Seebeck coefficient with modelled values from Shi et al. [Chem. Mater. 26, 2669 (2014)] at high charge concentrations. We attribute deviations from the model at lower charge concentrations to charge trapping.
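    The reported logarithmic concentration dependence can be quantified with a linear fit of S against ln(n); the data below are fabricated purely to illustrate the fit, not taken from the paper.

```python
import numpy as np

# Hypothetical Seebeck data (uV/K) versus carrier concentration (cm^-3),
# constructed to follow S = a - b * ln(n) exactly: S drops 190 uV/K per decade.
n = np.array([1e17, 1e18, 1e19, 1e20])
s_uv_per_k = np.array([620.0, 430.0, 240.0, 50.0])

slope, intercept = np.polyfit(np.log(n), s_uv_per_k, 1)
# slope is dS per unit ln(n); per decade of concentration it is slope * ln(10).
s_per_decade = slope * np.log(10.0)
```
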

  11. Turbulence modeling of free shear layers for high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.

    1993-01-01

    The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast, and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model will be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.

  12. Performance of Gas Scintillation Proportional Counter Array for High-Energy X-Ray Observatory

    NASA Technical Reports Server (NTRS)

    Gubarev, Mikhail; Ramsey, Brian; Apple, Jeffery

    2004-01-01

    A focal plane array of high-pressure gas scintillation proportional counters (GSPCs) for a High Energy X-Ray Observatory (HERO) has been developed at the Marshall Space Flight Center. The array consists of eight GSPCs and is part of a balloon-borne payload scheduled to fly in May 2004. These detectors have an active area of approximately 20 square centimeters, and are filled with a high-pressure (10(exp 6) Pa) xenon-helium mixture. Imaging is via crossed-grid position-sensitive phototubes sensitive in the UV region. The performance of the GSPC is well matched to that of the telescope's x-ray optics, which have a response to 75 keV and a focal spot size of approximately 500 microns. The detector's energy resolution, 4% FWHM at 60 keV, is adequate for resolving the broad spectral lines of astrophysical importance and for accurate continuum measurements. Results of the on-ground detector calibration will be presented, and in-flight detector performance will be provided as available.
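    For statistics-limited proportional counters, fractional FWHM energy resolution scales roughly as 1/sqrt(E); anchoring that rule of thumb (an assumption, not a statement from the paper) to the reported 4% at 60 keV:

```python
def fwhm_resolution(e_kev, r0=0.04, e0_kev=60.0):
    """Fractional FWHM energy resolution under a 1/sqrt(E) scaling,
    anchored at r0 for energy e0_kev."""
    return r0 * (e0_kev / e_kev) ** 0.5
```

    Under this scaling the resolution would be about 8% FWHM at 15 keV and better than 4% above 60 keV.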

  13. Orientation estimation algorithm applied to high-spin projectiles

    NASA Astrophysics Data System (ADS)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
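    A minimal stand-in for the paper's integrated filter is a complementary filter: integrate the gyro rate (smooth but drifting) and correct it with the drift-free but noisier magnetometer-derived roll. The sketch below is illustrative only, not the authors' algorithm.

```python
def complementary_roll(gyro_rates, mag_rolls, dt, alpha=0.98):
    """Blend integrated gyro roll with magnetometer roll measurements.
    alpha near 1 trusts the gyro over short horizons while the
    magnetometer term bounds long-term drift."""
    roll = mag_rolls[0]
    estimates = []
    for rate, mag in zip(gyro_rates, mag_rolls):
        roll = alpha * (roll + rate * dt) + (1.0 - alpha) * mag
        estimates.append(roll)
    return estimates
```

    With a constant gyro bias b the estimate settles at the magnetometer value plus the residual alpha*b*dt/(1-alpha), which shows why the gyro bias must itself be calibrated online for high accuracy.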

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margaret A. Marshall

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. "The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files." (Reference 1) While performing the ORSphere experiments care was taken to accurately document component dimensions (±0.0001 in. for non-spherical parts), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. A three-part sphere was initially assembled with an average radius of 3.4665 in. and was then machined down to an average radius of 3.4420 in. (3.4425 in. nominal). These two spherical configurations were evaluated and judged to be acceptable benchmark experiments; however, the two experiments are highly correlated.
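    The density correction involved in comparing such spheres is often approximated by the rule that bare critical mass scales as the inverse square of density (the critical radius is fixed in mean free paths). This is a rule of thumb, not the evaluation's detailed correction:

```python
def density_corrected_mass(m_meas_kg, rho_meas, rho_ref):
    """Scale a measured bare critical mass from density rho_meas to a
    reference density rho_ref using the approximate m ~ 1/rho^2 law."""
    return m_meas_kg * (rho_meas / rho_ref) ** 2
```

    The quadratic dependence means even a 1% density difference shifts the inferred critical mass by about 2%, which is why dimensions and masses were documented so tightly.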

  15. Genome Sequencing and Analysis of Yersina pestis KIM D27, an Avirulent Strain Exempt from Select Agent Regulation

    PubMed Central

    Losada, Liliana; Varga, John J.; Hostetler, Jessica; Radune, Diana; Kim, Maria; Durkin, Scott; Schneewind, Olaf; Nierman, William C.

    2011-01-01

    Yersinia pestis is the causative agent of the plague. Y. pestis KIM 10+ strain was passaged and selected for loss of the 102 kb pgm locus, resulting in an attenuated strain, KIM D27. In this study, whole genome sequencing was performed on KIM D27 in order to identify any additional differences. Initial assemblies of 454 data were highly fragmented, and various bioinformatic tools detected between 15 and 465 SNPs and INDELs when comparing both strains, the vast majority associated with A or T homopolymer sequences. Consequently, Illumina sequencing was performed to improve the quality of the assembly. Hybrid sequence assemblies were performed and a total of 56 validated SNP/INDELs and 5 repeat differences were identified in the D27 strain relative to published KIM 10+ sequence. However, further analysis showed that 55 of these SNP/INDELs and 3 repeats were errors in the KIM 10+ reference sequence. We conclude that both 454 and Illumina sequencing were required to obtain the most accurate and rapid sequence results for Y. pestis KIM D27. SNP and INDEL calls were most accurate when both Newbler and CLC Genomics Workbench were employed. For purposes of obtaining high quality genome sequence differences between strains, any identified differences should be verified in both the new and reference genomes. PMID:21559501

  16. Genome sequencing and analysis of Yersina pestis KIM D27, an avirulent strain exempt from select agent regulation.

    PubMed

    Losada, Liliana; Varga, John J; Hostetler, Jessica; Radune, Diana; Kim, Maria; Durkin, Scott; Schneewind, Olaf; Nierman, William C

    2011-04-29

    Yersinia pestis is the causative agent of the plague. Y. pestis KIM 10+ strain was passaged and selected for loss of the 102 kb pgm locus, resulting in an attenuated strain, KIM D27. In this study, whole genome sequencing was performed on KIM D27 in order to identify any additional differences. Initial assemblies of 454 data were highly fragmented, and various bioinformatic tools detected between 15 and 465 SNPs and INDELs when comparing both strains, the vast majority associated with A or T homopolymer sequences. Consequently, Illumina sequencing was performed to improve the quality of the assembly. Hybrid sequence assemblies were performed and a total of 56 validated SNP/INDELs and 5 repeat differences were identified in the D27 strain relative to published KIM 10+ sequence. However, further analysis showed that 55 of these SNP/INDELs and 3 repeats were errors in the KIM 10+ reference sequence. We conclude that both 454 and Illumina sequencing were required to obtain the most accurate and rapid sequence results for Y. pestis KIM D27. SNP and INDEL calls were most accurate when both Newbler and CLC Genomics Workbench were employed. For purposes of obtaining high quality genome sequence differences between strains, any identified differences should be verified in both the new and reference genomes.
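    The closing recommendation, verify every candidate difference in both genomes, can be sketched as a trivial recheck over aligned sequences. This is a hypothetical helper for illustration, not the authors' pipeline:

```python
def verify_snps(ref_aligned, query_aligned, candidate_positions):
    """Keep only candidate SNP positions where the aligned bases really
    differ; differing positions inside A/T homopolymer context (a known
    454 error mode noted in the abstract) are flagged for manual review."""
    confirmed, suspect = [], []
    for pos in candidate_positions:
        if ref_aligned[pos] == query_aligned[pos]:
            continue  # false call: bases agree in both genomes
        context = ref_aligned[max(0, pos - 2):pos + 3]
        if "AAA" in context or "TTT" in context:
            suspect.append(pos)
        else:
            confirmed.append(pos)
    return confirmed, suspect
```
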

  17. Afghan National Army: DOD Has Taken Steps to Remedy Poor Management of Vehicle Maintenance Program

    DTIC Science & Technology

    2016-07-01

    contract and program were designed to promote the accurate assessment of Afghan vehicle maintenance needs, contractor performance, and cost...containment; (2) the U.S. government provided effective management and oversight of contractor performance; and (3) the contract met its program objectives...maintenance, (2) underestimated the cost of spare parts, and (3) established performance metrics that did not accurately assess contractor performance or

  18. Development of high definition OCT system for clinical therapy of skin diseases

    NASA Astrophysics Data System (ADS)

    Baek, Daeyul; Seo, Young-Seok; Kim, Jung-Hyun

    2018-02-01

    OCT is a non-invasive imaging technique that can be applied to diagnose various skin diseases. Since its introduction in 1997, OCT technology has been used in dermatology to obtain high quality images of human skin. Accurate diagnosis of skin diseases requires OCT equipment that can obtain high quality images. We therefore developed a system that obtains high quality, high-resolution images by using a wide-bandwidth 1300 nm light source with deep penetration depth and a camera capable of high-sensitivity, high-speed processing. We introduce the performance of the developed system and present the clinical application data.
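    The wide-bandwidth 1300 nm source trades penetration depth against axial resolution; for a Gaussian source the coherence-limited axial resolution is dz = (2 ln 2 / pi) * lambda0^2 / (n * dlambda). A quick check (the 100 nm bandwidth below is an assumed value, not from the paper):

```python
import math

def oct_axial_resolution_um(center_nm, bandwidth_nm, n=1.4):
    """Axial resolution of OCT for a Gaussian source spectrum;
    n ~ 1.4 is a typical refractive index for skin."""
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_nm ** 2 / (n * bandwidth_nm)
    return dz_nm / 1000.0
```

    A 1300 nm source with 100 nm bandwidth gives roughly 7.5 um axial resolution in air (n = 1), or about 5.3 um inside tissue.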

  19. Dense and Sparse Matrix Operations on the Cell Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
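    First-order analytic models of this kind often reduce to a roofline bound: attainable throughput is the lesser of the compute peak and memory bandwidth times arithmetic intensity. A generic sketch with illustrative numbers, not the paper's Cell model:

```python
def roofline_gflops(peak_gflops, bandwidth_gb_per_s, flops_per_byte):
    """Attainable GFLOP/s = min(compute roof, bandwidth * arithmetic
    intensity). Kernels left of the ridge point are bandwidth-bound."""
    return min(peak_gflops, bandwidth_gb_per_s * flops_per_byte)
```

    Sparse matrix-vector products (intensity well below 1 flop/byte) sit on the bandwidth roof, while dense matrix multiply can reach the compute roof, which is why the two classes of kernels are modeled separately.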

  20. Field Performance of Photovoltaic Systems in the Tucson Desert

    NASA Astrophysics Data System (ADS)

    Orsburn, Sean; Brooks, Adria; Cormode, Daniel; Greenberg, James; Hardesty, Garrett; Lonij, Vincent; Salhab, Anas; St. Germaine, Tyler; Torres, Gabe; Cronin, Alexander

    2011-10-01

    At the Tucson Electric Power (TEP) solar test yard, over 20 different grid-connected photovoltaic (PV) systems are being tested. The goal at the TEP solar test yard is to measure and model real-world performance of PV systems and to benchmark new technologies such as holographic concentrators. By studying voltage and current produced by the PV systems as a function of incident irradiance, and module temperature, we can compare our measurements of field-performance (in a harsh desert environment) to manufacturer specifications (determined under laboratory conditions). In order to measure high-voltage and high-current signals, we designed and built reliable, accurate sensors that can handle extreme desert temperatures. We will present several benchmarks of sensors in a controlled environment, including shunt resistors and Hall-effect current sensors, to determine temperature drift and accuracy. Finally we will present preliminary field measurements of PV performance for several different PV technologies.
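    A shunt-resistor current measurement with a first-order temperature-coefficient correction illustrates the kind of drift such benchmarks quantify; all parameter values here are hypothetical.

```python
def shunt_current_a(v_shunt_mv, r_shunt_mohm, tempco_ppm_per_c=20.0,
                    temp_c=25.0, temp_ref_c=25.0):
    """Current through a shunt from its voltage drop, correcting the shunt
    resistance for temperature: R(T) = R0 * (1 + tc * (T - Tref))."""
    r = r_shunt_mohm * (1.0 + tempco_ppm_per_c * 1e-6 * (temp_c - temp_ref_c))
    return v_shunt_mv / r  # mV / mOhm = A
```

    For a 20 ppm/degC shunt, a 50 degC desert temperature rise shifts the reading by only 0.1%; a poorly specified shunt can drift far more, which is what the controlled-environment benchmarks are meant to catch.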

  1. Pulmonary tumor measurements from x-ray computed tomography in one, two, and three dimensions.

    PubMed

    Villemaire, Lauren; Owrangi, Amir M; Etemad-Rezai, Roya; Wilson, Laura; O'Riordan, Elaine; Keller, Harry; Driscoll, Brandon; Bauman, Glenn; Fenster, Aaron; Parraga, Grace

    2011-11-01

    We evaluated the accuracy and reproducibility of three-dimensional (3D) measurements of lung phantoms and patient tumors from x-ray computed tomography (CT) and compared these to one-dimensional (1D) and two-dimensional (2D) measurements. CT images of three spherical and three irregularly shaped tumor phantoms were evaluated by three observers who performed five repeated measurements. Additionally, three observers manually segmented 29 patient lung tumors five times each. Follow-up imaging was performed for 23 tumors and response criteria were compared. For a single subject, imaging was performed on nine occasions over 2 years to evaluate multidimensional tumor response. To evaluate measurement accuracy, we compared imaging measurements to ground truth using analysis of variance. For estimates of precision, intraobserver and interobserver coefficients of variation and intraclass correlations (ICC) were used. Linear regression and Pearson correlations were used to evaluate agreement and tumor response was descriptively compared. For spherical shaped phantoms, all measurements were highly accurate, but for irregularly shaped phantoms, only 3D measurements were in high agreement with ground truth measurements. All phantom and patient measurements showed high intra- and interobserver reproducibility (ICC >0.900). Over a 2-year period for a single patient, there was disagreement between tumor response classifications based on 3D measurements and those generated using 1D and 2D measurements. Tumor volume measurements were highly reproducible and accurate for irregular, spherical phantoms and patient tumors with nonuniform dimensions. Response classifications obtained from multidimensional measurements suggest that 3D measurements provide higher sensitivity to tumor response. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
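    The disagreement between 1-D and 3-D response classifications has a simple geometric root: for a sphere, a 30% diameter decrease (the usual 1-D partial-response cutoff) corresponds to a 65.7% volume decrease. A sketch with simplified thresholds, not the paper's protocol:

```python
def response_1d(d_baseline, d_followup):
    """Classify response from the longest diameter: >=30% shrinkage -> 'PR'
    (partial response), >=20% growth -> 'PD' (progressive disease),
    otherwise 'SD' (stable disease). RECIST-style, simplified."""
    change = (d_followup - d_baseline) / d_baseline
    if change <= -0.30:
        return "PR"
    if change >= 0.20:
        return "PD"
    return "SD"

def sphere_volume_change(diameter_change_fraction):
    """Fractional volume change implied by a fractional diameter change
    for a perfect sphere (volume scales with the cube of diameter)."""
    return (1.0 + diameter_change_fraction) ** 3 - 1.0
```

    Because irregular tumors shrink anisotropically, a volume-based measure can cross its threshold while the single longest diameter does not, producing exactly the classification disagreement reported above.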

  2. A Sub-Millimetric 3-DOF Force Sensing Instrument with Integrated Fiber Bragg Grating for Retinal Microsurgery

    PubMed Central

    He, Xingchi; Handa, James; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2013-01-01

    Vitreoretinal surgery requires very fine motor control to perform precise manipulation of the delicate tissue in the interior of the eye. Besides physiological hand tremor, fatigue, poor kinesthetic feedback, and patient movement, the absence of force sensing is one of the main technical challenges. Previous two degrees of freedom (DOF) force sensing instruments have demonstrated robust force measuring performance. The main design challenge is to incorporate high sensitivity axial force sensing. This paper reports the development of a sub-millimetric 3-DOF force sensing pick instrument based on fiber Bragg grating (FBG) sensors. The configuration of the four FBG sensors is arranged to maximize the decoupling between axial and transverse force sensing. A super-elastic nitinol flexure is designed to achieve high axial force sensitivity. An automated calibration system was developed for repeatability testing, calibration, and validation. Experimental results demonstrate an FBG sensor repeatability of 1.3 pm. The linear model for calculating the transverse forces provides an accurate global estimate. While the linear model for axial force is only locally accurate within a conical region with a 30° vertex angle, a second-order polynomial model can provide a useful global estimate for axial force. Combining the linear model for transverse forces and the nonlinear model for axial force, the 3-DOF force sensing instrument can provide sub-millinewton resolution for axial force and quarter-millinewton resolution for transverse forces. Validation with random samples shows the force sensor can provide consistent and accurate measurement of three-dimensional forces. PMID:24108455
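
    The two-part calibration described here (linear for transverse forces, second-order polynomial for the axial force) can be sketched with synthetic data; the differential FBG arrangement, sensitivities, and noise level below are invented for illustration, not the instrument's actual values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented model: 4 FBGs in a differential arrangement, so transverse
    # forces show up as differential wavelength shifts while the axial
    # force appears as a mildly nonlinear common-mode shift.
    F = rng.uniform(-5, 5, (300, 3))                    # Fx, Fy, Fz (mN)
    A_t = np.array([[1.0, -1.0,  1.0, -1.0],            # transverse sensitivities
                    [1.0,  1.0, -1.0, -1.0]])
    common = 0.5 * F[:, 2] + 0.02 * F[:, 2] ** 2        # axial, common mode
    shifts = F[:, :2] @ A_t + common[:, None] + 0.01 * rng.normal(size=(300, 4))

    # Transverse forces: linear least-squares calibration [Fx, Fy] ~ shifts @ K
    K, *_ = np.linalg.lstsq(shifts, F[:, :2], rcond=None)

    # Axial force: second-order polynomial in the common-mode shift
    s = shifts.mean(axis=1)
    c = np.polyfit(s, F[:, 2], 2)

    rms_t = np.sqrt(np.mean((shifts @ K - F[:, :2]) ** 2))
    rms_z = np.sqrt(np.mean((np.polyval(c, s) - F[:, 2]) ** 2))
    print(rms_t, rms_z)  # both well below 1 mN for this synthetic model
    ```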

  3. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that delivers high-frequency stimulation to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for a positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
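
    The Normalized Gradient Fields (NGF) distance at the heart of the registration can be written down in a few lines: it is small where the gradients of the two images are parallel or antiparallel, which is what makes it suitable for multimodal CT-to-MR alignment. The edge parameter eta and the toy images are illustrative, not the paper's settings.

    ```python
    import numpy as np

    def ngf_distance(R, T, eta=0.1):
        """Normalized Gradient Fields distance: mean of 1 - <n_R, n_T>^2,
        where n_I is the image gradient normalized with edge parameter eta.
        Low when edges coincide, regardless of intensity contrast."""
        gR = np.gradient(R.astype(float))
        gT = np.gradient(T.astype(float))
        nR = np.sqrt(sum(g ** 2 for g in gR) + eta ** 2)
        nT = np.sqrt(sum(g ** 2 for g in gT) + eta ** 2)
        dot = sum(a * b for a, b in zip(gR, gT)) / (nR * nT)
        return np.mean(1.0 - dot ** 2)

    # The same edge with inverted, rescaled contrast still matches well,
    # while a shifted copy does not:
    x = np.linspace(-1, 1, 64)
    R = np.tanh(10 * x)[None, :] * np.ones((64, 1))
    T = -2.0 * R                   # same edge, different "modality" contrast
    U = np.roll(R, 8, axis=1)      # misaligned copy
    print(ngf_distance(R, T) < ngf_distance(R, U))  # True
    ```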

  4. Characterization and quantitative analysis of surfactants in textile wastewater by liquid chromatography/quadrupole-time-of-flight mass spectrometry.

    PubMed

    González, Susana; Petrović, Mira; Radetic, Maja; Jovancic, Petar; Ilic, Vesna; Barceló, Damià

    2008-05-01

    A method based on the application of ultra-performance liquid chromatography (UPLC) coupled to hybrid quadrupole-time-of-flight mass spectrometry (QqTOF-MS) with an electrospray (ESI) interface has been developed for the screening and confirmation of several anionic and non-ionic surfactants: linear alkylbenzenesulfonates (LAS), alkylsulfate (AS), alkylethersulfate (AES), dihexyl sulfosuccinate (DHSS), alcohol ethoxylates (AEOs), coconut diethanolamide (CDEA), nonylphenol ethoxylates (NPEOs), and their degradation products (nonylphenol carboxylate (NPEC), octylphenol carboxylate (OPEC), 4-nonylphenol (NP), 4-octylphenol (OP) and NPEO sulfate (NPEO-SO4)). The developed methodology permits reliable quantification combined with high-accuracy confirmation based on the accurate mass of the (de)protonated molecules in the TOF-MS mode. For further confirmation of the identity of the detected compounds, the QqTOF mode was used. Accurate masses of product ions obtained by performing collision-induced dissociation (CID) of the (de)protonated molecules of parent compounds were matched with the ions obtained for a standard solution. The method was applied for the quantitative analysis and high-accuracy confirmation of surfactants in complex mixtures in effluents from the textile industry. Positive identification of the target compounds was based on accurate mass measurement of the base peak, at least one product ion, and the LC retention time of the analyte compared with that of a standard. The surfactants most frequently found in these textile effluents were NPEO and NPEO-SO4, in concentrations ranging from 0.93 to 5.68 mg/L for NPEO and 0.06 to 4.30 mg/L for NPEO-SO4. AEOs were also identified.
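
    Accurate-mass confirmation reduces to checking a ppm window around the theoretical m/z of the (de)protonated molecule. The 5 ppm tolerance and the m/z values below (approximately the [M-H]⁻ of 4-nonylphenol) are illustrative assumptions, not figures from the paper.

    ```python
    def ppm_error(measured_mz, theoretical_mz):
        """Mass accuracy in parts per million."""
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    def confirm(measured_mz, theoretical_mz, tol_ppm=5.0):
        """Accept an identification if the measured accurate mass falls
        within a ppm tolerance typical of Q-TOF instruments."""
        return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

    # [M-H]- of 4-nonylphenol (C15H23O-), monoisotopic m/z ~ 219.1754:
    print(confirm(219.1758, 219.1754))  # about 1.8 ppm off -> accepted
    print(confirm(219.1805, 219.1754))  # about 23 ppm off  -> rejected
    ```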

  5. A look at profiler performance

    NASA Technical Reports Server (NTRS)

    Kessler, E.; Eilts, M.; Thomas, K.

    1986-01-01

    Since about 1974, Doppler radars operating in UHF and VHF ranges have been used increasingly to study atmospheric winds. Historically, large systems capable of obtaining data from high altitudes have focused attention on the mesosphere and stratosphere, rather than on the troposphere wherein abides most of the weather considered by most meteorologists. This research addresses the questions a meteorologist must logically ask first: what is the actual performance capability of these systems, how accurate are the wind data of interest to meteorologists, and from what altitudes in the troposphere are the data reliably obtained?

  6. Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks

    PubMed Central

    Khan, Komal Saifullah; Tariq, Muhammad

    2014-01-01

    Many wind energy projects report poor performance as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through a WSN to the sink node for analysis. The sink node uses an iterative algorithm which sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in practical implementation and study its accuracy by comparing the simulation results with experimental results to analyze the variation in the results obtained from both simulation model and implemented model. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and low false alarm rate when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed and performance level of implemented projects is increased above 86% of the simulated models. PMID:25421739
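
    The sequential fault detection at the sink node might look like the following sketch: repeatedly flag the anemometer deviating most from the median of the remaining sensors until all agree within tolerance, capped at 25% of the fleet. The relative tolerance is an assumption; the paper's exact criterion may differ.

    ```python
    import statistics

    def detect_faulty(readings, tol=0.15, max_frac=0.25):
        """Iteratively flag the sensor farthest from the median of the
        currently healthy sensors; stop when all remaining readings agree
        within a relative tolerance or the fault cap is reached."""
        healthy = dict(enumerate(readings))
        faulty = []
        limit = int(max_frac * len(readings))
        while len(faulty) < limit:
            med = statistics.median(healthy.values())
            worst = max(healthy, key=lambda i: abs(healthy[i] - med))
            if abs(healthy[worst] - med) <= tol * med:
                break
            faulty.append(worst)
            del healthy[worst]
        return sorted(faulty)

    # Eight anemometers (m/s); index 3 is stuck low, index 5 reads high:
    print(detect_faulty([7.9, 8.1, 8.0, 2.3, 8.2, 13.5, 7.8, 8.0]))  # [3, 5]
    ```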

  7. Performance evaluation using SYSTID time domain simulation. [computer-aid design and analysis for communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.

    1975-01-01

    This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes possible accurate and efficient performance evaluation.

  8. Gaze Behavior of Gymnastics Judges: Where Do Experienced Judges and Gymnasts Look While Judging?

    ERIC Educational Resources Information Center

    Pizzera, Alexandra; Möller, Carsten; Plessner, Henning

    2018-01-01

    Gymnastics judges and former gymnasts have been shown to be quite accurate in detecting errors and accurately judging performance. Purpose: The purpose of the current study was to examine if this superior judging performance is reflected in judges' gaze behavior. Method: Thirty-five judges were asked to judge 21 gymnasts who performed a skill on…

  9. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    PubMed

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg

    2014-01-01

    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Zheng, Bin

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
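
    The compressive-sensing step can be illustrated in one dimension: fit a polynomial chaos expansion from fewer samples than basis terms via l1-regularized least squares. The plain ISTA loop and the Legendre basis for a uniform random variable below are a minimal stand-in; the paper works with high-dimensional conformational variables and its own solver.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy target property of one "conformational" variable on [-1, 1];
    # x**3 is sparse in the Legendre basis (only P1 and P3 contribute).
    f = lambda x: x ** 3
    deg, n_samp = 8, 7                    # more basis terms (9) than samples (7)
    xs = rng.uniform(-1, 1, n_samp)
    A = np.polynomial.legendre.legvander(xs, deg)
    b = f(xs)

    # ISTA: proximal-gradient solver for min ||A c - b||^2 / 2 + lam * ||c||_1
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    lam, c = 1e-3, np.zeros(deg + 1)
    for _ in range(10000):
        g = c + A.T @ (b - A @ c) / L                          # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

    xt = np.linspace(-1, 1, 11)
    err = np.max(np.abs(np.polynomial.legendre.legval(xt, c) - f(xt)))
    print("max surrogate error:", err)
    ```

    Despite the underdetermined design matrix, the l1 penalty drives the fit toward the sparse coefficient vector, which is the essence of the compressive sensing construction.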

  12. Solving the Schroedinger equation for helium atom and its isoelectronic ions with the free iterative complement interaction (ICI) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2007-12-14

    The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z=1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to over 40 digits, and for H⁻, it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method, we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ₀ of the free ICI method. The performance was good when logarithm functions were used in the initial function because the logarithm function is physically essential in the three-particle collision region. The best performance was obtained when we introduced a new logarithm function containing not only r₁ and r₂ but also r₁₂ in the same logarithm function.
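
    For orientation, the simplest ICI recursion has the form below; the logarithmic factor is only a schematic guess at the "new logarithm function containing not only r₁ and r₂ but also r₁₂" mentioned in the abstract, not the paper's exact initial function.

    ```latex
    \psi_{n+1} = \left[\, 1 + C_n\, g\,(H - E_n) \,\right] \psi_n ,
    \qquad
    \psi_0 \sim \left[\, 1 + a \ln\!\left(r_1 + r_2 + r_{12}\right) \right] e^{-\alpha (r_1 + r_2)}
    ```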

  13. Toward seamless wearable sensing: Automatic on-body sensor localization for physical activity monitoring.

    PubMed

    Saeedi, Ramyar; Purath, Janet; Venkatasubramanian, Krishna; Ghasemzadeh, Hassan

    2014-01-01

    Mobile wearable sensors have demonstrated great potential in a broad range of applications in healthcare and wellness. These technologies are known for their potential to revolutionize the way next-generation medical services are supplied and consumed by providing more effective interventions, improving health outcomes, and substantially reducing healthcare costs. Despite this potential, utilization of these sensor devices is currently limited to lab settings and highly controlled clinical trials. A major obstacle to widespread utilization of these systems is that the sensors need to be used in predefined locations on the body in order to provide accurate outcomes such as the type of physical activity performed by the user. This has reduced users' willingness to utilize such technologies. In this paper, we propose a novel signal processing approach that leverages feature selection algorithms for accurate and automatic localization of wearable sensors. Our results based on real data collected using wearable motion sensors demonstrate that the proposed approach can perform sensor localization with 98.4% accuracy, which is 30.7% more accurate than an approach without a feature selection mechanism. Furthermore, utilizing our node localization algorithm aids the activity recognition algorithm to achieve 98.8% accuracy (an increase from 33.6% for the system without node localization).
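
    As a stand-in for the paper's (unspecified) feature selection algorithm, a Fisher-score ranking shows the core idea: keep the features whose between-location variance dominates their within-location variance. The data below are synthetic and purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy stand-in for on-body localization: 3 "body locations", 6 features
    # per sensor window, of which only features 0 and 1 discriminate location.
    n, d, k = 300, 6, 2
    y = rng.integers(0, 3, n)
    X = rng.normal(size=(n, d))
    X[:, 0] += 2.0 * y                 # informative feature
    X[:, 1] -= 1.5 * y                 # informative feature

    def fisher_scores(X, y):
        """Rank features by between-class versus within-class variance."""
        classes = np.unique(y)
        mu = X.mean(axis=0)
        num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
        den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
        return num / den

    top = np.argsort(fisher_scores(X, y))[::-1][:k]
    print(sorted(int(i) for i in top))  # [0, 1] -- the informative features
    ```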

  14. Refined Dummy Atom Model of Mg(2+) by Simple Parameter Screening Strategy with Revised Experimental Solvation Free Energy.

    PubMed

    Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei

    2015-12-28

    Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is essential. To extend its application range and improve its performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbonded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.

  15. RFA Guardian: Comprehensive Simulation of Radiofrequency Ablation Treatment of Liver Tumors.

    PubMed

    Voglreiter, Philip; Mariappan, Panchatcharam; Pollari, Mika; Flanagan, Ronan; Blanco Sequeiros, Roberto; Portugaller, Rupert Horst; Fütterer, Jurgen; Schmalstieg, Dieter; Kolesnik, Marina; Moche, Michael

    2018-01-15

    The RFA Guardian is a comprehensive application for high-performance patient-specific simulation of radiofrequency ablation of liver tumors. We address a wide range of usage scenarios. These include pre-interventional planning, sampling of the parameter space for uncertainty estimation, treatment evaluation and, in the worst case, failure analysis. The RFA Guardian is the first of its kind that exhibits sufficient performance for simulating treatment outcomes during the intervention. We achieve this by combining a large number of high-performance image processing, biomechanical simulation and visualization techniques into a generalized technical workflow. Further, we wrap the feature set into a single, integrated application, which exploits all available resources of standard consumer hardware, including massively parallel computing on graphics processing units. This allows us to predict or reproduce treatment outcomes on a single personal computer with high computational performance and high accuracy. The resulting low demand for infrastructure enables easy and cost-efficient integration into the clinical routine. We present a number of evaluation cases from the clinical practice where users performed the whole technical workflow from patient-specific modeling to final validation and highlight the opportunities arising from our fast, accurate prediction techniques.

  16. Multiple-frequency continuous wave ultrasonic system for accurate distance measurement

    NASA Astrophysics Data System (ADS)

    Huang, C. F.; Young, M. S.; Li, Y. C.

    1999-02-01

    A highly accurate multiple-frequency continuous wave ultrasonic range-measuring system for use in air is described. The proposed system uses a method heretofore applied to radio frequency distance measurement but not to air-based ultrasonic systems. The method presented here is based upon the comparative phase shifts generated by three continuous ultrasonic waves of different but closely spaced frequencies. In the test embodiment to confirm concept feasibility, two low cost 40 kHz ultrasonic transducers are set face to face and used to transmit and receive ultrasound. Individual frequencies are transmitted serially, each generating its own phase shift. For any given frequency, the transmitter/receiver distance modulates the phase shift between the transmitted and received signals. Comparison of the phase shifts allows a highly accurate evaluation of target distance. A single-chip microcomputer-based multiple-frequency continuous wave generator and phase detector was designed to record and compute the phase shift information and the resulting distance, which is then sent to either a LCD or a PC. The PC is necessary only for calibration of the system, which can be run independently after calibration. Experiments were conducted to test the performance of the whole system. Experimentally, ranging accuracy was found to be within ±0.05 mm, with a range of over 1.5 m. The main advantages of this ultrasonic range measurement system are high resolution, low cost, narrow bandwidth requirements, and ease of implementation.
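
    The phase-comparison principle can be sketched with two of the three frequencies: the synthetic difference frequency f2 − f1 resolves a coarse range (unambiguous up to C/(f2 − f1)), and the fine phase at f2 refines it. The speed of sound and frequency values are nominal assumptions, and the sketch is noise-free.

    ```python
    import numpy as np

    C = 343.0  # assumed nominal speed of sound in air (m/s)

    def phase(f, d):
        """Wrapped phase shift (rad) accumulated over distance d at frequency f."""
        return (2 * np.pi * f * d / C) % (2 * np.pi)

    def estimate_distance(f1, f2, p1, p2):
        """Two-frequency CW ranging: the difference-frequency phase gives a
        coarse range (unambiguous up to C / (f2 - f1)); the fine phase at f2
        then pins down the exact number of whole wavelengths."""
        coarse = ((p2 - p1) % (2 * np.pi)) / (2 * np.pi) * C / (f2 - f1)
        wavelength = C / f2
        n = round((coarse - p2 / (2 * np.pi) * wavelength) / wavelength)
        return (n + p2 / (2 * np.pi)) * wavelength

    d_true = 1.2345                      # meters, within the unambiguous range
    f1, f2 = 40_000.0, 40_200.0          # closely spaced frequencies (Hz)
    d_hat = estimate_distance(f1, f2, phase(f1, d_true), phase(f2, d_true))
    print(abs(d_hat - d_true))           # ~0 in this noise-free sketch
    ```

    Adding a third frequency, as the paper does, extends the unambiguous range further while keeping the fine resolution of the highest frequency.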

  17. Learning, memory, and the role of neural network architecture.

    PubMed

    Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  18. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust in atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
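
    A minimal sketch of the wavefront-orientation idea: estimate the local propagation direction from the activation-time (AT) gradient, then interpolate as the mean of the pair of linearly adjacent electrodes along that direction. Snapping the direction to four axes, and filling a grid point rather than upsampling between electrodes, are simplifications of the published method.

    ```python
    import numpy as np

    def interpolate_at(at, i, j):
        """Interpolate an AT value at (i, j) from the pair of linearly
        adjacent electrodes along the propagation direction (the AT
        gradient), snapped here to the nearest of four axes."""
        gy, gx = np.gradient(at)
        angle = np.arctan2(gy[i, j], gx[i, j]) % np.pi
        k = int(round(angle / (np.pi / 4))) % 4          # 0/45/90/135 degrees
        di, dj = [(0, 1), (1, 1), (1, 0), (1, -1)][k]
        return 0.5 * (at[i - di, j - dj] + at[i + di, j + dj])

    # A planar slow wave propagating diagonally: AT grows along x + y.
    yy, xx = np.mgrid[0:5, 0:5]
    at = 0.2 * (xx + yy).astype(float)
    print(interpolate_at(at, 2, 2))  # ~0.8, the true at[2, 2]
    ```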

  19. Development of ultra-high temperature material characterization capabilities using digital image correlation analysis

    NASA Astrophysics Data System (ADS)

    Cline, Julia Elaine

    2011-12-01

    Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via the Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18-megapixel Canon EOS Rebel T2i digital camera with a modified Schneider Kreuznach telecentric lens and a combination of blue light illumination and a narrow band-pass filter system. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of the source code is performed using Mathematica-generated images with specified known displacement fields in order to gain confidence in accurate software tracking capabilities. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.
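
    The core DIC operation, cross-correlation with sub-pixel refinement, can be shown in one dimension. The parabolic peak fit is a common sub-pixel estimator (one of several used in DIC codes), and the speckle-like signal is synthetic.

    ```python
    import numpy as np

    def track_shift(ref, cur):
        """Estimate the 1-D displacement of `cur` relative to `ref` by
        cross-correlation, refined to sub-pixel accuracy with a parabolic
        fit through the correlation peak."""
        n = len(ref)
        corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
        k = int(np.argmax(corr))
        # parabola through the peak and its two neighbours
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        return (k - (n - 1)) + delta

    # Speckle-like signal shifted by 3.3 pixels via a Fourier shift:
    rng = np.random.default_rng(2)
    ref = rng.normal(size=256)
    ref = np.convolve(ref, np.ones(5) / 5, mode="same")   # smooth "speckle"
    shift = 3.3
    cur = np.real(np.fft.ifft(np.fft.fft(ref) *
                              np.exp(-2j * np.pi * np.fft.fftfreq(256) * shift)))
    print(track_shift(ref, cur))   # close to 3.3
    ```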

  20. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high-fidelity geometry-based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line, and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., which facilitated rapid finite element analysis, sizing studies, and weight optimization. The high-quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications for the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high-fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  1. Simultaneous quantification of neuroactive dopamine, serotonin, and kynurenine pathway metabolites in gender-specific youth urine by ultra performance liquid chromatography tandem high resolution mass spectrometry.

    PubMed

    Lu, Haihua; Yu, Jing; Wang, Jun; Wu, Linlin; Xiao, Hang; Gao, Rong

    2016-04-15

    Neuroactive metabolites in the dopamine, serotonin and kynurenine metabolic pathways play key roles in several physiological processes, and their imbalances have been implicated in the pathophysiology of a wide range of disorders. The association of these metabolites' alterations with various pathologies has raised interest in analytical methods for accurate quantification in biological fluids. However, simultaneous measurement of various neuroactive metabolites presents great challenges due to their trace levels, high polarity and instability. In this study, an analytical method was developed and validated for accurately quantifying 12 neuroactive metabolites covering three metabolic pathways in youth urine by ultra performance liquid chromatography coupled to electrospray tandem high resolution mass spectrometry (UPLC-ESI-HRMS/MS). The strategy of dansyl chloride derivatization followed by solid phase extraction on C18 cartridges was employed to reduce matrix interference and improve the extraction efficiency. The reverse-phase chromatographic separation was achieved with a gradient elution program in 20 min. A high resolution mass spectrometer (Q Exactive) was employed, with confirmation and quantification by the Target-MS/MS scan mode. Youth urine samples collected from 100 healthy volunteers (Female:Male=1:1) were analyzed to explore the differences in metabolite profiles and their turnover between genders. The results demonstrated that the UPLC-ESI-HRMS/MS method is sensitive and robust, suitable for monitoring a large panel of metabolites and for discovering new biomarkers in the medical fields. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Validation of high-throughput methods for measuring blood urea nitrogen and urinary albumin concentrations in mice.

    PubMed

    Grindle, Susan; Garganta, Cheryl; Sheehan, Susan; Gile, Joe; Lapierre, Andree; Whitmore, Harry; Paigen, Beverly; DiPetrillo, Keith

    2006-12-01

    Chronic kidney disease is a substantial medical and economic burden. Animal models, including mice, are a crucial component of kidney disease research; however, recent studies have shown that autoanalyzer methods cannot accurately quantify plasma creatinine levels, an established marker of kidney disease, in mice. Therefore, we validated autoanalyzer methods for measuring blood urea nitrogen (BUN) and urinary albumin concentrations, 2 common markers of kidney disease, in samples from mice. We used high-performance liquid chromatography to validate BUN concentrations measured using an autoanalyzer, and we utilized mouse albumin standards to determine the accuracy of the autoanalyzer over a wide range of albumin concentrations. We observed a significant, linear correlation between BUN concentrations measured by autoanalyzer and high-performance liquid chromatography. We also found a linear relationship between known and measured albumin concentrations, although the autoanalyzer method underestimated the known amount of albumin by 3.5- to 4-fold. We confirmed that plasma and urine constituents do not interfere with the autoanalyzer methods for measuring BUN and urinary albumin concentrations. In addition, we verified BUN and albuminuria as useful markers to detect kidney disease in aged mice and mice with 5/6-nephrectomy. We conclude that autoanalyzer methods are suitable for high-throughput analysis of BUN and albumin concentrations in mice. The autoanalyzer accurately quantifies BUN concentrations in mouse plasma samples and is useful for measuring urinary albumin concentrations when used with mouse albumin standards.
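
    The calibration approach described above, using mouse albumin standards to correct a roughly 4-fold underestimate, amounts to a simple linear fit. The sketch below is illustrative only: the standard concentrations and autoanalyzer readings are hypothetical, not the study's data.

```python
def fit_slope_through_origin(known, measured):
    """Least-squares slope for measured ≈ slope * known."""
    num = sum(k * m for k, m in zip(known, measured))
    den = sum(k * k for k in known)
    return num / den

# Hypothetical mouse albumin standards (mg/dL) read ~4-fold low by the autoanalyzer.
known = [5.0, 10.0, 20.0, 40.0]
measured = [1.3, 2.5, 5.1, 10.0]

slope = fit_slope_through_origin(known, measured)
correction = 1.0 / slope  # fold-underestimation factor, ≈ 4

def corrected(reading):
    """Estimated true concentration from a raw autoanalyzer reading."""
    return reading * correction

print(round(correction, 2), round(corrected(2.5), 1))  # ≈ 3.98 and ≈ 10.0
```

    A linear relationship like the one the authors report is exactly what makes such a one-parameter correction workable in practice.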

  3. Applications of high spectral resolution FTIR observations demonstrated by the radiometrically accurate ground-based AERI and the scanning HIS aircraft instruments

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Knuteson, Robert O.; Best, Fred A.; Tobin, David C.; Smith, William L.; Feltz, Wayne F.; Petersen, Ralph A.; Antonelli, Paolo; Olson, Erik R.; LaPorte, Daniel D.; Ellington, Scott D.; Werner, Mark W.; Dedecker, Ralph G.; Garcia, Raymond K.; Ciganovich, Nick N.; Howell, H. Benjamin; Vinson, Kenneth; Ackerman, Steven A.

    2003-06-01

    Development in the mid-1980s of the High-resolution Interferometer Sounder (HIS) for the high altitude NASA ER2 aircraft demonstrated the capability for advanced atmospheric temperature and water vapor sounding and set the stage for new satellite instruments that are now becoming a reality [AIRS (2002), CrIS (2006), IASI (2006), GIFTS (2005/6)]. Follow-on developments at the University of Wisconsin-Madison that employ interferometry for a wide range of Earth observations include the ground-based Atmospheric Emitted Radiance Interferometer (AERI) and the Scanning HIS aircraft instrument (S-HIS). The AERI was developed for the US DOE Atmospheric Radiation Measurement (ARM) Program, primarily to provide highly accurate radiance spectra for improving radiative transfer models. The continuously operating AERI soon demonstrated valuable new capabilities for sensing the rapidly changing state of the boundary layer and properties of the surface and clouds. The S-HIS is a smaller version of the original HIS that uses cross-track scanning to enhance spatial coverage. S-HIS and its close cousin, the NPOESS Airborne Sounder Testbed (NAST) operated by NASA Langley, are being used for satellite instrument validation and for atmospheric research. The calibration and noise performance of these and future satellite instruments is key to optimizing their remote sensing products. Recently developed techniques for improving effective radiometric performance by removing noise in post-processing are a primary subject of this paper.

  4. Microgravity Compatible Reagentless Instrumentation for Detection of Dissolved Organic Acids and Alcohols in Potable Water

    NASA Technical Reports Server (NTRS)

    Akse, James R.; Jan, Darrell L. (Technical Monitor)

    2002-01-01

    The Organic Acid and Alcohol Monitor (OAAM) program has resulted in the successful development of a computer-controlled prototype analyzer capable of accurately determining aqueous organic acid and primary alcohol concentrations over a large dynamic range with high sensitivity. Formic, acetic, and propionic acid were accurately determined at concentrations as low as 5 to 10 micrograms/L in under 20 minutes, or as high as 10 to 20 mg/L in under 30 minutes. Methanol, ethanol, and propanol were determined at concentrations as low as 20 to 100 micrograms/L, or as high as 10 mg/L in under 30 minutes. Importantly for space-based application, the OAAM requires no reagents or hazardous chemicals to perform these analyses, needing only power, water, and CO2-free purge gas. The OAAM utilized two membrane processes to segregate organic acids from interfering ions. The organic acid concentration was then determined from the conductometric signal. Separation of individual organic acids was accomplished using a chromatographic column. Alcohols are determined in a similar manner after conversion to organic acids by sequential biocatalytic and catalytic oxidation steps. The OAAM was designed to allow the early diagnosis of underperforming or failing sub-systems within the Water Recovery System (WRS) baselined for the International Space Station (ISS). To achieve this goal, several new technologies were developed over the course of the OAAM program.

  5. Caustic Singularities Of High-Gain, Dual-Shaped Reflectors

    NASA Technical Reports Server (NTRS)

    Galindo, Victor; Veruttipong, Thavath W.; Imbriale, William A.; Rengarajan, Sambiam

    1991-01-01

    Report presents study of some sources of error in analysis, by geometric theory of diffraction (GTD), of performance of high-gain, dual-shaped antenna reflector. Study probes into underlying analytic causes of singularity, with view toward devising and testing practical methods to avoid problems caused by singularity. Hybrid physical optics (PO) approach used to study near-field spillover or noise-temperature characteristics of high-gain reflector antenna efficiently and accurately. Report illustrates this approach and underlying principles by presenting numerical results, for both offset and symmetrical reflector systems, computed by GTD, PO, and PO/GO methods.

  6. Development of an Integrated Thermocouple for the Accurate Sample Temperature Measurement During High Temperature Environmental Scanning Electron Microscopy (HT-ESEM) Experiments.

    PubMed

    Podor, Renaud; Pailhon, Damien; Ravaux, Johann; Brau, Henri-Pierre

    2015-04-01

    We have developed two integrated thermocouple (TC) crucible systems that allow precise measurement of sample temperature when using a furnace associated with an environmental scanning electron microscope (ESEM). Sample temperatures measured with these systems are precise (±5°C) and reliable. The TC crucible systems allow working with solids and liquids (silicate melts or ionic liquids), independent of the gas composition and pressure. These sample holder designs will allow end users to perform experiments at high temperature in the ESEM chamber with high precision control of the sample temperature.

  7. SWOT Oceanography and Hydrology Data Product Simulators

    NASA Technical Reports Server (NTRS)

    Peral, Eva; Rodriguez, Ernesto; Fernandez, Daniel Esteban; Johnson, Michael P.; Blumstein, Denis

    2013-01-01

    The proposed Surface Water and Ocean Topography (SWOT) mission would demonstrate a new measurement technique using radar interferometry to obtain wide-swath measurements of water elevation at high resolution over ocean and land, addressing the needs of both the hydrology and oceanography science communities. To accurately evaluate the performance of the proposed SWOT mission, we have developed several data product simulators at different levels of fidelity and complexity.

  8. A transportable Paul-trap for levitation and accurate positioning of micron-scale particles in vacuum for laser-plasma experiments

    NASA Astrophysics Data System (ADS)

    Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.

    2018-01-01

    We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.

  9. Molecular Dynamics Characterization of the Conformational Landscape of Small Peptides: A Series of Hands-On Collaborative Practical Sessions for Undergraduate Students

    ERIC Educational Resources Information Center

    Rodrigues, João P. G. L. M.; Melquiond, Adrien S. J.; Bonvin, Alexandre M. J. J.

    2016-01-01

    Molecular modelling and simulations are nowadays an integral part of research in areas ranging from physics to chemistry to structural biology, as well as pharmaceutical drug design. This popularity is due to the development of high-performance hardware and of accurate and efficient molecular mechanics algorithms by the scientific community. These…

  10. A Project Manager’s Personal Attributes as Predictors for Success

    DTIC Science & Technology

    2007-03-01

    Northouse (2004) explains that leadership is a highly researched topic with much written about it. Yet, a definitive description of this phenomenon is difficult to...express because of its complexity. Even though leadership has varied descriptions and conceptualizations, Northouse states that the concept of...characteristic of leadership is not an accurate predictor of performance. Leadership is a complex, multi-faceted attribute (Northouse, 2004) and specific

  11. Accuracy of Surgery Clerkship Performance Raters.

    ERIC Educational Resources Information Center

    Littlefield, John H.; And Others

    1991-01-01

    Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)

  12. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
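
    The quoted figures are mutually consistent: a 0.7 km horizon-location error, seen over the slant range from 500 km altitude to the geometric horizon, subtends about 0.015 deg. A back-of-envelope check (our own arithmetic, with an assumed mean Earth radius):

```python
import math

R_EARTH_KM = 6371.0  # assumed mean Earth radius

def horizon_angular_error_deg(location_err_km, altitude_km):
    """Angle subtended at the satellite by a horizon-location error."""
    # Slant range from the satellite to the geometric horizon.
    slant = math.sqrt((R_EARTH_KM + altitude_km) ** 2 - R_EARTH_KM ** 2)
    return math.degrees(location_err_km / slant)

print(round(horizon_angular_error_deg(0.7, 500.0), 4))  # ≈ 0.0156 deg
```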

  13. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  14. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  15. Rapid and reliable quantitation of amino acids and myo-inositol in mouse brain by high performance liquid chromatography and tandem mass spectrometry.

    PubMed

    Bathena, Sai P; Huang, Jiangeng; Epstein, Adrian A; Gendelman, Howard E; Boska, Michael D; Alnouti, Yazen

    2012-04-15

    Amino acids and myo-inositol have long been proposed as putative biomarkers for neurodegenerative diseases, but difficulties in accurate measurement and analyte stability have precluded their selective use. To this end, a sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) method based on multiple reaction monitoring was developed to simultaneously quantify glutamine, glutamate, γ-aminobutyric acid (GABA), aspartic acid, N-acetyl aspartic acid, taurine, choline, creatine, phosphocholine and myo-inositol in mouse brain after methanol extraction. Chromatography was performed using a hydrophilic interaction chromatography silica column within a total run time of 15 min. The validated method is selective, sensitive, accurate, and precise. The method has a limit of quantification ranging from 2.5 to 20 ng/ml across analytes and a dynamic range from 2.5-20 to 500-4000 ng/ml. This LC-MS/MS method was validated for biomarker discovery in models of human neurological disorders. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. The road surveying system of the federal highway research institute - a performance evaluation of road segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Streiter, R.; Wanielik, G.

    2013-07-01

    The construction of highways and federal roadways is subject to many restrictions and design rules. The focus is on safety, comfort and smooth driving. Unfortunately, planning information for roadways and their real constitution, course, number of lanes and lane widths is often uncertain or unavailable. Because digital road map databases have attracted much interest in recent years and have become a major cornerstone of innovative Advanced Driving Assistance Systems (ADAS), the demand for accurate and detailed road information has increased considerably. Within this project, a measurement system for collecting highly accurate road data was developed. This paper gives an overview of the sensor configuration of the measurement vehicle, introduces the implemented algorithms and shows some applications implemented in the post-processing platform. The aim is to recover the original parametric description of the roadway; the performance of the measurement system is evaluated against original road construction information.

  17. Detection of nitrogen dioxide by CW cavity-enhanced spectroscopy

    NASA Astrophysics Data System (ADS)

    Jie, Guo; Han, Ye-Xing; Yu, Zhi-Wei; Tang, Huai-Wu

    2016-11-01

    In this paper, an accurate and sensitive system was used to monitor ambient atmospheric NO2 concentrations. The system employs cavity attenuated phase shift spectroscopy (CAPS), a technique related to cavity ring-down spectroscopy (CRDS). Advantages of the CAPS system include: (1) an inexpensive, easily controlled light source, (2) high accuracy, and (3) a low detection limit. The performance of the CAPS system was evaluated by measuring its stability and response. The minima (0.08 ppb NO2) in the Allan plots indicate the optimum averaging time (100 s) for the best detection performance of the CAPS system. Over a 20-day period of monitoring ambient atmospheric NO2 concentrations, a comparison of the CAPS system with an extremely accurate and precise chemiluminescence-based NOx analyzer showed that the CAPS system was able to reliably and quantitatively measure both large and small fluctuations in the ambient nitrogen dioxide concentration, with a correlation of 0.95 between the two instruments' results.
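
    The Allan-plot analysis mentioned above can be sketched as follows: average the raw readings in blocks of m samples and take half the mean squared difference of successive block means. The data below are synthetic white noise, not the paper's measurements; for white noise the deviation falls roughly as 1/sqrt(m), and the minimum of a real Allan plot marks the optimum averaging time.

```python
import math
import random

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation for averaging windows of m samples."""
    n_blocks = len(samples) // m
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    # Allan variance: half the mean squared difference of successive block means.
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_blocks - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Synthetic white-noise "NO2 readings" (arbitrary units), not real data.
random.seed(0)
readings = [random.gauss(10.0, 1.0) for _ in range(10000)]

for m in (1, 10, 100):
    print(m, round(allan_deviation(readings, m), 3))  # deviation shrinks with m
```

    On real instrument data the curve stops falling once drift dominates; the averaging time at the minimum (here, the reported ~100 s) is the longest averaging that still helps.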

  18. Properties of targeted preamplification in DNA and cDNA quantification.

    PubMed

    Andersson, Daniel; Akrap, Nina; Svec, David; Godfrey, Tony E; Kubista, Mikael; Landberg, Göran; Ståhlberg, Anders

    2015-01-01

    Quantification of small numbers of molecules often requires preamplification to generate enough copies for accurate downstream enumeration. Here, we studied experimental parameters in targeted preamplification and their effects on downstream quantitative real-time PCR (qPCR). To evaluate different strategies, we monitored the preamplification reaction in real-time using SYBR Green detection chemistry followed by melting curve analysis. Furthermore, individual targets were evaluated by qPCR. The preamplification reaction performed best when a large number of primer pairs was included in the primer pool. In addition, preamplification efficiency, reproducibility and specificity were found to depend on the number of template molecules present, primer concentration, annealing time and annealing temperature. The amount of nonspecific PCR products could also be reduced about 1000-fold using bovine serum albumin, glycerol and formamide in the preamplification. On the basis of our findings, we provide recommendations on how to perform robust and highly accurate targeted preamplification in combination with qPCR or next-generation sequencing.

  19. Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.

    PubMed

    Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab

    2009-02-01

    An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.

  20. Stability of Hydrocarbons of the Polyhedrane Family: Convergence of ab Initio Calculations and Corresponding Assessment of DFT Main Approximations.

    PubMed

    Sancho-García, J C

    2011-09-13

    Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two forms, cage-like and planar. We also considered the effect of other minor contributions: core correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are then used to systematically evaluate the performance of the main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT). Some commonly used functionals, including the B3LYP model, are affected by large errors; only those having reduced self-interaction error (SIE), which includes the most recent family of expressions (double hybrids), are able to achieve reasonably low deviations of 1-2 kcal/mol, especially when an estimate for dispersion interactions is also added.
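
    A standard ingredient of such composite CC protocols is a two-point inverse-cube extrapolation of the correlation energy toward the basis-set limit (Helgaker-style). The abstract does not specify which extrapolation was used, so treat this as a generic sketch with made-up energies:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Extrapolate correlation energies with cardinal numbers x < y to the
    basis-set limit using the X**-3 form:
    E_CBS = (y^3 * E_y - x^3 * E_x) / (y^3 - x^3)."""
    return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

# Hypothetical correlation energies (hartree) in cc-pVTZ (X=3) and cc-pVQZ (X=4).
e_tz, e_qz = -1.2345, -1.2501
e_cbs = cbs_two_point(e_tz, 3, e_qz, 4)
print(round(e_cbs, 4))  # lies below the quadruple-zeta value, as expected
```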

  1. Theory of mind selectively predicts preschoolers’ knowledge-based selective word learning

    PubMed Central

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-01-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory of mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children’s preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children’s developing social cognition and early learning. PMID:26211504

  2. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
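
    A frequency-domain analysis of a throttle frequency sweep can be illustrated, in spirit, by ratioing the discrete Fourier transforms of the excitation and the model's response at the excitation frequency to obtain gain and phase lag. This is a generic sketch with synthetic signals, not the report's actual procedure:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2)), adequate for a short record."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Synthetic data: one tone of the sweep; the "response" is attenuated and lagged.
n = 256
inp = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]              # input, bin 8
out = [0.5 * math.sin(2 * math.pi * 8 * t / n - 0.3) for t in range(n)]  # response

X, Y = dft(inp), dft(out)
k = 8  # DFT bin of the excitation frequency
gain = abs(Y[k]) / abs(X[k])
phase = cmath.phase(Y[k] / X[k])
print(round(gain, 2), round(phase, 2))  # ≈ 0.5 gain, ≈ -0.3 rad lag
```

    Sweeping the excitation across frequencies and repeating this ratio traces out the response curve, which is how model error and instrumentation lag can show up as gain roll-off or growing phase lag.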

  3. Theory of mind selectively predicts preschoolers' knowledge-based selective word learning.

    PubMed

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-11-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory-of-mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children's preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children's developing social cognition and early learning. © 2015 The British Psychological Society.

  4. A clinical algorithm identifies high risk pediatric oncology and bone marrow transplant patients likely to benefit from treatment of adenoviral infection.

    PubMed

    Williams, Kirsten Marie; Agwu, Allison L; Dabb, Alix A; Higman, Meghan A; Loeb, David M; Valsamakis, Alexandra; Chen, Allen R

    2009-11-01

    Adenoviral infections cause morbidity and mortality in blood and marrow transplantation and pediatric oncology patients. Cidofovir is active against adenovirus, but must be used judiciously because of its nephrotoxicity and unclear indications. Therefore, before introducing cidofovir use during an adenoviral outbreak, we developed a clinical algorithm to distinguish low-risk patients from those who merited cidofovir therapy because of significant adenoviral disease and high risk for death. This study was conducted to determine whether the algorithm accurately predicted severe adenovirus disease and whether selective cidofovir treatment was beneficial. A retrospective analysis of a pediatric oncology/blood and marrow transplantation cohort before and after algorithm implementation was performed. Twenty patients with adenovirus infection were identified (14 high risk and 6 low risk). All low-risk patients cleared their infections without treatment. Before algorithm implementation, all 5 untreated high-risk patients died, 4 of 5 (80%) from adenoviral infection. In contrast, cidofovir reduced adenovirus-related mortality in the high-risk group after algorithm implementation (9 patients treated, 1 patient died; RR 0.14, P<0.05), and all treated high-risk patients cleared their virus. The clinical algorithm accurately identified patients at high risk for severe, fatal adenoviral disease who would benefit from selective use of cidofovir.
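
    The reported relative risk can be reproduced from the counts given in the abstract (1 of 9 treated vs 4 of 5 untreated high-risk patients dying of adenoviral disease):

```python
# Relative risk of adenovirus-related death, treated vs untreated high-risk
# patients, using the counts reported in the abstract.
deaths_treated, n_treated = 1, 9
deaths_untreated, n_untreated = 4, 5

rr = (deaths_treated / n_treated) / (deaths_untreated / n_untreated)
print(round(rr, 2))  # 0.14, matching the reported RR
```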

  5. Temperature and Humidity Calibration of a Low-Cost Wireless Dust Sensor for Real-Time Monitoring.

    PubMed

    Hojaiji, Hannaneh; Kalantarian, Haik; Bui, Alex A T; King, Christine E; Sarrafzadeh, Majid

    2017-03-01

    This paper introduces the design, calibration, and validation of a low-cost portable sensor for real-time measurement of dust particles in the environment. The proposed design combines low hardware cost with calibration based on temperature and humidity sensing to achieve accurate estimation of airborne dust density. Using commercial particulate matter sensors, a highly accurate air quality monitoring sensor was designed and calibrated against real-world variations in humidity and temperature for indoor and outdoor applications. Furthermore, to provide a low-cost, secure solution for real-time data transfer and monitoring, an onboard Bluetooth module with an AES data encryption protocol was implemented. The wireless sensor was tested for accuracy against a Dylos DC1100 Pro Air Quality Monitor as well as an Alphasense OPC-N2 optical air quality monitoring sensor. The sensor was also tested for reliability by comparing it to an exact copy of itself under indoor and outdoor conditions. Accurate measurements were achievable with the proposed sensor, compared to the commercially available sensors, under dynamically changing real-world humidity and temperature conditions. In addition to accurate and reliable sensing, the sensor was designed to be wearable and to perform real-time data collection and transmission, making it easy to collect and analyze data for air quality monitoring and real-time feedback in remote health monitoring applications. Thus, the proposed device achieves high-quality measurements at lower cost than commercially available wireless air quality sensors.

  6. Z-scan theoretical and experimental studies for accurate measurements of the nonlinear refractive index and absorption of optical glasses near damage threshold

    NASA Astrophysics Data System (ADS)

    Olivier, Thomas; Billard, Franck; Akhouayri, Hassan

    2004-06-01

    Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the other methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity and may give absolute measurements if the incident beam is accurately characterized. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness and nonlinear phase shift. According to our simulations and a rigorous analysis of the measured Z-scan signal, some unjustified approximations lead to very large errors. Thus, by reducing possible errors in the interpretation of Z-scan experiments, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.
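
    A split-step propagation algorithm of the kind mentioned above alternates a linear diffraction step applied in the spatial-frequency domain with a nonlinear Kerr-phase step applied in the spatial domain. The 1-D sketch below uses an illustrative grid, beam, and lumped nonlinear coefficient, not the authors' parameters:

```python
import cmath
import math

def dft(a, sign):
    """Naive DFT (O(n^2)); sign=-1 forward, sign=+1 inverse (caller divides by n)."""
    n = len(a)
    return [sum(a[t] * cmath.exp(sign * 2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def split_step(field, wavelength, dx, dz, n2k0, steps):
    """Alternate a paraxial diffraction step in k-space with a Kerr phase step in x-space."""
    n = len(field)
    fx = [(i if i < n // 2 else i - n) / (n * dx) for i in range(n)]  # spatial freqs
    for _ in range(steps):
        # Linear half: angular-spectrum (Fresnel) phase in the frequency domain.
        F = dft(field, -1)
        F = [F[i] * cmath.exp(-1j * math.pi * wavelength * fx[i] ** 2 * dz)
             for i in range(n)]
        field = [v / n for v in dft(F, +1)]
        # Nonlinear half: intensity-dependent Kerr phase in the spatial domain.
        field = [v * cmath.exp(1j * n2k0 * abs(v) ** 2 * dz) for v in field]
    return field

# Illustrative run: 1.064 um beam, 100 um waist; n2k0 (rad/m per unit intensity)
# is an arbitrary value chosen to give ~1 rad of accumulated nonlinear phase.
n, dx = 64, 10e-6
beam = [cmath.exp(-(((i - n // 2) * dx) / 100e-6) ** 2) for i in range(n)]
kerr = split_step(beam, 1.064e-6, dx, 1e-3, 200.0, 5)
linear = split_step(beam, 1.064e-6, dx, 1e-3, 0.0, 5)
print(abs(kerr[n // 2]) > abs(linear[n // 2]))  # self-focusing raises the on-axis field
```

    Each step is unitary, so the total energy is conserved; only the intensity distribution changes, which is the mechanism behind the wave-front degradation described above.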

  7. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high-resolution satellite video data, able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration formulated as a Markov Random Field (MRF) model, with efficient linear programming employed to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in minutes and the latter in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the experiments performed.

  8. Neural correlates of effective learning in experienced medical decision-makers.

    PubMed

    Downar, Jonathan; Bhatt, Meghana; Montague, P Read

    2011-01-01

    Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior, and this model clearly divided the subjects into high performers and low performers. The high performers showed small but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than failures; a tendency that led to sub-optimal treatment choices. Consistent with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule, while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation. Subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.
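The asymmetric-learning-rate behavior described above can be illustrated with a standard Rescorla-Wagner-style value update that uses separate rates for successes and failures. The abstract does not give the authors' exact model, so the greedy two-treatment learner below is only a sketch in the same spirit:

```python
import random

def simulate_learner(alpha_pos, alpha_neg, p_success, trials, seed=0):
    """Greedy two-treatment learner with separate learning rates for
    successes (alpha_pos) and failures (alpha_neg). Treatment 0 is
    assumed to be the better one; returns the fraction of trials on
    which it was chosen. Illustrative only, not the fitted model."""
    rng = random.Random(seed)
    q = [0.5, 0.5]                    # value estimate per treatment
    chose_best = 0
    for _ in range(trials):
        choice = 0 if q[0] >= q[1] else 1
        outcome = 1.0 if rng.random() < p_success[choice] else 0.0
        alpha = alpha_pos if outcome == 1.0 else alpha_neg
        q[choice] += alpha * (outcome - q[choice])   # Rescorla-Wagner update
        chose_best += (choice == 0)
    return chose_best / trials
```

A large alpha_pos paired with a small alpha_neg reproduces the "success-chasing" pattern: the learner over-weights lucky successes and under-weights informative failures.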

  9. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    PubMed Central

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis were performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that combines a high-throughput tracking algorithm, an intuitive reconnection routine, and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations, especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen-presenting cells. PMID:24086124

  10. Clinical application of antenatal genetic diagnosis of osteogenesis imperfecta type IV.

    PubMed

    Yuan, Jing; Li, Song; Xu, YeYe; Cong, Lin

    2015-04-02

    Clinical analysis and genetic testing of a family with osteogenesis imperfecta type IV were conducted, with the aim of discussing antenatal genetic diagnosis of osteogenesis imperfecta type IV. Preliminary genotyping was performed based on clinical characteristics of the family members, and high-throughput sequencing was then applied to rapidly and accurately detect changes in candidate genes. Genetic testing of the III5 fetus and other family members revealed the missense mutation c.2746G>A (p.Gly916Arg) in the COL1A2 coding region, as well as missense and synonymous mutations in the COL1A1 coding region. Application of antenatal genetic diagnosis provides fast and accurate genetic counseling and reproductive guidance for patients with osteogenesis imperfecta type IV and their families.

  11. Refractive indices of layers and optical simulations of Cu(In,Ga)Se2 solar cells

    PubMed Central

    Avancini, Enrico; Losio, Paolo A.; Figi, Renato; Schreiner, Claudia; Bürki, Melanie; Bourgeois, Emilie; Remes, Zdenek; Nesladek, Milos; Tiwari, Ayodhya N.

    2018-01-01

    Cu(In,Ga)Se2-based solar cells have reached efficiencies close to 23%. Further knowledge-driven improvements require accurate determination of the material properties. Here, we present refractive indices for all layers in high-efficiency Cu(In,Ga)Se2 solar cells. The optical bandgap of Cu(In,Ga)Se2 does not depend on the Cu content in the explored composition range, while the absorption coefficient is primarily determined by the Cu content. An expression for the absorption spectrum is proposed, with Ga and Cu compositions as parameters. This set of parameters allows accurate device simulations to understand remaining absorption and carrier collection losses and to develop strategies to improve performance. PMID:29785230

  12. Quantitative aspects of inductively coupled plasma mass spectrometry

    PubMed Central

    Wagner, Barbara

    2016-01-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
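The simplest of the calibration approaches mentioned, external calibration against pure standards, amounts to a straight-line fit of signal versus concentration followed by inversion for the unknown. A minimal sketch (internal-standard and standard-addition corrections, which the review also covers, are omitted):

```python
def external_calibration(std_conc, std_signal, sample_signal):
    """Quantify an analyte by external calibration: ordinary least-squares
    line through the calibration standards, inverted at the sample's
    measured signal intensity."""
    n = len(std_conc)
    mx = sum(std_conc) / n
    my = sum(std_signal) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(std_conc, std_signal))
             / sum((x - mx) ** 2 for x in std_conc))
    intercept = my - slope * mx          # blank/background signal estimate
    return (sample_signal - intercept) / slope
```

For matrix-matched standards the same arithmetic applies; only the composition of the standard solutions changes.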

  13. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. 
The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
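The speed-squared harmonic forcing law described above has a compact form: each tonal disturbance sits at a harmonic h of the wheel spin rate, with amplitude proportional to the spin rate squared. The coefficients below are illustrative placeholders, not RWDMES output:

```python
import math

def wheel_harmonic_force(t, spin_hz, harmonics):
    """Sum of tonal reaction-wheel disturbance forces at time t.

    harmonics: list of (h, C, phase) tuples; the tone at harmonic h of
    the spin rate has amplitude C * omega**2, the speed-squared law the
    text describes. Coefficients here are hypothetical examples."""
    omega = 2.0 * math.pi * spin_hz
    return sum(C * omega ** 2 * math.sin(h * omega * t + phase)
               for h, C, phase in harmonics)
```

Doubling the spin rate quadruples each tone's amplitude while also shifting its frequency, which is why the extracted model must be tuned across the wheel's whole speed range rather than at a single operating point.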

  14. 3D printing functional materials and devices (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McAlpine, Michael C.

    2017-05-01

    The development of methods for interfacing high performance functional devices with biology could impact regenerative medicine, smart prosthetics, and human-machine interfaces. Indeed, the ability to three-dimensionally interweave biological and functional materials could enable the creation of devices possessing unique geometries, properties, and functionalities. Yet, most high quality functional materials are two-dimensional, hard and brittle, and require high crystallization temperatures for maximal performance. These properties render the corresponding devices incompatible with biology, which is three-dimensional, soft, stretchable, and temperature sensitive. We overcome these dichotomies by: 1) using 3D printing and scanning for customized, interwoven, anatomically accurate device architectures; 2) employing nanotechnology as an enabling route for overcoming mechanical discrepancies while retaining high performance; and 3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This three-dimensional blending of functional materials and 'living' platforms may enable next-generation 3D printed devices.

  15. Resolution factors in edgeline holography.

    PubMed

    Trolinger, J D; Gee, T H

    1971-06-01

    When an in-line Fresnel hologram of an object such as a projectile in flight is made, the reconstruction comprises an image of the outside edge of the object superimposed upon a Fresnel diffraction pattern of the edge and an unmodulated portion of the reconstruction beam. When the reconstructed image is bandpass filtered, the only remaining significant contribution is that of a diffraction pattern which is symmetrical about an edgeline Gaussian image of the object. The present paper discusses the application of this type of holography to accurately locating the edge of a large dynamic object whose position is not accurately known in any dimension. A theoretical and experimental analysis was performed to study the effects of motion, hologram size, film type, and practical limitations upon the attainable resolution in the reconstructed image. The bandlimiting effect of motion is used to relate the motion-limited resolution of holography to that of photography. The study shows that an edgeline can be accurately located even at high velocity normal to the edge.

  16. Improving CAR Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    Real-time acquisition of accurate positions is essential for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. The image georeferencing results are combined with the other sensory data under a sensor fusion framework, using an extended Kalman filter for more accurate position estimation. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
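The sensor-fusion step can be sketched with a one-dimensional constant-velocity Kalman filter that predicts from the motion model and corrects with an absolute position fix (from GPS or from image georeferencing) whenever one is available. This scalar sketch stands in for the paper's full extended Kalman filter, whose state vector and models are not given in the abstract:

```python
def kf_step(x, P, dt, q, z=None, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x = [position, velocity]; P = 2x2 covariance (list of lists);
    q = process noise variance; z = absolute position fix, or None
    during GPS blockage (the filter then coasts on its motion model)."""
    # Predict: x <- F x, P <- F P F' + q*I, with F = [[1, dt], [0, 1]]
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    if z is not None:
        # Update with measurement model H = [1, 0], noise variance r
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]     # Kalman gain
        y = z - x[0]                       # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1.0 - K[0]) * P[0][0], (1.0 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P
```

Calling the step with z=None models signal blockage: the position uncertainty then grows with the motion model until the next fix arrives.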

  18. Accurate van der Waals force field for gas adsorption in porous materials.

    PubMed

    Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao

    2017-09-05

    An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore van der Waals interactions between gas molecules and porous materials, and the parameters of the force field were determined from the QM calculations. To validate the force field, its predictions were compared with those of standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with experimental measurements. The force field can be applied to predicting the gas density (H2, CO2, C2H4, CH4, N2, O2) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites, and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.
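As a sketch of the kind of fit involved, the 12-6 Lennard-Jones form E(r) = A/r^12 − B/r^6 is linear in A and B, so pairwise parameters can be recovered from sampled QM interaction energies by solving 2×2 normal equations. This is a generic stand-in; the paper's actual functional form and fitting procedure are not given in the abstract:

```python
def fit_lj(rs, energies):
    """Fit E(r) = A/r**12 - B/r**6 to sampled pair interaction energies
    (e.g. from QM cluster scans) by linear least squares, then convert
    to the well depth epsilon and zero-crossing distance sigma."""
    # Basis functions x1 = r**-12, x2 = -r**-6; assemble normal equations
    s11 = sum(r ** -24 for r in rs)
    s12 = sum(-(r ** -18) for r in rs)
    s22 = sum(r ** -12 for r in rs)
    t1 = sum(e * r ** -12 for r, e in zip(rs, energies))
    t2 = sum(-e * r ** -6 for r, e in zip(rs, energies))
    det = s11 * s22 - s12 * s12
    A = (t1 * s22 - s12 * t2) / det
    B = (s11 * t2 - s12 * t1) / det
    # A = 4*eps*sigma**12, B = 4*eps*sigma**6
    sigma = (A / B) ** (1.0 / 6.0)
    epsilon = B * B / (4.0 * A)
    return epsilon, sigma
```

Here sigma is the separation at which the pair energy crosses zero and epsilon the depth of the attractive well; a table of such pair parameters is what a van der Waals force field stores.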

  19. Computation of Dielectric Response in Molecular Solids for High Capacitance Organic Dielectrics.

    PubMed

    Heitzer, Henry M; Marks, Tobin J; Ratner, Mark A

    2016-09-20

    The dielectric response of a material is central to numerous processes spanning the fields of chemistry, materials science, biology, and physics. Despite this broad importance across these disciplines, describing the dielectric environment of a molecular system at the level of first-principles theory and computation remains a great challenge and is of importance to understand the behavior of existing systems as well as to guide the design and synthetic realization of new ones. Furthermore, with recent advances in molecular electronics, nanotechnology, and molecular biology, it has become necessary to predict the dielectric properties of molecular systems that are often difficult or impossible to measure experimentally. In these scenarios, it would be highly desirable to determine dielectric response through efficient, accurate, and chemically informative calculations. A good example of where theoretical modeling of dielectric response would be valuable is in the development of high-capacitance organic gate dielectrics for unconventional electronics such as those that could be fabricated by high-throughput printing techniques. Gate dielectrics are fundamental components of all transistor-based logic circuitry, and the combination of high dielectric constant and nanoscopic thickness (i.e., high capacitance) is essential to achieving high switching speeds and low power consumption. Molecule-based dielectrics offer the promise of cheap, flexible, and mass producible electronics when used in conjunction with unconventional organic or inorganic semiconducting materials to fabricate organic field effect transistors (OFETs). The molecular dielectrics developed to date typically have limited dielectric response, which results in low capacitances, translating into poor performance of the resulting OFETs. 
Furthermore, the development of better performing dielectric materials has been hindered by the current highly empirical and labor-intensive pace of synthetic progress. An accurate and efficient theoretical computational approach could drastically decrease this time by screening potential dielectric materials and providing reliable design rules for future molecular dielectrics. Until recently, accurate calculation of dielectric responses in molecular materials was difficult and highly approximate. Most previous modeling efforts relied on classical formalisms to relate molecular polarizability to macroscopic dielectric properties. These efforts often vastly overestimated polarizability in the subject materials and ignored crucial material properties that can affect dielectric response. Recent advances in first-principles calculations via density functional theory (DFT) with periodic boundary conditions have allowed accurate computation of dielectric properties in molecular materials. In this Account, we outline the methodology used to calculate dielectric properties of molecular materials. We demonstrate the validity of this approach on model systems, capturing the frequency dependence of the dielectric response and achieving quantitative accuracy compared with experiment. This method is then used as a guide to new high-capacitance molecular dielectrics by determining what materials and chemical properties are important in maximizing dielectric response in self-assembled monolayers (SAMs). It will be seen that this technique is a powerful tool for understanding and designing new molecular dielectric systems, the properties of which are fundamental to many scientific areas.

  20. Components for IFOG based inertial measurement units using active and passive polymer materials

    NASA Astrophysics Data System (ADS)

    Ashley, Paul R.; Temmen, Mark G.; Diffey, William M.; Sanghadasa, Mohan; Bramson, Michael D.; Lindsay, Geoffrey A.; Guenthner, Andrew J.

    2006-08-01

    Highly accurate, compact, and low-cost inertial measurement units (IMUs) are needed for precision guidance in navigation systems. Active and passive polymer materials have been successfully used in fabricating two of the key guided-wave components for IMUs based on interferometric fiber optic gyroscope (IFOG) technology: the phase modulator and the optical transceiver. Advanced hybrid waveguide fabrication processes and novel optical integration techniques have been introduced. Backscatter-compensated, low-loss phase modulators with low half-wave drive voltage (Vπ) have been fabricated with CLD- and FTC-type high-performance electro-optic chromophores. A silicon-bench architecture has been used to fabricate high-gain, low-noise transceivers with high optical power while maintaining spectral quality and long lifetime. Gyro bias stability of less than 0.02 deg/hr has been demonstrated with these components. A review of the novel concepts introduced, the fabrication and integration techniques developed, and the performance achieved is presented.

  1. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher-order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second-order accurate in time and fourth-order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second-, third-, and fourth-order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second- and third-order terms in the damping. Use of second- and fourth-order terms reduced the magnitude of dispersion errors, but not as effectively as the second- and third-order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
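A minimal sketch of the Gottlieb-Turkel MacCormack 2-4 scheme for linear advection (u_t + c u_x = 0) on a periodic grid, without the Jameson-type artificial viscosity terms studied in the paper:

```python
import math

def maccormack24_step(u, cfl):
    """One step of the MacCormack 2-4 scheme (Gottlieb-Turkel) for
    u_t + c u_x = 0 on a periodic grid, with cfl = c*dt/dx. Second-order
    accurate in time, fourth-order in space, stable for cfl < 2/3."""
    n = len(u)
    lam = cfl / 6.0
    # Predictor: one-sided forward differences (-u[i+2] + 8u[i+1] - 7u[i])
    up = [u[i] - lam * (7.0 * (u[(i + 1) % n] - u[i])
                        - (u[(i + 2) % n] - u[(i + 1) % n]))
          for i in range(n)]
    # Corrector: one-sided backward differences on the predicted field,
    # then average with the old field
    return [0.5 * (u[i] + up[i]
                   - lam * (7.0 * (up[i] - up[(i - 1) % n])
                            - (up[(i - 1) % n] - up[(i - 2) % n])))
            for i in range(n)]
```

With cfl = 0.25 as in the paper's Category 2 runs, a smooth wave on an n-point periodic grid advects one full period in n/cfl steps with little amplitude or phase error; the Gibbs oscillations the paper damps with artificial viscosity appear only at sharp discontinuities.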

  2. Performance of different reflectance and diffuse optical imaging tomographic approaches in fluorescence molecular imaging of small animals

    NASA Astrophysics Data System (ADS)

    Dinten, Jean-Marc; Petié, Philippe; da Silva, Anabela; Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Laidevant, Aurélie; Rizo, Philippe

    2006-03-01

    Optical imaging of fluorescent probes is an essential tool for investigating molecular events in small animals for drug development. In order to obtain localization and quantification information on fluorescent labels, CEA-LETI has developed efficient approaches in classical reflectance imaging as well as in diffuse optical tomographic imaging with continuous and temporal signals. This paper presents an overview of the different approaches investigated and their performance. High-quality fluorescence reflectance imaging is obtained thanks to the development of an original "multiple wavelengths" system. The uniformity of the excitation light over the surface area is better than 15%. Combined with the use of adapted fluorescent probes, this system enables accurate detection of pathological tissues, such as nodules, beneath the animal's observed area. Performance for the detection of ovarian nodules on a nude mouse is shown. In order to investigate deeper inside animals and obtain 3D localization, diffuse optical tomography systems are being developed for both slab and cylindrical geometries. For these two geometries, our reconstruction algorithms are based on an analytical expression of light diffusion. Thanks to an accurate treatment of the light/matter interaction process in the algorithms, high-quality reconstructions of tumors in mice have been obtained. Reconstructions of lung tumors in mice are presented. With temporal diffuse optical imaging, localization and quantification performance can be improved at the price of a more sophisticated acquisition system and more elaborate information processing methods. Such a system, based on a pulsed laser diode and a time-correlated single photon counting system, has been set up. The performance of this system for localization and quantification of fluorescent probes is presented.

  3. Modeling flow and transport in fracture networks using graphs

    NASA Astrophysics Data System (ADS)

    Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.

    2018-03-01

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. 
The good accuracy and the low computational cost, with computation times roughly O(10^4) times lower than the DFN's, make the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
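In essence, the graph approach replaces the meshed DFN with a resistor-network (Kirchhoff) problem: fracture intersections become nodes, fracture segments become edges with effective conductances, and steady-state pressures come from a graph-Laplacian solve. The toy solver below illustrates this on a small network using dense Gaussian elimination (fine at this scale; the authors' algorithm and its bias correction are more involved):

```python
def solve_graph_flow(n, edges, p_in, p_out):
    """Steady-state pressures on a graph representation of a DFN.

    Nodes 0..n-1; edges = [(i, j, conductance)]; node 0 is the inlet
    held at p_in, node n-1 the outlet held at p_out. Assembles the graph
    Laplacian (Kirchhoff) system and solves it by Gaussian elimination."""
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i, j, c in edges:
        A[i][i] += c; A[j][j] += c
        A[i][j] -= c; A[j][i] -= c
    # Dirichlet rows for the inlet and outlet nodes
    for node, val in ((0, p_in), (n - 1, p_out)):
        A[node] = [1.0 if k == node else 0.0 for k in range(n)]
        b[node] = val
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][k] * p[k] for k in range(r + 1, n))
        p[r] = s / A[r][r]
    return p
```

On a three-node chain with equal conductances, the interior pressure falls exactly halfway between inlet and outlet, as Kirchhoff's laws require; edge fluxes (and hence particle travel times) follow from the pressure differences across each edge.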

  4. Modeling flow and transport in fracture networks using graphs.

    PubMed

    Karra, S; O'Malley, D; Hyman, J D; Viswanathan, H S; Srinivasan, G

    2018-03-01

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. 
The good accuracy and the low computational cost, with computation times roughly O(10^4) times lower than the DFN's, make the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.

  5. Modeling flow and transport in fracture networks using graphs

    DOE PAGES

    Karra, S.; O'Malley, D.; Hyman, J. D.; ...

    2018-03-09

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate due to incorporation of the detailed fracture network structure over continuum-based methods, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying number of fractures and degree of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology to the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate. 
In conclusion, the good accuracy and the low computational cost, with O(10 4) times lower times than the DFN, makes the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.« less
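As an illustrative sketch of the graph idea (not the authors' code; the four-node network, conductance values, and boundary pressures below are invented), steady-state pressures on a fracture-intersection graph follow from mass balance at each free node, i.e. a graph-Laplacian linear system:

```python
def solve_pressures(nodes, edges, fixed):
    """Steady-state pressures on a graph: at each free node the fluxes
    q = c * (p_u - p_v) sum to zero; `fixed` holds boundary pressures."""
    free = [n for n in nodes if n not in fixed]
    idx = {n: i for i, n in enumerate(free)}
    m = len(free)
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for (u, v), c in edges.items():
        for a, other in ((u, v), (v, u)):
            if a in idx:
                A[idx[a]][idx[a]] += c
                if other in idx:
                    A[idx[a]][idx[other]] -= c
                else:
                    b[idx[a]] += c * fixed[other]
    # Solve A x = b by Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for k in range(col, m):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, m))) / A[r][r]
    p = dict(fixed)
    p.update((n, x[i]) for n, i in idx.items())
    return p

# A four-node toy network: inlet A held at p=1, outlet D at p=0.
network = {("A", "B"): 2.0, ("B", "C"): 1.0, ("C", "D"): 2.0, ("B", "D"): 1.0}
pressures = solve_pressures(["A", "B", "C", "D"], network, {"A": 1.0, "D": 0.0})
# pressures["B"] = 6/11, pressures["C"] = 2/11 (mass is conserved at B and C)
```

Edge fluxes q = c(p_u - p_v) then give travel times along the network; the bias correction described in the abstract addresses the systematic underprediction of these intersection-to-intersection pressure gradients.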

  6. Modeling flow and transport in fracture networks using graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karra, S.; O'Malley, D.; Hyman, J. D.

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport with the discrete fracture network (DFN) approach is known to be more accurate than continuum-based methods, because it incorporates the detailed structure of the fracture network, capturing flow and transport across such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistics between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying numbers of fractures and degrees of heterogeneity. Due to our recent developments in capabilities to perform high-fidelity DFN simulations on fracture networks with large numbers of fractures, we are in a unique position to perform such a comparison. We show that the graph approach exhibits a consistent bias, with up to an order of magnitude slower breakthrough than the DFN approach. This is due to the graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology for the graph algorithm that reduces the discrepancy between the DFN and graph predictions. With this bias correction, the graph algorithm's predictions improve significantly and the results are very accurate. In conclusion, the good accuracy and low computational cost, with run times O(10^4) times lower than the DFN, make the graph algorithm an ideal technique to incorporate into uncertainty quantification methods.

  7. MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology

    PubMed Central

    Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota

    2015-01-01

    We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from miRBase 19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes, and MiRdup, as well as a Simple Geometric Locator, when applied to the same training datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and generalizes very well to plant hairpins without any special optimization; and (c) the tool has a number of important applications, such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation to the confidence evaluation methods recently used in miRBase, MiRduplexSVM was successful in identifying high-confidence potential miRNAs. PMID:25961860

  8. Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.

    PubMed

    Kim, Dong Sik

    2016-08-01

    The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure the NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n used for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n=1) can provide an accurate NPS because of the compensation constant (1+1/n).
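A toy simulation (illustrative only; the pixel count, 5% gain spread, and noise level are invented, and Gaussian noise stands in for photon noise) shows why dividing by a gain map estimated from n flat images inflates the measured noise variance by roughly (1 + 1/n), and why dividing by that compensation constant recovers the true variance:

```python
import random

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

def gain_corrected_noise_variance(n_flats=1, pixels=20000, level=1000.0,
                                  sigma=10.0, seed=0):
    """Simulate a detector with 5% fixed-pattern gain spread, correct a flat
    image with a gain map averaged from n_flats flat images, and return the
    raw corrected-image variance and the (1 + 1/n)-compensated estimate."""
    rng = random.Random(seed)
    gains = [1.0 + 0.05 * rng.gauss(0.0, 1.0) for _ in range(pixels)]
    # Gain map: per-pixel average of n uniformly illuminated flat images.
    gmap = [sum(g * (level + rng.gauss(0.0, sigma)) for _ in range(n_flats))
            / (n_flats * level) for g in gains]
    # A further flat image, divided by the estimated gain map.
    corrected = [g * (level + rng.gauss(0.0, sigma)) / gm
                 for g, gm in zip(gains, gmap)]
    raw = variance(corrected)            # inflated by residual gain-map noise
    return raw, raw / (1.0 + 1.0 / n_flats)
```

With n=1 the raw variance comes out near twice the true value of sigma**2, and the compensated estimate sits near sigma**2, mirroring the "(n=1) with compensation constant (1+1/n)" result in the abstract.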

  9. Scanning electron microscope automatic defect classification of process induced defects

    NASA Astrophysics Data System (ADS)

    Wolfe, Scott; McGarvey, Steve

    2017-03-01

    With the integration of high-speed Scanning Electron Microscope (SEM) based Automated Defect Redetection (ADR) in both high volume semiconductor manufacturing and Research and Development (R&D), the need for reliable SEM Automated Defect Classification (ADC) has grown tremendously in the past few years. In many high volume manufacturing facilities and R&D operations, defect inspection is performed on EBeam (EB), Bright Field (BF) or Dark Field (DF) defect inspection equipment. A comma separated value (CSV) file is created by both the patterned and non-patterned defect inspection tools. The defect inspection result file contains a list of the anomalies detected during the inspection tool's examination of each structure, or of an entire wafer's surface for non-patterned applications. This file is imported into the Defect Review Scanning Electron Microscope (DRSEM). Following the defect inspection result file import, the DRSEM automatically moves the wafer to each defect coordinate and performs ADR. During ADR the DRSEM operates in a reference mode, capturing a SEM image at the exact position of the anomaly's coordinates and capturing a SEM image of a reference location in the center of the wafer. A defect reference image is created by subtracting the defect image from the reference image. The exact coordinates of the defect are calculated from the computed defect position and the anomaly's stage coordinate determined when the high magnification SEM defect image is captured. The captured SEM image is processed through DRSEM ADC binning, exported to a Yield Analysis System (YAS), or both. Process Engineers, Yield Analysis Engineers or Failure Analysis Engineers manually review the captured images to ensure that either the YAS defect binning or the DRSEM defect binning is accurately classifying the defects.
    This paper explores the feasibility of using a Hitachi RS4000 Defect Review SEM to perform Automatic Defect Classification, with the objective that total automated classification accuracy exceed human-based defect classification binning when the defects do not require knowledge of multiple process steps for accurate classification. The implementation of DRSEM ADC has the potential to improve the response time between defect detection and defect classification. Faster defect classification will allow rapid response to yield anomalies, ultimately reducing wafer and/or die yield loss.

  10. Evaluation of the performance of the OneTouch Select Plus blood glucose test system against ISO 15197:2013.

    PubMed

    Setford, Steven; Smith, Antony; McColl, David; Grady, Mike; Koria, Krisna; Cameron, Hilary

    2015-01-01

    Assess laboratory and in-clinic performance of the OneTouch Select® Plus test system against the ISO 15197:2013 standard for measurement of blood glucose. System performance was assessed in the laboratory against key patient, environmental and pharmacologic factors. User performance was assessed in clinic by system-naïve lay-users. Healthcare professionals assessed system accuracy in subjects with diabetes in clinic. The system demonstrated high levels of performance, meeting ISO 15197:2013 requirements in laboratory testing (precision, linearity, hematocrit, temperature, humidity and altitude). System performance was tested against 28 interferents, with an adverse interfering effect recorded only for pralidoxime iodide. Clinic user performance results fulfilled ISO 15197:2013 accuracy criteria. Subjects agreed that the color range indicator clearly showed whether they were low, in-range or high and helped them better understand glucose results. The system evaluated is accurate and meets all ISO 15197:2013 requirements as per the tests described. The color range indicator helped subjects understand glucose results and supports patients in following healthcare professional recommendations on glucose targets.
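For reference, the ISO 15197:2013 system accuracy criterion, as commonly summarized, can be checked mechanically. The sketch below (hypothetical helper names; values in mg/dL) encodes the ±15 mg/dL / ±15% single-reading limits and the 95% system pass threshold:

```python
def iso15197_within_limits(reference, measured):
    """Single-reading accuracy limit (mg/dL): within +/-15 mg/dL of the
    reference below 100 mg/dL, within +/-15% at or above 100 mg/dL."""
    if reference < 100.0:
        return abs(measured - reference) <= 15.0
    return abs(measured - reference) <= 0.15 * reference

def iso15197_system_pass(pairs):
    """System-level criterion: at least 95% of (reference, measured) pairs
    must fall within the single-reading limits."""
    ok = sum(1 for r, m in pairs if iso15197_within_limits(r, m))
    return ok / len(pairs) >= 0.95
```

The standard also imposes a consensus error grid requirement not modeled here; this sketch covers only the numerical accuracy limits.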

  11. Measuring the Hydraulic Effectiveness of Low Impact Development Practices in a Heavily Urbanised Environment: A Case Study from London, UK

    NASA Astrophysics Data System (ADS)

    El Hattab, M. H.; Vernon, D.; Mijic, A.

    2017-12-01

    Low impact development (LID) practices are deemed to have a synergistic effect in mitigating urban storm water flooding. Designing and implementing effective LID practices requires reliable real-life data about their performance in different applications; however, there are few studies providing such data. In this study an innovative micro-monitoring system was developed to assess the performance of porous pavement and rain gardens as retrofitting technologies. Three pilot streets in London, UK were selected as part of Thames Water Utilities Limited's Counters Creek scheme. The system includes a V-notch weir installed at the outlet of each LID device to provide accurate and reliable quantification over a wide range of discharges, together with a low-flow sensor installed downstream of the V-notch to cross-check the readings. With a flow survey time-series of the pre-retrofitting conditions from the study streets available, extensive laboratory calibrations under different flow conditions depicting the exact site conditions were performed prior to installing the devices in the field. The micro-monitoring system is well suited to high-resolution temporal monitoring and enables accurate long-term evaluation of LID components' performance. Initial results from the field validated the robustness of the system in fulfilling its requirements.
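A V-notch weir converts an easily measured head into discharge via the standard triangular-weir formula Q = (8/15)·Cd·√(2g)·tan(θ/2)·h^(5/2). A minimal sketch (the discharge coefficient Cd ≈ 0.58 is a typical textbook value, not a number from this study, which calibrated its weirs in the laboratory):

```python
import math

def vnotch_discharge(head_m, notch_angle_deg=90.0, cd=0.58):
    """Discharge (m^3/s) over a triangular (V-notch) weir:
    Q = (8/15) * Cd * sqrt(2g) * tan(theta/2) * h^(5/2)."""
    g = 9.81  # gravitational acceleration, m/s^2
    return (8.0 / 15.0) * cd * math.sqrt(2.0 * g) \
        * math.tan(math.radians(notch_angle_deg / 2.0)) * head_m ** 2.5
```

For a 90° notch at 0.1 m head this gives roughly 4.3 L/s; the strong h^(5/2) dependence is what makes the V-notch accurate over a wide range of discharges.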

  12. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits to the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  13. High Accuracy Transistor Compact Model Calibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
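The margin quantification the report alludes to can be sketched roughly as follows (an illustrative QMU-style ratio under assumed nominal, range, and threshold values; not the report's actual procedure):

```python
def margin_to_uncertainty(nominal, lower, upper, threshold):
    """QMU-style confidence ratio: the margin between the nominal model
    prediction and a functional threshold, divided by the spread the
    calibrated range models allow around the nominal."""
    margin = nominal - threshold
    uncertainty = max(nominal - lower, upper - nominal)
    return margin / uncertainty
```

A ratio well above 1 indicates that even the worst-case range model clears the functional threshold, which is exactly the kind of statement the nominal/range surrogate cannot support without the higher-accuracy calibration the report describes.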

  14. High-resolution three-dimensional magnetic resonance imaging of mouse lung in situ.

    PubMed

    Scadeng, Miriam; Rossiter, Harry B; Dubowitz, David J; Breen, Ellen C

    2007-01-01

    This study establishes a method for high-resolution isotropic magnetic resonance (MR) imaging of mouse lungs using tracheal liquid instillation to remove MR susceptibility artifacts. C57BL/6J mice were instilled sequentially with perfluorocarbon and phosphate-buffered saline to an airway pressure of 10, 20, or 30 cm H2O. Imaging was performed in a 7 T MR scanner using a 2.5-cm quadrature volume coil and a 3-dimensional (3D) FLASH imaging sequence. Liquid instillation removed magnetic susceptibility artifacts and allowed lung structure to be viewed at an isotropic resolution of 78–90 μm. Instilled liquid and modeled lung volumes were well correlated (R = 0.92; P < 0.05) and differed by a constant tissue volume (220 ± 92 μL). 3D image renderings allowed differences in structural dimensions (volumes and areas) to be accurately measured at each inflation pressure. These data demonstrate the efficacy of pulmonary liquid instillation for in situ high-resolution MR imaging of mouse lungs for accurate measurement of pulmonary airway, parenchymal, and vascular structures.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard, P.

    The study of inelastic collision phenomena with highly charged projectile ions and the interpretation of spectral features resulting from these collisions remain the major focal points of the atomic physics research at the J.R. Macdonald Laboratory, Kansas State University, Manhattan, Kansas. The title of the research project, "Atomic Physics with Highly Charged Ions," speaks to these points. The experimental work in the past few years has divided into collisions at high velocity using the primary beams from the tandem and LINAC accelerators and collisions at low velocity using the CRYEBIS facility. Theoretical calculations have been performed to accurately describe inelastic scattering processes of the one-electron and many-electron type, and to accurately predict atomic transition energies and intensities for x rays and Auger electrons. Brief research summaries are given for the following: (1) electron production in ion-atom collisions; (2) role of electron-electron interactions in two-electron processes; (3) multi-electron processes; (4) collisions with excited, aligned, Rydberg targets; (5) ion-ion collisions; (6) ion-molecule collisions; (7) ion-atom collision theory; and (8) ion-surface interactions.

  16. Gas Scintillation Proportional Counters for High-Energy X-ray Astronomy

    NASA Technical Reports Server (NTRS)

    Gubarev, Mikhail; Ramsey, Brian; Apple, Jeffery

    2003-01-01

    A focal plane array of high-pressure gas scintillation proportional counters (GSPC) for a balloon-borne hard-x-ray telescope is under development at the Marshall Space Flight Center. These detectors have an active area of approx. 20 sq cm and are filled with a high-pressure (10^6 Pa) xenon-helium mixture. Imaging is via crossed-grid position-sensitive phototubes sensitive in the UV region. The performance of the GSPC is well matched to that of the telescope's x-ray optics, which have response to 75 keV and a focal spot size of approx. 500 microns. The detector's energy resolution, 4% FWHM at 60 keV, is adequate for resolving the broad spectral lines of astrophysical importance and for accurate continuum measurements. Full details of the instrument and its performance will be provided.

  17. Centralized PI control for high dimensional multivariable systems based on equivalent transfer function.

    PubMed

    Luan, Xiaoli; Chen, Qiang; Liu, Fei

    2014-09-01

    This article presents a new scheme for designing a full matrix controller for high dimensional multivariable processes based on the equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed using existing PI tuning rules. The newly proposed ETF model can more accurately represent the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Simulation study of a new inverse-pinch high Coulomb transfer switch

    NASA Technical Reports Server (NTRS)

    Choi, S. H.

    1984-01-01

    A simulation study of a simplified model of a high coulomb transfer switch is performed. The switch operates in an inverse pinch geometry formed by an all-metal chamber, which greatly reduces hot spot formation on the electrode surfaces. Advantages of the switch over conventional switches are longer useful life, higher current capability and lower inductance, which improve the characteristics required for a high repetition rate switch. The simulation determines the design parameters by analytical computation and by comparison with the experimentally measured risetime, current handling capability, electrode damage, and hold-off voltages. The parameters of an initial switch design can be determined for the anticipated switch performance. Results are in agreement with experiment. Although the model is simplified, switch characteristics such as risetime, current handling capability, electrode damage, and hold-off voltages are accurately determined.

  19. High Sensitivity and Specificity of Clinical Microscopy in Rural Health Facilities in Western Kenya Under an External Quality Assurance Program

    PubMed Central

    Wafula, Rebeccah; Sang, Edna; Cheruiyot, Olympia; Aboto, Angeline; Menya, Diana; O'Meara, Wendy Prudhomme

    2014-01-01

    Microscopic diagnosis of malaria is a well-established and inexpensive technique that has the potential to provide accurate diagnosis of malaria infection. However, it requires both training and experience. Although it is considered the gold standard in research settings, the sensitivity and specificity of routine microscopy for clinical care in the primary care setting has been reported to be unacceptably low. We established a monthly external quality assurance program to monitor the performance of clinical microscopy in 17 rural health centers in western Kenya. The average sensitivity over the 12-month period was 96% and the average specificity was 88%. We identified specific contextual factors that contributed to inadequate performance. Maintaining high-quality malaria diagnosis in high-volume, resource-constrained health facilities is possible. PMID:24935953
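The two reported figures follow directly from confusion-matrix counts. As a sketch (the counts below are hypothetical, chosen only to reproduce the reported 96%/88% averages):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 96 of 100 slide-positive cases flagged, 88 of 100 negatives cleared:
sens, spec = sensitivity_specificity(96, 4, 88, 12)  # -> (0.96, 0.88)
```

In a quality assurance program these counts come from comparing each facility's routine reads against expert re-reads of the same slides, aggregated per month.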

  20. DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri

    2015-01-01

    A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.

  1. A Highly Sensitive Fiber Optic Sensor Based on Two-Core Fiber for Refractive Index Measurement

    PubMed Central

    Guzmán-Sepúlveda, José Rafael; Guzmán-Cabrera, Rafael; Torres-Cisneros, Miguel; Sánchez-Mondragón, José Javier; May-Arrioja, Daniel Alberto

    2013-01-01

    A simple and compact fiber optic sensor based on a two-core fiber is demonstrated for high-performance measurement of the refractive index (RI) of liquids. In order to demonstrate the suitability of the proposed sensor for high-sensitivity sensing in a variety of applications, the sensor has been used to measure the RI of binary liquid mixtures. Such measurements can accurately determine the salinity of salt water solutions and detect the water content of adulterated alcoholic beverages. The largest sensitivity of the RI sensor that has been experimentally demonstrated is 3,119 nm per refractive index unit (RIU) for the RI range from 1.3160 to 1.3943. Our results suggest that the sensitivity can be enhanced up to approximately 3,485.67 nm/RIU for the same RI range. PMID:24152878

  2. Pointing and tracking space mechanism for laser communication

    NASA Technical Reports Server (NTRS)

    Brunschvig, A.; Deboisanger, M.

    1994-01-01

    Space optical communication is considered a promising technology owing to its high data rate and confidentiality capabilities. However, today it requires complex satellite systems involving highly accurate mechanisms. This paper aims to highlight the stringent requirements which had to be fulfilled for such a mechanism, the way an existing design was adapted to meet these requirements, and the main technical difficulties which have been overcome thanks to extensive development tests throughout the C/D phase initiated in 1991. The expected on-orbit performance of this mechanism is also presented.

  3. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted, but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification of the original SATé algorithm that improves upon SATé (which we now call SATé-I) in speed and in phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller, more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees, and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results.
First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.

  4. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving performance sufficient to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H∞ optimization to synthesize a set of controllers explicitly trading between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H∞ design method than either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.

  5. A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy.

    PubMed

    Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George

    2018-05-01

    Endomicroscopy (EM) provides high resolution, non-invasive histological tissue information and can be used for scanning large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tension in each cable. As a result, the instrument can accurately apply a contact force to a tissue while at the same time offering high resolution and highly repeatable probe movement. Force sensitivities of 0.2 and 0.6 N were found for the 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
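Resolving measured cable tensions into an end-effector force is a direct vector sum, with each cable pulling toward its anchor. A minimal planar sketch (the geometry and tensions are made up, not the instrument's calibration):

```python
import math

def endpoint_force(effector, anchors, tensions):
    """Net planar force on the end-effector: each cable pulls toward its
    anchor point with a magnitude equal to its measured tension."""
    ex, ey = effector
    fx = fy = 0.0
    for (ax, ay), t in zip(anchors, tensions):
        dx, dy = ax - ex, ay - ey
        length = math.hypot(dx, dy)   # cable length, used to normalize
        fx += t * dx / length
        fy += t * dy / length
    return fx, fy
```

Since cables can only pull, a real CDPM keeps all tensions positive and relies on redundant cables to span the force space; the tissue contact force is the residual of this sum once the commanded motion is accounted for.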

  6. High-resolution satellite imagery is an important yet underutilized resource in conservation biology.

    PubMed

    Boyle, Sarah A; Kennedy, Christina M; Torres, Julio; Colman, Karen; Pérez-Estigarribia, Pastor E; de la Sancha, Noé U

    2014-01-01

    Technological advances and increasing availability of high-resolution satellite imagery offer the potential for more accurate land cover classifications and pattern analyses, which could greatly improve the detection and quantification of land cover change for conservation. Such remotely sensed products, however, are often expensive and difficult to acquire, which prohibits or reduces their use. We tested whether imagery of high spatial resolution (≤5 m) differs from lower-resolution imagery (≥30 m) in performance and extent of use for conservation applications. To assess performance, we classified land cover in a heterogeneous region of Interior Atlantic Forest in Paraguay, which has undergone recent and dramatic human-induced habitat loss and fragmentation. We used 4 m multispectral IKONOS and 30 m multispectral Landsat imagery and determined the extent to which resolution influenced the delineation of land cover classes and patch-level metrics. Higher-resolution imagery more accurately delineated cover classes, identified smaller patches, retained patch shape, and detected narrower, linear patches. To assess extent of use, we surveyed three conservation journals (Biological Conservation, Biotropica, Conservation Biology) and found limited application of high-resolution imagery in research, with only 26.8% of land cover studies analyzing satellite imagery, and of these studies only 10.4% using imagery of ≤5 m resolution. Our results suggest that high-resolution imagery is warranted yet underutilized in conservation research, and is needed to adequately monitor and evaluate forest loss and conversion, and to delineate potentially important stepping-stone fragments that may serve as corridors in a human-modified landscape. Greater access to low-cost, multiband, high-resolution satellite imagery would therefore greatly facilitate conservation management and decision-making.
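The resolution effect on patch detection can be illustrated with a toy raster (a hypothetical 6×6 binary land cover map, with majority-rule block downsampling standing in for coarser imagery): a single-pixel patch visible at fine resolution disappears when the grid is coarsened by a factor of 3:

```python
def count_patches(grid):
    """Count 4-connected patches of 1s in a binary grid via flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                patches += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
    return patches

def coarsen(grid, k):
    """Downsample by k x k blocks: a coarse cell is 1 only if a majority
    of the fine cells in its block are 1 (ties count as 0)."""
    out = []
    for big_r in range(len(grid) // k):
        row = []
        for big_c in range(len(grid[0]) // k):
            s = sum(grid[big_r * k + i][big_c * k + j]
                    for i in range(k) for j in range(k))
            row.append(1 if s > k * k // 2 else 0)
        out.append(row)
    return out
```

On a map with one 3×3 patch and one isolated pixel, the fine grid yields two patches while the 3×-coarsened grid yields one, which is the mechanism behind the smaller, narrower patches only the ≤5 m imagery detected.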

  7. Accurate prediction of protein-protein interactions by integrating potential evolutionary information embedded in PSSM profile and discriminative vector machine classifier.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Li, Li-Ping; Huang, De-Shuang; Yan, Gui-Ying; Nie, Ru; Huang, Yu-An

    2017-04-04

Identification of protein-protein interactions (PPIs) is of critical importance for deciphering the underlying mechanisms of almost all cellular processes and provides great insight into the study of human disease. Although much effort has been devoted to identifying PPIs in various organisms, existing high-throughput biological techniques are time-consuming, expensive, and suffer from high false-positive and false-negative rates. It is therefore urgent to develop in silico methods that predict PPIs efficiently and accurately in this post-genomic era. In this article, we report a novel computational model combining our newly developed discriminative vector machine classifier (DVM) and an improved Weber local descriptor (IWLD) for the prediction of PPIs. Two components, differential excitation and orientation, are exploited to build evolutionary features for each protein sequence. The main characteristic of the proposed method lies in introducing an effective feature descriptor, IWLD, which captures highly discriminative evolutionary information from position-specific scoring matrices (PSSMs) of protein data, and in employing the powerful and robust DVM classifier. When applying the proposed method to the Yeast and H. pylori data sets, we obtained excellent prediction accuracies as high as 96.52% and 91.80%, respectively, significantly better than previous methods. Extensive experiments were then performed for predicting cross-species PPIs, and the predictive results were also promising. To further validate the performance of the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the Human data set. The experimental results indicate that our method is highly effective for PPI prediction and can serve as a supplementary tool for future proteomics research.
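
    The general idea of converting a variable-length PSSM into a fixed-length feature vector can be sketched as follows. This toy example uses simple column means and variances, NOT the paper's IWLD descriptor (which involves differential-excitation and orientation components); the tiny PSSM is invented.

    ```python
    # Toy sketch: a PSSM has one row per residue, so its row count varies
    # with sequence length; column statistics give a fixed-length vector
    # that a classifier such as DVM or SVM can consume.
    def pssm_features(pssm):
        n_rows = len(pssm)
        n_cols = len(pssm[0])
        means = [sum(row[j] for row in pssm) / n_rows for j in range(n_cols)]
        var = [sum((row[j] - means[j]) ** 2 for row in pssm) / n_rows
               for j in range(n_cols)]
        return means + var

    # Tiny fake PSSM: 3 residues, 4 of the usual 20 amino-acid columns
    pssm = [[1, -2, 0, 3], [2, -1, 1, 2], [0, 0, -1, 4]]
    print(len(pssm_features(pssm)))  # fixed length regardless of sequence length
    ```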

  8. Control of vacuum induction brazing system for sealing of instrumentation feed-through

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung Ho Ahn; Jintae Hong; Chang Young Joung

    2015-07-01

The integrity of instrumentation cables is an important performance parameter in the brazing process, in addition to the sealing performance. In this paper, an accurate control scheme was developed for brazing the instrumentation feed-through in a vacuum induction brazing system. The experimental results show that accurate brazing temperature control is achieved by the developed scheme. Consequently, the sealing performance of the instrumentation feed-through and the integrity of the instrumentation cables were satisfactory after brazing. (authors)

  9. Control of Vacuum Induction Brazing System for Sealing of Instrumentation Feedthrough

    NASA Astrophysics Data System (ADS)

    Ahn, Sung Ho; Hong, Jintae; Joung, Chang Young; Heo, Sung Ho

    2017-04-01

The integrity of instrumentation cables is an important performance parameter in the brazing process, along with the sealing performance. In this paper, an accurate control scheme for brazing the instrumentation feedthrough in a vacuum induction brazing system was developed. The experimental results show that accurate brazing temperature control is achieved by the developed scheme. It is demonstrated that the sealing performance of the instrumentation feedthrough and the integrity of the instrumentation cables are acceptable after brazing.

  10. Improved Quantification of the Beta Cell Mass after Pancreas Visualization with 99mTc-demobesin-4 and Beta Cell Imaging with 111In-exendin-3 in Rodents.

    PubMed

    van der Kroon, Inge; Joosten, Lieke; Nock, Berthold A; Maina, Theodosia; Boerman, Otto C; Brom, Maarten; Gotthardt, Martin

    2016-10-03

Accurate assessment of the 111In-exendin-3 uptake within the pancreas requires exact delineation of the pancreas, which is highly challenging by MRI and CT in rodents. In this study, the pancreatic tracer 99mTc-demobesin-4 was evaluated for accurate delineation of the pancreas, to enable accurate quantification of 111In-exendin-3 uptake within the pancreas. Healthy and alloxan-induced diabetic Brown Norway rats were injected with the pancreatic tracer 99mTc-demobesin-4 ([99mTc-N4-Pro1,Tyr4,Nle14]bombesin) and the beta cell tracer 111In-exendin-3 ([111In-DTPA-Lys40]exendin-3). After dual isotope acquisition of SPECT images, 99mTc-demobesin-4 was used to define a volume of interest for the pancreas in the SPECT images; subsequently, the 111In-exendin-3 uptake within this region was quantified. Furthermore, biodistribution and autoradiography were performed in order to gain insight into the distribution of both tracers in the animals. 99mTc-demobesin-4 showed high accumulation in the pancreas. The uptake was highly homogeneous throughout the pancreas, independent of diabetic status, as demonstrated by autoradiography, whereas 111In-exendin-3 only accumulates in the islets of Langerhans. Quantification of both ex vivo and in vivo SPECT images resulted in an excellent linear correlation between pancreatic uptake determined by ex vivo counting and 111In-exendin-3 uptake determined from the quantitative analysis of the SPECT images (Pearson r = 0.97 and r = 0.92, respectively). 99mTc-demobesin-4 shows high accumulation in the pancreas of rats. It is a suitable tracer for accurate delineation of the pancreas and can be conveniently used for simultaneous acquisition with 111In-labeled exendin-3. This method provides a straightforward, reliable, and objective method for preclinical beta cell mass (BCM) quantification with 111In-exendin-3.
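
    The Pearson correlation used to compare ex vivo counting against SPECT-based quantification can be computed as below. This is an illustration only; the uptake values are invented, not the study's data.

    ```python
    # Pearson correlation coefficient between two uptake measurements.
    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Made-up pancreatic uptake values (%ID) for a handful of animals:
    # one series from ex vivo counting, one from SPECT image analysis.
    ex_vivo = [1.8, 2.1, 2.6, 3.0, 3.4, 1.2]
    spect = [1.7, 2.2, 2.5, 3.1, 3.3, 1.3]
    print(round(pearson_r(ex_vivo, spect), 3))
    ```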

  11. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As verification, the spacing between two staggered wind turbines was optimized using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. Significant reduction in computational cost was achieved using MM: a maximum reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models, together with the number and position of the mapping points, strongly influences the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine-model simulations.
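
    The Jensen wake model used for the fine and coarse models admits a very compact sketch. The form and all parameter values below (rotor diameter, thrust coefficient, decay coefficient k) are illustrative assumptions, not the paper's settings; only k would differ between the two fidelities.

    ```python
    import math

    # Minimal sketch of the Jensen (top-hat) wake model: the velocity
    # deficit behind a turbine decays with downstream distance x at a
    # rate set by the wake decay coefficient k.
    def jensen_deficit(x, D=80.0, Ct=0.8, k=0.05):
        """Fractional velocity deficit at distance x (m) downstream of a
        turbine with rotor diameter D (m) and thrust coefficient Ct."""
        return (1.0 - math.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x / D) ** 2

    U_inf = 8.0  # free-stream wind speed, m/s
    u_wake = U_inf * (1.0 - jensen_deficit(400.0))  # wind speed 5D downstream
    print(round(u_wake, 2))
    ```

    A layout optimizer would evaluate such deficits (with wake superposition) at every turbine position; swapping k emulates the coarse/fine model pair of the multi-fidelity setup.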

  12. Design Methodology for Multi-Element High-Lift Systems on Subsonic Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pepper, R. S.; vanDam, C. P.

    1996-01-01

    The choice of a high-lift system is crucial in the preliminary design process of a subsonic civil transport aircraft. Its purpose is to increase the allowable aircraft weight or decrease the aircraft's wing area for a given takeoff and landing performance. However, the implementation of a high-lift system into a design must be done carefully, for it can improve the aerodynamic performance of an aircraft but may also drastically increase the aircraft empty weight. If designed properly, a high-lift system can improve the cost effectiveness of an aircraft by increasing the payload weight for a given takeoff and landing performance. This is why the design methodology for a high-lift system should incorporate aerodynamic performance, weight, and cost. The airframe industry has experienced rapid technological growth in recent years which has led to significant advances in high-lift systems. For this reason many existing design methodologies have become obsolete since they are based on outdated low Reynolds number wind-tunnel data and can no longer accurately predict the aerodynamic characteristics or weight of current multi-element wings. Therefore, a new design methodology has been created that reflects current aerodynamic, weight, and cost data and provides enough flexibility to allow incorporation of new data when it becomes available.

  13. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
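
    The core idea of adding an isotropic Gaussian mutation to PSO can be sketched as below. This is a minimal illustration on the classic Rastrigin benchmark, not the paper's C++ implementation; the inertia, acceleration, mutation rate, and mutation width values are assumptions.

    ```python
    import math
    import random

    # PSO with an isotropic Gaussian mutation: after the usual velocity and
    # position updates, each particle is occasionally perturbed by N(0, sigma)
    # noise in every dimension, which helps escape local minima on
    # multimodal landscapes.
    def pso_gaussian(f, dim, n=30, iters=200, lo=-5.12, hi=5.12,
                     w=0.7, c1=1.5, c2=1.5, pm=0.1, sigma=0.3, seed=0):
        rng = random.Random(seed)
        pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                if rng.random() < pm:  # isotropic Gaussian mutation
                    pos[i] = [min(hi, max(lo, x + rng.gauss(0.0, sigma)))
                              for x in pos[i]]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest, gbest_f = pos[i][:], fi
        return gbest, gbest_f

    def rastrigin(x):  # classic multimodal benchmark, global minimum 0 at origin
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                                 for xi in x)

    best, best_f = pso_gaussian(rastrigin, dim=2)
    print(best_f)
    ```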

  14. Magnetohydrodynamics with GAMER

    NASA Astrophysics Data System (ADS)

    Zhang, Ui-Han; Schive, Hsi-Yu; Chiueh, Tzihong

    2018-06-01

GAMER, a parallel graphics-processing-unit-accelerated Adaptive-MEsh-Refinement (AMR) hydrodynamic code, has been extended to support magnetohydrodynamics (MHD) with both the corner-transport-upwind and MUSCL-Hancock schemes and the constrained transport technique. The divergence-preserving operator for AMR has been applied to enforce the divergence-free constraint on the magnetic field. GAMER-MHD fully exploits concurrent execution between the graphics processing unit (GPU) MHD solver and other central-processing-unit computations pertinent to AMR. We perform various standard tests to demonstrate that GAMER-MHD is both second-order accurate and robust, producing results as accurate as those given by high-resolution uniform-grid runs. We also explore a new 3D MHD test, in which the magnetic field assumes the Arnold–Beltrami–Childress configuration, temporarily becomes turbulent with current sheets, and finally settles to a lowest-energy equilibrium state. This 3D problem is adopted for the performance test of GAMER-MHD. The single-GPU performance reaches 1.2 × 10^8 and 5.5 × 10^7 cell updates per second for the single- and double-precision calculations, respectively, on a Tesla P100. We also demonstrate a parallel efficiency of ∼70% for both weak and strong scaling using 1024 XK nodes on the Blue Waters supercomputer.
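
    The Arnold–Beltrami–Childress (ABC) field used in the 3D test is analytically divergence-free, which makes it a natural initial condition for a code that enforces ∇·B = 0. A small numerical check (not GAMER code; A = B = C = 1 and the sample points are arbitrary choices) is:

    ```python
    import math

    # ABC magnetic field: each component is independent of its own
    # coordinate, so the field is divergence-free by construction.
    A = B = C = 1.0

    def Bfield(x, y, z):
        return (A * math.sin(z) + C * math.cos(y),
                B * math.sin(x) + A * math.cos(z),
                C * math.sin(y) + B * math.cos(x))

    # Central-difference estimate of div B = dBx/dx + dBy/dy + dBz/dz.
    h = 1e-3

    def div_B(x, y, z):
        dBx = (Bfield(x + h, y, z)[0] - Bfield(x - h, y, z)[0]) / (2 * h)
        dBy = (Bfield(x, y + h, z)[1] - Bfield(x, y - h, z)[1]) / (2 * h)
        dBz = (Bfield(x, y, z + h)[2] - Bfield(x, y, z - h)[2]) / (2 * h)
        return dBx + dBy + dBz

    pts = [(0.3, 1.1, 2.0), (1.0, 0.0, 0.5), (2.5, 2.5, 2.5)]
    print(max(abs(div_B(*p)) for p in pts) < 1e-9)
    ```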

  15. Quasi-finite-time control for high-order nonlinear systems with mismatched disturbances via mapping filtered forwarding technique

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Huang, X. L.; Lu, H. Q.

    2017-02-01

In this study, a quasi-finite-time control method for designing stabilising control laws is developed for high-order strict-feedback nonlinear systems with mismatched disturbances. Using the mapping filtered forwarding technique, a virtual control is designed at each step of the design to force the off-the-manifold coordinate to converge to zero in quasi-finite time; at the same time, the manifold is rendered insensitive to time-varying, bounded and unknown disturbances. Compared with standard forwarding methodology, the algorithm proposed here not only dispenses with a Lyapunov function for controller design, but also avoids calculating the derivative of the sign function. As far as the dynamic performance of the closed-loop system is concerned, we essentially obtain finite-time performance, reflected in fast and accurate responses, high tracking precision, and robust disturbance rejection. A spring-mass-damper system and a flexible-joint robot are tested to demonstrate the proposed controller's performance.

  16. [Determination of sugars, organic acids and alcohols in microbial consortium fermentation broth from cellulose using high performance liquid chromatography].

    PubMed

    Jiang, Yan; Fan, Guifang; Du, Ran; Li, Peipei; Jiang, Li

    2015-08-01

A high performance liquid chromatographic method was established for the determination of metabolites (sugars, organic acids and alcohols) in microbial consortium fermentation broth from cellulose. Sulfate was first added to the samples to precipitate calcium ions in the microbial consortium culture medium and to lower the pH of the solution to avoid the dissociation of organic acids; the filtrates were then effectively separated by high performance liquid chromatography. Cellobiose, glucose, ethanol, butanol, glycerol, acetic acid and butyric acid were quantitatively analyzed. The detection limits were in the range of 0.10-2.00 mg/L. The linear correlation coefficients were greater than 0.9996 in the range of 0.020 to 1.000 g/L. The recoveries were in the range of 85.41%-115.60%, with relative standard deviations of 0.22%-4.62% (n = 6). This method is accurate for the quantitative analysis of the alcohols, organic acids and saccharides in microbial consortium fermentation broth from cellulose.
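
    Quantification against an external linear calibration curve, as done for each analyte here, reduces to fitting peak area against standard concentration and inverting the fit. The standards and peak areas below are invented for illustration.

    ```python
    # Ordinary least-squares fit of peak area = slope * concentration + intercept,
    # then inversion to quantify an unknown sample from its peak area.
    def fit_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    # Calibration standards: concentration (g/L) vs. peak area (arbitrary units)
    conc = [0.02, 0.10, 0.25, 0.50, 1.00]
    area = [41.0, 205.0, 513.0, 1021.0, 2050.0]
    slope, intercept = fit_line(conc, area)

    unknown_area = 850.0
    print(round((unknown_area - intercept) / slope, 3))  # concentration in g/L
    ```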

  17. SSME Investment in Turbomachinery Inducer Impeller Design Tools and Methodology

    NASA Technical Reports Server (NTRS)

    Zoladz, Thomas; Mitchell, William; Lunde, Kevin

    2010-01-01

Within the rocket engine industry, SSME turbomachines are the de facto standards of success with regard to meeting aggressive performance requirements under challenging operational environments. Over the Shuttle era, SSME invested heavily in our national inducer-impeller design infrastructure. While both low- and high-pressure turbopump failure/anomaly resolution efforts spurred some of these investments, the SSME program was a major beneficiary of key areas of turbomachinery inducer-impeller research outside of flight manifest pressures. Over the past several decades, key turbopump internal environments have been interrogated via highly instrumented hot-fire and cold-flow testing. Likewise, SSME has sponsored the advancement of time-accurate and cavitating inducer-impeller computational fluid dynamics (CFD) tools. Together, these investments have led to a better understanding of the complex internal flow fields within aggressive high-performing inducers and impellers. New design tools and methodologies have evolved that aim to provide confident blade designs striking an appropriate balance between performance and self-induced load management.

  18. Computer-Assisted Decision Support for Student Admissions Based on Their Predicted Academic Performance.

    PubMed

    Muratov, Eugene; Lewis, Margaret; Fourches, Denis; Tropsha, Alexander; Cox, Wendy C

    2017-04-01

Objective. To develop predictive computational models forecasting the academic performance of students in the didactic-rich portion of a doctor of pharmacy (PharmD) curriculum as admission-assisting tools. Methods. All PharmD candidates over three admission cycles were divided into two groups: those who completed the PharmD program with a GPA ≥ 3, and the remaining candidates. The Random Forest machine learning technique was used to develop a binary classification model based on 11 pre-admission parameters. Results. Robust and externally predictive models were developed, achieving an overall accuracy of 77% in distinguishing candidates with high versus low academic performance. These multivariate models were more accurate in predicting these groups than models using undergraduate GPA and composite PCAT scores only. Conclusion. The models developed in this study can be used to improve the admission process as preliminary filters and thus quickly identify candidates who are likely to be successful in the PharmD curriculum.
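
    The modeling step (bootstrap an ensemble of trees, predict by majority vote) can be sketched with a bagged ensemble of one-feature decision stumps standing in for a full Random Forest. The 11 real pre-admission parameters and the applicant data are not reproduced; the two features and labels below are invented.

    ```python
    import random

    # Train a single decision stump: the (feature, threshold) pair with the
    # best training accuracy for the rule "predict 1 if x[feature] >= threshold".
    def train_stump(X, y):
        best = None
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                pred = [1 if x[f] >= t else 0 for x in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t)
        return best[1], best[2]

    # Bagging: each stump is trained on a bootstrap resample of the data.
    def train_forest(X, y, n_trees=25, seed=1):
        rng = random.Random(seed)
        forest = []
        for _ in range(n_trees):
            idx = [rng.randrange(len(X)) for _ in range(len(X))]
            forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
        return forest

    def predict(forest, x):  # majority vote over the ensemble
        votes = sum(1 if x[f] >= t else 0 for f, t in forest)
        return 1 if votes * 2 >= len(forest) else 0

    # Invented "applicants": feature 0 loosely plays the role of undergraduate
    # GPA, feature 1 of a composite test score; label 1 = high performer.
    X = [[2.5, 40], [3.6, 70], [3.8, 65], [2.9, 50], [3.9, 80], [2.4, 45]]
    y = [0, 1, 1, 0, 1, 0]
    forest = train_forest(X, y)
    print([predict(forest, x) for x in X])
    ```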

  19. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  20. Burnout syndrome in nurses working in palliative care units: An analysis of associated factors.

    PubMed

    Rizo-Baeza, Mercedes; Mendiola-Infante, Susana Virginia; Sepehri, Armina; Palazón-Bru, Antonio; Gil-Guillén, Vicente Francisco; Cortés-Castell, Ernesto

    2018-01-01

To analyse the association between psychological, labour and demographic factors and burnout in palliative care nursing. There is a lack of published research evaluating burnout in palliative care nursing. This observational cross-sectional study involved 185 palliative care nurses in Mexico. The primary variables were burnout defined by its three dimensions (emotional exhaustion, depersonalization and personal accomplishment). As secondary variables, psychological, labour and demographic factors were considered. A binary logistic regression model was constructed to determine factors associated with burnout. A total of 69 nurses experienced high emotional exhaustion (37.3%), 65 had high depersonalization (35.1%) and 70 had low personal accomplishment (37.8%). A higher proportion of burnout was found in participants who were single parents, worked >8 hr per day, had a medium/high workload, lacked a high professional quality of life, or had a self-care deficit. Our multivariate models were very accurate in explaining burnout in palliative care nurses. These models must be externally validated to accurately predict burnout and prevent future complications of the syndrome. Nurses who present the factors found should be the focus of interventions to reduce work stress. © 2017 John Wiley & Sons Ltd.
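
    The analysis step, binary logistic regression relating risk factors to a burnout outcome, can be sketched as below. All data are invented (the study's real psychological, labour and demographic variables are not shown), and the plain gradient-descent fit is a stand-in for standard statistical software.

    ```python
    import math

    # Fit a binary logistic regression by stochastic gradient descent on
    # the log-loss; w[0] is the intercept, w[1:] the coefficients.
    def fit_logistic(X, y, lr=0.5, epochs=2000):
        w = [0.0] * (len(X[0]) + 1)
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
                p = 1.0 / (1.0 + math.exp(-z))
                g = p - yi  # gradient of the log-loss w.r.t. z
                w[0] -= lr * g
                for j, xj in enumerate(xi):
                    w[j + 1] -= lr * g * xj
        return w

    # Invented binary factors: works >8 hr/day, single parent (1 = yes);
    # outcome 1 = burnout.
    X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
    y = [1, 1, 0, 0, 1, 0]
    w = fit_logistic(X, y)
    print(w[1] > 0)  # a positive coefficient means higher odds of burnout
    ```

    In practice the fitted coefficients are reported as odds ratios, exp(w[j]), with confidence intervals.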
