Sample records for maintaining high accuracy

  1. Robust vehicle detection under various environments to realize road traffic flow surveillance using an infrared thermal camera.

    PubMed

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2015-01-01

    To realize road traffic flow surveillance in various environments, including poor-visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition of the windshields and their surroundings to detect vehicles. However, its detection accuracy decreases in the winter season. To maintain high vehicle detection accuracy in all seasons, we developed a second method, which uses the thermal energy reflection areas of tires on the road as detection targets. The second method, however, achieved high detection accuracy only for vehicles on the two center lanes, not on the left- and right-hand lanes. We have therefore developed a new method, based on the second, to increase vehicle detection accuracy. This paper proposes the new method and shows that its detection accuracy for vehicles on all lanes is 92.1%. By combining the first method and the new method, high vehicle detection accuracy can be maintained in various environments, and road traffic flow surveillance can be realized.

  2. Robust Vehicle Detection under Various Environments to Realize Road Traffic Flow Surveillance Using an Infrared Thermal Camera

    PubMed Central

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2015-01-01

    To realize road traffic flow surveillance in various environments, including poor-visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition of the windshields and their surroundings to detect vehicles. However, its detection accuracy decreases in the winter season. To maintain high vehicle detection accuracy in all seasons, we developed a second method, which uses the thermal energy reflection areas of tires on the road as detection targets. The second method, however, achieved high detection accuracy only for vehicles on the two center lanes, not on the left- and right-hand lanes. We have therefore developed a new method, based on the second, to increase vehicle detection accuracy. This paper proposes the new method and shows that its detection accuracy for vehicles on all lanes is 92.1%. By combining the first method and the new method, high vehicle detection accuracy can be maintained in various environments, and road traffic flow surveillance can be realized. PMID:25763384

  3. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    NASA Astrophysics Data System (ADS)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging-optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow-tracking method into a two-stage concentration solar hybrid parabolic trough allows the system to maintain high concentration with high accuracy.

  4. Atropos: specific, sensitive, and speedy trimming of sequencing reads.

    PubMed

    Didion, John P; Martin, Marcel; Collins, Francis S

    2017-01-01

    A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability of downstream analyses while decreasing their computational requirements. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos make it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
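As an illustrative aside, the core task of 3' adapter trimming can be sketched with exact prefix matching. Real trimmers such as Atropos handle sequencing errors via error-tolerant alignment and offer far more features; the function below (name and `min_overlap` parameter are invented for this sketch) only shows the basic idea of finding where the read runs into the adapter:

```python
def trim_adapter(read: str, adapter: str, min_overlap: int = 3) -> str:
    """Trim a 3' adapter by exact matching: scan for the leftmost
    position where the remainder of the read matches the start of the
    adapter (or contains the whole adapter), requiring at least
    min_overlap matching bases to avoid trimming on chance overlaps."""
    for i in range(len(read) - min_overlap + 1):
        tail = read[i:]
        if adapter.startswith(tail) or tail.startswith(adapter):
            return read[:i]
    return read
```

A real implementation must also tolerate mismatches and indels, which is where alignment-based approaches come in.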

  5. Atropos: specific, sensitive, and speedy trimming of sequencing reads

    PubMed Central

    Collins, Francis S.

    2017-01-01

    A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability of downstream analyses while decreasing their computational requirements. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos make it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos. PMID:28875074

  6. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is limited because they retain the same stencil as the classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
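The claim that polynomial interpolation holds high accuracy only at the nodes can be checked numerically in a few lines. This is a generic sketch (not the CIP scheme): a degree-4 polynomial fitted through 5 samples of sin(x) reproduces the nodal values to roundoff, while the mid-cell error is many orders of magnitude larger:

```python
import numpy as np

# Degree-4 polynomial interpolation of sin(x) through 5 equispaced nodes.
# At the nodes the interpolant is exact up to roundoff; between nodes the
# interpolation error dominates.
nodes = np.linspace(0.0, 2.0, 5)
coeffs = np.polyfit(nodes, np.sin(nodes), 4)  # exact interpolation: 5 pts, deg 4

node_err = abs(np.polyval(coeffs, nodes[1]) - np.sin(nodes[1]))  # roundoff level
mid_err = abs(np.polyval(coeffs, 0.25) - np.sin(0.25))           # mid-cell error
```

The gap between `node_err` and `mid_err` is exactly the behavior that motivates schemes with uniform in-cell accuracy.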

  7. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  8. Positive and Negative Consequences in Contingency Contracts: Their Relative Effectiveness on Arithmetic Performance.

    ERIC Educational Resources Information Center

    Kidd, Teresa A.; Saudargas, Richard A.

    1988-01-01

    The study with two elementary students who had low levels of completion and accuracy on daily arithmetic assignments found that a negative consequence was not necessary and that use of a positive component alone was sufficient to maintain high levels of completion and accuracy. (Author/DB)

  9. High temperature current mirror amplifier

    DOEpatents

    Patterson, III, Raymond B.

    1984-05-22

    A high temperature current mirror amplifier having biasing means in the transdiode connection of the input transistor for producing a voltage to maintain the base-collector junction reverse-biased, and current means for maintaining a current through the biasing means at high temperatures so that the base-collector junction of the input transistor remains reverse-biased. For accuracy, a second current mirror is provided with biasing means and current means on the input leg.

  10. Use of topographic and climatological models in a geographical data base to improve Landsat MSS classification for Olympic National Park

    NASA Technical Reports Server (NTRS)

    Cibula, William G.; Nyquist, Maurice O.

    1987-01-01

    An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resultant Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification could now serve as input to a GIS where, combined with other ancillary data, it could support park management programs such as fire management.

  11. The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.

    PubMed

    Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark

    2016-02-01

    We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.

  12. High temperature current mirror amplifier

    DOEpatents

    Patterson, R.B. III.

    1984-05-22

    Disclosed is a high temperature current mirror amplifier having biasing means in the transdiode connection of the input transistor for producing a voltage to maintain the base-collector junction reverse-biased, and current means for maintaining a current through the biasing means at high temperatures so that the base-collector junction of the input transistor remains reverse-biased. For accuracy, a second current mirror is provided with biasing means and current means on the input leg. 2 figs.

  13. Brushless tachometer gives speed and direction

    NASA Technical Reports Server (NTRS)

    Nola, F. J.

    1977-01-01

    Brushless electronic tachometer measures rotational speed and rotational direction, maintaining accuracy at high or low speeds. Unit is particularly useful in vacuum environments requiring low friction.

  14. Design, fabrication, and testing of duralumin zoom mirror with variable thickness

    NASA Astrophysics Data System (ADS)

    Hui, Zhao; Xie, Xiaopeng; Xu, Liang; Ding, Jiaoteng; Shen, Le; Liu, Meiying; Gong, Jie

    2016-10-01

    Zoom mirror is a kind of active optical component that can change its curvature radius dynamically. Since its invention, the zoom mirror has normally been used to correct the defocus and spherical aberration caused by the thermal lens effect and thereby improve the beam quality of high-power solid-state lasers. Recently, the possible application of zoom mirrors to non-moving-element optical zoom imaging in the visible band has attracted much attention. With the help of the optical leveraging effect, the slight change in local optical power caused by the curvature variation of a zoom mirror can be amplified to produce a large alteration of the system focal length without any moving elements. In this application, however, the shorter working wavelength and the higher surface figure accuracy requirement make the design and fabrication of such a zoom mirror more difficult. The key to realizing non-moving-element optical zoom imaging in the visible band therefore lies in a zoom mirror that provides a large enough sagitta variation while still maintaining a sufficiently good surface figure. Although annular-force actuation can deform a super-thin mirror of constant thickness to generate curvature variation, it is quite difficult to maintain sufficient surface figure accuracy, and the problem worsens as the diameter and the radius-to-thickness ratio increase. In this manuscript, by combining pressurization-based actuation with a variable-thickness mirror design, large sagitta variation and good surface figure accuracy are achieved at the same time. A prototype zoom mirror with a diameter of 120 mm and a central thickness of 8 mm is designed, fabricated, and tested. Experimental results demonstrate that the zoom mirror, with an initial surface figure accuracy superior to 1/50λ, can provide at least 21 μm of sagitta variation while keeping its surface figure accuracy superior to 1/20λ after the curvature change, which proves the effectiveness of the theoretical design.

  15. Distributed measurement of birefringence dispersion in polarization-maintaining fibers

    NASA Astrophysics Data System (ADS)

    Tang, Feng; Wang, Xiang-Zhao; Zhang, Yimo; Jing, Wencai

    2006-12-01

    A new method to measure birefringence dispersion in high-birefringence polarization-maintaining fibers using white-light interferometry is presented. By analyzing the broadening of low-coherence interferograms obtained in a scanning Michelson interferometer, the birefringence dispersion and its variation along different fiber sections are acquired with high sensitivity and accuracy. The birefringence dispersions of two PANDA fibers at their operating wavelength are measured to be 0.011 ps/(km nm) and 0.018 ps/(km nm), respectively. The distributed measurement capability of the method is also verified experimentally.

  16. Models in animal collective decision-making: information uncertainty and conflicting preferences

    PubMed Central

    Conradt, Larissa

    2012-01-01

    Collective decision-making plays a central part in the lives of many social animals. Two important factors that influence collective decision-making are information uncertainty and conflicting preferences. Here, I bring together, and briefly review, basic models relating to animal collective decision-making in situations with information uncertainty and in situations with conflicting preferences between group members. The intention is to give an overview about the different types of modelling approaches that have been employed and the questions that they address and raise. Despite the use of a wide range of different modelling techniques, results show a coherent picture, as follows. Relatively simple cognitive mechanisms can lead to effective information pooling. Groups often face a trade-off between decision accuracy and speed, but appropriate fine-tuning of behavioural parameters could achieve high accuracy while maintaining reasonable speed. The right balance of interdependence and independence between animals is crucial for maintaining group cohesion and achieving high decision accuracy. In conflict situations, a high degree of decision-sharing between individuals is predicted, as well as transient leadership and leadership according to needs and physiological status. Animals often face crucial trade-offs between maintaining group cohesion and influencing the decision outcome in their own favour. Despite the great progress that has been made, there remains one big gap in our knowledge: how do animals make collective decisions in situations when information uncertainty and conflict of interest operate simultaneously? PMID:23565335

  17. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high-order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method: high-order accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a speedup of generally almost a factor of three over the full ENO method.
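The hybrid strategy described, cheap high-order central differencing by default with a local test triggering a more robust treatment, can be caricatured in a few lines. This is only a toy sketch: the curvature-based indicator, the tolerance, and the first-order upwind fallback are all assumptions for illustration, not the paper's ENO-based filter:

```python
import numpy as np

def filtered_derivative(u, dx, tol=1.0):
    """4th-order central differences by default; where a crude jump
    indicator fires, fall back to robust 1st-order upwinding.
    Periodic boundaries are assumed."""
    n = len(u)
    du = np.empty(n)
    for i in range(n):
        im2, im1 = (i - 2) % n, (i - 1) % n
        ip1, ip2 = (i + 1) % n, (i + 2) % n
        # crude oscillation test on the second difference
        if abs(u[ip1] - 2 * u[i] + u[im1]) > tol * dx:
            du[i] = (u[i] - u[im1]) / dx  # low-order, non-oscillatory fallback
        else:
            du[i] = (-u[ip2] + 8 * u[ip1] - 8 * u[im1] + u[im2]) / (12 * dx)
    return du
```

On smooth data the indicator never fires and the scheme retains 4th-order accuracy; near a discontinuity the fallback sacrifices order for robustness, which is the trade the filtering method is designed to make only locally.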

  18. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument, based on an embedded technique, for autofocus and polarization alignment of polarization-maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm that fully considers image definition evaluation and the focusing search strategy is used to accomplish autofocus. To improve alignment accuracy, various image-based alignment detection algorithms were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment detection algorithms. Both the simulation and experimental results indicate that the instrument can achieve a polarization alignment accuracy of <±0.1 deg.

  19. Comparison of citrus orchard inventory using LISS-III and LISS-IV data

    NASA Astrophysics Data System (ADS)

    Singh, Niti; Chaudhari, K. N.; Manjunath, K. R.

    2016-04-01

    In India, in terms of area under cultivation, citrus is the third most cultivated fruit crop after banana and mango. Among the citrus group, lime is one of the most important horticultural crops in India, as the demand for its consumption is very high. Hence, preparing citrus crop inventories using remote sensing techniques would help in maintaining a record of its area and production statistics. This study shows how accurately citrus orchards can be classified using both IRS Resourcesat-2 LISS-III and LISS-IV data and identifies the optimum bio-window for procuring satellite data to achieve the high classification accuracy required for maintaining a crop inventory. Findings of the study show classification accuracy increased from 55% (using LISS-III) to 77% (using LISS-IV). Also, according to the classified outputs and NDVI values obtained, April and May were identified as the optimum bio-window for citrus crop identification.
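The NDVI values used to identify the bio-window follow the standard definition from near-infrared (NIR) and red reflectance:

```python
def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense
    green vegetation, near 0 bare soil, and negative values water."""
    return (nir - red) / (nir + red)
```

Healthy vegetation reflects strongly in NIR and absorbs red light, so seasonal NDVI peaks are a natural cue for choosing acquisition dates.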

  20. Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings

    PubMed Central

    Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo

    2018-01-01

    In this paper, we propose a novel object-based dense matching method designed specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which preserves the structural attribute of buildings with parallel or vertical edges and is very useful for dense matching. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method verifies its superiority and robustness on panchromatic images from different satellites at different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393

  1. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.

  2. Remifentanil maintains lower initial delayed nonmatching-to-sample accuracy compared to food pellets in male rhesus monkeys.

    PubMed

    Hutsell, Blake A; Banks, Matthew L

    2017-12-01

    Emerging human laboratory and preclinical drug self-administration data suggest that a history of contingent abused-drug exposure impairs performance in operant discrimination procedures, such as delayed nonmatching-to-sample (DNMTS), that are hypothesized to assess components of executive function. However, these preclinical discrimination studies have exclusively used food as the reinforcer, and the effects of drugs as reinforcers in these operant procedures are unknown. The present study determined the effects of contingent intravenous remifentanil injections on DNMTS performance, hypothesized to assess one aspect of executive function, working memory. Daily behavioral sessions consisted of two components with sequential intravenous remifentanil (0, 0.01-1.0 μg/kg/injection) or food (0, 1-10 pellets) availability in non-opioid-dependent male rhesus monkeys (n = 3). Remifentanil functioned as a reinforcer in the DNMTS procedure. Similar delay-dependent DNMTS accuracy was observed under both remifentanil- and food-maintained components, such that higher accuracies were maintained at shorter (0.1-1.0 s) delays and lower accuracies, approaching chance performance, were maintained at longer (10-32 s) delays. Remifentanil maintained significantly lower initial DNMTS accuracy than food. Reinforcer magnitude was not an important determinant of DNMTS accuracy for either remifentanil or food. These results extend the range of experimental procedures under which drugs function as reinforcers. Furthermore, the selective remifentanil-induced decrease in initial DNMTS accuracy is consistent with a selective impairment of attentional, but not memorial, processes.

  3. Supervised segmentation of microelectrode recording artifacts using power spectral density.

    PubMed

    Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert

    2015-08-01

    Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification, based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved accuracy of about 77% on both training and testing data.
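As a sketch of the general idea, PSD-derived features feeding a supervised decision rule, consider the toy example below. The high-frequency power-fraction feature, the threshold, and the class labels are all invented for illustration; they are not the paper's features or classifier:

```python
import numpy as np

def psd_feature(x, fs):
    """Periodogram-based feature: fraction of signal power above fs/4.
    Purely illustrative; real MER pipelines use richer PSD features."""
    spec = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[freqs > fs / 4].sum() / (spec.sum() + 1e-12)

def classify_segment(x, fs, threshold=0.5):
    """Label a segment 'artifact' when high-frequency power dominates
    (assumed decision rule; a trained classifier would replace this)."""
    return "artifact" if psd_feature(x, fs) > threshold else "clean"
```

In the supervised setting of the paper, such features would be computed per labelled segment and a classifier trained on them, rather than hand-picking a threshold.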

  4. Comparison of spike-sorting algorithms for future hardware implementation.

    PubMed

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude lower than that of traditional methods such as PCA.
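The nonlinear energy operator chosen here has a simple closed form, psi[n] = x[n]^2 - x[n-1]*x[n+1], which is large for sharp, high-frequency transients and small for slow background activity. A minimal sketch, with an assumed mean-based threshold rule rather than whatever the evaluated hardware uses:

```python
import numpy as np

def nonlinear_energy_operator(x):
    """NEO: psi[n] = x[n]^2 - x[n-1]*x[n+1]. Emphasizes signals that are
    simultaneously large in amplitude and high in instantaneous frequency."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0):
    """Flag samples whose NEO exceeds k times the mean NEO
    (k is an assumed heuristic scaling, not a value from the paper)."""
    psi = nonlinear_energy_operator(x)
    return np.flatnonzero(psi > k * psi.mean())
```

Because NEO needs only two multiplies and a subtract per sample, it maps naturally onto the low-power hardware constraints described above.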

  5. Variable curvature mirror having variable thickness: design and fabrication

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Xie, Xiaopeng; Xu, Liang; Ding, Jiaoteng; Shen, Le; Gong, Jie

    2017-10-01

    Variable curvature mirror (VCM) can change its curvature radius dynamically and is usually used to correct the defocus and spherical aberration caused by thermal lens effect to improve the output beam quality of high power solid-state laser. Recently, the probable application of VCM in realizing non-moving element optical zoom imaging in visible band has been paid much attention. The basic requirement for VCM lies in that it should provide a large enough saggitus variation and still maintains a high enough surface figure at the same time. Therefore in this manuscript, by combing the pressurization based actuation with a variable thickness mirror design, the purpose of obtaining large saggitus variation and maintaining quite good surface figure accuracy at the same time could be achieved. A prototype zoom mirror with diameter of 120mm and central thickness of 8mm is designed, fabricated and tested. Experimental results demonstrate that the zoom mirror having an initial surface figure accuracy superior to 1/80λ could provide bigger than 36um saggitus variation and after finishing the curvature variation its surface figure accuracy could still be superior to 1/40λ with the spherical aberration removed, which proves that the effectiveness of the theoretical design.

  6. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The outputs of the spectral-element static adaptive refinement simulations are compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids scaling roughly linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
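The accuracy gap between spectral and finite-difference differentiation on smooth periodic data, which underlies the comparison above, can be demonstrated in a few lines. This is illustrative only and unrelated to the actual MHD code:

```python
import numpy as np

def spectral_derivative(u):
    """Differentiate a periodic sample on [0, 2*pi) via FFT: multiply each
    Fourier mode by i*k. Exact to roundoff for resolved smooth data."""
    n = len(u)
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

def central_derivative(u, dx):
    """2nd-order central difference on a periodic grid, error O(dx^2)."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
```

For u = sin(x) on 32 points, the spectral derivative is accurate to machine precision while the central difference carries an O(dx^2) error, mirroring the "close to full spectral accuracy" advantage reported for the spectral-element runs.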

  7. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    A high-dimensional feature space generally degrades classification performance in many applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
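    The gene-masking idea described above, a binary mask over features evolved by a genetic algorithm with classifier accuracy as the fitness, can be sketched as follows. This is a minimal illustration on synthetic data with a nearest-centroid classifier standing in for the paper's classifiers; every name, parameter, and the elitist selection scheme are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)

# Toy dataset: features 0 and 1 are informative, features 2-7 are pure noise.
def make_data(n=60):
    X, y = [], []
    for i in range(n):
        label = i % 2
        informative = [label + random.gauss(0, 0.3), -label + random.gauss(0, 0.3)]
        noise = [random.gauss(0, 1) for _ in range(6)]
        X.append(informative + noise)
        y.append(label)
    return X, y

def accuracy(X, y, mask):
    # Nearest-centroid classifier restricted to the masked-in features.
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for lab in (0, 1):
        rows = [X[i] for i in range(len(X)) if y[i] == lab]
        cents[lab] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    correct = 0
    for xi, yi in zip(X, y):
        d = {lab: sum((xi[j] - c[k]) ** 2 for k, j in enumerate(feats))
             for lab, c in cents.items()}
        correct += (min(d, key=d.get) == yi)
    return correct / len(X)

def gene_mask_ga(X, y, n_feat=8, pop=20, gens=30, p_mut=0.1):
    # Binary-encoded GA: each individual is a 0/1 mask over the features.
    popn = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=lambda m: accuracy(X, y, m), reverse=True)
        popn = scored[: pop // 2]                 # elitist selection
        while len(popn) < pop:
            a, b = random.sample(scored[: pop // 2], 2)
            cut = random.randrange(1, n_feat)
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            popn.append(child)
    return max(popn, key=lambda m: accuracy(X, y, m))

X, y = make_data()
best = gene_mask_ga(X, y)
```

    In this sketch the mask that survives tends to keep the informative dimensions and drop the noise, mirroring the paper's point that masking both reduces dimensionality and identifies the features that matter.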

  8. Inertia Compensation While Scanning Screw Threads on Coordinate Measuring Machines

    NASA Astrophysics Data System (ADS)

    Kosarevsky, Sergey; Latypov, Viktor

    2010-01-01

    Usage of scanning coordinate-measuring machines for the inspection of screw threads has become common practice. Compared to touch-trigger probing, scanning speeds up the measuring process while still maintaining high accuracy. However, in some cases accuracy depends strongly on the scanning speed. In this paper a compensation method is proposed that reduces the influence of the inertia of the probing system while scanning screw threads on coordinate-measuring machines.

  9. Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner

    NASA Astrophysics Data System (ADS)

    Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.

    2016-03-01

    High accuracy, quality and throughput are key factors in laser micro machining. To achieve these goals the ablation process, the machining strategy and the scanning device have to be optimized. The precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e. minimum surface roughness, the pulse-to-pulse distance also has to be optimized. The highest ablation efficiency is obtained by choosing the proper laser peak fluence together with the highest specific removal rate. The throughput can then be enhanced by simultaneously increasing the average power, the repetition rate and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. A high scanning speed is therefore of essential importance. To guarantee the required accuracy even at high scanning speeds, a new interferometry-based encoder technology was used that provides a high-quality signal for closed-loop control of the galvo scanner position. The low-inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We present results with marking speeds up to 25 m/s using an f = 100 mm objective, obtained with the new scanning system and scanner tuning, while maintaining a precision of about 5 μm. Furthermore, it is shown that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which is not necessarily the maximum one.

  10. A Novel Energy-Efficient Approach for Human Activity Recognition.

    PubMed

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru

    2017-09-08

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.

  11. Video-Based Intervention in Teaching Fraction Problem-Solving to Students with Autism Spectrum Disorder.

    PubMed

    Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin

    2015-09-01

    The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention for teaching mathematics problem-solving with word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe-across-students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.

  12. Inferential Processing among Adequate and Struggling Adolescent Comprehenders and Relations to Reading Comprehension

    PubMed Central

    Barth, Amy E.; Barnes, Marcia; Francis, David J.; Vaughn, Sharon; York, Mary

    2015-01-01

    Separate mixed model analyses of variance (ANOVA) were conducted to examine the effect of textual distance on the accuracy and speed of text consistency judgments among adequate and struggling comprehenders across grades 6–12 (n = 1203). Multiple regressions examined whether accuracy in text consistency judgments uniquely accounted for variance in comprehension. Results suggest that there is considerable growth across the middle and high school years, particularly for adequate comprehenders in those text integration processes that maintain local coherence. Accuracy in text consistency judgments accounted for significant unique variance for passage-level, but not sentence-level comprehension, particularly for adequate comprehenders. PMID:26166946

  13. Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors

    NASA Astrophysics Data System (ADS)

    Holmes, C. S.; Headley, M.; Hart, P. W.

    2017-08-01

    Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.

  14. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.

  15. 14 CFR 414.17 - Maintaining the continued accuracy of the initial application.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Maintaining the continued accuracy of the initial application. 414.17 Section 414.17 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION LICENSING SAFETY APPROVALS Application Procedures...

  16. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefits in accuracy can be as great as 6 °C, or five times better than relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  17. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.

  18. A Novel Energy-Efficient Approach for Human Activity Recognition

    PubMed Central

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin

    2017-01-01

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper. PMID:28885560

  19. Towards SSVEP-based, portable, responsive Brain-Computer Interface.

    PubMed

    Kaczmarek, Piotr; Salomon, Pawel

    2015-08-01

    A Brain-Computer Interface for motion control applications requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The obtained results suggest that the T-H classifier significantly increases classifier performance, resulting in an accuracy of 76% while maintaining an average false positive detection rate of 2-13% for stimuli other than the observed one, depending on stimulus frequency. It was shown that the parameters of the T-H classifier maximizing the true positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to that obtained with a user-trained classifier.
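    The threshold-with-hysteresis (T-H) decision rule described above can be sketched as follows: a stimulus is selected once its correlation score crosses an upper threshold, and the decision is held until the score falls below a lower threshold. The threshold values and the score stream are illustrative assumptions, not values from the paper.

```python
def th_hysteresis(scores, t_on=0.6, t_off=0.4):
    """Threshold-with-hysteresis detector over a stream of correlation scores.

    Emits the index of the detected stimulus (or None) per time step:
    a stimulus switches on when its score rises above t_on and stays
    selected until its score falls below t_off (t_off < t_on).
    """
    active = None
    out = []
    for frame in scores:  # frame: one canonical-correlation value per stimulus
        if active is not None and frame[active] >= t_off:
            pass                                  # hold current decision (hysteresis)
        else:
            active = None
            best = max(range(len(frame)), key=frame.__getitem__)
            if frame[best] >= t_on:
                active = best
        out.append(active)
    return out

# Noisy score stream for two stimuli: stimulus 0 active, with a brief dip.
stream = [
    [0.70, 0.20],   # rises above t_on -> detect stimulus 0
    [0.50, 0.25],   # dips below t_on but stays above t_off -> still 0
    [0.30, 0.20],   # falls below t_off -> release
    [0.45, 0.65],   # stimulus 1 crosses t_on -> detect stimulus 1
]
decisions = th_hysteresis(stream)  # -> [0, 0, None, 1]
```

    The hysteresis band (between t_off and t_on) is what suppresses flicker between stimuli when a score hovers near a single threshold, which is how the T-H classifier trades a small detection delay for a lower false positive rate.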

  20. Adaptive Kalman filter for indoor localization using Bluetooth Low Energy and inertial measurement unit.

    PubMed

    Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J

    2015-08-01

    This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). The multipath and non-line-of-sight errors of low-power wireless localization systems commonly result in outliers, affecting the positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is compared experimentally against that of the standard KF. The results show that the proposed algorithm can maintain high accuracy when tracking the position of the sensor in the presence of outliers.
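    The adaptive-weighting idea, leaning on the motion model whenever a BLE measurement's innovation looks like a multipath outlier, can be illustrated with a one-dimensional Kalman filter sketch. This is a simplified stand-in for the paper's cascaded KF; the random-walk motion model, noise parameters, and gating rule are illustrative assumptions.

```python
def adaptive_kf(measurements, q=0.01, r=0.25, gate=3.0):
    """1-D Kalman filter that inflates measurement noise for outliers.

    When the innovation exceeds `gate` standard deviations of the
    predicted innovation covariance (typical of multipath/NLOS spikes
    in BLE ranging), R is scaled up so the update trusts the motion
    model instead of the bad measurement.
    """
    x, p = measurements[0], 1.0
    track = [x]
    for z in measurements[1:]:
        p += q                       # predict (random-walk motion model)
        s = p + r                    # nominal innovation covariance
        nu = z - x                   # innovation
        r_eff = r
        if nu * nu > gate * gate * s:
            r_eff = r * (nu * nu) / (gate * gate * s)  # adaptive inflation
        k = p / (p + r_eff)          # Kalman gain with adapted noise
        x += k * nu
        p *= (1 - k)
        track.append(x)
    return track

# True position ~5.0 m; one simulated multipath outlier at 20.0 m.
zs = [5.0, 5.1, 4.9, 20.0, 5.0, 5.2]
est = adaptive_kf(zs)
```

    With the nominal gain, the 20.0 m spike would drag the estimate several meters off; with the inflated noise the update barely moves, which is the behavior the abstract attributes to adaptive weighting.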

  1. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    PubMed

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers are an important self-monitoring tool for diabetes patients and must therefore exhibit high accuracy as well as good usability. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared to state-of-the-art methods.

  2. The non-equilibrium and energetic cost of sensory adaptation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, G.; Sartori, Pablo; Tu, Y.

    2011-03-24

    Biological sensory systems respond to external signals on a short timescale and adapt to permanent environmental changes over a longer timescale to maintain high sensitivity in widely varying environments. In this work we have shown that all adaptation dynamics are intrinsically non-equilibrium and dissipate free energy. We show that the dissipated energy is utilized to maintain adaptation accuracy. A universal relation between the energy dissipation and the optimum adaptation accuracy is established by both a general continuum model and a discrete model in the specific case of the well-known E. coli chemo-sensory adaptation. Our study suggests that cellular-level adaptations are fueled by hydrolysis of high-energy biomolecules, such as ATP. The relevance of this work lies in linking the functionality of a biological system (sensory adaptation) with a concept rooted in statistical physics (energy dissipation) by a mathematical law. This has been made possible by identifying a general sensory system with a non-equilibrium steady state (a stationary state in which the probability current is not zero, but its divergence is, see figure), and then numerically and analytically solving the Fokker-Planck and master equations which describe the adaptive sensory system. The application of our general results to the case of E. coli has shed light on why this system uses the high-energy SAM molecule to perform adaptation, since using the more common ATP would not suffice to obtain the required adaptation accuracy.

  3. A novel potential/viscous flow coupling technique for computing helicopter flow fields

    NASA Technical Reports Server (NTRS)

    Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul

    1993-01-01

    The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.

  4. Unification of some advection schemes in two dimensions

    NASA Technical Reports Server (NTRS)

    Sidilkover, D.; Roe, P. L.

    1995-01-01

    The relationship between two approaches to the construction of genuinely two-dimensional upwind advection schemes is established. One of these approaches is of the control-volume type, applicable on structured Cartesian meshes. It resulted in compact high-resolution schemes capable of maintaining second-order accuracy in both the homogeneous and inhomogeneous cases. The other is the fluctuation-splitting approach, which is well suited for triangular (and possibly unstructured) meshes. Understanding the relationship between these two approaches allows us to formulate here a new fluctuation-splitting high-resolution scheme (i.e., one permitting the use of artificial compression while maintaining the positivity property). This scheme is shown to be linearity preserving in the inhomogeneous as well as the homogeneous case.

  5. Examination of standardized patient performance: accuracy and consistency of six standardized patients over time.

    PubMed

    Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B

    2011-11-01

    To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional training focus on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.

  6. Scanning Optical Head with Nontilted Reference Beam: Assuring Nanoradian Accuracy for a New Generation Surface Profiler in the Large-Slope Testing Range

    DOE PAGES

    Qian, Shinan

    2011-01-01

    Nanoradian surface profilers (SPs) are required for state-of-the-art synchrotron radiation optics and high-precision optical measurements. Nanoradian accuracy must be maintained in the large-angle test range. However, the beams' notable lateral motions during tests with most operating profilers, combined with the insufficiencies of their optical components, generate significant errors of ~1 μrad rms in the measurements. The solution for achieving nanoradian accuracy with the new generation of surface profilers in this range is to apply a scanning optical head combined with a nontilted reference beam. I describe here my comparison of different scan modes and discuss some test results.

  7. Neural network control of focal position during time-lapse microscopy of cells.

    PubMed

    Wei, Ling; Roberts, Elijah

    2018-05-09

    Live-cell microscopy is quickly becoming an indispensable technique for studying the dynamics of cellular processes. Maintaining the specimen in focus during image acquisition is crucial for high-throughput applications, especially for long experiments or when a large sample is being continuously scanned. Automated focus control methods are often expensive, imperfect, or ill-adapted to a specific application and are a bottleneck for widespread adoption of high-throughput, live-cell imaging. Here, we demonstrate a neural network approach for automatically maintaining focus during bright-field microscopy. Z-stacks of yeast cells growing in a microfluidic device were collected and used to train a convolutional neural network to classify images according to their z-position. We studied the effect on prediction accuracy of the various hyperparameters of the neural network, including downsampling, batch size, and z-bin resolution. The network was able to predict the z-position of an image with ±1 μm accuracy, outperforming human annotators. Finally, we used our neural network to control microscope focus in real time during a 24-hour growth experiment. The method robustly maintained the correct focal position, compensating for 40 μm of focal drift, and was insensitive to changes in the field of view. Only about 100 annotated z-stacks were required to train the network, making our method quite practical for custom autofocus applications.

  8. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  9. A compact, fast ozone UV photometer and sampling inlet for research aircraft

    NASA Astrophysics Data System (ADS)

    Gao, R. S.; Ballard, J.; Watts, L. A.; Thornberry, T. D.; Ciciora, S. J.; McLaughlin, R. J.; Fahey, D. W.

    2012-05-01

    In situ measurements of atmospheric ozone (O3) are performed routinely from many research aircraft platforms. The most common technique depends on the strong absorption of ultraviolet (UV) light by ozone. As atmospheric science advances to the widespread use of unmanned aircraft systems (UASs), there is an increasing requirement for minimizing instrument space, weight, and power while maintaining instrument accuracy, precision and time response. The design and use of a new, dual-beam, polarized, UV photometer instrument for in situ O3 measurements is described. The instrument has a fast sampling rate (2 Hz), high accuracy (3%), and precision (1.1 × 10^10 O3 molecules cm^-3). The size (36 l), weight (18 kg), and power (50-200 W) make the instrument suitable for many UAS and other airborne platforms. Inlet and exhaust configurations are also described for ambient sampling in the troposphere and lower stratosphere (1000-50 mb) that optimize the sample flow rate to increase time response while minimizing loss of precision due to induced turbulence in the sample cell. In-flight and laboratory intercomparisons with existing O3 instruments show that measurement accuracy is maintained in flight.

  10. Migrant Student Record Transfer System in New York State.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Migrant Education.

    In 1970, the Migrant Student Record Transfer System (MSRTS) was funded through Title I of the Elementary and Secondary Education Act. A single center at Little Rock (Arkansas) was designed to contain a profile on each migrant student enrolled. The Center's aim was to assure a high degree of accuracy while maintaining flexibility and ready access…

  11. The Role of Knowledge Visualisation in Supporting Postgraduate Dissertation Assessment

    ERIC Educational Resources Information Center

    Renaud, Karen; Van Biljon, Judy

    2017-01-01

    There has been a worldwide increase in the number of postgraduate students over the last few years, and some examiners therefore struggle to maintain high standards of consistency, accuracy and fairness. This is especially true in developing countries, where the increase in supervision capacity is not on a par with the growth in student numbers. The…

  12. Standard Reference Specimens in Quality Control of Engineering Surfaces

    PubMed Central

    Song, J. F.; Vorburger, T. V.

    1991-01-01

    In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, utilizations, and limitations of different classes of precision roughness calibration specimens are described. A measuring procedure of engineering surfaces, based on the calibration procedure of roughness specimens at NIST, is proposed. This procedure involves utilization of check specimens with waveform, wavelength, and other roughness parameters similar to functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet the optimal functional needs. PMID:28184115

  13. Inflight calibration technique for onboard high-gain antenna pointing. [of Mariner 10 spacecraft in Venus and Mercury flyby mission

    NASA Technical Reports Server (NTRS)

    Ohtakay, H.; Hardman, J. M.

    1975-01-01

    The X-band radio frequency communication system was used for the first time in deep space planetary exploration by the Mariner 10 Venus and Mercury flyby mission. This paper presents the technique utilized for and the results of inflight calibration of high-gain antenna (HGA) pointing. Also discussed is pointing accuracy to maintain a high data transmission rate throughout the mission, including the performance of HGA pointing during the critical period of Mercury encounter.

  14. Training and quality assurance with the Structured Clinical Interview for DSM-IV (SCID-I/P).

    PubMed

    Ventura, J; Liberman, R P; Green, M F; Shaner, A; Mintz, J

    1998-06-15

    Accuracy in psychiatric diagnosis is critical for evaluating the suitability of the subjects for entry into research protocols and for establishing comparability of findings across study sites. However, training programs in the use of diagnostic instruments for research projects are not well systematized. Furthermore, little information has been published on the maintenance of interrater reliability of diagnostic assessments. At the UCLA Research Center for Major Mental Illnesses, a Training and Quality Assurance Program for SCID interviewers was used to evaluate interrater reliability and diagnostic accuracy. Although clinically experienced interviewers achieved better interrater reliability and overall diagnostic accuracy than neophyte interviewers, both groups were able to achieve and maintain high levels of interrater reliability, diagnostic accuracy, and interviewer skill. At the first quality assurance check after training, there were no significant differences between experienced and neophyte interviewers in interrater reliability or diagnostic accuracy. Standardization of training and quality assurance procedures within and across research projects may make research findings from study sites more comparable.

  15. Revisiting and Extending Interface Penalties for Multi-Domain Summation-by-Parts Operators

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David

    2007-01-01

    General interface coupling conditions are presented for multi-domain collocation methods, which satisfy the summation-by-parts (SBP) spatial discretization convention. The combined interior/interface operators are proven to be L2 stable, pointwise stable, and conservative, while maintaining the underlying accuracy of the interior SBP operator. The new interface conditions resemble (and were motivated by) those used in the discontinuous Galerkin finite element community, and maintain many of the same properties. Extensive validation studies are presented using two classes of high-order SBP operators: 1) central finite difference, and 2) Legendre spectral collocation.

  16. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    Treesearch

    Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...

  17. Technologies That Enable Accurate and Precise Nano- to Milliliter-Scale Liquid Dispensing of Aqueous Reagents Using Acoustic Droplet Ejection.

    PubMed

    Sackmann, Eric K; Majlof, Lars; Hahn-Windgassen, Annett; Eaton, Brent; Bandzava, Temo; Daulton, Jay; Vandenbroucke, Arne; Mock, Matthew; Stearns, Richard G; Hinkson, Stephen; Datwani, Sammy S

    2016-02-01

Acoustic liquid handling uses high-frequency acoustic signals that are focused on the surface of a fluid to eject droplets with high accuracy and precision for various life science applications. Here we present a multiwell source plate, the Echo Qualified Reservoir (ER), which can acoustically transfer over 2.5 mL of fluid per well in 25-nL increments using an Echo 525 liquid handler. We demonstrate two Labcyte technologies, Dynamic Fluid Analysis (DFA) methods and a high-voltage (HV) grid, that are required to maintain accurate and precise fluid transfers from the ER at this volume scale. DFA methods were employed to dynamically assess the energy requirements of the fluid and adjust the acoustic ejection parameters to maintain a constant velocity droplet. Furthermore, we demonstrate that the HV grid enhances droplet velocity and coalescence at the destination plate. These technologies enabled 5-µL per destination well transfers to a 384-well plate, with accuracy and precision values better than 4%. Last, we used the ER and Echo 525 liquid handler to perform a quantitative polymerase chain reaction (qPCR) assay to demonstrate an application that benefits from the flexibility and larger volume capabilities of the ER. © 2015 Society for Laboratory Automation and Screening.

  18. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    PubMed

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
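The k-mer search stage, finding reference sequences that share many short subsequences with the query before any alignment is attempted, can be sketched as follows (toy sequences and a Jaccard-style score; not SINA's actual index or scoring):

```python
def kmers(seq, k=4):
    """Set of all overlapping k-length subsequences of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_reference(query, references, k=4):
    """Rank references by shared k-mer content with the query."""
    q = kmers(query, k)
    scored = [(len(q & kmers(ref, k)) / len(q | kmers(ref, k)), name)
              for name, ref in references.items()]
    return max(scored)   # (similarity score, reference name)

refs = {
    "seq_A": "ACGTACGTGGCCTTAGCTAG",
    "seq_B": "TTTTAAAACCCCGGGGATAT",
}
query = "ACGTACGTGGCCTTAGCAAG"   # seq_A with one substitution near the end
score, name = best_reference(query, refs)
print(name)   # seq_A
```

Because the k-mer sets can be precomputed and indexed, this kind of screen is far cheaper than aligning the query against every reference, which is what makes it viable for databases of millions of rRNA sequences.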

  19. Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model versus Logistic Regression. REL 2015-077

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2015-01-01

    The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…

  20. Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node

    NASA Astrophysics Data System (ADS)

    Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi

    2015-03-01

In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D with extremely accurate overlay while maintaining throughput in various conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities for maintaining long-term overlay stability and with an improved user interface, both enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance in various conditions similar to real production environments. In a production environment, superior overlay accuracy with high dose conditions and high throughput are often required; therefore, we have performed several experiments with high dose conditions to demonstrate the NSR's thermal aberration capabilities in order to achieve world class overlay performance. Furthermore, we will introduce our new software that enables long-term overlay performance.

  1. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  2. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  3. Fabrication Of High-Tc Superconducting Integrated Circuits

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Warner, Joseph D.

    1992-01-01

Microwave ring resonator fabricated to demonstrate process for fabrication of passive integrated circuits containing high-transition-temperature superconductors. Superconductors increase efficiencies of communication systems, particularly microwave communication systems, by reducing ohmic losses and dispersion of signals. Used to reduce sizes and masses and increase aiming accuracies and tracking speeds of millimeter-wavelength, electronically steerable antennas. High-Tc superconductors preferable for such applications because they operate at higher temperatures than low-Tc superconductors do; therefore, refrigeration systems needed to maintain superconductivity can be made smaller and lighter and can consume less power.

  4. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.

  5. High order finite volume WENO schemes for the Euler equations under gravitational fields

    NASA Astrophysics Data System (ADS)

    Li, Gang; Xing, Yulong

    2016-07-01

    Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance where the flux produced by the pressure is exactly canceled by the gravitational source term, and two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable as many practical problems are small perturbations of such balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes, which can preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, and maintain genuine high order accuracy for general solutions. The well-balanced property is obtained by novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify well-balanced property, high order accuracy, as well as good resolution for smooth and discontinuous solutions.
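For the isothermal case, hydrostatic balance p_x = -ρg combined with the ideal-gas law p = ρRT gives p(x) = p0 exp(-gx/(RT)), so the pressure-gradient flux and the gravitational source cancel exactly; a quick numerical check of this balance (illustrative constants, not the paper's benchmark setup):

```python
import math

# Isothermal hydrostatic equilibrium: p(x) = p0 * exp(-g x / (R T)),
# rho = p / (R T), so that dp/dx + rho * g = 0 exactly.
g, R, T, p0 = 9.81, 287.0, 300.0, 1.0e5
H = R * T / g                          # scale height, ~8.8 km

xs = [i * 1.0 for i in range(1001)]    # 0..1000 m, dx = 1 m
p = [p0 * math.exp(-x / H) for x in xs]
rho = [pi / (R * T) for pi in p]

# Central-difference residual of dp/dx + rho*g; vanishes to O(dx^2),
# which is the balance a well-balanced scheme preserves to machine precision.
dx = 1.0
residual = max(abs((p[i + 1] - p[i - 1]) / (2 * dx) + rho[i] * g)
               for i in range(1, len(xs) - 1))
print(residual / (rho[0] * g) < 1e-6)   # relative residual is tiny
```

A non-well-balanced scheme leaves a truncation-level residual of this cancellation at every step, which can swamp the small perturbations of interest; the schemes in the paper are constructed so the discrete flux and source cancel identically for both isothermal and polytropic equilibria.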

  6. Fourier analysis for hydrostatic pressure sensing in a polarization-maintaining photonic crystal fiber.

    PubMed

    Childs, Paul; Wong, Allan C L; Fu, H Y; Liao, Yanbiao; Tam, Hwayaw; Lu, Chao; Wai, P K A

    2010-12-20

    We measured the hydrostatic pressure dependence of the birefringence and birefringent dispersion of a Sagnac interferometric sensor incorporating a length of highly birefringent photonic crystal fiber using Fourier analysis. Sensitivity of both the phase and chirp spectra to hydrostatic pressure is demonstrated. Using this analysis, phase-based measurements showed a good linearity with an effective sensitivity of 9.45 nm/MPa and an accuracy of ±7.8 kPa using wavelength-encoded data and an effective sensitivity of -55.7 cm(-1)/MPa and an accuracy of ±4.4 kPa using wavenumber-encoded data. Chirp-based measurements, though nonlinear in response, showed an improvement in accuracy at certain pressure ranges with an accuracy of ±5.5 kPa for the full range of measured pressures using wavelength-encoded data and dropping to within ±2.5 kPa in the range of 0.17 to 0.4 MPa using wavenumber-encoded data. Improvements of the accuracy demonstrated the usefulness of implementing chirp-based analysis for sensing purposes.

  7. Impact of different training strategies on the accuracy of a Bayesian network for predicting hospital admission.

    PubMed

    Leegon, Jeffrey; Aronsky, Dominik

    2006-01-01

The healthcare environment is constantly changing. Probabilistic clinical decision support systems need to recognize and incorporate the changing patterns and adjust the decision model to maintain high levels of accuracy. Using data from >75,000 ED patients during a 19-month study period we examined the impact of various static and dynamic training strategies on a decision support system designed to predict hospital admission status for ED patients. Training durations ranged from 1 to 12 weeks. During the study period major institutional changes occurred that affected the system's performance level. The average area under the receiver operating characteristic curve was higher and more stable when longer training periods were used. The system showed higher accuracy when retrained and updated with more recent data as compared to a static training period. To adjust for temporal trends, the accuracy of decision support systems can benefit from longer training periods and retraining with more recent data.
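The advantage of retraining on recent data can be illustrated with a toy drifting data stream and a stand-in threshold classifier (not the paper's Bayesian network or ED data; the abrupt shift plays the role of an "institutional change"):

```python
import random, statistics

random.seed(0)

def sample(t):
    """One (feature, label) pair; both class means shift upward after t = 500."""
    label = random.randint(0, 1)
    shift = 0.0 if t < 500 else 3.0      # abrupt change in the data distribution
    return random.gauss(label * 2.0 + shift, 1.0), label

stream = [sample(t) for t in range(1000)]

def train(window):
    """Stand-in model: threshold halfway between the per-class feature means."""
    m0 = statistics.mean(x for x, y in window if y == 0)
    m1 = statistics.mean(x for x, y in window if y == 1)
    return (m0 + m1) / 2.0

def accuracy(threshold, data):
    return statistics.mean((x > threshold) == y for x, y in data)

static = train(stream[:200])           # trained once, before the change
retrained = train(stream[500:800])     # periodically retrained on recent data
test_data = stream[800:]
print(accuracy(retrained, test_data) > accuracy(static, test_data))   # True
```

The statically trained threshold sits on the wrong side of the shifted classes and degrades to near-chance, while the retrained model tracks the drift, mirroring the paper's finding that updating with recent data maintains accuracy.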

  8. Iconic memory requires attention

    PubMed Central

    Persuh, Marjan; Genzer, Boris; Melara, Robert D.

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389

  10. High-Capacity Communications from Martian Distances Part 4: Assessment of Spacecraft Pointing Accuracy Capabilities Required For Large Ka-Band Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir

    2006-01-01

    Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. To maintain losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.

  11. The Effect of Moderate and High-Intensity Fatigue on Groundstroke Accuracy in Expert and Non-Expert Tennis Players

    PubMed Central

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player’s achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of player’s achievement goal indicators. 
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points: 1) Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. 2) Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6% decline) tennis players following high-intensity fatigue. 3) Expert players are more consistent, hitting more accurate shots and fewer out shots across all fatigue intensities. 4) The effects of fatigue on groundstroke accuracy are the same regardless of gender and player’s achievement goal indicators. PMID:24149809

  12. Fast retinal layer segmentation of spectral domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Tianqiao; Song, Zhangjun; Wang, Xiaogang; Zheng, Huimin; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-09-01

    An approach to segment macular layer thicknesses from spectral domain optical coherence tomography has been proposed. The main contribution is to decrease computational costs while maintaining high accuracy via exploring Kalman filtering, customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries could be segmented within 5.8 s with an average layer boundary error <2.35 μm. It has been compared with state-of-the-art methods for both normal and age-related macular degeneration cases to yield similar or significantly better accuracy and is 37 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.
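The Kalman-filtering ingredient, predicting each layer boundary's depth from neighboring A-scans so the search stays near the expected position, can be sketched in one dimension (illustrative noise levels and model, not the paper's parameters):

```python
import math, random

random.seed(1)

# Ground-truth boundary depth (pixels) varying slowly across 200 A-scans,
# observed with heavy measurement noise (as a raw edge detector might).
truth = [100 + 10 * math.sin(i / 30.0) for i in range(200)]
meas = [z + random.gauss(0, 4.0) for z in truth]

# Scalar Kalman filter with a random-walk state model for the boundary depth.
Q, Rn = 0.5, 16.0            # process and measurement noise variances
x, P = meas[0], Rn
est = []
for z in meas:
    P += Q                   # predict: boundary drifts slowly between A-scans
    K = P / (P + Rn)         # Kalman gain
    x += K * (z - x)         # update with the new noisy measurement
    P *= (1 - K)
    est.append(x)

mse = lambda a: sum((u - v) ** 2 for u, v in zip(a, truth)) / len(truth)
print(mse(est) < mse(meas))  # filtered track is closer to the true boundary
```

Seeding each A-scan's boundary search at the filtered prediction shrinks the search window, which is one way such a pipeline cuts computation while keeping sub-pixel-scale boundary errors.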

  13. Dynamics and Control of Tethered Antennas/Reflectors in Orbit

    DTIC Science & Technology

    1992-02-01

reflector system. The optimal linear quadratic Gaussian (LQG) digital control of the orbiting tethered antenna/reflector system is analyzed. The...flexibility of both the antenna and the tether are included in this high order system model. With eight point actuators optimally positioned together with...able to maintain satisfactory pointing accuracy for low and moderate altitude orbits under the influence of solar pressure. For the higher altitudes a

  14. Seismic Motion Stability, Measurement and Precision Control.

    DTIC Science & Technology

    1979-12-01

tiltmeter. Tilt was corrected by changing air pressure in one bank of isolators to maintain the reference tiltmeter at null well within the 0.1 arcsecond...frequency rotations (0-0.1 Hz), a high quality, two-axis tiltmeter is used. The azimuth orientation angle could be measured with a four-position gyrocompassing system with considerably less accuracy than the tiltmeters. However, it would provide a continuous automatic azimuth determination update every

  15. A system for the analysis of foot and ankle kinematics during gait.

    PubMed

    Kidder, S M; Abuzzahab, F S; Harris, G F; Johnson, J E

    1996-03-01

A five-camera Vicon (Oxford Metrics, Oxford, England) motion analysis system was used to acquire foot and ankle motion data. Static resolution and accuracy were computed as 0.86 +/- 0.13 mm and 98.9%, while dynamic resolution and accuracy were 0.1 +/- 0.89 mm and 99.4% (sagittal plane). Spectral analysis revealed high frequency noise and the need for a filter (6 Hz Butterworth low-pass) as used in similar clinical situations. A four-segment rigid body model of the foot and ankle was developed. The four rigid body foot model segments were 1) tibia and fibula, 2) calcaneus, talus, and navicular, 3) cuneiforms, cuboid, and metatarsals, and 4) hallux. The Euler method for describing relative foot and ankle segment orientation was utilized in order to maintain accuracy and ease of clinical application. Kinematic data from a single test subject are presented.

  16. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237
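The basic idea of screening proposed displacements for particle contacts before accepting them can be sketched with a hard-sphere rejection step (a deliberate simplification: the paper's scheme additionally handles reactive encounters and rigorously enforces detailed balance):

```python
import math, random

random.seed(2)

RADIUS, STEP = 0.5, 0.3          # particle radius and BD step scale (arbitrary units)
CONTACT = 2 * RADIUS

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def bd_step(positions):
    """One Brownian-dynamics sweep: each particle proposes a Gaussian
    displacement; moves that would produce an overlap are rejected and
    resampled (a bounded number of times), so no collision is ever missed."""
    new = list(positions)
    for i, (x, y) in enumerate(positions):
        for _ in range(100):
            cand = (x + random.gauss(0, STEP), y + random.gauss(0, STEP))
            if all(dist(cand, q) >= CONTACT
                   for j, q in enumerate(new) if j != i):
                new[i] = cand
                break
    return new

# Four particles on a loose grid, evolved for 50 sweeps.
pts = [(2.0 * i, 2.0 * j) for i in range(2) for j in range(2)]
for _ in range(50):
    pts = bd_step(pts)

min_gap = min(dist(pts[i], pts[j])
              for i in range(len(pts)) for j in range(i + 1, len(pts)))
print(min_gap >= CONTACT)   # no pair ever overlaps
```

Checking the proposed move against all neighbors before accepting it is what lets a scheme take larger steps safely: the step size is no longer limited by the need to make overlaps during a step vanishingly improbable.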

  17. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement suffered certain drawbacks, for example, deficiencies in the implemented interpolation technique; the non-reciprocity in head calculations or flow calculations; lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor's expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor's series to improve the numerical solution. In this scheme, flow reciprocity is maintained and high order of refinement was achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.

  18. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
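The correction in question is the hydrostatic head of the air column in the interface tube, ΔP = ρgΔh; a small worked example (nominal lab conditions; the function name is hypothetical, not from the NASA report):

```python
# Hydrostatic elevation correction for a pressure measurement:
# the sensor reads high by rho * g * h when it sits a height h BELOW the tap.
R_AIR = 287.05        # specific gas constant of dry air, J/(kg*K)
G = 9.80665           # standard gravity, m/s^2

def elevation_correction_pa(p_line_pa, temp_k, dh_m):
    """Pressure correction (Pa) for a sensing element dh_m below the tap."""
    rho = p_line_pa / (R_AIR * temp_k)   # ideal-gas air density at line pressure
    return rho * G * dh_m

# 3 m of elevation difference at ~1 atm and 20 degC:
dp = elevation_correction_pa(101325.0, 293.15, 3.0)
print(round(dp, 1))   # about 35.4 Pa
```

Because the density in the tube scales with the line pressure, tubes at higher absolute pressure accumulate a proportionally larger error, which is exactly the abstract's point that the effect grows with air density and becomes significant for high-accuracy multi-port systems.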

  19. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    NASA Astrophysics Data System (ADS)

    Min, M.

    2017-10-01

Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
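The key property, sample counts that scale with line strength while the integrated opacity of every line is conserved exactly, can be sketched for a Lorentzian profile in place of the full Voigt (a simplification of the paper's method; all values illustrative):

```python
import math, random

random.seed(3)

def deposit_line(grid_opacity, x_min, dx, x0, strength, gamma, n_samples):
    """Monte Carlo deposition of one Lorentzian line onto a wavenumber grid.

    Positions are drawn from the line profile via its inverse CDF, and each
    sample carries strength/n_samples, so the integrated opacity is conserved
    exactly (samples falling off-grid are clamped to the edge bins here).
    """
    for _ in range(n_samples):
        u = random.random()
        x = x0 + gamma * math.tan(math.pi * (u - 0.5))   # inverse-CDF sample
        k = min(max(int((x - x_min) / dx), 0), len(grid_opacity) - 1)
        grid_opacity[k] += strength / n_samples

n_bins, x_min, dx = 200, 0.0, 0.05
grid = [0.0] * n_bins

# A strong line gets many samples (sharp, accurate shape); a weak line gets
# only a few (it still contributes its full strength to the continuum).
lines = [(5.0, 100.0, 0.02), (7.5, 1.0, 0.02)]     # (center, strength, width)
for x0, s, g in lines:
    deposit_line(grid, x_min, dx, x0, s, g, n_samples=max(10, int(100 * s)))

total = sum(grid)
print(abs(total - sum(s for _, s, _ in lines)) < 1e-6)   # opacity conserved
```

Since each sample costs the same regardless of line strength, total cost scales with the opacity budget rather than with the line count times profile width, which is how billion-line lists become tractable.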

  20. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
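The stability-selection ingredient of PROMISE, repeating variable selection over resampled data and keeping only markers chosen consistently, can be illustrated with a simplified correlation-screening selector standing in for the lasso/elastic net (synthetic data, illustrative thresholds):

```python
import math, random

random.seed(4)

# Synthetic data: 10 candidate markers, only the first two drive the outcome.
n, p = 300, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [row[0] + row[1] + random.gauss(0, 1) for row in X]

def abs_corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                    sum((b - my) ** 2 for b in ys))
    return abs(num / den)

def select_top_k(rows, targets, k=3):
    """Base selector: keep the k markers most correlated with the outcome."""
    scores = [(abs_corr([r[j] for r in rows], targets), j) for j in range(p)]
    return {j for _, j in sorted(scores, reverse=True)[:k]}

# Stability selection: bootstrap the data many times, count how often each
# marker survives the screen, keep markers above a frequency threshold.
counts = [0] * p
B = 50
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]
    chosen = select_top_k([X[i] for i in idx], [y[i] for i in idx])
    for j in chosen:
        counts[j] += 1

stable = [j for j in range(p) if counts[j] / B >= 0.8]
print(stable)
```

With the seed above, only the two informative markers clear the 80% threshold: noise markers occasionally sneak into a single bootstrap's top-k but are never selected consistently, which is the false-positive control that SS contributes to PROMISE.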

  1. Natural language processing and inference rules as strategies for updating problem list in an electronic health record.

    PubMed

    Plazzotta, Fernando; Otero, Carlos; Luna, Daniel; de Quiros, Fernan Gonzalez Bernaldo

    2013-01-01

    Physicians do not always keep the problem list accurate, complete, and updated. This work analyzes natural language processing (NLP) techniques and inference rules as strategies to maintain the completeness and accuracy of the problem list in EHRs, through a non-systematic literature review in PubMed covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two ways: inputting problems into, and removing problems from, the problem list. NLP and inference rules show acceptable performance for inputting problems into the problem list; no studies using these techniques for removing problems have been published. Conclusion: both tools, NLP and inference rules, have shown acceptable results for maintaining the completeness and accuracy of the problem list.

  2. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.

    PubMed

    Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D

    2017-11-13

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. 
The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.

  3. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation

    NASA Astrophysics Data System (ADS)

    Belcher, Andrew H.; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D.

    2017-12-01

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient’s skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system’s effectiveness in maintaining the target’s 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system’s effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. 
The system’s success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.

  4. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-10-11

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers in order to improve communication and analysis. After one year, we performed a pre- and post-implementation assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.

  5. High-resolution high-sensitivity and truly distributed optical frequency domain reflectometry for structural crack detection

    NASA Astrophysics Data System (ADS)

    Li, Wenhai; Bao, Xiaoyi; Chen, Liang

    2014-05-01

    Optical Frequency Domain Reflectometry (OFDR) using polarization-maintaining fiber (PMF) is capable of distinguishing strain from temperature, which is critical for successful field applications such as structural health monitoring (SHM) and smart materials. Location-dependent measurement sensitivities along the PMF are compensated by cross- and auto-correlation measurements of the spectra, which form a distributed parameter matrix. Simultaneous temperature and strain measurement accuracies of 1 μstrain and 0.1 °C are achieved with 2.5 mm spatial resolution over a range of more than 180 m.

  6. One-laser-based generation/detection of Brillouin dynamic grating and its application to distributed discrimination of strain and temperature.

    PubMed

    Zou, Weiwen; He, Zuyuan; Hotate, Kazuo

    2011-01-31

    This paper presents a novel scheme to generate and detect a Brillouin dynamic grating in a polarization-maintaining optical fiber using a single laser source. Precise measurement of the Brillouin dynamic grating spectrum is achieved because the pump, probe, and readout waves originate coherently from the same laser source. Distributed discrimination of strain and temperature is also achieved with high accuracy.

  7. Defense AT and L. Volume 38, Number 4

    DTIC Science & Technology

    2009-06-01

    accuracy at extended ranges. Today, Afghanistan- and Iraq-bound medics get realistic training on a Florida-based company's Mini-Combat Trauma Patient... school basketball team and drone on about how we miss 100 percent of the shots we don't take. Fine. They may be right; failure might be good for us... be developed (or procured) that exhibits high inherent reliability and maintainability plus advanced self-diagnostics. Do the ICD and Gate 1

  8. A discontinuous Galerkin method for gravity-driven viscous fingering instabilities in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, G.; Gerstenberger, A.; Collis, S. S.

    2013-01-01

    We present a new approach to the simulation of gravity-driven viscous fingering instabilities in porous media flow. These instabilities play a very important role during carbon sequestration processes in brine aquifers. Our approach is based on a nonlinear implementation of the discontinuous Galerkin method, and possesses a number of key features. First, the method developed is inherently high order, and is therefore well suited to study unstable flow mechanisms. Secondly, it maintains high-order accuracy on completely unstructured meshes. The combination of these two features makes it a very appealing strategy in simulating the challenging flow patterns and very complex geometries of actual reservoirs and aquifers. This article includes an extensive set of verification studies on the stability and accuracy of the method, and also features a number of computations with unstructured grids and non-standard geometries.

  9. Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    van de Walle, A.; Naets, F.; Desmet, W.

    2018-05-01

    This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. It also allows the full sound field of the system to be recreated, which is very difficult or impossible to achieve through classical measurements.
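    The model/measurement fusion step described above can be illustrated with a minimal scalar Kalman filter standing in for the reduced-order vibro-acoustic model. This is an illustrative sketch; the one-state dynamics and noise levels are invented for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-state system standing in for the reduced vibro-acoustic model:
# x[k] = a*x[k-1] + u[k-1] + w (process noise), measured as z[k] = x[k] + v.
a, q, r = 0.95, 0.01, 0.5            # dynamics, process var., measurement var.
steps = 500
u = 0.1 * np.sin(0.05 * np.arange(steps))    # known excitation input
x_true = np.zeros(steps)
for k in range(1, steps):
    x_true[k] = a * x_true[k - 1] + u[k - 1] + rng.normal(0.0, np.sqrt(q))
z = x_true + rng.normal(0.0, np.sqrt(r), steps)  # noisy "real microphone"

# Kalman filter: predict with the model, then correct with the measurement.
xh, P = 0.0, 1.0
est = np.zeros(steps)
for k in range(steps):
    if k > 0:
        xh = a * xh + u[k - 1]          # predict using model and known input
        P = a * a * P + q
    K = P / (P + r)                     # Kalman gain
    xh = xh + K * (z[k] - xh)           # correct with the measurement
    P = (1.0 - K) * P
    est[k] = xh

mse_raw = np.mean((z - x_true) ** 2)    # error of the raw measurement
mse_kf = np.mean((est - x_true) ** 2)   # filtered "virtual sensor" error
```

    The filtered estimate has a much smaller error than the raw measurement, which is the mechanism behind the "vastly improved accuracy" reported above; the real system simply carries many states instead of one.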

  10. Optimization of the scan protocols for CT-based material extraction in small animal PET/CT studies

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Ching; Yu, Jhih-An; Yang, Bang-Hung; Wu, Tung-Hsin

    2013-12-01

    We investigated the effects of scan protocols on CT-based material extraction to minimize radiation dose while maintaining sufficient image information in small animal studies. The phantom simulation experiments were performed with the high-dose (HD), medium-dose (MD) and low-dose (LD) protocols at 50, 70 and 80 kVp with varying mAs. The reconstructed CT images were segmented based on Hounsfield unit (HU)-physical density (ρ) calibration curves and on a dual-energy CT-based (DECT) method. Compared to the HU-ρ method performed on CT images acquired with the 80 kVp HD protocol, a 2-fold improvement in segmentation accuracy and a 7.5-fold reduction in radiation dose were observed when the DECT method was performed on CT images acquired with the 50/80 kVp LD protocol, showing the possibility of reducing radiation dose while achieving high segmentation accuracy.
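    The dual-energy idea can be sketched as a per-voxel linear system: if attenuation measured at two tube voltages is modeled as a linear mix of two basis materials, the material fractions follow by matrix inversion. The attenuation coefficients below are invented for illustration, not calibrated values:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) for two basis
# materials at two tube voltages (rows: energies, columns: materials).
mu = np.array([[0.40, 0.60],    # at 50 kVp: [water-like, bone-like]
               [0.25, 0.30]])   # at 80 kVp
frac_true = np.array([0.7, 0.3])        # true material fractions in a voxel
meas = mu @ frac_true                   # the two measured attenuations
frac = np.linalg.solve(mu, meas)        # recovered material fractions
```

    This is why two acquisitions at well-separated kVp settings can out-segment a single higher-dose scan: the second energy turns an ambiguous HU value into a solvable system.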

  11. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
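    Overlap-based merging of a read pair can be sketched as follows. This is a toy merger, not BBMerge's algorithm: it ignores base quality scores and uses a plain mismatch threshold, and the fragment sequence is invented:

```python
def revcomp(s):
    """Reverse-complement a DNA string."""
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def merge_pair(r1, r2, min_overlap=12, max_mismatch=1):
    """Toy overlap merger: reverse-complement r2, slide it along r1, and
    accept the longest overlap with at most `max_mismatch` mismatches."""
    r2 = revcomp(r2)
    for ov in range(min(len(r1), len(r2)), min_overlap - 1, -1):
        mismatches = sum(a != b for a, b in zip(r1[-ov:], r2[:ov]))
        if mismatches <= max_mismatch:
            return r1 + r2[ov:]
    return None     # no confident overlap: leave the pair unmerged

frag = "ACGTACGGTTCAGCTAAGGCTAGCTTAGCAT"   # the true DNA fragment
r1 = frag[:20]                   # forward read from one end
r2 = revcomp(frag[6:])           # reverse read sequenced from the other end
merged = merge_pair(r1, r2)      # reconstructs the original fragment
```

    Real mergers such as BBMerge additionally weight mismatches by base quality and, as noted above, can bridge non-overlapping pairs using k-mer frequency information.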

  12. BBMerge – Accurate paired shotgun read merging via overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.

  13. The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction

    NASA Technical Reports Server (NTRS)

    Niblack, W.

    1981-01-01

    The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.

  14. Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass

    NASA Astrophysics Data System (ADS)

    Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.

    2018-05-01

    Additive manufacturing by powder bed fusion processes can be utilized to create bulk metallic glass, as the process yields considerably high cooling rates. However, there is a risk that material deposited in previous layers may devitrify, i.e., crystallize, when it is reheated. It is therefore advantageous to simulate the process in order to fully comprehend it and to design it so that this risk is avoided. However, a detailed simulation is computationally demanding. It is necessary to increase the computational speed while maintaining the accuracy of the computed temperature field in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. The evaluated approaches are found to reduce computation time substantially while accurately predicting the temperature history.

  15. Design and evaluation of an optical fine-pointing control system for telescopes utilizing a digital star sensor

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Romanczyk, K. C.

    1973-01-01

    One of the most significant problems associated with the development of large orbiting astronomical telescopes is maintaining the very precise pointing accuracy required. A proposed solution to this problem utilizes dual-level pointing control: the primary control system keeps the telescope structure attitude-stabilized so that the target remains within the field of view, while fine pointing is achieved optically. To demonstrate the feasibility of optically stabilizing the star images to the desired accuracy, a regulating system has been designed and evaluated. The control system utilizes a digital star sensor and an optical star-image motion compensator, both of which have been developed for this application. These components have been analyzed mathematically, analytical models have been developed, and hardware has been built and tested.

  16. Accuracy of electronic implant torque controllers following time in clinical service.

    PubMed

    Mitrani, R; Nicholls, J I; Phillips, K M; Ma, T

    2001-01-01

    Tightening of the screws in implant-supported restorations has been reported to be problematic, in that if the applied torque is too low, screw loosening occurs; if the torque is too high, screw fracture can take place. Thus, accuracy of the torque driver is of the utmost importance. This study evaluated 4 new electronic torque drivers (controls) and 10 test electronic torque drivers which had been in clinical service for a minimum of 5 years. Torque values of the test drivers were measured and compared with the control values using a 1-way analysis of variance. Torque delivery accuracy was measured using a technique that simulated the clinical situation. In vivo, the torque driver turns the screw until the selected tightening torque is reached. In this laboratory experiment, an implant, along with an attached abutment and abutment gold screw, was held firmly in a Tohnichi torque gauge. Calibration accuracy for the Tohnichi is +/- 3% of the scale value. During torque measurement, the gold screw turned a minimum of 180 degrees before contact was made between the screw and abutment. Three torque values (10, 20, and 32 N-cm) were evaluated at both high- and low-speed settings. The recorded torque measurements indicated that the 10 test electronic torque drivers maintained a torque delivery accuracy equivalent to that of the 4 new (unused) units. Judging from the torque output values obtained from the 10 test units, the accuracy of the electronic torque drivers did not change significantly over the 5-year period of clinical service.

  17. Phase noise cancellation in polarisation-maintaining fibre links

    NASA Astrophysics Data System (ADS)

    Rauf, B.; Vélez López, M. C.; Thoumany, P.; Pizzocaro, M.; Calonico, D.

    2018-03-01

    The distribution of ultra-narrow-linewidth laser radiation is an integral part of many challenging metrological applications. Changes in the optical path length induced by environmental disturbances compromise the stability and accuracy of the optical fibre networks distributing the laser light and call for active phase noise cancellation. Here we present a laboratory-scale optical fibre network (at 578 nm) featuring all polarisation-maintaining fibres, in a setup with low optical powers available and with tracking voltage-controlled oscillators implemented. The stability and accuracy of this system reach performance levels below 1 × 10^-19 after 10 000 s of averaging.

  18. A compact, fast UV photometer for measurement of ozone from research aircraft

    NASA Astrophysics Data System (ADS)

    Gao, R. S.; Ballard, J.; Watts, L. A.; Thornberry, T. D.; Ciciora, S. J.; McLaughlin, R. J.; Fahey, D. W.

    2012-09-01

    In situ measurements of atmospheric ozone (O3) are performed routinely from many research aircraft platforms. The most common technique depends on the strong absorption of ultraviolet (UV) light by ozone. As atmospheric science advances to the widespread use of unmanned aircraft systems (UASs), there is an increasing requirement for minimizing instrument space, weight, and power while maintaining instrument accuracy, precision and time response. The design and use of a new, dual-beam, UV photometer instrument for in situ O3 measurements is described. A polarization optical-isolator configuration is utilized to fold the UV beam inside the absorption cells, yielding a 60-cm absorption length with a 30-cm cell. The instrument has a fast sampling rate (2 Hz at <200 hPa, 1 Hz at 200-500 hPa, and 0.5 Hz at ≥500 hPa), high accuracy (3%, excluding operation in the 300-450 hPa range, where the accuracy may be degraded to about 5%), and excellent precision (1.1 × 10^10 O3 molecules cm^-3 at 2 Hz, which corresponds to 3.0 ppb at 200 K and 100 hPa, or 0.41 ppb at 273 K and 1013 hPa). The size (36 l), weight (18 kg), and power (50-200 W) make the instrument suitable for many UASs and other airborne platforms. Inlet and exhaust configurations are also described for ambient sampling in the troposphere and lower stratosphere (1000-50 hPa) that control the sample flow rate to maximize time response while minimizing loss of precision due to induced turbulence in the sample cell. In-flight and laboratory intercomparisons with existing O3 instruments show that measurement accuracy is maintained in flight.
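    UV absorption photometry retrieves the ozone number density from the Beer-Lambert law using the ratio of the absorption-beam and reference-beam intensities. A minimal sketch follows; the intensity values are invented, the cross-section is the commonly used O3 value at the 253.7 nm mercury line, and the 60 cm path matches the folded cell described above:

```python
import math

# Beer-Lambert retrieval used by dual-beam UV ozone photometers:
# n_O3 = ln(I0 / I) / (sigma * L)
sigma = 1.147e-17          # O3 absorption cross-section at 253.7 nm, cm^2
L = 60.0                   # folded absorption path length, cm
I0, I = 1.000, 0.993       # reference- and absorption-beam intensities
n_o3 = math.log(I0 / I) / (sigma * L)   # O3 number density, molecules cm^-3
```

    Folding the beam doubles L inside the same 30-cm cell, which doubles the measured absorption per molecule; that is the sensitivity gain behind the compact design.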

  19. Preform For Producing An Optical Fiber And Method Therefor

    DOEpatents

    Kliner, Dahv A. V.; Koplow, Jeffery P.

    2004-08-10

    The present invention provides a simple method for fabricating fiber-optic glass preforms having complex refractive index configurations and/or dopant distributions in a radial direction with a high degree of accuracy and precision. The method teaches bundling together a plurality of glass rods of specific physical, chemical, or optical properties and wherein the rod bundle is fused in a manner that maintains the cross-sectional composition and refractive-index profiles established by the position of the rods.

  20. Preform For Producing An Optical Fiber And Method Therefor

    DOEpatents

    Kliner, Dahv A. V.; Koplow, Jeffery P.

    2005-04-19

    The present invention provides a simple method for fabricating fiber-optic glass preforms having complex refractive index configurations and/or dopant distributions in a radial direction with a high degree of accuracy and precision. The method teaches bundling together a plurality of glass rods of specific physical, chemical, or optical properties and wherein the rod bundle is fused in a manner that maintains the cross-sectional composition and refractive-index profiles established by the position of the rods.

  1. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, shared-memory multiprocessors (SMPs) with multi-core processors have become an attractive platform to simulate large-scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  2. Statistical comparison of a hybrid approach with approximate and exact inference models for Fusion 2+

    NASA Astrophysics Data System (ADS)

    Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew

    2007-04-01

    One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research has developed the theoretical basis and benefits of the hybrid approach. However, a concrete experimental comparison of the hybrid framework with traditional fusion methods, demonstrating and quantifying this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical analysis comparing the accuracy and performance of hybrid network theory with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed to quantify the benefit of hybrid inference over other fusion tools.

  3. Relative Navigation of Formation Flying Satellites

    NASA Technical Reports Server (NTRS)

    Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, Russell; Gramling, Cheryl; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Guidance, Navigation, and Control Center (GNCC) at Goddard Space Flight Center (GSFC) has successfully developed high-accuracy autonomous satellite navigation systems using the National Aeronautics and Space Administration's (NASA's) space and ground communications systems and the Global Positioning System (GPS). In addition, an autonomous navigation system that uses celestial object sensor measurements is currently under development and has been successfully tested using real Sun and Earth horizon measurements. The GNCC has developed advanced spacecraft systems that provide autonomous navigation and control of formation flyers in near-Earth, high-Earth, and libration point orbits. To support this effort, the GNCC is assessing the relative navigation accuracy achievable for proposed formations using GPS, intersatellite crosslink, ground-to-satellite Doppler, and celestial object sensor measurements. This paper evaluates the performance of these relative navigation approaches for three proposed missions with two or more vehicles maintaining relatively tight formations. High-fidelity simulations were performed to quantify the absolute and relative navigation accuracy as a function of navigation algorithm and measurement type. Realistically simulated measurements were processed using the extended Kalman filter implemented in the GPS Enhanced Onboard Navigation System (GEONS) flight software developed by the GSFC GNCC. Solutions obtained by simultaneously estimating all satellites in the formation were compared with the results obtained using a simpler approach based on differencing independently estimated state vectors.

  4. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper-layer applications with lower detection overhead. In P2P systems, the complexity of the network and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed, and on this basis an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
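    A Chen-style adaptive heartbeat detector, the kind of model the abstract builds on, can be sketched as follows. This is a simplified illustration, not the B-AFD algorithm; the safety margin and window size are arbitrary, and the retransmission mechanism B-AFD models is omitted:

```python
from collections import deque

class AdaptiveFailureDetector:
    """Chen-style heartbeat detector (simplified): estimate the next
    expected arrival from a sliding window of past heartbeat times and
    add a safety margin alpha trading detection time for accuracy."""
    def __init__(self, alpha=0.2, window=100):
        self.alpha = alpha
        self.arrivals = deque(maxlen=window)

    def heartbeat(self, t):
        self.arrivals.append(t)

    def suspect(self, now):
        if len(self.arrivals) < 2:
            return False                      # not enough history yet
        times = list(self.arrivals)
        mean_gap = (times[-1] - times[0]) / (len(times) - 1)
        expected = times[-1] + mean_gap       # estimated next arrival
        return now > expected + self.alpha    # past the freshness point?

fd = AdaptiveFailureDetector()
for t in range(10):              # heartbeats arriving every 1.0 s
    fd.heartbeat(float(t))
alive = fd.suspect(9.5)          # before the freshness point -> not suspected
dead = fd.suspect(12.0)          # well past it -> suspected
```

    Increasing alpha lowers the false-suspicion rate at the cost of slower detection; that detection-time/accuracy trade-off is exactly the QoS dimension the B-AFD model quantifies.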

  5. Comparison between the Juno Earth flyby magnetic measurements and the magnetometer package on the IRIS solar observatory

    NASA Astrophysics Data System (ADS)

    Merayo, J. M.; Connerney, J. E.; Joergensen, J. L.; Dougherty, B.

    2013-12-01

    In October 2013, NASA's Juno New Frontiers spacecraft will perform an Earth flyby gravity assist. During this flyby, Juno will reach an altitude of about 600 km and the magnetometer experiment will measure the magnetic field with very high precision. In June 2013, NASA's IRIS solar observatory was successfully launched. IRIS uses a very fine guiding telescope to maintain high pointing accuracy, assisted by a very high accuracy star tracker and a science-grade vector magnetometer. IRIS was placed into a Sun-synchronous orbit at about 600 km altitude by a Pegasus rocket from Vandenberg Air Force Base in California. This platform will also allow measurements of the Earth's magnetic field with very high precision, since it carries instrumentation similar to that on the Swarm satellites (star trackers and a magnetometer). The data recorded by the Juno magnetic experiment and the IRIS magnetometer will provide a very exciting opportunity for comparing the two experiments as well as for determining current structures during the flyby.

  6. Numerical method for high accuracy index of refraction estimation for spectro-angular surface plasmon resonance systems.

    PubMed

    Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G

    2008-11-24

    An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method yields a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.

  7. Implementing Photodissociation in an Orbitrap Mass Spectrometer

    PubMed Central

    Vasicek, Lisa A.; Ledvina, Aaron R.; Shaw, Jared; Griep-Raming, Jens; Westphall, Michael S.; Coon, Joshua J.; Brodbelt, Jennifer S.

    2011-01-01

    We modified a dual pressure linear ion trap Orbitrap to permit infrared multiphoton dissociation (IRMPD) in the higher energy collisional dissociation (HCD) cell for high resolution analysis. A number of parameters, including the pressures of the C-trap and HCD cell, the radio frequency (rf) amplitude applied to the C-trap, and the HCD DC offset, were evaluated to optimize IRMPD efficiency and maintain a high signal-to-noise ratio. IRMPD was utilized for characterization of phosphopeptides, supercharged peptides, and N-terminal modified peptides, as well as for top-down protein analysis. The high resolution and high mass accuracy capabilities of the Orbitrap analyzer facilitated confident assignment of product ions arising from IRMPD. PMID:21953052

  8. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    PubMed

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
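
    The abstract's calculation reduces to mapping the inserted needle length (total length minus the measured residual length) through the pre-established coordinate transformation. A minimal sketch, with hypothetical calibration constants standing in for the paper's one-time transformation factor:

```python
def needle_tip_position_mm(total_length_mm, residual_length_mm,
                           scale=1.0, offset_mm=0.0):
    """Tip position along the insertion axis in TRUS image coordinates.

    `scale` and `offset_mm` are placeholders for the off-line,
    pre-established transformation factor described in the abstract;
    the values here are illustrative, not from the paper."""
    inserted_mm = total_length_mm - residual_length_mm
    return scale * inserted_mm + offset_mm
```

Because the calibration is done once against the probe and template holder, only the residual-length measurement is needed per needle at treatment time.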

  9. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulating dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  10. Mining HIV protease cleavage data using genetic programming with a sum-product function.

    PubMed

    Yang, Zheng Rong; Dalby, Andrew R; Qiu, Jing

    2004-12-12

    In order to design effective HIV inhibitors, studying and understanding the mechanism of HIV protease cleavage specificity is critical. Various methods have been developed to explore the specificity of HIV protease cleavage activity. However, success in both extracting discriminant rules and maintaining high prediction accuracy remains challenging. An earlier study successfully employed genetic programming with a min-max scoring function to extract discriminant rules. However, the decision ultimately degenerates to a single residue, making further improvement of the prediction accuracy difficult. The challenge of revising the min-max scoring function to improve the prediction accuracy motivated this study. This paper designs a new scoring function, called a sum-product function, for extracting HIV protease cleavage discriminant rules using genetic programming methods. The experiments show that the new scoring function is superior to the min-max scoring function. The software package can be obtained by request to Dr Zheng Rong Yang.
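
    The abstract does not give the exact forms of the two scoring functions, but the stated motivation can be illustrated: a min-max aggregation is dominated by a single extreme residue position, whereas a sum-product aggregation lets every position contribute to the rule score. An illustrative sketch only, under that assumption:

```python
import math


def min_max_score(position_scores):
    """Min-max aggregation: the rule score is set by one extreme
    position, so the other residues stop influencing the decision."""
    return min(max(scores) for scores in position_scores)


def sum_product_score(position_scores):
    """Sum-product aggregation: every position contributes a product
    term, so no single residue dominates the discriminant rule."""
    return sum(math.prod(scores) for scores in position_scores)
```

With `position_scores = [[0.9, 0.2], [0.8, 0.1]]`, the min-max score depends only on the weaker position, while the sum-product score changes if any individual match score changes.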

  11. Positioning accuracy in a registration-free CT-based navigation system

    NASA Astrophysics Data System (ADS)

    Brandenberger, D.; Birkfellner, W.; Baumann, B.; Messmer, P.; Huegli, R. W.; Regazzoni, P.; Jacob, A. L.

    2007-12-01

    In order to maintain the overall navigation accuracy established by a calibration procedure in our CT-based registration-free navigation system, the CT scanner has to repeatedly generate identical volume images of a target at the same coordinates. We tested the positioning accuracy of the prototype of an advanced workplace for image-guided surgery (AWIGS), which features an operating table capable of direct patient transfer into a CT scanner. Volume images (N = 154) of a specialized phantom were analysed for translational shifting after various table translations. Variables included added weight and phantom position on the table. The navigation system's calibration accuracy was determined (bias 2.1 mm, precision ± 0.7 mm, N = 12). In repeated use, a bias of 3.0 mm and a precision of ± 0.9 mm (N = 10) were maintainable. Instances of translational image shifting were related to the table-to-CT-scanner docking mechanism. A distance scaling error when altering the table's height was detected. Initial prototype problems observed in our study that caused systematic errors were resolved by repeated system calibrations between interventions. We conclude that the accuracy achieved is sufficient for a wide range of clinical applications in surgery and interventional radiology.

  12. The space station: Human factors and productivity

    NASA Technical Reports Server (NTRS)

    Gillan, D. J.; Burns, M. J.; Nicodemus, C. L.; Smith, R. L.

    1986-01-01

    Human factors researchers and engineers are making inputs into the early stages of the design of the Space Station to improve both the quality of life and work on-orbit. Effective integration of human factors information related to various Intravehicular Activity (IVA), Extravehicular Activity (EVA), and telerobotics systems during Space Station design will result in increased productivity, increased flexibility of the Space Station's systems, lower cost of operations, improved reliability, and increased safety for the crew onboard the Space Station. The major features of productivity examined include the cognitive and physical effort involved in work, the accuracy of worker output and the ability to maintain performance at a high level of accuracy, the speed and temporal efficiency with which a worker performs, crewmembers' satisfaction with their work environment, and the relation between performance and cost.

  13. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    PubMed

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original coupling intensity distribution, TCI and ISL can be made self-adaptive to determine the contributing coupling points inside polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with a Thorlabs commercial instrument are also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.

  14. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1993-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion resulting from the mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to the geometry and variable interpolations used by previous Lagrangian methods. The present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1-D problems.

  15. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    PubMed

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R

    2010-09-29

    Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations. 
These findings should be taken into account in future studies on speed-accuracy trade-offs.

  16. Method of bundling rods so as to form an optical fiber preform

    DOEpatents

    Kliner, Dahv A. V. [San Ramon, CA]; Koplow, Jeffery P. [Washington, DC]

    2004-03-30

    The present invention provides a simple method for fabricating fiber-optic glass preforms having complex refractive index configurations and/or dopant distributions in a radial direction with a high degree of accuracy and precision. The method teaches bundling together a plurality of glass rods of specific physical, chemical, or optical properties and wherein the rod bundle is fused in a manner that maintains the cross-sectional composition and refractive-index profiles established by the position of the rods.

  17. Tidal disruption of viscous bodies

    NASA Technical Reports Server (NTRS)

    Sridhar, S.; Tremaine, S.

    1992-01-01

    Tidal disruptions are investigated in viscous-fluid planetesimals whose radius is small relative to the distance of closest (parabolic-orbit) approach to a planet. The planetesimal surface is in these conditions always ellipsoidal, facilitating treatment by coupled ODEs which are solvable with high accuracy. While the disrupted planetesimals evolve into needlelike ellipsoids, their density does not decrease. The validity of viscous fluid treatment holds for solid (ice or rock) planetesimals in cases where tidal stresses are greater than material strength, but integrity is maintained by self-gravity.

  18. Electronically-Scanned Pressure Sensors

    NASA Technical Reports Server (NTRS)

    Coe, C. F.; Parra, G. T.; Kauffman, R. C.

    1984-01-01

    Sensors not pneumatically switched. Electronic pressure-transducer scanning system constructed in modular form. Pressure transducer modules and analog to digital converter module small enough to fit within cavities of average-sized wind-tunnel models. All switching done electronically. Temperature controlled environment maintained within sensor modules so accuracy maintained while ambient temperature varies.

  19. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    PubMed

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. 
Increasing the error rate during ribosomal decoding induced competence, while decreasing the error rate repressed competence. This pattern of regulation was promoted by the HtrA protease, which selectively repressed competence when translational fidelity was high but not when accuracy was low. Our findings demonstrate that this organism is able to monitor the accuracy of information used for protein biosynthesis and suggest that errors trigger a response addressing both the immediate challenge of misfolded proteins and, through genetic exchange, upstream coding errors that may underlie protein folding defects. This pathway may represent an evolutionary strategy for maintaining the coding integrity of the genome.

  20. Position Control of Tendon-Driven Fingers

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E.; Platt, Robert, Jr.; Hargrave, B.; Pementer, Frank

    2011-01-01

    Conventionally, tendon-driven manipulators implement some force control scheme based on tension feedback. This feedback allows the system to ensure that the tendons are maintained taut, with proper levels of tensioning, at all times. Occasionally, whether due to the lack of tension feedback or the inability to implement sufficiently high stiffnesses, a position control scheme is needed. This work compares three position controllers for tendon-driven manipulators. A new controller is introduced that achieves the best overall performance with regard to speed, accuracy, and transient behavior. To compensate for the lack of tension feedback, the controller nominally maintains the internal tension on the tendons by implementing a two-tier architecture with a range-space constraint. These control laws are validated experimentally on the Robonaut-2 humanoid hand.

  1. Automated identification of drug and food allergies entered using non-standard terminology.

    PubMed

    Epstein, Richard H; St Jacques, Paul; Stockin, Michael; Rothman, Brian; Ehrenfeld, Jesse M; Denny, Joshua C

    2013-01-01

    An accurate computable representation of food and drug allergy is essential for safe healthcare. Our goal was to develop a high-performance, easily maintained algorithm to identify medication and food allergies and sensitivities from unstructured allergy entries in electronic health record (EHR) systems. An algorithm was developed in Transact-SQL to identify ingredients to which patients had allergies in a perioperative information management system. The algorithm used RxNorm and natural language processing techniques developed on a training set of 24 599 entries from 9445 records. Accuracy, specificity, precision, recall, and F-measure were determined for the training dataset and repeated for the testing dataset (24 857 entries from 9430 records). Accuracy, precision, recall, and F-measure for medication allergy matches were all above 98% in the training dataset and above 97% in the testing dataset for all allergy entries. Corresponding values for food allergy matches were above 97% and above 93%, respectively. Specificities of the algorithm were 90.3% and 85.0% for drug matches and 100% and 88.9% for food matches in the training and testing datasets, respectively. The algorithm had high performance for identification of medication and food allergies. Maintenance is practical, as updates are managed through upload of new RxNorm versions and additions to companion database tables. However, direct entry of codified allergy information by providers (through autocompleters or drop lists) is still preferred to post-hoc encoding of the data. Data tables used in the algorithm are available for download. A high-performing, easily maintained algorithm can successfully identify medication and food allergies from free text entries in EHR systems.
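
    The reported figures are the standard confusion-matrix metrics. A generic sketch (not the paper's code) of how accuracy, precision, recall, specificity, and F-measure are computed from true/false positive and negative counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity,
            "f_measure": f_measure}
```

Note that with few true negatives (as in allergy matching, where most entries contain a real allergen), specificity is the most volatile of these metrics, which is consistent with the lower specificity values reported.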

  2. Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.

    ERIC Educational Resources Information Center

    Dekle, Dawn J.

    1997-01-01

    Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

  3. A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.

    PubMed

    Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine

    2014-04-01

    In this paper an approach is proposed for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low-frequency noise measurements. The core of the system is a supercapacitor-based two-pole low-pass filter used to filter out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performance of the proposed voltage source has been confirmed both by means of SPICE simulations and by measurements on actual prototypes. Turn-on and stabilization times for the system are on the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case to reduce the statistical error in the measured spectra to an acceptable level.
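
    The quoted attenuation follows from the two-pole low-pass response: each first-order section contributes 20 dB/decade above its corner frequency. A sketch assuming two identical, buffered RC sections; the 10 mHz corner used in the example is a hypothetical value consistent with supercapacitor-scale RC products, not a figure from the paper:

```python
import math


def two_pole_attenuation_db(f_hz, fc_hz):
    """Attenuation of two cascaded, buffered first-order RC low-pass
    sections with identical corner frequency fc_hz (assumed topology)."""
    gain = 1.0 / (1.0 + (f_hz / fc_hz) ** 2)   # |H(f)| for two real poles
    return -20.0 * math.log10(gain)
```

With fc = 10 mHz, the attenuation at 100 mHz (one decade above the corner) is 20·log10(101) ≈ 40 dB, in line with the "in excess of 40 dB down to 100 mHz" figure.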

  4. A very low noise, high accuracy, programmable voltage source for low frequency noise measurements

    NASA Astrophysics Data System (ADS)

    Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine

    2014-04-01

    In this paper an approach is proposed for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low-frequency noise measurements. The core of the system is a supercapacitor-based two-pole low-pass filter used to filter out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performance of the proposed voltage source has been confirmed both by means of SPICE simulations and by measurements on actual prototypes. Turn-on and stabilization times for the system are on the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case to reduce the statistical error in the measured spectra to an acceptable level.

  5. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave-equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave-equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  6. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave-equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave-equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  7. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave-equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave-equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  8. Detection of Anomalies in Citrus Leaves Using Laser-Induced Breakdown Spectroscopy (LIBS).

    PubMed

    Sankaran, Sindhuja; Ehsani, Reza; Morgan, Kelly T

    2015-08-01

    Nutrient assessment and management are important to maintain productivity in citrus orchards. In this study, laser-induced breakdown spectroscopy (LIBS) was applied for rapid, real-time detection of citrus anomalies. Laser-induced breakdown spectroscopy spectra were collected from citrus leaves with anomalies such as diseases (Huanglongbing, citrus canker) and nutrient deficiencies (iron, manganese, magnesium, zinc), and compared with those of healthy leaves. Baseline correction, wavelet multivariate denoising, and normalization techniques were applied to the LIBS spectra before analysis. After spectral pre-processing, features were extracted using principal component analysis and classified using two models, quadratic discriminant analysis and support vector machine (SVM). The SVM resulted in a high average classification accuracy of 97.5%, with high average canker classification accuracy (96.5%). LIBS peak analysis indicated that, of the 11 peaks found in all the samples, high intensities were observed at 229.7, 247.9, 280.3, 393.5, 397.0, and 769.8 nm. Future studies using controlled experiments with variable nutrient applications are required for quantification of foliar nutrients using LIBS-based sensing.

  9. Time maintenance system for the BMDO MSX spacecraft

    NASA Technical Reports Server (NTRS)

    Hermes, Martin J.

    1994-01-01

    The Johns Hopkins University Applied Physics Laboratory (APL) is responsible for designing and implementing a clock maintenance system for the Ballistic Missile Defense Organization's (BMDO) Midcourse Space Experiment (MSX) spacecraft. The MSX spacecraft has an on-board clock that will be used to control execution of time-dependent commands and to time tag all science and housekeeping data received from the spacecraft. MSX mission objectives have dictated that this spacecraft time, UTC(MSX), maintain a required accuracy with respect to UTC(USNO) of +/- 10 ms with a +/- 1 ms desired accuracy. APL's atomic time standards and the downlinked spacecraft time were used to develop a time maintenance system that will estimate the current MSX clock time offset during an APL pass and make estimates of the clock's drift and aging using the offset estimates from many passes. Using this information, the clock's accuracy will be maintained by uplinking periodic clock correction commands. The resulting time maintenance system is a combination of offset measurement, command/telemetry, and mission planning hardware and computing assets. All assets provide necessary inputs for deciding when corrections to the MSX spacecraft clock must be made to maintain its required accuracy without inhibiting other mission objectives. The MSX time maintenance system is described as a whole and the clock offset measurement subsystem, a unique combination of precision time maintenance and measurement hardware controlled by a Macintosh computer, is detailed. Simulations show that the system estimates the MSX clock offset to less than +/- 33 microseconds.
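
    The drift and aging estimation described above amounts to fitting a low-order polynomial to the per-pass offset estimates. A minimal sketch with simulated pass data (all numbers are illustrative, not actual MSX values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-pass clock offsets: offset(t) = x0 + drift*t + 0.5*aging*t^2
t_days = np.arange(0, 30, 2.0)                      # one pass every two days
true_x0, true_drift, true_aging = 2e-3, 1e-4, 1e-6  # s, s/day, s/day^2 (illustrative)
offsets = (true_x0 + true_drift * t_days + 0.5 * true_aging * t_days**2
           + 33e-6 * rng.standard_normal(t_days.size))  # ~33 us measurement noise

# A quadratic least-squares fit recovers offset, drift, and aging terms.
half_aging, drift, x0 = np.polyfit(t_days, offsets, deg=2)
aging = 2.0 * half_aging

# Predict when the uncorrected clock would exceed the +/-10 ms requirement,
# i.e. when a correction command would have to be uplinked.
t_future = np.linspace(0, 365, 3651)
predicted = x0 + drift * t_future + 0.5 * aging * t_future**2
t_violate = t_future[np.abs(predicted) > 10e-3][0]
print(f"drift ~ {drift:.2e} s/day, aging ~ {aging:.2e} s/day^2, "
      f"10 ms bound exceeded after ~{t_violate:.0f} days")
```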

  10. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records.

    PubMed

    Peissig, Peggy L; Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries.
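
    The validation metric used here, positive predictive value (PPV), is simply TP/(TP+FP). A tiny sketch with illustrative counts (not the study's figures) shows how a multi-modal union can identify more cases while keeping PPV above 95%:

```python
def ppv(true_pos, false_pos):
    """Positive predictive value: fraction of flagged cases that are real."""
    return true_pos / (true_pos + false_pos)

# Hypothetical counts for a single-mode query vs. a multi-modal union
# (structured data + NLP + OCR); numbers are illustrative, not the study's.
single_tp, single_fp = 300, 12
multi_tp, multi_fp = 900, 40   # ~3x more cases found, similar precision

print(f"single-mode PPV: {ppv(single_tp, single_fp):.3f}")
print(f"multi-modal PPV: {ppv(multi_tp, multi_fp):.3f}")
```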

  11. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned over from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNN to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), that are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered as an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
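
    The pre-processing idea (spatial downsampling plus color quantization before transmission) can be sketched as follows. The image is random, the resize is a naive strided subsampling, and the quantization is uniform, so the reduction figure here depends on settings not taken from the paper (its ∼71% came from its own tuned parameters).

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a captured sensor image: 256x256 RGB, 8 bits per channel.
image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

def compress(img, factor=4, levels=16):
    """Downsample by striding and uniformly quantize each channel."""
    small = img[::factor, ::factor]                  # naive spatial downsampling
    step = 256 // levels
    return (small // step).astype(np.uint8)          # 'levels' values per channel

out = compress(image)
bits_in = image.size * 8
bits_out = out.size * 4                              # 16 levels -> 4 bits per value
reduction = 1 - bits_out / bits_in
print(f"data reduction: {reduction:.1%}")
```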

  12. Achieving a Linear Dose Rate Response in Pulse-Mode Silicon Photodiode Scintillation Detectors Over a Wide Range of Excitations

    NASA Astrophysics Data System (ADS)

    Carroll, Lewis

    2014-02-01

    We are developing a new dose calibrator for nuclear pharmacies that can measure radioactivity in a vial or syringe without handling it directly or removing it from its transport shield “pig”. The calibrator's detector comprises twin opposing scintillating crystals coupled to Si photodiodes and current-amplifying trans-resistance amplifiers. Such a scheme is inherently linear with respect to dose rate over a wide range of radiation intensities, but accuracy at low activity levels may be impaired, beyond the effects of meager photon statistics, by baseline fluctuation and drift inevitably present in high-gain, current-mode photodiode amplifiers. The work described here is motivated by our desire to enhance accuracy at low excitations while maintaining linearity at high excitations. Thus, we are also evaluating a novel “pulse-mode” analog signal processing scheme that employs a linear threshold discriminator to virtually eliminate baseline fluctuation and drift. We will show the results of a side-by-side comparison of current-mode versus pulse-mode signal processing schemes, including perturbing factors affecting linearity and accuracy at very low and very high excitations. Bench testing over a wide range of excitations is done using a Poisson random pulse generator plus an LED light source to simulate excitations up to ~10^6 detected counts per second without the need to handle and store large amounts of radioactive material.
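
    The bench-test idea (a Poisson random pulse generator driving a threshold discriminator) can be simulated directly. The sketch below uses illustrative pulse amplitudes well above threshold and shows that the counted rate tracks the true event rate linearly up to ~10^6 counts per second:

```python
import numpy as np

rng = np.random.default_rng(3)

def counts_per_second(rate_hz, threshold=0.5, duration=1.0):
    """Count discriminator crossings for Poisson-distributed pulses."""
    n_events = rng.poisson(rate_hz * duration)
    amplitudes = rng.uniform(1.0, 3.0, n_events)   # pulses well above threshold
    return np.count_nonzero(amplitudes > threshold)

rates = np.array([1e3, 1e4, 1e5, 1e6])             # up to ~1e6 counts per second
measured = np.array([counts_per_second(r) for r in rates], dtype=float)

# In pulse mode the measured count rate tracks the true event rate linearly,
# and the fixed threshold makes counting insensitive to slow baseline drift.
ratios = measured / rates
print("measured/true rate:", np.round(ratios, 3))
```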

  13. Kinematic-PPP using Single/Dual Frequency Observations from (GPS, GLONASS and GPS/GLONASS) Constellations for Hydrography

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf

    2018-03-01

    Global Positioning System (GPS) technology is ideally suited for inshore and offshore positioning because of its high accuracy and the short observation time required for a position fix. Precise point positioning (PPP) is a technique used for position computation with high accuracy using a single GNSS receiver. It relies on highly accurate satellite position and clock data that can be acquired from different sources such as the International GNSS Service (IGS). PPP precision varies with the positioning technique (static or kinematic), the observation type (single or dual frequency) and the duration of observations, among other factors. PPP offers accuracy comparable to differential GPS with savings in cost and time. For many years, PPP users depended on GPS (the American system), which was considered the only reliable system. GLONASS's contribution to PPP techniques was limited by its failure to maintain a full constellation. Yet, the limited GLONASS observations could be integrated into GPS-based PPP to improve availability and precision. As GLONASS reached its full constellation in early 2013, there is wide interest in PPP systems based on GLONASS only, independent of GPS. This paper investigates the performance of the kinematic PPP solution for hydrographic applications in the Nile river (Aswan, Egypt) based on GPS, GLONASS and GPS/GLONASS constellations. The study also investigates the effect of using two different observation types: single-frequency and dual-frequency observations from the tested constellations.

  14. Pyranometer station for the assessment of solar energy influx in eastern New Mexico. Final report, December 12, 1976-June 1, 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sittler, O.D.; Agogino, M.M.

    1979-05-01

    This project was undertaken to improve the data base for estimating solar energy influx in eastern New Mexico. A precision pyranometer station has been established at Eastern New Mexico University in Portales. A program of careful calibration and data management procedures is conducted to maintain high standards of precision and accuracy. Data from the first year of operation were used to upgrade insolation data of moderate accuracy which had been obtained at this site with an inexpensive pyranograph. Although not as accurate as the data expected from future years of operation of this station, these upgraded pyranograph measurements show that eastern New Mexico receives somewhat less solar energy than would be expected from published data. A detailed summary of these upgraded insolation data is included.

  15. As-built inventory of the office building with the use of terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Przyborski, Marek; Tysiąc, Paweł

    2018-01-01

    Terrestrial Laser Scanning (TLS) is an efficient tool for building inventories. Based on red-laser-beam technology, it can provide high-accuracy data with complete spatial information about a scanned object. In this article, the authors present a solution using TLS for the as-built inventory of an office building. Based on the provided data, it is possible to evaluate the correctness of built details of the building and to provide information for further construction works, for example the area needed for Styrofoam installation. The biggest problem in this research is that an error of more than 1 cm could generate costs which the contractor might struggle to cover. Given the complicated location of the construction works (city centre), maintaining the accuracy was a challenge.

  16. A second-order cell-centered Lagrangian ADER-MOOD finite volume scheme on multidimensional unstructured meshes for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri

    2018-04-01

    In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second-order accuracy on smooth flows, to maintain an essentially non-oscillatory behavior on discontinuous profiles, to ensure general robustness and the physical admissibility of the numerical solution, and to deliver precision where appropriate.
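
    The a posteriori MOOD idea (accept a high-order candidate update, detect troubled cells, and recompute only those with a robust low-order scheme) can be illustrated on a 1D scalar advection toy problem. This is a simplified analogue, not the paper's multidimensional Lagrangian scheme:

```python
import numpy as np

# 1D scalar analogue of a posteriori MOOD limiting: compute a high-order
# candidate update, flag cells that violate a local discrete maximum
# principle, and recompute only those cells with first-order upwind.
n, c = 200, 0.5                       # cells, CFL number (advection speed 1)
x = np.linspace(0, 1, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # discontinuous profile

def step(u):
    um, up = np.roll(u, 1), np.roll(u, -1)
    # Candidate: second-order Lax-Wendroff (oscillatory at discontinuities).
    cand = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
    # Detector: the candidate must stay within the local stencil min/max.
    lo = np.minimum(np.minimum(um, u), up)
    hi = np.maximum(np.maximum(um, u), up)
    bad = (cand < lo - 1e-12) | (cand > hi + 1e-12)
    # Fallback: robust first-order upwind in the flagged cells only.
    cand[bad] = u[bad] - c * (u[bad] - um[bad])
    return cand

for _ in range(100):
    u = step(u)
print(f"min={u.min():.3f} max={u.max():.3f}")   # essentially non-oscillatory
```

    Without the detector-and-fallback step, Lax-Wendroff alone would overshoot well outside [0, 1] at the jumps.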

  17. Uprated fine guidance sensor study

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Future orbital observatories will require star trackers of extremely high precision. These sensors must maintain high pointing accuracy and pointing stability simultaneously with a low light level signal from a guide star. To establish the fine guidance sensing requirements and to evaluate candidate fine guidance sensing concepts, the Space Telescope Optical Telescope Assembly was used as the reference optical system. The requirements review was separated into three areas: Optical Telescope Assembly (OTA), fine guidance sensing, and astrometry. The results show that the detectors should be installed directly onto the focal surface presented by the optics. This would maximize throughput and minimize pointing stability error by not incorporating any additional optical elements.

  18. Digital phase-locked-loop speed sensor for accuracy improvement in analog speed controls. [feedback control and integrated circuits

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.

    1975-01-01

    A digital speed control that can be combined with a proportional analog controller is described. The stability and transient response of the analog controller were retained and combined with the long-term accuracy of a crystal-controlled integral controller. A relatively simple circuit was developed by using phase-locked-loop techniques and total error storage. The integral digital controller will maintain speed control accuracy equal to that of the crystal reference oscillator.

  19. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
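
    The approximation-level claim (poor point-wise accuracy at a discontinuity, yet highly accurate moments against smooth functions) is easy to verify numerically for a truncated Fourier series of a square wave:

```python
import numpy as np

M = 2048
x = 2 * np.pi * np.arange(M) / M
f = np.where(x < np.pi, 1.0, -1.0)           # discontinuous "solution"

# Keep only the lowest N Fourier modes (the spectral approximation).
N = 64
fhat = np.fft.fft(f)
fhat[N:-N] = 0.0                             # truncate high modes
fN = np.real(np.fft.ifft(fhat))

# Point-wise accuracy is poor near the jump (Gibbs phenomenon)...
pointwise_err = np.max(np.abs(fN - f))

# ...but the moment against a smooth periodic function is very accurate.
g = np.sin(x)
moment_exact = 4.0                           # integral of f(x)*sin(x) over [0, 2*pi]
moment_spectral = np.sum(fN * g) * (2 * np.pi / M)
print(f"max pointwise error: {pointwise_err:.3f}")
print(f"moment error: {abs(moment_spectral - moment_exact):.2e}")
```

    The point-wise error stays O(1) at the jump no matter how many modes are kept, while the moment error is tiny, which is the contrast the note builds on.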

  20. Reliability over time of EEG-based mental workload evaluation during Air Traffic Management (ATM) tasks.

    PubMed

    Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Graziani, Ilenia; Imbert, Jean-Paul; Granger, Geraud; Benhacene, Railene; Terenzi, Michela; Pozzi, Simone; Babiloni, Fabio

    2015-08-01

    Machine-learning approaches for mental workload (MW) estimation from the user's brain activity have gone through a rapid expansion in the last decades. In fact, these techniques now make it possible to measure MW with a high time resolution (e.g. a few seconds). Despite such advancements, one of the outstanding problems of these techniques is their ability to maintain high reliability over time (e.g. high classification accuracy even across consecutive days) without performing any recalibration procedure. Such a characteristic is highly desirable in real-world applications, in which human operators could use the approach without undergoing daily training of the device. In this work, we report that if a simple classifier is calibrated with a low number of brain spectral features, among those strictly related to MW (i.e. frontal and occipital theta and parietal alpha rhythms), those features will make the classifier performance stable over time. In other words, the discrimination accuracy achieved by the classifier will not degrade significantly across different days (i.e. up to one week). The methodology has been tested on twelve Air Traffic Controller (ATCO) trainees while performing different Air Traffic Management (ATM) scenarios under three different difficulty levels.

  1. Audit of accuracy of clinical coding in oral surgery.

    PubMed

    Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R

    2014-10-01

    We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for health resource groupings (HRGs). The financial impact of coding Subjectivity, Variability and Error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact of this was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness by clinicians of coding issues, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by the use of an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.

  2. New BRDF Model for Desert and Gobi Using Equivalent Mirror Plane Method, Establishment and Validation

    NASA Astrophysics Data System (ADS)

    Li, Y.; Rong, Z.

    2017-12-01

    The surface Bidirectional Reflectance Distribution Function (BRDF) is a key parameter that affects the vicarious calibration accuracy of visible-channel remote sensing instruments. In the past 30 years, many studies have been made and a variety of models have been established. Among them, the Ross-Li model was highly approved and widely used. Unfortunately, the model is not well suited to desert and Gobi surfaces because the scattering kernel it contains requires factors such as plant height and plant spacing. A new BRDF model for surfaces without vegetation, intended mainly for remote sensing vicarious calibration, is established. It is called the Equivalent Mirror Plane (EMP) BRDF and is used to characterize the bidirectional reflectance of near-Lambertian surfaces. The accuracy of the EMP BRDF model is validated with directional reflectance data measured on the Dunhuang Gobi and compared to the Ross-Li model. Results show that the regression accuracy of the new model is 0.828, which is similar to that of the Ross-Li model (0.825). Because of its simple form (it contains only four polynomials) and simple principle (it is derived from the Fresnel reflection principle and does not include any vegetation parameters), it is more suitable for near-Lambertian surfaces such as Gobi, desert, the lunar surface and reference panels. Results also showed that the new model maintains high accuracy and stability under sparse observation, which is very important for the retrieval requirements of daily updated BRDF remote sensing products.

  3. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
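
    The core of such a correction is a regression of the lidar error against a vegetation index, applied back to the DEM. A minimal linear sketch on synthetic data (the published LEAN model's exact form may differ) looks like:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic marsh site: lidar overestimates elevation more where vegetation
# (NDVI) is denser; RTK-GPS points give the true ground elevation.
n = 118                                   # calibration points (per the power analysis)
ndvi = rng.uniform(0.2, 0.9, n)
true_z = rng.uniform(1.0, 2.0, n)         # meters (illustrative values)
lidar_z = true_z + 0.35 * ndvi + 0.03 * rng.standard_normal(n)

# LEAN-style correction: regress the lidar error on NDVI, then subtract
# the predicted error from the DEM elevations.
slope, intercept = np.polyfit(ndvi, lidar_z - true_z, deg=1)
corrected_z = lidar_z - (slope * ndvi + intercept)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE before: {rmse(lidar_z, true_z):.3f} m, "
      f"after: {rmse(corrected_z, true_z):.3f} m")
```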

  4. Dose accuracy of a durable insulin pen with memory function, before and after simulated lifetime use and under stress conditions.

    PubMed

    Xue, Ligang; Mikkelsen, Kristian Handberg

    2013-03-01

    The objective of this study was to assess the dose accuracy of NovoPen® 5 in delivering low, medium and high doses of insulin before and after simulated lifetime use. A secondary objective was to evaluate the durability of the pen and its memory function under various stress conditions designed to simulate conditions that may be encountered in everyday use of an insulin pen. All testing was conducted according to International Organization for Standardization guideline 11608-1, 2000 for pen injectors. Dose accuracy was measured for the delivery of 1 unit (U) (10 mg), 30 U (300 mg) and 60 U (600 mg) test medium in standard, cool and hot conditions and before and after simulated lifetime use. Dose accuracy was also tested after preconditioning in dry heat storage; cold storage; damp cyclical heat; shock, bump and vibration; free fall and after electrostatic charge and radiated field test. Memory function was tested under all temperature and physical conditions. NovoPen 5 maintained dosing accuracy and memory function at minimum, medium and maximum doses in standard, cool and hot conditions, stress tests and simulated lifetime use. The pens remained intact and retained dosing accuracy and a working memory function at all doses after exposure to variations in temperature and after physical challenge. NovoPen 5 was accurate at all doses tested and under various functionality tests. Its durable design ensured that the dose accuracy and memory function were retained under conditions of stress likely to be encountered in everyday use.

  5. Space Domain Awareness

    DTIC Science & Technology

    2012-09-01

    …the Space Surveillance Network has been tracking orbital objects and maintaining a catalog that allows space operators to safely operate satellites… Orbits can be propagated (forward or backward) in time, but the accuracy degrades as the amount of propagation time increases. Thus, the need to maintain a…

  6. The Impact of Moderate and High Intensity Total Body Fatigue on Passing Accuracy in Expert and Novice Basketball Players

    PubMed Central

    Lyons, Mark; Al-Nakeeb, Yahya; Nevill, Alan

    2006-01-01

    Despite the acknowledged importance of fatigue on performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate and high intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate intensity and high intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F(2,36) = 5.252, p = 0.01) level of fatigue by level of skill interaction. On examination of the mean scores it is clear that following high intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally however, the detrimental impact of fatigue on passing performance is not as steep in the expert players compared to the novice players. The results suggest that expert or skilled players are better able to cope with both moderate and high intensity fatigue conditions and maintain a higher level of performance when compared to novice players. The findings of this research, therefore, suggest the need for trainers and conditioning coaches in basketball to include moderate, but particularly high intensity exercise into their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play.
Key Points Aim: to investigate the effect of moderate and high intensity total body fatigue on basketball-passing accuracy in expert and novice basketball players. Fatigue intensity was set as a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant level of fatigue by level of skill interaction. Despite a significant detriment in passing-performance in both novice and expert players following high intensity total body fatigue, this detriment was not as steep in the expert players when compared to the novice players PMID:24259994

  7. LINE-ABOVE-GROUND ATTENUATOR

    DOEpatents

    Wilds, R.B.; Ames, J.R.

    1957-09-24

    The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of line-above-ground-plane type transmission line, a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line, and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control which heretofore has been difficult to achieve.

  8. DSCOVR Satellite Deploy & Light Test

    NASA Image and Video Library

    2014-11-24

    Workers deploy the solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  9. DSCOVR Satellite Deploy & Light Test

    NASA Image and Video Library

    2014-11-24

    The solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, are unfurled in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  10. Improving Prediction Accuracy of “Central Line-Associated Blood Stream Infections” Using Data Mining Models

    PubMed Central

    Noaman, Amin Y.; Jamjoom, Arwa; Al-Abdullah, Nabeela; Nasir, Mahreen; Ali, Anser G.

    2017-01-01

    Prediction of nosocomial infections among patients is an important part of clinical surveillance programs to enable the related personnel to take preventive actions in advance. Designing a clinical surveillance program with the capability of predicting nosocomial infections is a challenging task due to several reasons, including the high dimensionality of medical data, heterogeneous data representation, and the special knowledge required to extract patterns for prediction. In this paper, we present details of six data mining methods implemented using the cross-industry standard process for data mining to predict central line-associated blood stream infections. For our study, we selected datasets of healthcare-associated infections from the US National Healthcare Safety Network and consumer survey data from the Hospital Consumer Assessment of Healthcare Providers and Systems. Our experiments show that central line-associated blood stream infections (CLABSIs) can be successfully predicted using the AdaBoost method with an accuracy of up to 89.7%. This will help in implementing effective clinical surveillance programs for infection control, as well as improving the detection accuracy of CLABSIs. It also reduces the cost of patients' hospital stays and maintains patient safety. PMID:29085836
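
    AdaBoost itself is compact enough to sketch from scratch: each round picks the best weak learner (here a decision stump), reweights the misclassified samples, and combines the learners by their vote weights. The data below are synthetic stand-ins, not NHSN records:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for a clinical dataset: two numeric risk features, binary label.
n = 400
X = rng.standard_normal((n, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # oblique decision rule
flip = rng.random(n) < 0.05
y[flip] *= -1                                       # 5% label noise

def best_stump(X, y, w):
    """Exhaustively pick the weighted-error-minimizing threshold stump."""
    best = (None, None, None, np.inf)               # feature, thresh, sign, err
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, t, s, err)
    return best

# AdaBoost: upweight misclassified samples, combine stumps by vote weight.
w = np.full(n, 1.0 / n)
stumps = []
for _ in range(20):
    j, t, s, err = best_stump(X, y, w)
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = np.where(X[:, j] > t, s, -s)
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    stumps.append((j, t, s, alpha))

score = sum(a * np.where(X[:, j] > t, s, -s) for j, t, s, a in stumps)
accuracy = np.mean(np.sign(score) == y)
print(f"training accuracy: {accuracy:.3f}")
```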

  11. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. Stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
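
    The communication problem being addressed can be illustrated with a toy 1D heat-equation solver in which neighbor ("halo") values randomly lag by one time step. A standard explicit scheme stays stable under small delays but loses accuracy; repairing exactly this loss is what AT schemes are designed for. The sketch below shows only the problem setup, not the AT schemes themselves:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D heat equation u_t = u_xx, explicit scheme, periodic domain, with each
# neighbor value stale (one step old) with probability 0.2, mimicking
# delayed inter-processor communication.
n, r, steps = 128, 0.25, 400            # grid points, dt/dx^2, time steps
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_sync = np.sin(x)
u_async, u_prev = u_sync.copy(), u_sync.copy()

for _ in range(steps):
    # Synchronous update: always uses current neighbor values.
    u_sync = u_sync + r * (np.roll(u_sync, 1) - 2 * u_sync + np.roll(u_sync, -1))
    # Asynchronous update: each neighbor value may be one step stale.
    stale = rng.random(n) < 0.2
    left = np.where(stale, np.roll(u_prev, 1), np.roll(u_async, 1))
    stale = rng.random(n) < 0.2
    right = np.where(stale, np.roll(u_prev, -1), np.roll(u_async, -1))
    u_prev = u_async.copy()
    u_async = u_async + r * (left - 2 * u_async + right)

dx = x[1] - x[0]
exact = np.exp(-steps * r * dx * dx) * np.sin(x)    # u(x, t) = e^{-t} sin(x)
err_sync = np.max(np.abs(u_sync - exact))
err_async = np.max(np.abs(u_async - exact))
print(f"sync error: {err_sync:.2e}, async error: {err_async:.2e}")
```

    The delayed-halo run remains stable but its error is orders of magnitude larger than the synchronous run's, which is the accuracy degradation the AT schemes remove.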

  12. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/-1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  13. Noradrenergic activation of the basolateral amygdala maintains hippocampus-dependent accuracy of remote memory.

    PubMed

    Atucha, Erika; Vukojevic, Vanja; Fornari, Raquel V; Ronzoni, Giacomo; Demougin, Philippe; Peter, Fabian; Atsak, Piray; Coolen, Marcel W; Papassotiropoulos, Andreas; McGaugh, James L; de Quervain, Dominique J-F; Roozendaal, Benno

    2017-08-22

    Emotional enhancement of memory by noradrenergic mechanisms is well-described, but the long-term consequences of such enhancement are poorly understood. Over time, memory traces are thought to undergo a neural reorganization, that is, a systems consolidation, during which they are, at least partly, transferred from the hippocampus to neocortical networks. This transfer is accompanied by a decrease in episodic detailedness. Here we investigated whether norepinephrine (NE) administration into the basolateral amygdala after training on an inhibitory avoidance discrimination task, comprising two distinct training contexts, alters systems consolidation dynamics to maintain episodic-like accuracy and hippocampus dependency of remote memory. At a 2-d retention test, both saline- and NE-treated rats accurately discriminated the training context in which they had received footshock. Hippocampal inactivation with muscimol before retention testing disrupted discrimination of the shock context in both treatment groups. At 28 d, saline-treated rats showed hippocampus-independent retrieval and lack of discrimination. In contrast, NE-treated rats continued to display accurate memory of the shock-context association. Hippocampal inactivation at this remote retention test blocked episodic-like accuracy and induced a general memory impairment. These findings suggest that the NE treatment altered systems consolidation dynamics by maintaining hippocampal involvement in the memory. This shift in systems consolidation was paralleled by time-regulated DNA methylation and transcriptional changes of memory-related genes, namely Reln and Pkmζ, in the hippocampus and neocortex. The findings provide evidence suggesting that consolidation of emotional memories by noradrenergic mechanisms alters systems consolidation dynamics and, as a consequence, influences the maintenance of long-term episodic-like accuracy of memory.

  14. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Energy conservation is especially critical for long simulations due to the phenomenon known as “energy drift”, in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the lengths of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
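
    The table-size/interpolation-order trade-off mentioned above can be illustrated with a small generic sketch (this is not MOIL code): a three-point quadratic interpolant on a coarse grid can match or beat a linear interpolant on a much finer grid.

```python
import numpy as np

def build_table(f, x_min, x_max, n):
    """Tabulate f on a uniform grid of n points."""
    xs = np.linspace(x_min, x_max, n)
    return xs, f(xs)

def lerp(xs, ys, x):
    """Linear interpolation between the two bracketing grid points."""
    h = xs[1] - xs[0]
    i = min(int((x - xs[0]) / h), len(xs) - 2)
    t = (x - xs[i]) / h
    return (1.0 - t) * ys[i] + t * ys[i + 1]

def quad_interp(xs, ys, x):
    """Three-point (quadratic) Lagrange interpolation around grid point i."""
    h = xs[1] - xs[0]
    i = min(max(int((x - xs[0]) / h), 1), len(xs) - 2)
    t = (x - xs[i]) / h  # local coordinate relative to the central point
    return (ys[i - 1] * t * (t - 1.0) / 2.0
            + ys[i] * (1.0 - t * t)
            + ys[i + 1] * t * (t + 1.0) / 2.0)
```

    For a smooth function the quadratic error scales as h³ versus h² for linear interpolation, which is why a coarser quadratic table can retain accuracy while reducing memory traffic on the GPU.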

  15. A Coupled Surface Nudging Scheme for use in Retrospective ...

    EPA Pesticide Factsheets

    A surface analysis nudging scheme coupling atmospheric and land surface thermodynamic parameters has been implemented into WRF v3.8 (latest version) for use with retrospective weather and climate simulations, as well as for applications in air quality, hydrology, and ecosystem modeling. This scheme is known as the flux-adjusting surface data assimilation system (FASDAS), developed by Alapaty et al. (2008). The scheme provides continuous adjustments for soil moisture and temperature (via indirect nudging) and for surface air temperature and water vapor mixing ratio (via direct nudging). The simultaneous application of indirect and direct nudging maintains greater consistency between the soil temperature–moisture and the atmospheric surface layer mass-field variables. The new method, FASDAS, consistently improved the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high-resolution regional climate predictions. This new capability has been released in WRF Version 3.8 as option grid_sfdda = 2, and it increases the accuracy of atmospheric inputs for use in air quality, hydrology, and ecosystem modeling research, improving the accuracy of the respective end-point research outcomes. IMPACT: A new method, FASDAS, was implemented into the WRF model to consistently improve the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high-resolution regional climate predictions.
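
    Direct nudging of the kind described above amounts to Newtonian relaxation: an extra tendency term draws the model state toward the analysis on a prescribed timescale. A minimal sketch (the variable names and values are illustrative, not taken from WRF or FASDAS):

```python
def nudge_step(state, analysis, dt, tau):
    """One explicit step of Newtonian relaxation (direct nudging):
    d(state)/dt = (analysis - state) / tau, integrated with forward Euler."""
    return state + dt * (analysis - state) / tau

# Toy example: a 285 K surface air temperature relaxed toward a 290 K analysis
t = 285.0
for _ in range(200):  # 200 steps of 60 s, i.e. 200 minutes of model time
    t = nudge_step(t, 290.0, dt=60.0, tau=3600.0)
```

    The relaxation timescale tau controls how strongly the model is constrained: the state decays toward the analysis like exp(-t/tau), so a longer tau leaves more room for the model's own physics.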

  16. Implementation of Motor Imagery during Specific Aerobic Training Session in Young Tennis Players

    PubMed Central

    Guillot, Aymeric; Di Rienzo, Franck; Pialoux, Vincent; Simon, Germain; Skinner, Sarah; Rogowski, Isabelle

    2015-01-01

    The aim of this study was to investigate the effects of implementing motor imagery (MI) during specific tennis high intensity intermittent training (HIIT) sessions on groundstroke performance in young elite tennis players. Stroke accuracy and ball velocity of forehand and backhand drives were evaluated in ten young tennis players, immediately before and after having randomly performed two HIIT sessions. One session included MI exercises during the recovery phases, while the other included verbal encouragements for physical efforts and served as the control condition. Results revealed similar cardiac demand during both sessions, while implementing MI maintained groundstroke accuracy. Embedding MI during HIIT enabled the development of physical fitness and the preservation of stroke performance. These findings bring new insight to tennis and conditioning coaches, helping them reap the full benefits of specific HIIT playing sessions and thereby optimise training time. PMID:26580804

  17. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in the numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred through the geometry and variable interpolations used by previous Lagrangian methods. Unlike previously proposed Lagrangian methods, which are valid only for supersonic flows, the present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  18. Modeling hemoglobin at optical frequency using the unconditionally stable fundamental ADI-FDTD method.

    PubMed

    Heh, Ding Yu; Tan, Eng Leong

    2011-04-12

    This paper presents the modeling of hemoglobin at optical frequency (250 nm - 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin.
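
    The complex-conjugate pole-residue form used above to fit the permittivity can be sketched in a few lines; the pole and residue values below are placeholders for illustration, not the fitted hemoglobin parameters from the paper.

```python
import numpy as np

def permittivity(omega, eps_inf, poles, residues):
    """Complex relative permittivity from complex-conjugate pole-residue pairs:
    eps(w) = eps_inf + sum_p [ c_p/(jw - a_p) + conj(c_p)/(jw - conj(a_p)) ].
    Pairing each pole with its conjugate keeps the time-domain response real."""
    jw = 1j * np.asarray(omega, dtype=float)
    eps = np.full(jw.shape, eps_inf, dtype=complex)
    for a, c in zip(poles, residues):
        eps = eps + c / (jw - a) + np.conj(c) / (jw - np.conj(a))
    return eps

# Illustrative single pair (angular frequencies in rad/s); not hemoglobin data
poles = [-2.0e14 + 3.0e15j]
residues = [5.0e13 + 1.0e13j]
eps = permittivity(3.0e15, 1.8, poles, residues)
```

    This rational form is convenient for FDTD because each pole pair maps to a simple auxiliary differential equation that can be updated alongside the field components.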

  19. Modeling hemoglobin at optical frequency using the unconditionally stable fundamental ADI-FDTD method

    PubMed Central

    Heh, Ding Yu; Tan, Eng Leong

    2011-01-01

    This paper presents the modeling of hemoglobin at optical frequency (250 nm – 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin. PMID:21559129

  20. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    USGS Publications Warehouse

    Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.

  1. Well-balanced Arbitrary-Lagrangian-Eulerian finite volume schemes on moving nonconforming meshes for the Euler equations of gas dynamics with gravity

    NASA Astrophysics Data System (ADS)

    Gaburro, Elena; Castro, Manuel J.; Dumbser, Michael

    2018-06-01

    In this work, we present a novel second-order accurate well-balanced arbitrary Lagrangian-Eulerian (ALE) finite volume scheme on moving nonconforming meshes for the Euler equations of compressible gas dynamics with gravity in cylindrical coordinates. The main feature of the proposed algorithm is its capability to preserve many of the physical properties of the system exactly at the discrete level: besides being conservative for mass, momentum, and total energy, any known steady equilibrium between pressure gradient, centrifugal force, and gravity force can also be maintained exactly up to machine precision. Perturbations around such equilibrium solutions are resolved with high accuracy and with minimal dissipation on moving contact discontinuities, even for very long computational times. This is achieved by the novel combination of well-balanced path-conservative finite volume schemes, which are expressly designed to deal with source terms written via non-conservative products, with ALE schemes on moving grids, which exhibit only very little numerical dissipation on moving contact waves. In particular, we have formulated a new HLL-type and a novel Osher-type flux that are both able to guarantee the well balancing in a gas cloud rotating around a central object. Moreover, to maintain a high quality of the moving mesh, we have adopted a nonconforming treatment of the sliding interfaces that appear due to the differential rotation. A large set of numerical tests has been carried out in order to check the accuracy of the method both close to and far away from the equilibrium, in one and two space dimensions.

  2. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records

    PubMed Central

    Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    Objective There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. Materials and methods We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. Results An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. Discussion A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. Conclusion We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries. PMID:22319176

  3. PHYSICAL-CONSTRAINT-PRESERVING CENTRAL DISCONTINUOUS GALERKIN METHODS FOR SPECIAL RELATIVISTIC HYDRODYNAMICS WITH A GENERAL EQUATION OF STATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kailiang; Tang, Huazhong, E-mail: wukl@pku.edu.cn, E-mail: hztang@math.pku.edu.cn

    The ideal gas equation of state (EOS) with a constant adiabatic index is a poor approximation for most relativistic astrophysical flows, although it is commonly used in relativistic hydrodynamics (RHD). This paper develops high-order accurate, physical-constraints-preserving (PCP), central discontinuous Galerkin (DG) methods for the one- and two-dimensional special RHD equations with a general EOS. It is built on our theoretical analysis of the admissible states for RHD and the PCP limiting procedure that enforces the admissibility of central DG solutions. The convexity, scaling invariance, orthogonal invariance, and Lax–Friedrichs splitting property of the admissible state set are first proved with the aid of its equivalent form. Then, the high-order central DG methods with the PCP limiting procedure and strong stability-preserving time discretization are proved to preserve the positivity of the density, pressure, and specific internal energy and the bound of the fluid velocity, to maintain high-order accuracy, and to be L¹-stable. The accuracy, robustness, and effectiveness of the proposed methods are demonstrated by several 1D and 2D numerical examples involving large Lorentz factors, strong discontinuities, or low density/pressure.
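
    The PCP limiting procedure referenced above is, in spirit, a scaling limiter of the Zhang-Shu type: nodal values of the DG polynomial are blended toward the (admissible) cell average just enough to restore the constraint. A much-simplified sketch for a scalar positivity constraint (the RHD admissible set also bounds velocity and pressure, which this toy example ignores):

```python
import numpy as np

def limit_positivity(node_vals, cell_avg, eps=1e-13):
    """Blend nodal values toward the cell average so every node satisfies
    rho >= eps. The cell average itself is assumed admissible; the affine
    blend preserves the average, so the scheme stays conservative."""
    m = node_vals.min()
    if m >= eps:
        return node_vals           # already admissible, nothing to do
    theta = (cell_avg - eps) / (cell_avg - m)   # largest admissible blend
    return cell_avg + theta * (node_vals - cell_avg)
```

    Because the correction is an affine contraction toward the cell average, it does not change the average itself and, for high-order schemes, does not degrade the formal order of accuracy in smooth regions.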

  4. Mathematical support for surveying measurements in order to obtain the draft tube three-dimensional model

    NASA Astrophysics Data System (ADS)

    Gridan, Maria-Roberta; Herban, Sorin; Grecea, Oana

    2017-07-01

    Nowadays, engineering companies and contractors face challenges never experienced before: they are charged with, and held liable for, the health of the structures they create and maintain. To surmount these challenges, engineers need to be able to measure structural movements to millimetre-level accuracy. Accurate and timely information on the status of a structure is highly valuable to engineers, as it enables them to compare the real-world behaviour of a structure against the design and theoretical models. When empowered by such data, engineers can effectively and cost-efficiently measure and maintain the health of vital infrastructure. This paper presents the interpretation of the draft tube topographical measurements in order to obtain its 3D model. Based on documents made available by the beneficiary and data obtained in situ, conclusions regarding the modernization are presented.

  5. Combining Physicochemical and Evolutionary Information for Protein Contact Prediction

    PubMed Central

    Schneider, Michael; Brock, Oliver

    2014-01-01

    We introduce a novel contact prediction method that achieves high prediction accuracy by combining evolutionary and physicochemical information about native contacts. We obtain evolutionary information from multiple-sequence alignments and physicochemical information from predicted ab initio protein structures. These structures represent low-energy states in an energy landscape and thus capture the physicochemical information encoded in the energy function. Such low-energy structures are likely to contain native contacts, even if their overall fold is not native. To differentiate native from non-native contacts in those structures, we develop a graph-based representation of the structural context of contacts. We then use this representation to train a support vector machine classifier to identify the most likely native contacts in otherwise non-native structures. The resulting contact predictions are highly accurate. As a result of combining two sources of information—evolutionary and physicochemical—we maintain prediction accuracy even when only a few sequence homologs are present. We show that the predicted contacts help to improve ab initio structure prediction. A web service is available at http://compbio.robotics.tu-berlin.de/epc-map/. PMID:25338092

  6. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.

    Purpose: A previously proposed method to reduce radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering, which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method in maintaining the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. An arterial input function (AIF) with high temporal resolution can be generated from a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of the time-concentration curves (tissue curves), a region of interest is first segmented into squares of 3 × 3 pixels. Subsequently, PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts’ model and the singular value decomposition method were then carried out for each of the down-sampling schemes at intervals from 2 to 15 s. The results were compared with analyses done with the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy of the quantitative histogram parameters of the volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low-frequency scanning in DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.

  7. Multi-class computational evolution: development, benchmark evaluation and application to RNA-Seq biomarker discovery.

    PubMed

    Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I

    2017-01-01

    A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary-class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of the selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage, since the accuracy of most classification methods suffers when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
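
    Pareto optimization of the kind described above keeps every classifier that is not dominated on the two objectives (error, number of features). A generic sketch of extracting such a front; the candidate tuples are illustrative, not CES output.

```python
def pareto_front(models):
    """models: list of (error, n_features) tuples. A model is dominated if
    some other model is no worse on both objectives and strictly better on
    at least one; the front is the set of non-dominated models."""
    def dominated(m):
        return any(o[0] <= m[0] and o[1] <= m[1] and o != m for o in models)
    return [m for m in models if not dominated(m)]

# Hypothetical candidates: (classification error, number of selected features)
candidates = [(0.10, 3), (0.08, 10), (0.12, 2), (0.11, 5)]
front = pareto_front(candidates)
```

    Evolving toward this front lets the search trade a little accuracy for far fewer features, which is exactly what favors small, non-redundant biomarker panels.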

  8. Atomic data from the IRON Project. XXXII. On the accuracy of the effective collision strength for the electron impact excitation of the quadrupole transition in AR III

    NASA Astrophysics Data System (ADS)

    Galavís, M. E.; Mendoza, C.; Zeippen, C. J.

    1998-12-01

    Since Burgess et al. (1997) have recently questioned the accuracy of the effective collision strength calculated in the IRON Project for the electron impact excitation of the 3s²3p⁴ ¹D–¹S quadrupole transition in Ar III, an extended R-matrix calculation has been performed for this transition. The original 24-state target model was maintained, but the energy range was increased to 100 Ryd. It is shown that in order to ensure convergence of the partial wave expansion at such energies, it is necessary to take into account partial collision strengths up to L=30 and to "top up" with a geometric series procedure. By comparing effective collision strengths, it is found that the differences from the original calculation are not greater than 25% around the upper end of the common temperature range and are much smaller than 20% over most of it. This is consistent with the accuracy rating (20%) previously assigned to transitions in this low-ionisation system. The present high-temperature limit also agrees fairly well (15%) with the Coulomb-Born limit estimated by Burgess et al., thus confirming our previous accuracy rating. It appears that Burgess et al., in their data assessment, have overextended the low-energy behaviour of our reduced effective collision strength to obtain an extrapolated high-temperature limit that appears to be in error by a factor of 2.
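
    The geometric-series "top up" can be sketched generically: once consecutive partial collision strengths decay by a roughly constant ratio, the infinite tail of the partial wave sum is estimated in closed form.

```python
def top_up(partial):
    """Sum a truncated partial-wave series and estimate the remaining tail
    assuming geometric decay of the last terms:
    r = Omega_L / Omega_{L-1},  tail = Omega_L * r / (1 - r)."""
    r = partial[-1] / partial[-2]
    if not 0.0 < r < 1.0:
        raise ValueError("last terms must be positive and decaying")
    return sum(partial) + partial[-1] * r / (1.0 - r)
```

    For an exactly geometric sequence the estimate is exact; for a slowly varying ratio it gives a controlled approximation to the converged sum without computing hundreds of additional partial waves.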

  9. Human motor transfer is determined by the scaling of size and accuracy of movement.

    PubMed

    Kwon, Oh-Sang; Zelaznik, Howard N; Chiu, George; Pizlo, Zygmunt

    2011-01-01

    A transfer of training design was used to examine the role of the Index of Difficulty (ID) on transfer of learning in a sequential Fitts's law task. Specifically, the role of the ratio between the accuracy and size of movement (ID) in transfer was examined. Transfer of skilled movement is better when both the size and accuracy of movement are changed by the same factor (ID is constant) than when only size or accuracy is changed. The authors infer that the size-accuracy ratio is capturing the control strategies employed during practice and thus promotes efficient transfer. Furthermore, efficient transfer is not dependent on maintaining relative timing invariance and thus the authors provide further evidence that relative timing is not an essential feature of movement control.

  10. Noradrenergic activation of the basolateral amygdala maintains hippocampus-dependent accuracy of remote memory

    PubMed Central

    Atucha, Erika; Vukojevic, Vanja; Fornari, Raquel V.; Ronzoni, Giacomo; Demougin, Philippe; Peter, Fabian; Atsak, Piray; Coolen, Marcel W.; Papassotiropoulos, Andreas; McGaugh, James L.; de Quervain, Dominique J.-F.; Roozendaal, Benno

    2017-01-01

    Emotional enhancement of memory by noradrenergic mechanisms is well-described, but the long-term consequences of such enhancement are poorly understood. Over time, memory traces are thought to undergo a neural reorganization, that is, a systems consolidation, during which they are, at least partly, transferred from the hippocampus to neocortical networks. This transfer is accompanied by a decrease in episodic detailedness. Here we investigated whether norepinephrine (NE) administration into the basolateral amygdala after training on an inhibitory avoidance discrimination task, comprising two distinct training contexts, alters systems consolidation dynamics to maintain episodic-like accuracy and hippocampus dependency of remote memory. At a 2-d retention test, both saline- and NE-treated rats accurately discriminated the training context in which they had received footshock. Hippocampal inactivation with muscimol before retention testing disrupted discrimination of the shock context in both treatment groups. At 28 d, saline-treated rats showed hippocampus-independent retrieval and lack of discrimination. In contrast, NE-treated rats continued to display accurate memory of the shock–context association. Hippocampal inactivation at this remote retention test blocked episodic-like accuracy and induced a general memory impairment. These findings suggest that the NE treatment altered systems consolidation dynamics by maintaining hippocampal involvement in the memory. This shift in systems consolidation was paralleled by time-regulated DNA methylation and transcriptional changes of memory-related genes, namely Reln and Pkmζ, in the hippocampus and neocortex. The findings provide evidence suggesting that consolidation of emotional memories by noradrenergic mechanisms alters systems consolidation dynamics and, as a consequence, influences the maintenance of long-term episodic-like accuracy of memory. PMID:28790188

  11. Does wearing a surgical facemask or N95-respirator impair radio communication?

    PubMed

    Thomas, Frank; Allen, Craig; Butts, William; Rhoades, Carol; Brandon, Cynthia; Handrahan, Diana L

    2011-01-01

    This study evaluated the impact that wearing a surgical facemask or an N95 air-purifying respirator (N95) has on radio reception. We compared the ability of a flight crewmember and a layperson sitting in a Bell 407 crew compartment, and a dispatcher sitting in a communication center, to accurately record 20 randomized aviation terms transmitted over the radio by a helicopter emergency medical services (HEMS) pilot wearing a surgical facemask and six different N95s, with and without the aircraft engine operating. With the aircraft engine off, all terms (100% accuracy) were correctly identified, regardless of the absence or presence of the surgical facemask or N95 studied. With the aircraft engine on, the surgical facemask (3M-1826) and two N95 respirators (3M-1860, Safe Life Corp-150) maintained 100% accuracy. Remaining N95 accuracy was as follows: 3M-8511 and Kimberly-Clark PFR95 (98%), Inoyel-3212 (97%), and 3M-1870 (93%). In general, despite wearing a facemask, radio reception accuracy is high (>90%). However, aircraft engine noise and N95 type do appear to adversely affect the accuracy of radio reception. All HEMS pilots and crewmembers should be aware of these radio reception findings when using an N95 respirator during transport. A brief review of the effectiveness of surgical facemasks and N95s in preventing viral respiratory infections is provided. Copyright © 2011 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  12. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

    With the successful flight of the ERTS-1 satellite, orbital-height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected and clearly identifiable on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.

  13. Criterion values for urine-specific gravity and urine color representing adequate water intake in healthy adults.

    PubMed

    Perrier, E T; Bottin, J H; Vecchio, M; Lemetais, G

    2017-04-01

    Growing evidence suggests a distinction between water intake necessary for maintaining a euhydrated state, and water intake considered to be adequate from a perspective of long-term health. Previously, we have proposed that maintaining a 24-h urine osmolality (U_Osm) of ⩽500 mOsm/kg is a desirable target for urine concentration to ensure sufficient urinary output to reduce renal health risk and circulating vasopressin. In clinical practice and field monitoring, the measurement of U_Osm is not practical. In this analysis, we calculate criterion values for urine-specific gravity (U_SG) and urine color (U_Col), two measures which have broad applicability in clinical and field settings. A receiver operating characteristic curve analysis performed on 817 urine samples demonstrates that a U_SG ⩾1.013 detects U_Osm >500 mOsm/kg with very high accuracy (AUC 0.984), whereas a subject-assessed U_Col ⩾4 offers high sensitivity and moderate specificity (AUC 0.831) for detecting U_Osm >500 mOsm/kg.
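
    Applying the reported criterion value is a one-line threshold rule. A minimal sketch, with invented sample readings for illustration:

```python
# Flag a spot urine sample as likely exceeding 500 mOsm/kg when its specific
# gravity meets the reported cutoff of >= 1.013. The sample values below are
# made up for illustration; only the cutoff comes from the study.

USG_CUTOFF = 1.013

def likely_concentrated(usg: float) -> bool:
    """True if U_SG meets the >=1.013 criterion for U_Osm > 500 mOsm/kg."""
    return usg >= USG_CUTOFF

samples = [1.004, 1.010, 1.013, 1.021]
flags = [likely_concentrated(s) for s in samples]
print(flags)  # [False, False, True, True]
```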

  14. Advanced lattice Boltzmann scheme for high-Reynolds-number magneto-hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    De Rosis, Alessandro; Lévêque, Emmanuel; Chahine, Robert

    2018-06-01

    Is the lattice Boltzmann method suitable to investigate numerically high-Reynolds-number magneto-hydrodynamic (MHD) flows? It is shown that a standard approach based on the Bhatnagar-Gross-Krook (BGK) collision operator rapidly yields unstable simulations as the Reynolds number increases. In order to circumvent this limitation, it is here suggested to address the collision procedure in the space of central moments for the fluid dynamics. Therefore, a hybrid lattice Boltzmann scheme is introduced, which couples a central-moment scheme for the velocity with a BGK scheme for the space-and-time evolution of the magnetic field. This method outperforms the standard approach in terms of stability, allowing us to simulate high-Reynolds-number MHD flows with non-unitary Prandtl number while maintaining accuracy and physical consistency.
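
    The instability discussed above originates in the BGK collision step, which relaxes every population toward its equilibrium at the single rate 1/τ. The following is a generic single-node sketch of that step, not the paper's hybrid central-moment scheme; in a real solver `f` and `f_eq` would come from a full D2Q9 (or D3Q27) lattice.

```python
import numpy as np

def bgk_collide(f: np.ndarray, f_eq: np.ndarray, tau: float) -> np.ndarray:
    """One BGK relaxation step: f <- f - (f - f_eq) / tau."""
    return f - (f - f_eq) / tau

# Toy populations at a single lattice node (illustrative values).
f = np.array([0.20, 0.10, 0.10])
f_eq = np.array([0.18, 0.12, 0.10])
print(bgk_collide(f, f_eq, tau=1.0))  # with tau = 1, one step lands on f_eq
```

    As τ approaches 1/2 (low viscosity, i.e. high Reynolds number), the relaxation becomes over-damped and small errors amplify, which is the failure mode the hybrid scheme addresses.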

  15. Enabling Chemistry of Gases and Aerosols for Assessment of Short-Lived Climate Forcers: Improving Solar Radiation Modeling in the DOE-ACME and CESM models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prather, Michael

    This proposal seeks to maintain the DOE-ACME (an offshoot of CESM) as one of the leading CCMs to evaluate near-term climate mitigation. It will implement, test, and optimize the new UCI photolysis codes within CESM CAM5 and new CAM versions in ACME. Fast-J is a high-order-accuracy (8-stream) code for calculating solar scattering and absorption in a single-column atmosphere containing clouds, aerosols, and gases that was developed at UCI and implemented in CAM5 under the previous BER/SciDAC grant.

  16. Online Knowledge-Based Model for Big Data Topic Extraction.

    PubMed

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the model's learning pattern is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half.

  17. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    Workers remove the plastic cover from NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  18. DSCOVR Satellite Deploy & Light Test

    NASA Image and Video Library

    2014-11-24

    Workers conduct a light test on the solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  19. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, has been uncovered and is ready for processing in the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  20. Field programmable gate array-assigned complex-valued computation and its limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com; Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien; Zwick, Wolfgang

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high-performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
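
    The accuracy trade-off examined above can be illustrated with a complex multiply in Q15 fixed point compared against double precision. The word length and test values are assumptions for illustration, not the Virtex-5 implementation itself.

```python
# Q15 fixed-point complex multiplication versus floating point, with the
# relative error the finite word length introduces. Illustrative sketch only.

Q = 15
SCALE = 1 << Q  # 32768

def to_fixed(x: float) -> int:
    """Quantize a real value in roughly [-1, 1) to a Q15 integer."""
    return round(x * SCALE)

def fixed_cmul(ar: int, ai: int, br: int, bi: int):
    """(ar + j*ai) * (br + j*bi) in Q15; products rescaled back by >> Q."""
    re = (ar * br - ai * bi) >> Q
    im = (ar * bi + ai * br) >> Q
    return re, im

a, b = 0.5 + 0.25j, -0.3 + 0.8j
fr, fi = fixed_cmul(to_fixed(a.real), to_fixed(a.imag),
                    to_fixed(b.real), to_fixed(b.imag))
approx = complex(fr / SCALE, fi / SCALE)
rel_err = abs(approx - a * b) / abs(a * b)
print(rel_err)  # small (around 2**-15 in scale) but nonzero
```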

  1. Reducing Bolt Preload Variation with Angle-of-Twist Bolt Loading

    NASA Technical Reports Server (NTRS)

    Thompson, Bryce; Nayate, Pramod; Smith, Doug; McCool, Alex (Technical Monitor)

    2001-01-01

    Critical high-pressure sealing joints on the Space Shuttle reusable solid rocket motor require precise control of bolt preload to ensure proper joint function. As the reusable solid rocket motor experiences rapid internal pressurization, correct bolt preloads maintain the sealing capability and structural integrity of the hardware. The angle-of-twist process provides the right combination of preload accuracy, reliability, process control, and assembly-friendly design. It improves significantly over previous methods. The sophisticated angle-of-twist process controls have yielded answers to all discrepancies encountered, while the simplicity of the root process has assured joint preload reliability.

  2. Modernization of Koesters interferometer and high accuracy calibration gauge blocks

    NASA Astrophysics Data System (ADS)

    França, R. S.; Silva, I. L. M.; Couceiro, I. B.; Torres, M. A. C.; Bessa, M. S.; Costa, P. A.; Oliveira, W., Jr.; Grieneisen, H. P. H.

    2016-07-01

    The Optical Metrology Division (Diopt) of Inmetro is responsible for maintaining the national reference of the length unit according to International System of Units (SI) definitions. The length unit is realized by interferometric techniques and is disseminated to the dimensional community through calibrations of gauge blocks. Calibration of large gauge blocks from 100 mm to 1000 mm has been performed by Diopt with a Koesters interferometer with reference to spectral lines of a krypton discharge lamp. Replacement of this lamp by frequency stabilized lasers, traceable now to the time and frequency scale, is described and the first results are reported.

  3. Accuracy requirements and uncertainties in radiotherapy: a report of the International Atomic Energy Agency.

    PubMed

    van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed

    2017-01-01

    Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.

  4. A high-voltage supply used on miniaturized RLG

    NASA Astrophysics Data System (ADS)

    Miao, Zhifei; Fan, Mingming; Wang, Yuepeng; Yin, Yan; Wang, Dongmei

    2016-01-01

    A high-voltage power supply used in a laser gyro is proposed in this paper. The supply uses a single 15 V DC input, and a flyback topology is adopted in the main circuit. The output reaches 3.3 kV in order to ignite the RLG. A PFM control method is adopted to realize rapid switching between the high-voltage state and the sustaining state. The resonant chip L6565 is used to achieve zero-voltage switching (ZVS), so losses are reduced and the power efficiency is improved to more than 80%. A special circuit is presented in the control portion to ensure symmetry of the currents in the two arms of the RLG. The measured current accuracy is better than 5‰, and the current symmetry of the two arms reaches 99.2%.

  5. Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.

    2002-01-01

    We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphases on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. As the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates for the diffusion introduced by the monotonicity treatment. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is yet desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.

  6. Comparison of modal identification techniques using a hybrid-data approach

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1986-01-01

    Modal identification of seemingly simple structures, such as a generic truss, is often surprisingly difficult in practice due to high modal density, nonlinearities, and other nonideal factors. Under these circumstances, different data analysis techniques can generate substantially different results. The initial application of a new hybrid-data method for studying the performance characteristics of various identification techniques with such data is summarized. This approach offers new pieces of information for the system identification researcher. First, it allows actual experimental data to be used in the studies, while maintaining the traditional advantage of using simulated data. That is, the identification technique under study is forced to cope with the complexities of real data, yet the performance can be measured unquestionably for the artificial modes because their true parameters are known. Second, the accuracy achieved for the true structural modes in the data can be estimated from the accuracy achieved for the artificial modes if the results show similar characteristics. This similarity occurred in the study, for example, for a weak structural mode near 56 Hz. It may eventually even be possible to use the error information from the artificial modes to improve the identification accuracy for the structural modes.

  7. An efficient fully-implicit multislope MUSCL method for multiphase flow with gravity in discrete fractured media

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-06-01

    The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintain robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to the monoslope methods in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured system are carried out to demonstrate the efficiency and robustness of the numerical model.
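
    The monoslope/multislope distinction above concerns how cell gradients are limited during reconstruction. As a one-dimensional reference point, classic MUSCL reconstruction limits the slope with the minmod function; this generic sketch is not the paper's multislope scheme on triangular meshes, only the baseline idea it extends.

```python
# Classic 1-D MUSCL reconstruction with a minmod slope limiter: the cell
# value u is extrapolated linearly to its two faces, with the slope limited
# to avoid creating new extrema. Generic textbook sketch, for orientation.

def minmod(a: float, b: float) -> float:
    """Smallest-magnitude slope if a and b agree in sign, else zero."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u_left: float, u: float, u_right: float, dx: float = 1.0):
    """Limited linear reconstruction of cell value u at its left/right faces."""
    slope = minmod((u - u_left) / dx, (u_right - u) / dx)
    return u - 0.5 * dx * slope, u + 0.5 * dx * slope

print(muscl_faces(0.0, 1.0, 2.0))  # smooth data: full slope, faces (0.5, 1.5)
print(muscl_faces(0.0, 1.0, 0.5))  # local extremum: slope limited to zero
```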

  8. A Blade Tip Timing Method Based on a Microwave Sensor

    PubMed Central

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-01-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy. PMID:28492469

  9. A Blade Tip Timing Method Based on a Microwave Sensor.

    PubMed

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-05-11

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.
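
    The core arithmetic behind tip timing can be stated in one line: a blade arriving dt seconds early or late relative to its expected once-per-revolution time corresponds to a tangential tip deflection of dt · ω · R. This is a generic sketch of that relation with assumed example numbers, not the paper's microwave signal chain.

```python
import math

def tip_deflection_mm(dt_s: float, rpm: float, radius_m: float) -> float:
    """Tangential tip deflection implied by an arrival-time offset dt."""
    omega = rpm * 2.0 * math.pi / 60.0   # shaft speed in rad/s
    return dt_s * omega * radius_m * 1e3  # deflection in mm

# Assumed illustrative values: 1 microsecond offset, 3000 rpm, 0.25 m radius.
print(round(tip_deflection_mm(1e-6, 3000, 0.25), 4))  # ~0.0785 mm
```

    The relation also shows why timing accuracy matters: at this speed, each microsecond of timing error maps to tens of micrometers of apparent deflection.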

  10. PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.

    PubMed

    Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming

    2016-05-31

    Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm can avoid the selection of a static reference station, directly measure the three-dimensional position changes at the observation site, and exhibit superiority in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve the positioning accuracy. First, data from an IGS tracking station were processed using both the traditional and the new PPP algorithms; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration changes of a reference station during an earthquake. Finally, results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
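
    The stationary-site constraint can be exploited by averaging the epoch-by-epoch PPP solutions over a sliding window. The following is a minimal trailing-window-mean sketch of that idea with synthetic numbers, not the paper's full algorithm, which operates at the filter level.

```python
import numpy as np

def sliding_window_mean(series, window: int):
    """Trailing-window mean; windows near the start use the samples available."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(float(np.mean(series[lo:i + 1])))
    return out

# Noisy height component around a true value of 100.0 m (synthetic data).
heights = [100.05, 99.96, 100.08, 99.93, 100.02, 99.99]
smoothed = sliding_window_mean(heights, window=3)
print([round(h, 3) for h in smoothed])
```

    Averaging while the site is static suppresses epoch noise; a genuine displacement (an earthquake) then shows up as a persistent departure from the windowed mean.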

  11. Fixed-head star tracker magnitude calibration on the solar maximum mission

    NASA Technical Reports Server (NTRS)

    Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.

    1990-01-01

    The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
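
    One plausible form of the volts-to-magnitude transformation discussed above is the standard Pogson relation m = m_ref - 2.5·log10(V/V_ref); the reference values below are invented for illustration, and the sensitivity index is simply observed over predicted intensity as the record defines it.

```python
import math

# Assumed calibration point: a magnitude-3.0 star producing 1.0 V.
M_REF, V_REF = 3.0, 1.0

def volts_to_magnitude(v: float) -> float:
    """Pogson-style conversion from sensor intensity (volts) to magnitude."""
    return M_REF - 2.5 * math.log10(v / V_REF)

def sensitivity_index(observed_v: float, predicted_v: float) -> float:
    """Observed over predicted intensity; 1.0 means a perfect calibration."""
    return observed_v / predicted_v

print(round(volts_to_magnitude(0.4), 3))  # fainter star -> larger magnitude
print(sensitivity_index(0.38, 0.40))      # tracker reading 5% low
```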

  12. Double ion production in mercury thrusters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Peters, R. R.

    1976-01-01

    The development of a model which predicts doubly charged ion density is discussed. The accuracy of the model is shown to be good for two different thruster sizes and a total of 11 different cases. The model indicates that in most cases more than 80% of the doubly charged ions are produced from singly charged ions. This result can be used to develop a much simpler model which, along with correlations of the average plasma properties, can be used to determine the doubly charged ion density in ion thrusters with acceptable accuracy. Two different techniques that can be used to reduce the doubly charged ion density while maintaining good thruster operation are identified as a result of an examination of the simple model. First, the electron density can be reduced and the thruster size then increased to maintain the same propellant utilization. Second, at a fixed thruster size, the plasma density, temperature, and energy can be reduced; then, to maintain a constant propellant utilization, the open area of the grids to neutral propellant loss can be reduced through the use of a small-hole accelerator grid.

  13. 15 CFR 200.102 - Types of calibration and test services.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... plant standards for other organizations. Accuracy is maintained by stability checks, by comparison with... should describe clearly the measurement desired. Indication of the scientific or economic basis for the...

  14. Holding the Edge: Maintaining the Defense Technology Base

    DTIC Science & Technology

    1989-04-01

    OTA assumes full responsibility for the report and the accuracy of its contents. The advisory panel does not necessarily approve, disapprove, or endorse this report.

  15. The Effectiveness of Vowel Production Training with Real-Time Spectrographic Displays for Children with Profound Hearing Impairment.

    NASA Astrophysics Data System (ADS)

    Ertmer, David Joseph

    1994-01-01

    The effectiveness of vowel production training which incorporated direct instruction in combination with spectrographic models and feedback was assessed for two children who exhibited profound hearing impairment. A multiple-baseline design across behaviors, with replication across subjects, was implemented to determine if vowel production accuracy improved following the introduction of treatment. Listener judgments of vowel correctness were obtained during the baseline, training, and follow-up phases of the study. Data were analyzed through visual inspection of changes in levels of accuracy, changes in trends of accuracy, and changes in variability of accuracy within and across phases. One subject showed significant improvement for all three trained vowel targets; the second subject improved for the first trained target only (Kolmogorov-Smirnov two-sample test). Performance trends during training sessions suggest that continued treatment would have resulted in further improvement for both subjects. Vowel duration, fundamental frequency, and the frequency locations of the first and second formants were measured before and after training. Acoustic analysis revealed highly individualized changes in the frequency locations of F1 and F2. Vowels which received the most training were maintained at higher levels than those which were introduced later in training. Some generalization of practiced vowel targets to untrained words was observed in both subjects. A bias toward judging productions as "correct" was observed for both subjects during self-evaluation tasks using spectrographic feedback.

  16. High quality optically polished aluminum mirror and process for producing

    NASA Technical Reports Server (NTRS)

    Lyons, III, James J. (Inventor); Zaniewski, John J. (Inventor)

    2005-01-01

    A new technical advancement in the field of precision aluminum optics permits high quality optical polishing of aluminum monolith, which, in the field of optics, offers numerous benefits because of its machinability, light weight, and low cost. This invention combines diamond turning and conventional polishing along with india ink, a newly adopted material, for the polishing to accomplish a significant improvement in surface precision of aluminum monolith for optical purposes. This invention guarantees the precise optical polishing of typical bare aluminum monolith to surface roughness of less than about 30 angstroms rms and preferably about 5 angstroms rms while maintaining a surface figure accuracy in terms of surface figure error of not more than one-fifteenth of a wave peak-to-valley.

  17. High quality optically polished aluminum mirror and process for producing

    NASA Technical Reports Server (NTRS)

    Lyons, III, James J. (Inventor); Zaniewski, John J. (Inventor)

    2002-01-01

    A new technical advancement in the field of precision aluminum optics permits high quality optical polishing of aluminum monolith, which, in the field of optics, offers numerous benefits because of its machinability, light weight, and low cost. This invention combines diamond turning and conventional polishing along with india ink, a newly adopted material, for the polishing to accomplish a significant improvement in surface precision of aluminum monolith for optical purposes. This invention guarantees the precise optical polishing of typical bare aluminum monolith to surface roughness of less than about 30 angstroms rms and preferably about 5 angstroms rms while maintaining a surface figure accuracy in terms of surface figure error of not more than one-fifteenth of a wave peak-to-valley.

  18. DOTD standards for GPS data collection accuracy : research project capsule.

    DOT National Transportation Integrated Search

    2013-12-01

    Global Navigational Satellite Systems (GNSS), which includes GPS technologies : maintained by the United States, are used extensively throughout government : and industry. These technologies continue to revolutionize positional data : collection acti...

  19. A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Krebs, C. R.; Li, Ling; Wolberg, Alisa S.; Oldenburg, Amy L.

    2015-07-01

    Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young's modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa-27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young's modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring.
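
    The measurement principle stated above, a fundamental longitudinal resonance proportional to the square root of the Young's modulus, inverts directly: for a free column of length L and density ρ, f1 = (1/2L)·√(E/ρ), so E = ρ·(2Lf1)². The sample numbers below are illustrative assumptions, not CEMport calibration values.

```python
# Invert the longitudinal-resonance relation f1 = (1/(2L)) * sqrt(E/rho)
# to recover Young's modulus from a measured fundamental frequency.

def youngs_modulus(f1_hz: float, length_m: float, density_kg_m3: float) -> float:
    """E = rho * (2 * L * f1)^2 for the fundamental longitudinal mode."""
    return density_kg_m3 * (2.0 * length_m * f1_hz) ** 2

# A soft clot-like gel: ~5 mm column, density near water, f1 around 100 Hz.
E = youngs_modulus(f1_hz=100.0, length_m=5e-3, density_kg_m3=1000.0)
print(E)  # 1000.0 Pa, within the 3 Pa - 27 kPa range quoted above
```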

  20. Contributions for the next generation of 3D metal printing machines

    NASA Astrophysics Data System (ADS)

    Pereira, M.; Thombansen, U.

    2015-03-01

    The 3D metal printing processes are key technologies for new industrial manufacturing requirements, as small-lot production with high design complexity and high flexibility is needed for personalization and customization. The main challenges for these processes are increasing printing volumes while maintaining the relative accuracy level and reducing the global manufacturing time. Through a review of current technologies and of solutions proposed in global patents, new design solutions for 3D metal printing machines can be suggested. This paper surveys current technologies and trends in SLM and suggests some design approaches to overcome these challenges. As the SLM process is based on laser scanning, an increase in printing volume requires moving the scanner over the work surface with motion systems if printing accuracy is to be kept constant. This approach, however, does not reduce manufacturing time, as only one laser source builds the entire work piece. With given technology limits in galvo-based laser scanning systems, the most obvious solution consists in using multiple beam delivery systems in series, in parallel, or both. Another concern is the weight of large work pieces. A new powder recoater can control the layer thickness and uniformity and eliminate or diminish fumes. To improve global accuracy, a pair of high-frequency piezoelectric actuators can help in positioning the laser beam. The implementation of such suggestions can contribute to SLM productivity. To do this, several research activities need to be accomplished in areas related to design, control, software, and process fundamentals.

  1. Rice Crop Monitoring and Yield Assessment with MODIS 250m Gridded Vegetation Products: A Case Study of Sa Kaeo Province, Thailand

    NASA Astrophysics Data System (ADS)

    Wijesingha, J. S. J.; Deshapriya, N. L.; Samarakoon, L.

    2015-04-01

    Billions of people in the world depend on rice as a staple food and as an income-generating crop. Asia is the leader in rice cultivation, and it is necessary to maintain an up-to-date rice-related database to ensure food security as well as economic development. This study investigates the general applicability of the high temporal resolution Moderate Resolution Imaging Spectroradiometer (MODIS) 250m gridded vegetation product for monitoring rice crop growth, mapping rice crop acreage, and analyzing crop yield at the province level. The MODIS 250m Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) time series data, field data, and crop calendar information were utilized in this research in Sa Kaeo Province, Thailand. The following methodology was used: (1) data pre-processing and rice plant growth analysis using Vegetation Indices (VI); (2) extraction of rice acreage and start-of-season dates from VI time series data; (3) accuracy assessment; and (4) yield analysis with MODIS VI. The results show a direct relationship between rice plant height and MODIS VI. The crop calendar information and the NDVI time series smoothed with the Whittaker Smoother gave high rice acreage estimation accuracy (86% area accuracy and 75% classification accuracy). Point-level yield analysis showed that the MODIS EVI is highly correlated with rice yield, and yield prediction using the maximum EVI in the rice cycle predicted yield with an average prediction error of 4.2%. This study shows the immense potential of the MODIS gridded vegetation product for keeping an up-to-date Geographic Information System of rice cultivation.
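
    The Whittaker smoothing step can be sketched as a penalized least-squares fit; the NDVI series, season shape, and the smoothing parameter lam below are illustrative assumptions, not the study's data:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker-Eilers smoother: minimize ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order finite-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), d, axis=0)                  # d-th difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# illustrative 16-day composite NDVI series (23 points ~ one season), synthetic
t = np.arange(23.0)
ndvi = 0.3 + 0.4 * np.exp(-((t - 11.0) / 5.0) ** 2)    # idealized rice season
noisy = ndvi + 0.05 * np.sin(9.0 * t)                  # cloud-like high-freq noise
smooth = whittaker_smooth(noisy, lam=10.0)
```

    Larger lam values suppress more of the cloud-induced noise at the cost of flattening the seasonal peak.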

  2. A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy.

    PubMed

    Krebs, C R; Li, Ling; Wolberg, Alisa S; Oldenburg, Amy L

    2015-07-01

    Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young's modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa-27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young's modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring.
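
    The stated relation, that the fundamental longitudinal resonance frequency scales with the square root of the Young's modulus, can be inverted to recover the modulus. The rod formula f1 = (1/(2L))*sqrt(E/rho) and every number below are illustrative assumptions, not the actual CEMport sample geometry:

```python
# Invert the fundamental longitudinal rod resonance f1 = (1/(2*L)) * sqrt(E/rho).
# Formula and numbers are illustrative assumptions, not the CEMport design.
def youngs_modulus_from_resonance(f1_hz, length_m, density_kg_m3):
    return density_kg_m3 * (2.0 * length_m * f1_hz) ** 2

# a hypothetical 5 mm plasma column (density ~1030 kg/m^3) resonating at 500 Hz
E = youngs_modulus_from_resonance(f1_hz=500.0, length_m=5e-3, density_kg_m3=1030.0)
print(E)  # 25750.0 Pa, i.e. ~26 kPa, near the top of the stated 3 Pa-27 kPa range
```

    Because E scales with the square of f1, a doubled resonance frequency implies a fourfold stiffer clot.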

  3. FTIR characterization of animal lung cells: normal and precancerous modified e10 cell line

    NASA Astrophysics Data System (ADS)

    Zezell, D. M.; Pereira, T. M.; Mennecier, G.; Bachmann, L.; Govone, A. B.; Dagli, M. L. Z.

    2012-06-01

    The chemical carcinogens in tobacco are related to over 90% of lung cancers worldwide. The risk of death from this kind of cancer is high because the diagnosis is usually made only at advanced stages. It is therefore necessary to develop new diagnostic methods to detect lung cancer at earlier stages. Fourier Transform Infrared Spectroscopy (FTIR) can offer high sensitivity and accuracy in detecting minimal chemical changes in biological samples. The aim of this study is to evaluate the differences in infrared spectra between normal lung cells and precancerous lung cells transformed by NNK. The non-cancerous lung cell line e10 (ATCC) and the NNK-transformed e10 cell line were maintained in complete culture medium (a 1:1 mixture of Dulbecco's modified Eagle's medium and Ham's F12 [DMEM/Ham's F12], supplemented with 100 ng/ml cholera enterotoxin, 10 µg/ml insulin, 0.5 µg/ml hydrocortisone, 20 ng/ml epidermal growth factor, and 5% horse serum). The cultures were fixed in 70% alcohol. The infrared spectra were acquired on an ATR-FTIR Nicolet 6700 spectrophotometer at 4 cm-1 resolution, 30 scans, over the 1800-900 cm-1 spectral range; three spectra were recorded per sample, giving 30 infrared spectra for each cell line. The second derivative of the spectra indicates displacements at 1646 cm-1 (amide I) and 1255 cm-1 (DNA), allowing the two kinds of cells to be differentiated with an accuracy of 89.9%. These preliminary results indicate that ATR-FTIR is useful for differentiating normal e10 lung cells from precancerous e10 cells transformed by NNK.

  4. Predictive performance of genomic selection methods for carcass traits in Hanwoo beef cattle: impacts of the genetic architecture.

    PubMed

    Mehrban, Hossein; Lee, Deuk Hwan; Moradi, Mohammad Hossein; IlCho, Chung; Naserkheil, Masoumeh; Ibáñez-Escriche, Noelia

    2017-01-04

    Hanwoo beef is known for its marbled fat, tenderness, juiciness and characteristic flavor, as well as for its low cholesterol and high omega 3 fatty acid contents. As yet, there has been no comprehensive investigation to estimate genomic selection accuracy for carcass traits in Hanwoo cattle using dense markers. This study aimed at evaluating the accuracy of alternative statistical methods that differed in assumptions about the underlying genetic model for various carcass traits: backfat thickness (BT), carcass weight (CW), eye muscle area (EMA), and marbling score (MS). Accuracies of direct genomic breeding values (DGV) for carcass traits were estimated by applying fivefold cross-validation to a dataset including 1183 animals and approximately 34,000 single nucleotide polymorphisms (SNPs). Accuracies of BayesC, Bayesian LASSO (BayesL) and genomic best linear unbiased prediction (GBLUP) methods were similar for BT, EMA and MS. However, for CW, DGV accuracy was 7% higher with BayesC than with BayesL and GBLUP. The increased accuracy of BayesC, compared to GBLUP and BayesL, was maintained for CW, regardless of the training sample size, but not for BT, EMA, and MS. Genome-wide association studies detected consistent large effects for SNPs on chromosomes 6 and 14 for CW. The predictive performance of the models depended on the trait analyzed. For CW, the results showed a clear superiority of BayesC compared to GBLUP and BayesL. These findings indicate the importance of using a proper variable selection method for genomic selection of traits and also suggest that the genetic architecture that underlies CW differs from that of the other carcass traits analyzed. Thus, our study provides significant new insights into the carcass traits of Hanwoo cattle.
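
    GBLUP is statistically equivalent to ridge regression on the marker matrix with a common shrinkage applied to all SNPs; a toy sketch with synthetic genotypes (all dimensions, effect sizes, and the penalty lam are illustrative assumptions, not the Hanwoo data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 1000                                  # animals, SNPs (toy sizes)
X = rng.integers(0, 3, (n, m)).astype(float)      # 0/1/2 genotype codes
X -= X.mean(axis=0)                               # center each marker

beta = np.zeros(m)
beta[:20] = rng.normal(0.0, 0.5, 20)              # a few large-effect loci
y = X @ beta + rng.normal(0.0, 1.0, n)            # phenotype = genetics + noise

# one fold of cross-validation: train on 150 animals, validate on 50
tr, va = np.arange(150), np.arange(150, 200)
lam = 100.0                                       # common shrinkage for all SNPs
b_hat = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(m), X[tr].T @ y[tr])
dgv = X[va] @ b_hat                               # direct genomic values
acc = np.corrcoef(dgv, y[va])[0, 1]               # predictive accuracy
```

    A common shrinkage mirrors the GBLUP/BayesL assumption of many small effects; variable-selection priors such as BayesC instead allow a few markers (like the chromosome 6 and 14 effects for CW) to carry large weights.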

  5. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the traditional linear model, the accuracy of static PPP using the new model's 2-h prediction clock in the N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
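
    The core idea, a linear fit plus the dominant FFT components of its residuals extrapolated forward, can be sketched as follows; the synthetic clock series, period, and bin-selection rule are illustrative assumptions rather than the paper's exact sliding-window model:

```python
import numpy as np

def predict_clock(t, x, t_future, n_terms=2):
    """Linear trend plus the n_terms largest one-sided FFT components of the
    residuals, extrapolated forward (a simplified stand-in for the paper's
    sliding-window periodic model; assumes t starts at 0)."""
    a, b = np.polyfit(t, x, 1)
    resid = x - (a * t + b)
    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    keep = np.argsort(np.abs(spec))[::-1][:n_terms + 1]   # dominant bins
    pred = a * t_future + b
    for k in keep:
        amp, ph = np.abs(spec[k]) / len(t), np.angle(spec[k])
        scale = 1.0 if k == 0 else 2.0                    # one-sided doubling
        pred = pred + scale * amp * np.cos(2 * np.pi * freqs[k] * t_future + ph)
    return pred

# synthetic clock (ns): linear drift plus a 12-h periodic term, 15-min sampling
t = np.arange(0.0, 24.0, 0.25)
x = 3.0 * t + 0.5 * np.cos(2 * np.pi * t / 12.0)
t_fut = np.arange(24.0, 27.0, 0.25)                       # 3-h horizon
pred = predict_clock(t, x, t_fut)
truth = 3.0 * t_fut + 0.5 * np.cos(2 * np.pi * t_fut / 12.0)
```

    On this synthetic series the periodic correction removes most of the error that a purely linear extrapolation would leave over the 3-h horizon.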

  6. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on frequency of each acoustic mode (a pre-cut-on loading effect).

  7. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    PubMed Central

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-01-01

    Failures of roller element bearings (REBs) cause unexpected machinery breakdowns, so their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel frequency-band feature extraction method, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide the entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) criterion of a clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REBs fault diagnosis model is constructed, named after its elements: HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REBs defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results demonstrate three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REBs defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM exceeds 95% for a Pmin range of 500–800 and an m range of 50–300 on the REBs defect dataset with Gaussian white noise added at a Signal-to-Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REBs fault classification accuracy and performs well under Gaussian white noise. PMID:26540059

  8. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform.

    PubMed

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-11-03

    Failures of roller element bearings (REBs) cause unexpected machinery breakdowns, so their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel frequency-band feature extraction method, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide the entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) criterion of a clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REBs fault diagnosis model is constructed, named after its elements: HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REBs defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results demonstrate three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REBs defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM exceeds 95% for a Pmin range of 500-800 and an m range of 50-300 on the REBs defect dataset with Gaussian white noise added at a Signal-to-Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REBs fault classification accuracy and performs well under Gaussian white noise.
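
    The window-selection idea can be sketched with synthetic marginal spectra and a plain Rand index; the band location, window size, and the crude two-means split below are illustrative stand-ins for the paper's HHT and clustering pipeline:

```python
import numpy as np

def rand_index(a, b):
    """Fraction of sample pairs on which two labelings agree."""
    n = len(a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

# synthetic marginal spectra: two fault classes differ only in bins 40-59
rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 10)
spectra = rng.normal(1.0, 0.1, (20, 100))
spectra[10:, 40:60] += 1.0               # class-1 energy in one band only

win = 20
scores = []
for start in range(0, spectra.shape[1], win):
    feat = spectra[:, start:start + win].mean(axis=1)   # window energy feature
    pred = (feat > feat.mean()).astype(int)             # crude 2-way split
    scores.append(rand_index(pred, labels))
best_start = int(np.argmax(scores)) * win               # start of the best CFB
```

    The discriminative window (bins 40-59) separates the classes perfectly and scores a Rand index of 1.0, while uninformative windows score markedly lower, which is exactly the ranking WMSC exploits to build CFBs.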

  9. Performance of a new test strip for freestyle blood glucose monitoring systems.

    PubMed

    Lock, John Paul; Brazg, Ronald; Bernstein, Robert M; Taylor, Elizabeth; Patel, Mona; Ward, Jeanne; Alva, Shridhara; Chen, Ting; Welsh, Zoë; Amor, Walter; Bhogal, Claire; Ng, Ronald

    2011-01-01

    A new strip, designed to enhance ease of use and minimize interference from non-glucose sugars, has been developed to replace the current FreeStyle (Abbott Diabetes Care, Alameda, CA) blood glucose test strip. We evaluated the performance of this new strip. Laboratory evaluation included precision, linearity, dynamic range, and the effects of operating temperature, humidity, altitude, hematocrit, interferents, and blood reapplication. System accuracy, lay user performance, and ease of use for finger capillary blood testing, as well as accuracy for venous blood testing, were evaluated at clinics. Lay users also compared the speed and ease of use of the new strip and the current FreeStyle strip. For glucose concentrations <75 mg/dL, 73%, 100%, and 100% of the individual capillary blood glucose results obtained by lay users fell within ±5, ±10, and ±15 mg/dL, respectively, of the reference. For glucose concentrations ≥75 mg/dL, 68%, 95%, 99%, and 99% of the lay user results fell within ±5%, ±10%, ±15%, and ±20%, respectively, of the reference. Comparable accuracy was obtained in the venous blood study. Lay users found the new test strip easy to use, and faster and easier to use than the current FreeStyle strip. The new strip maintained accuracy under various challenging conditions, including high concentrations of various interferents, sample reapplication up to 60 s, and extremes of hematocrit, altitude, operating temperature, and humidity. Our results demonstrate excellent accuracy of the new FreeStyle test strip and validate the improvements in minimizing interference and enhancing ease of use.

  10. Calibration Assessment of Uncooled Thermal Cameras for Deployment on UAV platforms

    NASA Astrophysics Data System (ADS)

    Aragon, B.; Parkes, S. D.; Lucieer, A.; Turner, D.; McCabe, M.

    2017-12-01

    In recent years an array of miniaturized sensors has been developed and deployed on Unmanned Aerial Vehicles (UAVs). Before useful data can be obtained from these integrations, it is vitally important to quantify sensor accuracy and precision and the cross-sensitivity of retrieved measurements to environmental variables. Small uncooled thermal frame cameras provide a novel solution for monitoring surface temperatures from UAVs with very high spatial resolution, with retrievals being used to investigate heat stress or evapotranspiration. For these studies, accuracies of a few degrees are generally required. Although radiometrically calibrated thermal cameras have recently become commercially available, confirmation of the accuracy of these sensors is required. Here we detail a system for investigating accuracy and precision, start-up stabilisation time, the dependence of retrieved temperatures on ambient temperature, and image vignetting. The calibration system uses a relatively inexpensive blackbody source deployed with the sensor inside an environmental chamber that maintains and controls the ambient temperature. The calibration of a number of different thermal sensors commonly used for UAV deployment was investigated. Vignetting was shown to be a major limitation on sensor accuracy, requiring characterization by imaging a spatially uniform temperature target such as the blackbody. Our results also showed that a stabilization period is required after powering on the sensors and before conducting an aerial survey. Use of the environmental chamber showed that the ambient temperature influences the temperatures retrieved by the different sensors. This study illustrates the importance of determining the calibration and cross-sensitivities of thermal sensors to obtain accurate thermal maps for studying crop ecosystems.
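
    The vignetting characterization amounts to deriving a per-pixel gain map from a frame of a spatially uniform blackbody target; a minimal sketch with a synthetic radial falloff (frame size, falloff strength, and the 30 degC set-point are illustrative assumptions):

```python
import numpy as np

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
vignette = 1.0 - 0.15 * r2                 # synthetic corner falloff

scene = np.full((h, w), 30.0)              # uniform 30 degC blackbody target
observed = scene * vignette                # what an uncorrected sensor reports

gain = observed.mean() / observed          # per-pixel flat-field gain map
corrected = observed * gain                # spatially uniform after correction
```

    Flat-fielding restores spatial uniformity; absolute calibration still requires tying the corrected frame back to the blackbody set-point.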

  11. Membrane triangles with corner drilling freedoms. III - Implementation and performance evaluation

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Alexander, Scott

    1992-01-01

    This paper completes a three-part series on the formulation of 3-node, 9-dof membrane triangles with corner drilling freedoms based on parametrized variational principles. The first four sections cover element implementation details, including determination of optimal parameters and treatment of distributed loads. Then three elements of this type, labeled ALL, FF, and EFF/ANDES, are tested on standard plane stress problems. ALL represents numerically integrated versions of Allman's 1988 triangle; FF is based on the free formulation triangle presented by Bergan and Felippa in 1985; and EFF and ANDES are two different formulations of the optimal triangle derived in Parts I and II. The numerical studies indicate that the ALL, FF, and EFF/ANDES elements are comparable in accuracy for elements of unit aspect ratio. The ALL elements are found to stiffen rapidly in in-plane bending at high aspect ratios, whereas the FF and EFF elements maintain accuracy. The EFF and ANDES implementations have a moderate edge in formation speed over FF.

  12. Product Development and its Comparative Analysis by SLA, SLS and FDM Rapid Prototyping Processes

    NASA Astrophysics Data System (ADS)

    Choudhari, C. M.; Patil, V. D.

    2016-09-01

    The need to capture markets and meet deadlines has expanded the scope for new methods in product design and development. Industries continuously strive to shorten development cycles while delivering high-quality, cost-efficient products to maintain market competitiveness. Rapid Prototyping Techniques (RPT) have thus begun to play a pivotal role in the rapid development cycle of complex products. Dimensional accuracy and surface finish are the cornerstones of Rapid Prototyping (RP), especially when it is used for mould development. This paper deals with the development of a part by the Selective Laser Sintering (SLS), Stereolithography (SLA), and Fused Deposition Modelling (FDM) processes to benchmark and investigate parameters such as material shrinkage rate, dimensional accuracy, time, cost, and surface finish. This comparison helps to establish which process is effective and efficient for mould development. In this research work, emphasis was also given to the design stage of product development to obtain an optimum design solution for an existing product.

  13. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional while maintaining the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation; it is not only robust in preventing the energy functional from being trapped in local minima, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  14. Nonlocal Intracranial Cavity Extraction

    PubMed Central

    Manjón, José V.; Eskildsen, Simon F.; Coupé, Pierrick; Romero, José E.; Collins, D. Louis; Robles, Montserrat

    2014-01-01

    Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes ICV estimation challenging. In this paper, we present a new approach to ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate improved performance in terms of both accuracy and reproducibility while maintaining a reduced computational burden. PMID:25328511

  15. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.
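
    Once the per-wavelength fluence is divided out, sO2 estimation reduces to a small linear unmixing problem in oxy- and deoxyhemoglobin; the extinction values and fluences below are illustrative stand-ins, not measured coefficients:

```python
import numpy as np

# molar extinction of [HbO2, Hb] at two wavelengths (illustrative values only)
E = np.array([[1.1, 0.3],     # wavelength 1: HbO2-dominated
              [0.4, 1.2]])    # wavelength 2: Hb-dominated

c_true = np.array([0.8, 0.2])            # ground truth: 80% sO2
fluence = np.array([1.0, 0.6])           # wavelength-dependent local fluence
pa = fluence * (E @ c_true)              # simulated PA amplitudes

c_raw = np.linalg.solve(E, pa)           # ignoring fluence biases the estimate
so2_raw = c_raw[0] / c_raw.sum()

c_corr = np.linalg.solve(E, pa / fluence)   # divide out fluence first
so2_corr = c_corr[0] / c_corr.sum()
```

    In this toy case the uncorrected estimate overshoots well past the true 80% while the fluence-corrected one recovers it exactly, illustrating why local fluence correction matters at depth.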

  16. Brain collection, standardized neuropathologic assessment, and comorbidity in ADNI participants

    PubMed Central

    Franklin, Erin E.; Perrin, Richard J.; Vincent, Benjamin; Baxter, Michael; Morris, John C.; Cairns, Nigel J.

    2015-01-01

    Introduction: The Alzheimer’s Disease Neuroimaging Initiative Neuropathology Core (ADNI-NPC) facilitates brain donation, ensures standardized neuropathologic assessments, and maintains a tissue resource for research. Methods: The ADNI-NPC coordinates with performance sites to promote autopsy consent, facilitate tissue collection and autopsy administration, and arrange sample delivery to the NPC for assessment using NIA-AA neuropathologic diagnostic criteria. Results: The ADNI-NPC has obtained 45 participant specimens, and neuropathologic assessments have been completed in 36 to date. Challenges in obtaining consent at some sites have limited the voluntary autopsy rate to 58%. Among assessed cases, clinical diagnostic accuracy for Alzheimer disease (AD) is 97%; however, 58% show neuropathologic comorbidities. Discussion: Challenges facing autopsy consent and coordination are largely resource-related. The neuropathologic assessments indicate that ADNI’s clinical diagnostic accuracy for AD is high; however, many AD cases have comorbidities that may impact the clinical presentation, course, and imaging and biomarker results. These neuropathologic data permit multimodal and genetic studies of these comorbidities to improve diagnosis and provide etiologic insights. PMID:26194314

  17. Active Guidance of a Handheld Micromanipulator using Visual Servoing.

    PubMed

    Becker, Brian C; Voros, Sandrine; Maclachlan, Robert A; Hager, Gregory D; Riviere, Cameron N

    2009-05-12

    In microsurgery, a surgeon often deals with anatomical structures whose sizes are close to the limit of human hand accuracy. Robotic assistants can help to push beyond the current state of practice by integrating imaging and robot-assisted tools. This paper demonstrates control of a handheld tremor-reduction micromanipulator with visual servoing techniques, aiding the operator by providing three behaviors: snap-to, motion-scaling, and standoff-regulation. A stereo camera setup viewing the workspace under high magnification tracks the tip of the micromanipulator and the desired target object being manipulated. Individual behaviors activate in task-specific situations when the micromanipulator tip is in the vicinity of the target. We show that the snap-to behavior can reach and maintain a position at a target with an accuracy of 17.5 ± 0.4 μm Root Mean Squared Error (RMSE) distance between the tip and target. Scaling the operator's motions and preventing unwanted contact with non-target objects also provide a larger margin of safety.

  18. Atomic Cholesky decompositions: a route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency.

    PubMed

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-21

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.

  19. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
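
    The decomposition-with-threshold idea can be sketched as a pivoted (incomplete) Cholesky factorization of a generic positive semidefinite matrix; the synthetic low-rank Gram matrix below is only an illustrative stand-in for the atomic two-electron integral matrix:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-10):
    """Cholesky with diagonal pivoting; stop once the largest remaining
    residual diagonal drops below tol (the decomposition threshold that
    controls the fitting error)."""
    n = M.shape[0]
    d = np.diag(M).astype(float).copy()    # residual diagonal of M - L @ L.T
    L = np.zeros((n, 0))
    while d.max() > tol:
        p = int(np.argmax(d))              # pivot on the largest residual
        col = (M[:, p] - L @ L[p]) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d = d - col ** 2
    return L

# synthetic rank-3 Gram matrix standing in for the two-electron integral matrix
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
M = X @ X.T
L = pivoted_cholesky(M)                    # only 3 Cholesky vectors survive
```

    Raising tol keeps fewer Cholesky vectors, which is the tunable accuracy/efficiency trade-off the abstract describes.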

  20. Global and critical test of the perturbation density-functional theory based on extensive simulation of Lennard-Jones fluid near an interface and in confined systems.

    PubMed

    Zhou, Shiqi; Jamnik, Andrej

    2005-09-22

    The structure of a Lennard-Jones (LJ) fluid subjected to diverse external fields, while maintaining equilibrium with the bulk LJ fluid, is studied on the basis of the third-order+second-order perturbation density-functional approximation (DFA). The chosen density and potential parameters for the bulk fluid correspond to conditions in "dangerous" regions of the phase diagram, i.e., near the critical temperature or close to the gas-liquid coexistence curve. The accuracy of the DFA predictions is tested against the results of a grand canonical ensemble Monte Carlo simulation. It is found that the DFA theory presented in this work performs successfully for the nonuniform LJ fluid only on the condition that the required bulk second-order direct correlation function is highly accurate. The present report further indicates that the proposed perturbation DFA is efficient and suitable for both supercritical and subcritical temperatures.

  1. IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.

    PubMed

    Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis

    2018-04-01

    Maintaining quality control in high-throughput screening requires constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full-plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time, or remotely for timely intervention in dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
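    As an illustration of the gravimetric quality-control logic such a networked balance enables, the sketch below converts cumulative balance readings into per-dispense volumes and flags out-of-tolerance dispenses. The function name, density assumption, and 5% tolerance are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of gravimetric QC: given cumulative balance
# readings (g) captured after each dispense, derive per-dispense
# volumes and flag any outside tolerance. Names/thresholds illustrative.

def check_dispenses(readings_g, target_uL, density_g_per_mL=1.0, tol=0.05):
    """Return (volumes_uL, alerts): dispensed volume per step and the
    indices whose relative error exceeds tol (default 5%)."""
    volumes, alerts = [], []
    for i in range(1, len(readings_g)):
        delta_g = readings_g[i] - readings_g[i - 1]
        vol_uL = delta_g / density_g_per_mL * 1000.0  # 1 mL water ~ 1 g
        volumes.append(vol_uL)
        if abs(vol_uL - target_uL) / target_uL > tol:
            alerts.append(i - 1)
    return volumes, alerts

# cumulative weights after four 50 uL dispenses; the third runs short
readings = [10.000, 10.050, 10.100, 10.130, 10.180]
vols, bad = check_dispenses(readings, target_uL=50.0)
print(vols)   # approximately [50.0, 50.0, 30.0, 50.0]
print(bad)    # [2] -- the short dispense
```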

  2. PARAGON: A Systematic, Integrated Approach to Aerosol Observation and Modeling

    NASA Technical Reports Server (NTRS)

    Diner, David J.; Kahn, Ralph A.; Braverman, Amy J.; Davies, Roger; Martonchik, John V.; Menzies, Robert T.; Ackerman, Thomas P.; Seinfeld, John H.; Anderson, Theodore L.; Charlson, Robert J.; hide

    2004-01-01

    Aerosols are generated and transformed by myriad processes operating across many spatial and temporal scales. Evaluation of climate models and their sensitivity to changes, such as in greenhouse gas abundances, requires quantifying natural and anthropogenic aerosol forcings and accounting for other critical factors, such as cloud feedbacks. High accuracy is required to provide sufficient sensitivity to perturbations, separate anthropogenic from natural influences, and develop confidence in inputs used to support policy decisions. Although many relevant data sources exist, the aerosol research community does not currently have the means to combine these diverse inputs into an integrated data set for maximum scientific benefit. Bridging observational gaps, adapting to evolving measurements, and establishing rigorous protocols for evaluating models are necessary, while simultaneously maintaining consistent, well-understood accuracies. The Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) concept represents a systematic, integrated approach to global aerosol characterization, bringing together modern measurement and modeling techniques, geospatial statistics methodologies, and high-performance information technologies to provide the machinery necessary for achieving a comprehensive understanding of how aerosol physical, chemical, and radiative processes impact the Earth system. We outline a framework for integrating and interpreting observations and models and establishing an accurate, consistent, and cohesive long-term data record.

  3. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
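    The combination of a symplectic time integrator with Fourier-based spatial derivatives can be sketched in one dimension. Below, a second-order Störmer-Verlet (leapfrog) scheme, the simplest symplectic integrator, stands in for the paper's higher-order symplectic method; good conservation of the discrete energy is the structure-preserving property at stake. All parameters are illustrative.

```python
import numpy as np

# 1D acoustic wave u_tt = c^2 u_xx on a periodic domain.
# Space: Fourier spectral Laplacian. Time: Stoermer-Verlet (leapfrog),
# a symplectic integrator in kick-drift-kick form.
n, Lx, c = 256, 2 * np.pi, 1.0
x = np.arange(n) * Lx / n
k = np.fft.fftfreq(n, d=Lx / n) * 2 * np.pi

def laplacian(u):
    return np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

def energy(u, p):
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    return 0.5 * np.sum(p ** 2 + c ** 2 * ux ** 2) * (Lx / n)

u = np.exp(-40.0 * (x - np.pi) ** 2)   # initial pulse
p = np.zeros(n)                        # u_t
dt = 0.5 * (Lx / n) / c                # inside the stability limit
E0 = energy(u, p)
for _ in range(2000):
    p += 0.5 * dt * c ** 2 * laplacian(u)   # half kick
    u += dt * p                             # drift
    p += 0.5 * dt * c ** 2 * laplacian(u)   # half kick
E1 = energy(u, p)
print(E0, E1)   # discrete energy is well preserved over many steps
```

    A non-symplectic integrator (e.g. forward Euler) would show secular energy growth over the same run; the bounded energy error here is the "structure-preserving" behavior.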

  4. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
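    A property look-up table of the kind described can be sketched as bilinear interpolation on a (pressure, temperature) grid. The example below tabulates density from the ideal-gas law purely as a stand-in for REFPROP data; real transcritical property tables are strongly non-linear, but the query mechanics are the same.

```python
import numpy as np

# Hypothetical stand-in for a REFPROP-style table: density tabulated on
# a (pressure, temperature) grid, queried by bilinear interpolation.
# Grid values use the ideal-gas law purely for illustration.
P = np.linspace(1e6, 1e7, 10)       # Pa
T = np.linspace(100.0, 400.0, 16)   # K
R = 287.0                           # J/(kg K), air-like gas constant
rho_tab = P[:, None] / (R * T[None, :])

def lookup_rho(p, t):
    i = np.clip(np.searchsorted(P, p) - 1, 0, len(P) - 2)
    j = np.clip(np.searchsorted(T, t) - 1, 0, len(T) - 2)
    fp = (p - P[i]) / (P[i + 1] - P[i])
    ft = (t - T[j]) / (T[j + 1] - T[j])
    return ((1 - fp) * (1 - ft) * rho_tab[i, j]
            + fp * (1 - ft) * rho_tab[i + 1, j]
            + (1 - fp) * ft * rho_tab[i, j + 1]
            + fp * ft * rho_tab[i + 1, j + 1])

p, t = 3.3e6, 217.0
print(lookup_rho(p, t), p / (R * t))   # interpolated vs exact
```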

  5. Sparse reconstruction of breast MRI using homotopic L0 minimization in a regional sparsified domain.

    PubMed

    Wong, Alexander; Mishra, Akshaya; Fieguth, Paul; Clausi, David A

    2013-03-01

    The use of MRI for early breast examination and screening of asymptomatic women has become increasingly popular, given its ability to provide detailed tissue characteristics that cannot be obtained using other imaging modalities such as mammography and ultrasound. Recent application-oriented developments in compressed sensing theory have shown that certain types of magnetic resonance images are inherently sparse in particular transform domains, and as such can be reconstructed with a high level of accuracy from highly undersampled k-space data below Nyquist sampling rates using homotopic L0 minimization schemes, which holds great potential for significantly reducing acquisition time. An important consideration in the use of such homotopic L0 minimization schemes is the choice of sparsifying transform. In this paper, a regional differential sparsifying transform is investigated for use within a homotopic L0 minimization framework for reconstructing breast MRI. By taking local regional characteristics into account, the regional differential sparsifying transform can better account for signal variations and fine details that are characteristic of breast MRI than the popular finite differential transform, while still maintaining strong structure fidelity. Experimental results show that good breast MRI reconstruction accuracy can be achieved compared to existing methods.

  6. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  7. Five- and six-electron harmonium atoms: Highly accurate electronic properties and their application to benchmarking of approximate 1-matrix functionals

    NASA Astrophysics Data System (ADS)

    Cioslowski, Jerzy; Strasburger, Krzysztof

    2018-04-01

    Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid state physics. In the case of the five-electron species, the availability of the new data for a wide range of the confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at the heart of the density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment that the present implementations of DMFT are of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive, as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.

  8. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004

  9. Nonlinear temperature dependence of glue-induced birefringence in polarization maintaining FBG sensors

    NASA Astrophysics Data System (ADS)

    Hopf, Barbara; Koch, Alexander W.; Roths, Johannes

    2016-05-01

    Glue-induced stresses decrease the accuracy of surface-mounted fiber Bragg gratings (FBGs). Significant temperature-dependent glue-induced birefringence was verified when a thermally cured epoxy-based bonding technique was used. Measuring the peak separation of two azimuthally aligned FBGs in polarization-maintaining (PM) fibers with a polarization-resolved measurement set-up, over a temperature range between -30°C and 150°C, revealed high glue-induced stresses at low temperatures. Peak separations of about 60 pm and a nonlinear temperature dependence of the glue-induced birefringence, due to stress relaxation processes and the visco-elastic behavior of the adhesive used, were observed.

  10. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    NOAA’s newly arrived Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, is delivered to the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  11. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    Workers are on hand to receive NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, into the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  12. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    Workers transfer NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, from the airlock of Building 2 to the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  13. A universal deep learning approach for modeling the flow of patients under different severities.

    PubMed

    Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L

    2018-02-01

    The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, relative A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is a key prerequisite for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data are collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model to exploit their universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid search in one pass. As for feature selection, our improved GA-based feature selection algorithm outperforms a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR).
    As for the prediction accuracy of the proposed integrated framework, compared with frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine-learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover the underlying relationship between patient flows and input features. Predicted values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
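    One plausible reading of a fitness-based crossover for feature selection can be sketched with a toy genetic algorithm: each child bit is inherited from a parent with probability proportional to that parent's fitness, so feature combinations carried by fitter parents tend to survive together. The operator, fitness function, and all parameters below are illustrative assumptions, not the paper's implementation.

```python
import random

# Toy GA for feature selection with a "fitness-based" crossover: each
# child bit is drawn from a parent with probability proportional to
# that parent's fitness. Everything here is an illustrative assumption.
random.seed(1)
N_FEAT = 12
USEFUL = {0, 3, 7}          # ground-truth informative features (toy)

def fitness(mask):
    hits = sum(1 for f in USEFUL if mask[f])
    return hits - 0.1 * sum(mask)       # reward hits, penalize subset size

def crossover(a, b):
    fa = max(fitness(a), 1e-9)          # guard against non-positive fitness
    fb = max(fitness(b), 1e-9)
    pa = fa / (fa + fb)
    return [a[i] if random.random() < pa else b[i] for i in range(N_FEAT)]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEAT)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                    # elitism: keep the best unchanged
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]
best = max(pop, key=fitness)
print(sorted(i for i, b in enumerate(best) if b))
```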

  14. Innovative use of global navigation satellite systems for flight inspection

    NASA Astrophysics Data System (ADS)

    Kim, Eui-Ho

    The International Civil Aviation Organization (ICAO) mandates flight inspection in every country to provide safety during flight operations. Among many criteria of flight inspection, airborne inspection of Instrument Landing Systems (ILS) is very important because the ILS is the primary landing guidance system worldwide. During flight inspection of the ILS, accuracy in ILS landing guidance is checked by using a Flight Inspection System (FIS). Therefore, a flight inspection system must have high accuracy in its positioning capability to detect any deviation so that accurate guidance of the ILS can be maintained. Currently, there are two Automated Flight Inspection Systems (AFIS). One is called Inertial-based AFIS, and the other one is called Differential GPS-based (DGPS-based) AFIS. The Inertial-based AFIS enables efficient flight inspection procedures, but its drawback is high cost because it requires a navigation-grade Inertial Navigation System (INS). On the other hand, the DGPS-based AFIS has relatively low cost, but flight inspection procedures require landing and setting up a reference receiver. Most countries use either one of the systems based on their own preferences. There are around 1200 ILS in the U.S., and each ILS must be inspected every 6 to 9 months. Therefore, it is important to manage the airborne inspection of the ILS in a very efficient manner. For this reason, the Federal Aviation Administration (FAA) mainly uses the Inertial-based AFIS, which has better efficiency than the DGPS-based AFIS in spite of its high cost. Obviously, the FAA spends tremendous resources on flight inspection. This thesis investigates the value of GPS and the FAA's augmentation to GPS for civil aviation called the Wide Area Augmentation System (or WAAS) for flight inspection. 
Because standard GPS or WAAS position outputs cannot meet the required accuracy for flight inspection, in this thesis, various algorithms are developed to improve the positioning ability of Flight Inspection Systems (FIS) by using GPS and WAAS in novel manners. The algorithms include Adaptive Carrier Smoothing (ACS), optimizing WAAS accuracy and stability, and reference point-based precise relative positioning for real-time and near-real-time applications. The developed systems are WAAS-aided FIS, WAAS-based FIS, and stand-alone GPS-based FIS. These systems offer both high efficiency and low cost, and they have different advantages over one another in terms of accuracy, integrity, and worldwide availability. The performance of each system is tested with experimental flight test data and shown to have accuracy that is sufficient for flight inspection and superior to the current Inertial-based AFIS.
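    Carrier smoothing, which an algorithm like Adaptive Carrier Smoothing builds on, is classically implemented as a Hatch filter: noisy code pseudoranges are smoothed using the much cleaner epoch-to-epoch carrier-phase deltas. The sketch below uses a fixed smoothing window on synthetic data; the thesis's adaptive window selection is not reproduced, and all numbers are illustrative.

```python
import random

# Classic Hatch filter: propagate the previous smoothed range with the
# carrier-phase delta, then blend in a small fraction of the noisy code
# measurement. Fixed window N (the adaptive variant tunes N).
def hatch_filter(code, carrier, N=100):
    smoothed = [code[0]]
    for t in range(1, len(code)):
        w = min(t + 1, N)
        pred = smoothed[-1] + (carrier[t] - carrier[t - 1])  # carry forward
        smoothed.append(pred + (code[t] - pred) / w)         # blend in code
    return smoothed

random.seed(0)
true_range = [20_000_000.0 + 5.0 * t for t in range(500)]  # 5 m/s geometry
code = [r + random.gauss(0.0, 3.0) for r in true_range]    # ~3 m code noise
carrier = [r + random.gauss(0.0, 0.02) for r in true_range]  # ~2 cm noise
sm = hatch_filter(code, carrier)
err_raw = sum(abs(c - r) for c, r in zip(code, true_range)) / len(code)
err_sm = sum(abs(s - r) for s, r in zip(sm, true_range)) / len(sm)
print(err_raw, err_sm)   # smoothing cuts code noise substantially
```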

  15. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  16. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
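    The equal-error-distribution idea can be sketched in one dimension as equidistribution of a gradient-based weight: grid points are placed so that each cell carries an equal share of the cumulative weight, clustering points where the solution gradient is steep. The weight smoothing below loosely stands in for the clustering/smoothness balance; the details are illustrative, not SAGE's algorithm.

```python
import numpy as np

# 1D equal-error redistribution: place n grid points so each cell holds
# an equal share of a gradient-based weight (monitor) function.
n = 41
x = np.linspace(0.0, 1.0, 401)                     # fine background grid
f = np.tanh(20.0 * (x - 0.5))                      # "flow" with sharp front
w = 1.0 + np.abs(np.gradient(f, x))                # weight: 1 + |df/dx|
w = np.convolve(w, np.ones(9) / 9.0, mode="same")  # smooth the weight
W = np.concatenate([[0.0],
                    np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
levels = np.linspace(0.0, W[-1], n)                # equal increments of W
x_new = np.interp(levels, W, x)                    # invert cumulative weight

mid = np.diff(x_new)[n // 2 - 1]    # cell at the sharp front
edge = np.diff(x_new)[0]            # cell in the smooth region
print(mid, edge)   # cells shrink where the gradient is steep
```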

  17. An Improved Treatment of External Boundary for Three-Dimensional Flow Computations

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.; Vatsa, Veer N.

    1997-01-01

    We present an innovative numerical approach for setting highly accurate nonlocal boundary conditions at the external computational boundaries when calculating three-dimensional compressible viscous flows over finite bodies. The approach is based on application of the difference potentials method by V. S. Ryaben'kii and extends our previous technique developed for the two-dimensional case. The new boundary conditions methodology has been successfully combined with the NASA-developed code TLNS3D and used for the analysis of wing-shaped configurations in subsonic and transonic flow regimes. As demonstrated by the computational experiments, the improved external boundary conditions allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable speedup of convergence of the multigrid iterations.

  18. Boundary Condition for Modeling Semiconductor Nanostructures

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Oyafuso, Fabiano; von Allmen, Paul; Klimeck, Gerhard

    2006-01-01

    A recently proposed boundary condition for atomistic computational modeling of semiconductor nanostructures (particularly, quantum dots) is an improved alternative to two prior such boundary conditions. As explained, this boundary condition helps to reduce the amount of computation while maintaining accuracy.

  19. TH-A-9A-10: Prostate SBRT Delivery with Flattening-Filter-Free Mode: Benefit and Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Yuan, L; Sheng, Y

    Purpose: Flattening-filter-free (FFF) beam mode offered on the TrueBeam™ linac enables delivering IMRT at a 2400 MU/min dose rate. This study investigates the benefit and delivery accuracy of using a high dose rate in the context of prostate SBRT. Methods: 8 prostate SBRT patients were retrospectively studied. In 5 cases treated with a 600-MU/min dose rate, continuous prostate motion data acquired during radiation-beam-on was used to analyze motion range. In addition, the initial 1/3 of the prostate motion trajectory during each radiation-beam-on was separated to simulate the motion range if 2400 MU/min were used. To analyze delivery accuracy in FFF mode, MLC trajectory log files from an additional 3 cases treated at 2400 MU/min were acquired. These log files record MLC expected and actual positions every 20 ms, and therefore can be used to reveal delivery accuracy. Results: (1) Benefit. On average, treatment at 600 MU/min takes 30 s per beam, whereas 2400 MU/min requires only 11 s. When shortening delivery time to ~1/3, the prostate motion range was significantly smaller (p<0.001). The largest motion reduction occurred in the Sup-Inf direction, from [−3.3mm, 2.1mm] to [−1.7mm, 1.7mm], followed by a reduction from [−2.1mm, 2.4mm] to [−1.0mm, 2.4mm] in the Ant-Pos direction. No change was observed in the LR direction [−0.8mm, 0.6mm]. The combined motion amplitude (vector norm) confirms that average motion and ranges are significantly smaller when beam-on was limited to the first 1/3 of actual delivery time. (2) Accuracy. Trajectory log file analysis showed excellent delivery accuracy at 2400 MU/min. Most leaf deviations during beam-on were within 0.07mm (99th percentile). Maximum leaf-opening deviations during each beam-on were all under 0.1mm for all leaves. The dose rate was maintained at 2400 MU/min during beam-on without dipping. Conclusion: Delivering prostate SBRT at 2400 MU/min is both beneficial and accurate. High dose rates significantly reduced both treatment time and intra-beam prostate motion range. Excellent delivery accuracy was confirmed with very small leaf motion deviation.

  20. Study on data acquisition system for living environmental information for biofication of living spaces

    NASA Astrophysics Data System (ADS)

    Shimoyama, Norihisa; Mita, Akira

    2008-03-01

    In Japan's rapidly aging society, the number of elderly people living alone increases every year. These elderly people increasingly need to maintain as independent a life as possible in their own homes. It is necessary to make living spaces that assist in providing safe and comfortable lives. "Biofication of Living Spaces" is proposed with the concept of creating safe and pleasant living environments. It implies learning from biological systems, and applying to living spaces features such as high adaptability and excellent tolerance to environmental changes. As a first step towards realizing "Biofied Spaces", a system for acquiring and storing information must be developed. This system is similar to the five human senses. The information acquired includes environmental information such as temperature, human behavior, psychological state, and location of furniture. This study addresses human behavior, as it is the most important factor in the design of a living space. In the present study, pyroelectric infrared sensors were chosen for human behavior recognition. The pyroelectric infrared sensor is advantageous in that there is no limitation on the number of sensors placed in a single space, because the sensors do not interfere with each other. Wavelet analysis was applied to the output time histories of the pyroelectric infrared sensors. The system successfully classified walking patterns with 99.5% accuracy for walking direction (from right or left) and 85.7% accuracy for distance on 440 pre-learned patterns, and over 80% accuracy for walking direction on 720 non-learned patterns.
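    The wavelet analysis step can be illustrated with a plain Haar transform: decomposing a sensor time history by scale and using per-scale energies as features separates fast fluctuations from slow ones. The signals and feature choice below are synthetic illustrations, not the study's data or exact pipeline.

```python
import numpy as np

# Plain Haar wavelet transform of a (power-of-two length) time history;
# per-scale detail energies serve as simple classification features.
def haar_dwt(sig):
    """Return the list of detail coefficients, finest level first."""
    s = np.asarray(sig, dtype=float)
    details = []
    while len(s) >= 2:
        details.append((s[0::2] - s[1::2]) / np.sqrt(2.0))  # detail
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)              # approximation
    return details

def scale_energies(sig):
    return [float(np.sum(d ** 2)) for d in haar_dwt(sig)]

# synthetic stand-ins for sensor outputs (not the study's data)
t = np.arange(64)
slow = np.sin(2 * np.pi * t / 32.0)   # slow variation
fast = np.sin(2 * np.pi * t / 4.0)    # fast fluctuation
print(scale_energies(slow)[0], scale_energies(fast)[0])
```

    The fast signal concentrates its energy in the finest detail level, while the slow signal leaves it nearly empty, which is the kind of scale signature a classifier can exploit.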

  1. Robust vehicle detection under various environmental conditions using an infrared thermal camera and its application to road traffic flow monitoring.

    PubMed

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-06-17

We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on the thermal energy reflected by tires. We performed experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal.

  2. Effectiveness of glucose monitoring systems modified for the visually impaired.

    PubMed

    Bernbaum, M; Albert, S G; Brusca, S; McGinnis, J; Miller, D; Hoffmann, J W; Mooradian, A D

    1993-10-01

To compare three glucose meters modified for use by individuals with diabetes and visual impairment regarding accuracy, precision, and clinical reliability. Ten subjects with diabetes and visual impairment performed self-monitoring of blood glucose using each of three commercially available blood glucose meters modified for visually impaired users (the AccuChek Freedom [Boehringer Mannheim, Indianapolis, IN], the Diascan SVM [Home Diagnostics, Eatontown, NJ], and the One Touch [Lifescan, Milpitas, CA]). The meters were independently evaluated by a laboratory technologist for precision and accuracy. Only two meters, the AccuChek and the One Touch, were acceptable with regard to laboratory precision (coefficient of variation < 10%), and these two did not differ significantly with regard to laboratory estimates of accuracy. A great discrepancy in clinical reliability, however, was observed between them. The AccuChek maintained a high degree of reliability (y = 0.99x + 0.44, r = 0.97, P = 0.001). The visually impaired subjects were unable to perform reliable testing using the One Touch system because of a lack of appropriate tactile landmarks and auditory signals. In addition to laboratory assessments of glucose meters, monitoring systems designed for the visually impaired must include adequate tactile and audible feedback features to allow for the acquisition and placement of appropriate blood samples.

  3. Methodology for rheological testing of engineered biomaterials at low audio frequencies

    NASA Astrophysics Data System (ADS)

    Titze, Ingo R.; Klemuk, Sarah A.; Gray, Steven

    2004-01-01

    A commercial rheometer (Bohlin CVO120) was used to mechanically test materials that approximate vocal-fold tissues. Application is to frequencies in the low audio range (20-150 Hz). Because commercial rheometers are not specifically designed for this frequency range, a primary problem is maintaining accuracy up to (and beyond) the mechanical resonance frequency of the rotating shaft assembly. A standard viscoelastic material (NIST SRM 2490) has been used to calibrate the rheometric system for an expanded frequency range. Mathematically predicted response curves are compared to measured response curves, and an error analysis is conducted to determine the accuracy to which the elastic modulus and the shear modulus can be determined in the 20-150-Hz region. Results indicate that the inertia of the rotating assembly and the gap between the plates need to be known (or determined empirically) to a high precision when the measurement frequency exceeds the resonant frequency. In addition, a phase correction is needed to account for the magnetic inertia (inductance) of the drag cup motor. Uncorrected, the measured phase can go below the theoretical limit of -π. This can produce large errors in the viscous modulus near and above the resonance frequency. With appropriate inertia and phase corrections, +/-10% accuracy can be obtained up to twice the resonance frequency.

  4. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

Closed-loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. In order to achieve high-accuracy colorimetric measurement, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measurement data and the output colorimetric data. Using this method, we can increase the accuracy of the regression and thereby improve the accuracy of the color conversion.
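The two-stage pipeline described (1D pre-linearization tables followed by a regression) can be sketched with a least-squares fit; the synthetic gray-ramp data, nonlinearity, and target matrix below are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np

def build_prelin_table(raw_gray, true_gray):
    """Stage 1: per-channel 1-D pre-linearization table (piecewise-linear
    interpolation) built from gray-patch measurements."""
    raw_gray = np.asarray(raw_gray, float)
    true_gray = np.asarray(true_gray, float)
    order = np.argsort(raw_gray)
    return lambda v: np.interp(v, raw_gray[order], true_gray[order])

def fit_color_matrix(lin_rgb, xyz):
    """Stage 2: least-squares linear regression (3x3 matrix plus offset)
    from linearized sensor RGB to colorimetric XYZ."""
    A = np.hstack([lin_rgb, np.ones((len(lin_rgb), 1))])
    M, *_ = np.linalg.lstsq(A, xyz, rcond=None)
    return M  # shape (4, 3)

def convert(raw_rgb, tables, M):
    """Apply pre-linearization tables, then the regression matrix."""
    lin = np.column_stack([t(raw_rgb[:, i]) for i, t in enumerate(tables)])
    return np.hstack([lin, np.ones((len(lin), 1))]) @ M
```

Because the tables absorb the sensor nonlinearity first, a simple linear regression suffices for stage 2, which is the accuracy improvement the paper argues for.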

  5. Design of fluidic self-assembly bonds for precise component positioning

    NASA Astrophysics Data System (ADS)

    Ramadoss, Vivek; Crane, Nathan B.

    2008-02-01

Self-assembly is a promising alternative to conventional pick-and-place robotic assembly of micro components. Its benefits include parallel integration of parts with low equipment costs. Various approaches to self-assembly have been demonstrated, yet demanding applications like the assembly of micro-optical devices require increased positioning accuracy. This paper proposes a new method for the design of self-assembly bonds that addresses this need. Current methods have zero force at the desired assembly position and low stiffness, which allows small disturbance forces to create significant positioning errors. The proposed method uses a substrate assembly feature to provide a high-accuracy alignment guide for the part. The capillary bond regions of the part and substrate are then modified to create a non-zero positioning force that maintains the part in the desired assembly position. Capillary force models show that this force aligns the part to the substrate assembly feature and reduces the sensitivity of part position to process variation. Thus, the new configuration can substantially improve the positioning accuracy of capillary self-assembly, resulting in a dramatic decrease in positioning errors in the micro parts. Various binding site designs are analyzed and guidelines are proposed for the design of an effective assembly bond using this new approach.

  6. Cardiac risk stratification in renal transplantation using a form of artificial intelligence.

    PubMed

    Heston, T F; Norman, D J; Barry, J M; Bennett, W M; Wilson, R A

    1997-02-15

    The purpose of this study was to determine if an expert network, a form of artificial intelligence, could effectively stratify cardiac risk in candidates for renal transplant. Input into the expert network consisted of clinical risk factors and thallium-201 stress test data. Clinical risk factor screening alone identified 95 of 189 patients as high risk. These 95 patients underwent thallium-201 stress testing, and 53 had either reversible or fixed defects. The other 42 patients were classified as low risk. This algorithm made up the "expert system," and during the 4-year follow-up period had a sensitivity of 82%, specificity of 77%, and accuracy of 78%. An artificial neural network was added to the expert system, creating an expert network. Input into the neural network consisted of both clinical variables and thallium-201 stress test data. There were 5 hidden nodes and the output (end point) was cardiac death. The expert network increased the specificity of the expert system alone from 77% to 90% (p < 0.001), the accuracy from 78% to 89% (p < 0.005), and maintained the overall sensitivity at 88%. An expert network based on clinical risk factor screening and thallium-201 stress testing had an accuracy of 89% in predicting the 4-year cardiac mortality among 189 renal transplant candidates.
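The architecture described, a rule-based expert system gating a small neural network with 5 hidden nodes, can be sketched as a forward pass. The weights, feature vector, and activation below are placeholders, not the study's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expert_network(features, W1, b1, W2, b2, high_risk):
    """Forward pass of a 5-hidden-node network gated by the expert system.

    Patients the rule-based screen already classifies as low risk bypass
    the network (estimated probability 0); high-risk candidates receive a
    neural estimate of the cardiac-death end point.
    """
    if not high_risk:
        return 0.0
    h = sigmoid(W1 @ features + b1)       # hidden layer (5 nodes)
    return float(sigmoid(W2 @ h + b2))    # output: P(cardiac death)
```

The gating reproduces the two-stage logic of the paper (clinical screening first, then refinement), which is what raised specificity without sacrificing sensitivity.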

  7. Extended Kalman filtering for continuous volumetric MR-temperature imaging.

    PubMed

    Denis de Senneville, Baudouin; Roujol, Sébastien; Hey, Silke; Moonen, Chrit; Ries, Mario

    2013-04-01

Real-time magnetic resonance (MR) thermometry has evolved into the method of choice for the guidance of high-intensity focused ultrasound (HIFU) interventions. For this role, MR-thermometry should preferably have a high temporal and spatial resolution and allow observing the temperature over the entire targeted area and its vicinity with high accuracy. In addition, the precision of real-time MR-thermometry for therapy guidance is generally limited by the available signal-to-noise ratio (SNR) and the influence of physiological noise. MR-guided HIFU would benefit from large-coverage volumetric temperature maps, including characterization of volumetric heating trajectories as well as near- and far-field heating. In this paper, continuous volumetric MR-temperature monitoring was obtained as follows. The targeted area was continuously scanned during the heating process by a multi-slice sequence. Measured data and a priori knowledge of the 3-D data, derived from a forecast based on a physical model, were combined using an extended Kalman filter (EKF). The proposed reconstruction improved the temperature measurement resolution and precision while maintaining guaranteed output accuracy. The method was evaluated experimentally ex vivo on a phantom and in vivo on a porcine kidney, using HIFU heating. In the in vivo experiment, it allowed the reconstruction from a spatio-temporally under-sampled data set (with an update rate for each voxel of 1.143 s) to a 3-D dataset covering a field of view of 142.5×285×54 mm(3) with a voxel size of 3×3×6 mm(3) and a temporal resolution of 0.127 s. The method also provided noise reduction while having minimal impact on accuracy and latency.

  8. A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy

    PubMed Central

    Krebs, C. R.; Li, Ling; Wolberg, Alisa S.; Oldenburg, Amy L.

    2015-01-01

    Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young’s modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa–27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young’s modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring. PMID:26233406
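The stated proportionality between the fundamental longitudinal resonance and the square root of the Young's modulus can be made concrete for an idealized rod fixed at one end: f1 = (1/(4L))·sqrt(E/ρ). This textbook geometry and boundary condition are assumptions for illustration, not the CEMport's actual calibration.

```python
import math

def resonance_from_modulus(E_pa, length_m, density_kg_m3):
    """Fundamental longitudinal resonance of a fixed-free rod:
    f1 = (1/(4L)) * sqrt(E/rho)."""
    return math.sqrt(E_pa / density_kg_m3) / (4.0 * length_m)

def youngs_modulus_from_resonance(f1_hz, length_m, density_kg_m3):
    """Invert the relation: the wave speed sqrt(E/rho) is 4*L*f1,
    so E = rho * (4*L*f1)^2."""
    c = 4.0 * length_m * f1_hz
    return density_kg_m3 * c * c
```

For a hypothetical 5 mm clot column of roughly plasma density (~1050 kg/m3), a modulus in the kPa range maps to resonances of tens of Hz, consistent with the acoustic regime the instrument operates in.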

  9. Construction and Use of Resting 12-Lead High Fidelity ECG "SuperScores" in Screening for Heart Disease

    NASA Technical Reports Server (NTRS)

    Schlegel, T. T.; Arenare, B.; Greco, E. C.; DePalma, J. L.; Starc, V.; Nunez, T.; Medina, R.; Jugo, D.; Rahman, M.A.; Delgado, R.

    2007-01-01

    We investigated the accuracy of several conventional and advanced resting ECG parameters for identifying obstructive coronary artery disease (CAD) and cardiomyopathy (CM). Advanced high-fidelity 12-lead ECG tests (approx. 5-min supine) were first performed on a "training set" of 99 individuals: 33 with ischemic or dilated CM and low ejection fraction (EF less than 40%); 33 with catheterization-proven obstructive CAD but normal EF; and 33 age-/gender-matched healthy controls. Multiple conventional and advanced ECG parameters were studied for their individual and combined retrospective accuracies in detecting underlying disease, the advanced parameters falling within the following categories: 1) Signal averaged ECG, including 12-lead high frequency QRS (150-250 Hz) plus multiple filtered and unfiltered parameters from the derived Frank leads; 2) 12-lead P, QRS and T-wave morphology via singular value decomposition (SVD) plus signal averaging; 3) Multichannel (12-lead, derived Frank lead, SVD lead) beat-to-beat QT interval variability; 4) Spatial ventricular gradient (and gradient component) variability; and 5) Heart rate variability. Several multiparameter ECG SuperScores were derivable, using stepwise and then generalized additive logistic modeling, that each had 100% retrospective accuracy in detecting underlying CM or CAD. The performance of these same SuperScores was then prospectively evaluated using a test set of another 120 individuals (40 new individuals in each of the CM, CAD and control groups, respectively). All 12-lead ECG SuperScores retrospectively generated for CM continued to perform well in prospectively identifying CM (i.e., areas under the ROC curve greater than 0.95), with one such score (containing just 4 components) maintaining 100% prospective accuracy. SuperScores retrospectively generated for CAD performed somewhat less accurately, with prospective areas under the ROC curve typically in the 0.90-0.95 range. 
We conclude that resting 12-lead high-fidelity ECG employing and combining the results of several advanced ECG software techniques shows great promise as a rapid and inexpensive tool for screening of heart disease.

  10. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. 
As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.

  11. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    PubMed Central

    Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM

    2013-01-01

    Image segmentation has become a vital and often rate limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. PMID:23685866
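The Dice similarity coefficient used in the evaluation has a standard definition (twice the overlap divided by the sum of the two region sizes) that can be computed directly; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Computing this per rater against a ground-truth estimate (e.g., a STAPLE consensus) is how inter-rater variance and accuracy comparisons like the one above are typically quantified.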

  12. Assessment of Different Discrete Particle Methods Ability To Predict Gas-Particle Flow in a Small-Scale Fluidized Bed

    DOE PAGES

    Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane

    2017-06-21

Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These model differences lead to a wide range of accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling particle-particle collisions with TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles into a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with a solids stress model leads to a large loss in accuracy for only a small increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed, which rivals that of MP-PIC while maintaining much better accuracy.

  14. Accurate, consistent, and fast droplet splitting and dispensing in electrowetting on dielectric digital microfluidics

    NASA Astrophysics Data System (ADS)

    Nikapitiya, N. Y. Jagath B.; Nahar, Mun Mun; Moon, Hyejin

    2017-12-01

This letter reports two novel electrode design considerations that address two important aspects of EWOD operation: (1) highly consistent volume of the generated droplets and (2) highly improved accuracy of the generated droplet volume. Based on the design principles investigated, two novel designs are proposed: an L-junction electrode design offering high-throughput droplet generation, and a Y-junction electrode design that splits a droplet very quickly while maintaining equal volumes in each part. Devices with the novel designs were fabricated and tested, and the results are compared with those of the conventional approach. It is demonstrated that the inaccuracy and inconsistency of the droplet volume dispensed by the novel electrode designs are as low as 0.17% and 0.10%, respectively, while those of the conventional approach are 25% and 0.76%, respectively. The dispensing frequency is enhanced from 4 to 9 Hz by the novel design.

  15. Detection of dysregulated protein-association networks by high-throughput proteomics predicts cancer vulnerabilities.

    PubMed

    Lapek, John D; Greninger, Patricia; Morris, Robert; Amzallag, Arnaud; Pruteanu-Malinici, Iulian; Benes, Cyril H; Haas, Wilhelm

    2017-10-01

The formation of protein complexes and the co-regulation of the cellular concentrations of proteins are essential mechanisms for cellular signaling and for maintaining homeostasis. Here we use isobaric-labeling multiplexed proteomics to analyze protein co-regulation and show that this allows the identification of protein-protein associations with high accuracy. We apply this 'interactome mapping by high-throughput quantitative proteome analysis' (IMAHP) method to a panel of 41 breast cancer cell lines and show that deviations of the observed protein co-regulations in specific cell lines from the consensus network affect cellular fitness. Furthermore, these aberrant interactions serve as biomarkers that predict the drug sensitivity of cell lines in screens across 195 drugs. We expect that IMAHP can be broadly used to gain insight into how changing landscapes of protein-protein associations affect the phenotype of biological systems.

  16. Algorithms for spacecraft formation flying navigation based on wireless positioning system measurements

    NASA Astrophysics Data System (ADS)

    Goh, Shu Ting

Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. Applying the extended Kalman filter to the spacecraft formation navigation problem at times results in high estimation errors and instabilities in state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aimed at increasing the estimation stability and improving the estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear system. A linear estimator is designed in the linear domain and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation position estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for estimating the absolute positions of spacecraft flying in formation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee), at which the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit, however, causes filter instability, so two variables are introduced into the constrained Kalman filter to maintain stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. 
Simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is also proposed in this dissertation. In wireless localizing sensors, measurement error grows with the distance the signal travels and with sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The estimation performance is compared to that of the standard Kalman filter in two scenarios. The first scenario assumes using a wireless local positioning system (WLPS) in a GPS-denied environment; the second assumes the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability when GPS is available. A computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF, and a higher ellipsoid error probable percentage than the standard measurement fusion method. Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
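The distance-weighting idea behind the WMFKF can be sketched as inverse-variance fusion of several wireless position fixes, with each fix's noise variance growing with its measured signal travel distance. The noise model and constants below are illustrative assumptions, not the dissertation's formulation.

```python
import numpy as np

def fuse_weighted(measurements, distances, sigma0=1.0, k=0.05):
    """Inverse-variance fusion of scalar position fixes.

    Each measurement z_i is assigned a standard deviation sigma0 + k*d_i
    (noise grows with signal travel distance d_i, an assumed model), and
    the fixes are combined into a single fused measurement whose variance
    is smaller than any individual one. The fused pair (z, var) can then
    feed a standard Kalman update.
    """
    z = np.asarray(measurements, float)
    d = np.asarray(distances, float)
    var = (sigma0 + k * d) ** 2
    w = 1.0 / var                       # inverse-variance weights
    z_fused = np.sum(w * z) / np.sum(w)
    var_fused = 1.0 / np.sum(w)
    return z_fused, var_fused
```

Because nearby (low-distance) fixes dominate the weighted sum, a distant noisy sensor cannot drag the estimate far, which is consistent with the error staying bounded as the detection range limit grows.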

  17. Bench performance of ventilators during simulated paediatric ventilation.

    PubMed

    Park, M A J; Freebairn, R C; Gomersall, C D

    2013-05-01

This study compares the accuracy and capabilities of various ventilators using a paediatric acute respiratory distress syndrome lung model with various compliance and respiratory rate settings. The study was done in three parts: tidal volume and FiO2 accuracy, pressure control accuracy, and positive end-expiratory pressure (PEEP) accuracy. The parameters set on the ventilator were compared with either or both of the parameters measured by the test lung and by the ventilator. The results revealed that none of the ventilators could consistently deliver tidal volumes within 1 ml/kg of the set tidal volume, and the discrepancy between the delivered volume and the volume measured by the ventilator varied greatly. The target tidal volume was 8 ml/kg, but delivered tidal volumes ranged from 3.6-11.4 ml/kg and the volumes measured by the ventilator ranged from 4.1-20.6 ml/kg. All the ventilators maintained pressure within 20% of the set pressure, except one which delivered pressures up to 27% higher than the set pressure. Two ventilators maintained PEEP within 10% of the prescribed PEEP, and the majority of the readings were also within 10%; however, three ventilators at times delivered PEEPs more than 20% higher. In conclusion, as lung compliance decreases, especially in paediatric patients, some ventilators perform better than others. This study highlights situations where ventilators may not be able to deliver, nor adequately measure, the set tidal volume, pressure, PEEP or FiO2.

  18. The evolution of ageing and longevity.

    PubMed

    Kirkwood, T B; Holliday, R

    1979-09-21

    Ageing is not adaptive since it reduces reproductive potential, and the argument that it evolved to provide offspring with living space is hard to sustain for most species. An alternative theory is based on the recognition that the force of natural selection declines with age, since in most environments individuals die from predation, disease or starvation. Ageing could therefore be the combined result of late-expressed deleterious genes which are beyond the reach of effective negative selection. However, this argument is circular, since the concept of 'late expression' itself implies the prior existence of adult age-related physiological processes. Organisms that do not age are essentially in a steady state in which chronologically young and old individuals are physiologically the same. In this situation the synthesis of macromolecules must be sufficiently accurate to prevent error feedback and the development of lethal 'error catastrophes'. This involves the expenditure of energy, which is required for both kinetic proof-reading and other accuracy promoting devices. It may be selectively advantageous for higher organisms to adopt an energy saving strategy of reduced accuracy in somatic cells to accelerate development and reproduction, but the consequence will be eventual deterioration and death. This 'disposable soma' theory of the evolution of ageing also proposes that a high level of accuracy is maintained in immortal germ line cells, or alternatively, that any defective germ cells are eliminated. The evolution of an increase in longevity in mammals may be due to a concomitant reduction in the rates of growth and reproduction and an increase in the accuracy of synthesis of macromolecules. The theory can be tested by measuring accuracy in germ line and somatic cells and also by comparing somatic cells from mammals with different longevities.

  19. Task-relevant cognitive and motor functions are prioritized during prolonged speed-accuracy motor task performance.

    PubMed

    Solianik, Rima; Satas, Andrius; Mickeviciene, Dalia; Cekanauskaite, Agne; Valanciene, Dovile; Majauskiene, Daiva; Skurvydas, Albertas

    2018-06-01

    This study aimed to explore the effect of prolonged speed-accuracy motor task on the indicators of psychological, cognitive, psychomotor and motor function. Ten young men aged 21.1 ± 1.0 years performed a fast- and accurate-reaching movement task and a control task. Both tasks were performed for 2 h. Despite decreased motivation, and increased perception of effort as well as subjective feeling of fatigue, speed-accuracy motor task performance improved during the whole period of task execution. After the motor task, the increased working memory function and prefrontal cortex oxygenation at rest and during conflict detection, and the decreased efficiency of incorrect response inhibition and visuomotor tracking were observed. The speed-accuracy motor task increased the amplitude of motor-evoked potentials, while grip strength was not affected. These findings demonstrate that to sustain the performance of 2-h speed-accuracy task under conditions of self-reported fatigue, task-relevant functions are maintained or even improved, whereas less critical functions are impaired.

  20. The Influence of Motor Skills on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Brychta, Petr; Sadílek, Marek; Brychta, Josef

    2016-10-01

This innovative study attempts an interdisciplinary interface between two at first sight different fields: kinanthropology and mechanical engineering. A motor skill is described as an action which involves the movement of muscles in a body. Gross motor skills permit functions such as running, jumping, walking, punching, lifting and throwing a ball, maintaining body balance, and coordinating. Fine motor skills capture smaller neuromuscular actions, such as holding an object between the thumb and a finger. In mechanical inspection, the accuracy of measurement is the most important aspect. The accuracy of measurement to some extent also depends upon the sense of sight or sense of touch associated with fine motor skills. It is therefore clear that the level of motor skills will affect the precision and accuracy of measurement in metrology. The aim of this study is a literature review to establish the fine motor skill levels of individuals and to determine the potential effect of differing fine motor skill performance on the precision and accuracy of mechanical engineering measurement.

  1. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

This paper outlines a low-cost, user-friendly photogrammetric technique that uses nonmetric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes that affect image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but has lower accuracy when reconstructing a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavation, investigation, and site protection planning. The proposed method has comprehensive application value.

  2. A characteristics-based method for solving the transport equation and its application to the process of mantle differentiation and continental root growth

    NASA Astrophysics Data System (ADS)

    de Smet, Jeroen H.; van den Berg, Arie P.; Vlaar, Nico J.; Yuen, David A.

    2000-03-01

Purely advective transport of composition is of major importance in the Geosciences, and efficient and accurate solution methods are needed. A characteristics-based method is used to solve the transport equation. We employ a new hybrid interpolation scheme, which allows for the tuning of stability and accuracy through a threshold parameter ɛth. Stability is established by bilinear interpolations, and bicubic splines are used to maintain accuracy. With this scheme, numerical instabilities can be suppressed by allowing numerical diffusion to work in time and locally in space. The scheme can be applied efficiently for preliminary modelling purposes. This can be followed by detailed high-resolution experiments. First, the principal effects of this hybrid interpolation method are illustrated and some tests are presented for numerical solutions of the transport equation. Second, we illustrate that this approach works successfully for a previously developed continental evolution model for the convecting upper mantle. In this model the transport equation contains a source term, which describes the melt production in pressure-released partial melting. In this model, a characteristic phenomenon of small-scale melting diapirs is observed (De Smet et al. 1998; De Smet et al. 1999). High-resolution experiments with grid cells down to 700 m horizontally and 515 m vertically result in highly detailed observations of the diapiric melting phenomenon.
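    A one-dimensional analogue can illustrate the hybrid idea: try an accurate cubic interpolant (Catmull-Rom here, standing in for the paper's bicubic splines) at the semi-Lagrangian departure point, but fall back to the stable, diffusive linear value wherever the cubic overshoots the bracketing data by more than a threshold playing the role of ɛth. The kernel choice and the overshoot test are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def hybrid_interp(f, xi, eps=0.0):
    """Interpolate periodic, unit-spaced samples f at fractional index xi.
    Tries the accurate cubic (Catmull-Rom) value first; if it overshoots the
    bracketing data values by more than eps, returns the stable linear value,
    accepting local numerical diffusion in exchange for stability."""
    n = len(f)
    i = int(np.floor(xi)); t = xi - i
    fm1, f0, f1, f2 = (f[(i + k) % n] for k in (-1, 0, 1, 2))
    lin = (1 - t) * f0 + t * f1
    cub = 0.5 * (2*f0 + (f1 - fm1)*t
                 + (2*fm1 - 5*f0 + 4*f1 - f2)*t**2
                 + (3*(f0 - f1) + f2 - fm1)*t**3)
    lo, hi = min(f0, f1), max(f0, f1)
    return lin if (cub < lo - eps or cub > hi + eps) else cub

# smooth data: the cubic branch is taken and is far more accurate than linear
smooth = np.sin(2 * np.pi * np.arange(32) / 32)
# discontinuous data: the cubic overshoots near the jump, so the scheme
# falls back to the non-oscillatory linear value
step = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
```

    Raising eps admits more of the accurate cubic values at the price of small oscillations, which is the stability-accuracy tuning the abstract describes.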

  3. Effect of Heart rate on Basketball Three-Point Shot Accuracy

    PubMed Central

    Ardigò, Luca P.; Kuvacic, Goran; Iacono, Antonio D.; Dascanio, Giacomo; Padulo, Johnny

    2018-01-01

    The three-point shot (3S) is a fundamental basketball skill used frequently during a game, and is often a main determinant of the final result. The aim of the study was to investigate the effect of different metabolic conditions, in terms of heart rates, on 3S accuracy (3S%) in 24 male (Under 17) basketball players (age 16.3 ± 0.6 yrs). 3S performance was specifically investigated at different heart rates. All sessions consisted of 10 consecutive 3Ss from five different significant field spots just beyond the FIBA three-point line, i.e., about 7 m from the basket (two counter-clockwise “laps”) at different heart rates: rest (0HR), after warm-up (50%HRMAX [50HR]), and heart rate corresponding to 80% of its maximum value (80%HRMAX [80HR]). We found that 50HR does not significantly decrease 3S% (−15%, P = 0.255), while 80HR significantly does when compared to 0HR (−28%, P = 0.007). Given that 50HR does not decrease 3S% compared to 0HR, we believe that no preliminary warm-up is needed before entering a game in order to specifically achieve a high 3S%. Furthermore, 3S training should be performed in conditions of moderate-to-high fatigued state so that a high 3S% can be maintained during game-play. PMID:29467676

  4. Wavelet data compression for archiving high-resolution icosahedral model data

    NASA Astrophysics Data System (ADS)

    Wang, N.; Bao, J.; Lee, J.

    2011-12-01

With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions to archive the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it achieves only a limited reduction in data size. Wavelet-transform-based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualization. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments for both summer and winter seasons. An icosahedral grid weather model and highly efficient wavelet data compression software were used for this study. Compressed initial conditions were input to the model, which was run out to 10 days. The forecast results were then compared to forecast results from the model run with the original uncompressed initial conditions. Several visual comparisons, as well as statistics of numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction while maintaining minimal numerical impact on the datasets. In addition, some issues are discussed to increase archive efficiency while retaining a complete set of metadata for each archived file.
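    The thresholding idea behind such wavelet compression can be sketched with a plain orthonormal Haar transform: transform, zero the small detail coefficients (the lossy, size-reducing step), and invert. This is a toy stand-in for the compression software used in the study; the Haar basis, the threshold value, and the level count below are illustrative assumptions.

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform; len(x) must be even."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return s, d

def haar_inv(s, d):
    """Invert one Haar level."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def compress(x, levels, thresh):
    """Multi-level transform with hard thresholding of small details."""
    details = []
    for _ in range(levels):
        x, d = haar_fwd(x)
        d[np.abs(d) < thresh] = 0.0        # lossy step: drop negligible details
        details.append(d)
    return x, details

def decompress(coarse, details):
    for d in reversed(details):
        coarse = haar_inv(coarse, d)
    return coarse

# usage: a smooth field compresses heavily with small reconstruction error
sig = np.sin(np.linspace(0.0, 2 * np.pi, 256))
coarse, details = compress(sig, levels=4, thresh=0.05)
rec = decompress(coarse, details)
```

    The zeroed coefficients are what an entropy coder would shrink, and the threshold directly bounds the coefficient-wise loss, mirroring the "specified minimum accuracy losses" trade-off in the abstract.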

  5. Work Performance Ratings: Measurement Test Bed for Validity and Accuracy Research

    DTIC Science & Technology

    1989-02-01

    Department: To maximize the financial status of the organization. This department is responsible for maintaining a balance between accounts receivable and...Commission guidelines. This department attempts to increase employee productivity and improve the quality of worklife thereby increasing satisfaction and

  6. What Top Management Expects from the Communicator.

    ERIC Educational Resources Information Center

    Fegley, Robert L.

    Top corporate management requires communications departments that maintain credibility with the public by developing the following qualities: integrity established through consistent and honest messages; accuracy based on solid research; authority derived from an understanding of the subject and from drawing on appropriate expertise; a…

  7. 42 CFR 412.610 - Assessment schedule.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... section. (e) Accuracy of the patient assessment data. The encoded patient assessment data must accurately... instrument record retention. An inpatient rehabilitation facility must maintain all patient assessment data sets completed on Medicare Part A fee-for-service patients within the previous 5 years and Medicare...

  8. QUALITY ASSURANCE HANDBOOK FOR AIR POLLUTION MEASUREMENT SYSTEMS: VOLUME IV - METEOROLOGICAL MEASUREMENTS (REVISED - AUGUST 1994)

    EPA Science Inventory

    Procedures on installing, acceptance testing, operating, maintaining and quality assuring three types of ground-based, upper air meteorological measurement systems are described. he limitations and uncertainties in precision and accuracy measurements associated with these systems...

  9. Centroid stabilization for laser alignment to corner cubes: designing a matched filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, Abdul A. S.; Bliss, Erlan; Brunton, Gordon

    2016-11-08

Automation of image-based alignment of National Ignition Facility high energy laser beams is providing the capability of executing multiple target shots per day. One important alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retroreflecting corner cubes as centering references for each beam. Beam-to-beam variations and systematic beam changes over time in the FOA corner cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based position detector. A systematic approach is described that maintains FOA corner cube templates and guarantees stable position estimation.

  10. Centroid stabilization for laser alignment to corner cubes: designing a matched filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, Abdul A. S.; Bliss, Erlan; Brunton, Gordon

    2016-11-08

Automation of image-based alignment of NIF high energy laser beams is providing the capability of executing multiple target shots per day. One important alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retro-reflecting corner cubes as centering references for each beam. Beam-to-beam variations and systematic beam changes over time in the FOA corner cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based position detector. A systematic approach is described that maintains FOA corner cube templates and guarantees stable position estimation.

  11. KSC-2015-1240

    NASA Image and Video Library

    2015-01-18

    CAPE CANAVERAL, Fla. – Preparations to launch NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, near completion in the Building 1 high bay of the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for no earlier than Feb. 8 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  12. KSC-2014-4580

    NASA Image and Video Library

    2014-11-24

    CAPE CANAVERAL, Fla. – Workers conduct a light test on the solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky

  13. KSC-2014-4578

    NASA Image and Video Library

    2014-11-24

    CAPE CANAVERAL, Fla. – The solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, are unfurled in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky

  14. KSC-2014-4582

    NASA Image and Video Library

    2014-11-24

    CAPE CANAVERAL, Fla. – Workers conduct a light test on the solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky

  15. KSC-2014-4581

    NASA Image and Video Library

    2014-11-24

    CAPE CANAVERAL, Fla. – Workers conduct a light test on the solar arrays on NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, in the Building 1 high bay at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for early 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky

  16. KSC-2015-1241

    NASA Image and Video Library

    2015-01-18

    CAPE CANAVERAL, Fla. – Preparations to launch NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, near completion in the Building 1 high bay of the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for no earlier than Feb. 8 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  17. KSC-2014-4568

    NASA Image and Video Library

    2014-11-20

    CAPE CANAVERAL, Fla. – NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, has been uncovered and is ready for processing in the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  18. KSC-2015-1239

    NASA Image and Video Library

    2015-01-18

    CAPE CANAVERAL, Fla. – Preparations to launch NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, near completion in the Building 1 high bay of the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is targeted for no earlier than Feb. 8 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  19. The Time Dependent Propensity Function for Acceleration of Spatial Stochastic Simulation of Reaction-Diffusion Systems

    PubMed Central

    Wu, Sheng; Li, Hong; Petzold, Linda R.

    2015-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
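    The core trick of using a time-dependent propensity function is to sample the next event time from a non-homogeneous exponential distribution by inverting the integrated propensity. A minimal sketch for a propensity that varies linearly in time, a hypothetical stand-in for the diffusion-modulated propensities in the paper:

```python
import math, random

def next_firing_time(a0, r, u=None):
    """Sample the waiting time for an event whose propensity varies linearly
    in time, a(t) = a0 + r*t (assumed to stay positive over the interval).
    Solves the inversion equation  int_0^tau a(t) dt = -ln(u)  analytically
    for the linear case."""
    if u is None:
        u = random.random()
    target = -math.log(u)
    if r == 0:
        return target / a0   # constant propensity: ordinary exponential waiting time
    # a0*tau + 0.5*r*tau^2 = target  ->  positive root of the quadratic
    return (-a0 + math.sqrt(a0 * a0 + 2.0 * r * target)) / r
```

    For r = 0 this reduces to the familiar exponential waiting time of the direct SSA; for r > 0 events arrive sooner, exactly the effect a growing diffusion-driven propensity would have between two reaction events.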

  20. DSCOVR Spacecraft Arrival, Offload, & Unpacking

    NASA Image and Video Library

    2014-11-20

    Workers monitor NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, as it travels between the airlock of Building 2 to the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida.

  1. Method for hot pressing irregularly shaped refractory articles

    DOEpatents

    Steinkamp, William E.; Ballard, Ambrose H.

    1982-01-01

The present invention is directed to a method for hot pressing irregularly shaped refractory articles, providing articles of varying thickness with high uniform density and dimensional accuracy. Two partially pressed compacts of the refractory material are placed in a die cavity between displaceable die punches having compact-contacting surfaces of the desired article configuration. A floating, rotatable block is disposed between the compacts. The displacement of the die punches towards one another causes the block to rotate about an axis normal to the direction of movement of the die punches to uniformly distribute the pressure loading upon the compacts for maintaining substantially equal volume displacement of the powder material during the hot pressing operation.

  2. Pooled HIV-1 viral load testing using dried blood spots to reduce the cost of monitoring antiretroviral treatment in a resource-limited setting.

    PubMed

    Pannus, Pieter; Fajardo, Emmanuel; Metcalf, Carol; Coulborn, Rebecca M; Durán, Laura T; Bygrave, Helen; Ellman, Tom; Garone, Daniela; Murowa, Michael; Mwenda, Reuben; Reid, Tony; Preiser, Wolfgang

    2013-10-01

Rollout of routine HIV-1 viral load monitoring is hampered by high costs and logistical difficulties associated with sample collection and transport. New strategies are needed to overcome these constraints. Dried blood spots from finger pricks have been shown to be more practical than the use of plasma specimens, and pooling strategies using plasma specimens have been demonstrated to be an efficient method to reduce costs. This study found that the combination of finger-prick dried blood spots and a pooling strategy is a feasible and efficient option to reduce costs while maintaining accuracy in the context of a district hospital in Malawi.

  3. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers, and clutter while maintaining a high level of accuracy.

  4. Development of a realistic stress analysis for fatigue analysis of notched composite laminates

    NASA Technical Reports Server (NTRS)

    Humphreys, E. A.; Rosen, B. W.

    1979-01-01

    A finite element stress analysis which consists of a membrane and interlaminar shear spring analysis was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure computerized. A fatigue analysis is performed upon a square laminated composite plate with a circular central hole. A complete description and users guide for the computer code FLAC (Fatigue of Laminated Composites) is included as an appendix.

  5. Design and fabrication of highly sensitive and stable biochip for glucose biosensing

    NASA Astrophysics Data System (ADS)

    Lu, Shi-Yu; Lu, Yao; Jin, Meng; Bao, Shu-Juan; Li, Wan-Yun; Yu, Ling

    2017-11-01

Common production steps for test strips are complex and fussy. In this work, we propose a feasible binder-free test strip fabrication method to directly grow enzyme/manganese phosphate nanosheet hybrids on screen-printed electrodes (SPE). Combined with microfluidic packaging technology, the ready-made portable electrochemical biochip shows a wide linear range (1-40 mM, R2 = 0.9998) and excellent stability (maintaining 98% of the response current after 20 days of storage and retaining 75% of the response current after 30 days of continuous determination) for the detection of glucose. Compared with commercial test strips, the biochip exhibits excellent sensitivity, stability and accuracy, which is indicative of its potential application to real samples.

  6. Treatment of charge singularities in implicit solvent models.

    PubMed

    Geng, Weihua; Yu, Sining; Wei, Guowei

    2007-09-21

This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing as the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.

  7. Treatment of charge singularities in implicit solvent models

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Yu, Sining; Wei, Guowei

    2007-09-01

This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing as the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.

  8. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied for surveys and tracking moving objects in a GIS environment. The PPP data is collected using a software application we developed previously, which allows different sets of attribute data for the measurements and their accuracy to be used. The results from the PPP measurements are directly compared within the geospatial database to other sets of terrestrial data: measurements obtained by total stations, and by real-time kinematic and static GNSS.

  9. Non-overlap subaperture interferometric testing for large optics

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian

    2017-08-01

    It has been shown that the number of subapertures and the amount of overlap have a significant influence on stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining reconstruction accuracy. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. Square Zernike polynomials are employed to fit such a wavefront. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment error, power systematic error, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).

  10. Compliant head probe for positioning electroencephalography electrodes and near-infrared spectroscopy optodes

    NASA Astrophysics Data System (ADS)

    Giacometti, Paolo; Diamond, Solomon G.

    2013-02-01

    A noninvasive head probe that combines near-infrared spectroscopy (NIRS) and electroencephalography (EEG) for simultaneous measurement of neural dynamics and hemodynamics in the brain is presented. It is composed of a compliant expandable mechanism that accommodates a wide range of head size variation and an elastomeric web that maintains uniform sensor contact pressure on the scalp as the mechanism expands and contracts. The design is intended to help maximize optical and electrical coupling and to maintain stability during head movement. Positioning electrodes at the inion, nasion, central, and preauricular fiducial locations mechanically shapes the probe to place 64 NIRS optodes and 65 EEG electrodes following the 10-5 scalp coordinates. The placement accuracy, precision, and scalp pressure uniformity of the sensors are evaluated. A root-mean-squared (RMS) positional precision of 0.89±0.23 mm, percent arc subdivision RMS accuracy of 0.19±0.15%, and mean normal force on the scalp of 2.28±0.88 N at 5 mm displacement were found. Geometric measurements indicate that the probe will accommodate the full range of adult head sizes. The placement accuracy, precision, and uniformity of sensor contact pressure of the proposed head probe are important determinants of data quality in noninvasive brain monitoring with simultaneous NIRS-EEG.

  11. Accuracy of pulse oximetry in assessing heart rate of infants in the neonatal intensive care unit.

    PubMed

    Singh, Jasbir K S B; Kamlin, C Omar F; Morley, Colin J; O'Donnell, Colm P F; Donath, Susan M; Davis, Peter G

    2008-05-01

    To determine the accuracy of pulse oximetry measurement of heart rate in the neonatal intensive care unit. Stable preterm infants were monitored with a pulse oximeter and an ECG. The displays of both monitors were captured on video. Heart rate data from both monitors, including measures of signal quality, were extracted and analysed using Bland-Altman plots. In 30 infants the mean (SD) difference between heart rate measured by pulse oximetry and electrocardiography was -0.4 (12) beats per minute. Accuracy was maintained when the signal quality or perfusion was low. Pulse oximetry may provide a useful measurement of heart rate in the neonatal intensive care unit. Studies of this technique in the delivery room are indicated.
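    The Bland-Altman analysis used here computes the mean difference (bias) between the paired methods and the 95% limits of agreement, bias ± 1.96 SD. A minimal sketch with invented heart-rate pairs (not the study's data):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement
    series (Bland-Altman analysis)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)       # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy paired heart rates: pulse oximeter vs. ECG (beats per minute).
spo2_hr = [140, 152, 161, 148, 155]
ecg_hr  = [142, 150, 163, 149, 152]
bias, (lo, hi) = bland_altman(spo2_hr, ecg_hr)
```

    Agreement is judged by whether the limits of agreement are clinically acceptable, not by the bias alone.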

  12. Faster Trees: Strategies for Accelerated Training and Prediction of Random Forests for Classification of Polsar Images

    NASA Astrophysics Data System (ADS)

    Hänsch, Ronny; Hellwich, Olaf

    2018-04-01

    Random Forests have continuously proven to be one of the most accurate, robust, and efficient methods for the supervised classification of images in general and of polarimetric synthetic aperture radar data in particular. While the majority of previous work focuses on improving classification accuracy, we aim to accelerate the training of the classifier as well as its usage during prediction while maintaining its accuracy. Unlike other approaches, we mainly consider algorithmic changes in order to stay as independent as possible of platform and programming language. The final model achieves approximately 60 times faster training and 500 times faster prediction, while accuracy is only marginally decreased, by roughly 1%.

  13. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.

  14. High vacuum measurements and calibrations, molecular flow fluid transient effects

    DOE PAGES

    Leishear, Robert A.; Gavalas, Nickolas A.

    2015-04-29

    High vacuum pressure measurements and calibrations below 1 × 10⁻⁸ Torr are problematic. Specifically, measurement accuracies change drastically for vacuum gauges when pressures are suddenly lowered in vacuum systems. How can gauges perform like this? A brief system description is first required to answer this question. Calibrations were performed using a vacuum calibration chamber with attached vacuum gauges. To control chamber pressures, vacuum pumps decreased the chamber pressure while nitrogen tanks increased the chamber pressure. By balancing these opposing pressures, equilibrium in the chamber was maintained at selected set point pressures to perform calibrations. When pressures were suddenly decreased during set point adjustments, a sudden rush of gas from the chamber also caused a surge of gas from the gauges to decrease the pressures in those gauges. Gauge pressures did not return to equilibrium as fast as chamber pressures due to the sparse distribution of gas molecules in the system. This disparity in the rate of pressure changes caused the pressures in different gauges to be different than expected. This discovery was experimentally proven to show that different gauge designs return to equilibrium at different rates, and that gauge accuracies vary for different gauge designs due to fluid transients in molecular flow.

  15. Biomarkers for Early Diagnosis of Alzheimer's Disease in the Oldest Old: Yes or No?

    PubMed

    Paolacci, Lucia; Giannandrea, David; Mecocci, Patrizia; Parnetti, Lucilla

    2017-01-01

    In recent years, much effort has been devoted to identifying sensitive biomarkers able to improve the accuracy of Alzheimer's disease (AD) diagnosis. Two different workgroups (NIA-AA and IWG) included cerebrospinal fluid (CSF) and neuroimaging findings in their sets of criteria in order to improve diagnostic accuracy as well as early diagnosis. The number of subjects with cognitive impairment increases with aging, but the oldest old (≥85 years of age), the fastest growing age group, remain the least understood from a biological point of view. For this reason, the aim of our narrative mini-review is to evaluate the pertinence of the new criteria for AD diagnosis in the oldest old. Moreover, since different subgroups of the oldest old have been described in the scientific literature (escapers, delayers, survivors), we want to outline the clinical profile of the oldest old who could really benefit from the use of biomarkers for early diagnosis. Reviewing the literature on the biomarkers included in the diagnostic criteria, we did not find a high degree of evidence for their use in the oldest old, although CSF biomarkers still seem to be the most useful for excluding AD diagnosis in the "fit" subgroup of oldest old subjects, due to the high negative predictive value maintained in this age group.

  16. Optimal implicit 2-D finite differences to model wave propagation in poroelastic media

    NASA Astrophysics Data System (ADS)

    Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2016-08-01

    Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media in a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical transient solution, obtaining clear seismic wavefields that include the fast P wave and the slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, so the computational cost is reduced while accuracy is maintained. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10⁻⁸. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
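    The Courant condition invoked above ties the time step to the grid spacing and the fastest wave speed in the model. A minimal 2-D check (the stability limit c_max = 1 is purely illustrative; the actual limit depends on the scheme's FD weights):

```python
import math

def cfl_stable(v_max, dt, dx, dy, c_max=1.0):
    """Courant number for a 2-D explicit FD update and a stability flag.

    v_max : fastest wave speed in the model (e.g. the fast P wave, m/s)
    c_max : scheme-dependent stability limit (1.0 used as a placeholder)
    """
    c = v_max * dt * math.sqrt(1.0 / dx**2 + 1.0 / dy**2)
    return c, c <= c_max

# Fast P wave of 3500 m/s on a 5 m grid: check a candidate time step.
c, ok = cfl_stable(v_max=3500.0, dt=5e-4, dx=5.0, dy=5.0)
```

    An implicit scheme relaxes (or removes) this restriction, which is the stability advantage the abstract refers to.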

  17. Application of a Terrestrial LIDAR System for Elevation Mapping in Terra Nova Bay, Antarctica.

    PubMed

    Cho, Hyoungsig; Hong, Seunghwan; Kim, Sangmin; Park, Hyokeun; Park, Ilsuk; Sohn, Hong-Gyoo

    2015-09-16

    A terrestrial Light Detection and Ranging (LIDAR) system has high productivity and accuracy for topographic mapping, but the harsh conditions of Antarctica make LIDAR operation difficult. Low temperatures cause malfunctioning of the LIDAR system, and unpredictable strong winds can deteriorate data quality by irregularly shaking co-registration targets. For stable and efficient LIDAR operation in Antarctica, this study proposes and demonstrates the following practical solutions: (1) a lagging cover with a heating pack to maintain the temperature of the terrestrial LIDAR system; (2) co-registration using square planar targets and two-step point-merging methods based on extracted feature points and the Iterative Closest Point (ICP) algorithm; and (3) a georeferencing module consisting of an artificial target and a Global Navigation Satellite System (GNSS) receiver. The solutions were used to produce a topographic map for construction of the Jang Bogo Research Station in Terra Nova Bay, Antarctica. Co-registration and georeferencing precision reached 5 and 45 mm, respectively, and the accuracy of the Digital Elevation Model (DEM) generated from the LIDAR scanning data was ±27.7 cm.

  18. Development of an automated high temperature valveless injection system for on-line gas chromatography

    NASA Astrophysics Data System (ADS)

    Kreisberg, N. M.; Worton, D. R.; Zhao, Y.; Isaacman, G.; Goldstein, A. H.; Hering, S. V.

    2014-07-01

    A reliable method of sample introduction is presented for on-line gas chromatography, with a special application to in-situ field-portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a controlled pressure-switching device that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows the interface to be customized for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient detector, and 15% accurate when applied to a vacuum-based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show approximately three times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface (VLI) used in the in-situ instrument demonstrate minimal trending and a zero failure rate during field deployments ranging up to four weeks of continuous sampling. Extension of the VLI to dual collection cells is presented, with less than 3% cell-to-cell carry-over.

  19. Initiating an Online Reputation Monitoring System with Open Source Analytics Tools

    NASA Astrophysics Data System (ADS)

    Shuhud, Mohd Ilias M.; Alwi, Najwa Hayaati Md; Halim, Azni Haslizan Abd

    2018-05-01

    Online reputation is an invaluable asset for modern organizations, as it can affect business performance, especially sales and profit. However, a reputation that is not monitored is difficult to maintain. Social media analytics provides online reputation monitoring in various ways, such as sentiment analysis, and numerous large-scale organizations have implemented Online Reputation Monitoring (ORM) systems. This solution should not be exclusive to high-income organizations, as organizations of all sizes and types are now online. This research proposes an affordable and reliable ORM system built from a combination of open-source analytics tools, for both novice practitioners and academicians. We also evaluate its prediction accuracy: the system provides acceptable predictions (sixty percent accuracy), and its majority-polarity predictions tally with human annotation. The proposed system can support business decisions with flexible monitoring strategies, especially for organizations that want to initiate and administer ORM themselves at low cost.
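    At its simplest, the sentiment-analysis component of such an ORM system can be a lexicon-based polarity count. The sketch below is a toy illustration of that idea only, not the open-source toolchain evaluated in the paper (the word lists are invented):

```python
# Tiny hand-built polarity lexicons -- hypothetical, for illustration.
POS = {"good", "great", "excellent", "love", "reliable"}
NEG = {"bad", "poor", "terrible", "hate", "broken"}

def polarity(text):
    """Classify a text as positive/negative/neutral by counting
    lexicon hits; real systems add negation handling, stemming, etc."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```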

  20. Measuring Physical Properties of Neuronal and Glial Cells with Resonant Microsensors

    PubMed Central

    2015-01-01

    Microelectromechanical systems (MEMS) resonant sensors provide a high degree of accuracy for measuring the physical properties of chemical and biological samples. These sensors enable the investigation of cellular mass and growth, though previous sensor designs have been limited to the study of homogeneous cell populations. Population heterogeneity, as is generally encountered in primary cultures, reduces measurement yield and limits the efficacy of sensor mass measurements. This paper presents a MEMS resonant pedestal sensor array fabricated over through-wafer pores compatible with vertical flow fields to increase measurement versatility (e.g., fluidic manipulation and throughput) and allow for the measurement of heterogeneous cell populations. Overall, the improved sensor increases capture by 100% at a flow rate of 2 μL/min, as characterized through microbead experiments, while maintaining measurement accuracy. Cell mass measurements of primary mouse hippocampal neurons in vitro, in the range of 0.1–0.9 ng, demonstrate the ability to investigate neuronal mass and changes in mass over time. Using an independent measurement of cell volume, we find cell density to be approximately 1.15 g/mL. PMID:24734874
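    Resonant mass sensing of this kind rests on f = (1/2π)·sqrt(k/m): a captured cell adds mass and lowers the resonant frequency. A sketch of the conversion from frequency shift to added mass, with hypothetical stiffness and frequency values (not the sensor's actual parameters):

```python
import math

def effective_mass(k, f):
    """Effective mass (kg) of a resonator with stiffness k (N/m)
    oscillating at frequency f (Hz), from f = (1/2*pi)*sqrt(k/m)."""
    return k / (2.0 * math.pi * f) ** 2

def added_mass(k, f_unloaded, f_loaded):
    """Mass added to the resonator, inferred from the downward
    frequency shift after cell capture."""
    return effective_mass(k, f_loaded) - effective_mass(k, f_unloaded)

# Illustrative numbers only: a 60 Hz downward shift at 60 kHz.
k = 50.0                                   # N/m, hypothetical stiffness
dm = added_mass(k, 60_000.0, 59_940.0)     # kg
```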

  1. A moving mesh finite difference method for equilibrium radiation diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaobo, E-mail: xwindyb@126.com; Huang, Weizhang, E-mail: whuang@ku.edu; Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn

    2015-10-01

    An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. In particular, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor-corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method, which is known in the literature to retain the accuracy and convergence order of finite difference approximations for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.
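    The equidistribution principle mentioned above places mesh nodes so that each cell carries an equal share of the integral of the monitor function. A 1-D sketch of that idea (the 2-D method in the paper also enforces an alignment condition, which this toy omits):

```python
def equidistribute(monitor, a, b, n_cells, n_sample=10000):
    """1-D equidistribution: place mesh nodes so each cell holds an
    equal share of the integral of the monitor function M(x) > 0."""
    # Cumulative integral of M on a fine sampling grid (a simple left
    # Riemann sum is accurate enough for a sketch).
    h = (b - a) / n_sample
    xs = [a + i * h for i in range(n_sample + 1)]
    cum = [0.0]
    for i in range(n_sample):
        cum.append(cum[-1] + monitor(xs[i]) * h)
    total = cum[-1]
    # Invert the cumulative integral at equally spaced levels.
    mesh, j = [a], 0
    for k in range(1, n_cells):
        level = total * k / n_cells
        while cum[j + 1] < level:
            j += 1
        t = (level - cum[j]) / (cum[j + 1] - cum[j])  # linear interpolation
        mesh.append(xs[j] + t * h)
    mesh.append(b)
    return mesh

# A monitor that is large near x = 0.5 concentrates nodes there.
mesh = equidistribute(lambda x: 1.0 + 50.0 * (abs(x - 0.5) < 0.05), 0.0, 1.0, 10)
```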

  2. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time, with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point iteration and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point iteration and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
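    The baseline the paper compares against — the second-order trapezoidal method with a nonlinear solve per step — can be sketched on a scalar stiff test problem. Newton's method here uses the analytic Jacobian; the problem and step size are illustrative, not from the paper:

```python
import math

def f(t, y, lam=-1000.0):
    """Stiff test problem: y' = lam*(y - cos t) - sin t, exact y = cos t."""
    return lam * (y - math.cos(t)) - math.sin(t)

def trapezoidal_step(t, y, dt, lam=-1000.0, newton_iters=5):
    """One implicit trapezoidal step: solve
    g(Y) = Y - y - dt/2 * (f(t, y) + f(t + dt, Y)) = 0 by Newton."""
    Y = y                      # initial guess: previous value
    fn = f(t, y, lam)
    for _ in range(newton_iters):
        g = Y - y - 0.5 * dt * (fn + f(t + dt, Y, lam))
        dg = 1.0 - 0.5 * dt * lam          # analytic Jacobian of g
        Y -= g / dg
    return Y

# A step of 0.05 is far beyond the explicit stability limit |lam|*dt <= 2,
# yet the implicit method tracks the exact solution y = cos t.
t, y, dt = 0.0, 1.0, 0.05
for _ in range(20):
    y = trapezoidal_step(t, y, dt)
    t += dt
```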

  3. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces, such as the terrain surface of the Earth, and applied to forward calculation of the gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit the undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations on the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. Features and errors of the method are then analyzed. Subsequently, the method is applied to an area for the ellipsoidal Bouguer shell correction as an example, and the result is compared to existing methods, showing that our method provides high accuracy and great computational efficiency. Finally, suggestions for further development are given and conclusions are drawn.

  4. Design and implementation of the modified signed digit multiplication routine on a ternary optical computer.

    PubMed

    Xu, Qun; Wang, Xianchao; Xu, Chao

    2017-06-01

    Multiplication on traditional electronic computers suffers from limited calculating accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Its parallel algorithm and optimization techniques are also studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates the data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculation delays.
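    The MSD system encodes numbers with digits {-1, 0, 1} so that addition can be performed carry-free. The sketch below shows one canonical signed-digit encoding (the non-adjacent form) in ordinary host code; it illustrates the representation only, not the ternary optical adder or the paper's routine:

```python
def to_msd(n):
    """Non-adjacent signed-digit form of integer n, least significant
    digit first, digits in {-1, 0, 1} (one canonical MSD encoding)."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)      # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_msd(digits):
    """Evaluate a signed-digit string back to an integer."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

    The non-adjacent form never has two consecutive nonzero digits, which bounds carry propagation and is what makes constant-time, digit-parallel addition possible.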

  5. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.
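    Error correctors in this family typically flag k-mers whose overall count falls below a threshold, since sequencing errors produce rare k-mers. A toy sketch of that detection step (this is a generic illustration, not the BLESS 2 algorithm, which uses a Bloom filter and parallel correction):

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers across a set of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def weak_kmers(read, counts, k, threshold=2):
    """Start positions of k-mers in `read` seen fewer than `threshold`
    times overall -- a crude marker of likely sequencing errors."""
    return [i for i in range(len(read) - k + 1)
            if counts[read[i:i + k]] < threshold]

# Five copies of the true sequence plus one read with a single error
# (G -> C at position 6): the error creates four never-repeated 4-mers.
reads = ["ACGTACGTAC"] * 5 + ["ACGTACCTAC"]
counts = kmer_counts(reads, k=4)
bad = weak_kmers("ACGTACCTAC", counts, k=4)
```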

  6. Worldwide differential GPS for Space Shuttle landing operations

    NASA Technical Reports Server (NTRS)

    Loomis, Peter V. W.; Denaro, Robert P.; Saunders, Penny

    1990-01-01

    Worldwide differential Global Positioning System (WWDGPS) is viewed as an effective method of offering continuous high-quality navigation worldwide. The concept utilizes a network with as few as 33 ground stations to observe most of the error sources of GPS and provide error corrections to users on a worldwide basis. The WWDGPS real-time GPS tracking concept promises a threefold or fourfold improvement in accuracy for authorized dual-frequency users, and in addition maintains an accurate and current ionosphere model for single-frequency users. A real-time global tracking network also has the potential to reverse declarations of poor health on marginal satellites, increasing the number of satellites in the constellation and lessening the probability of GPS navigation outage. For Space Shuttle operations, the use of WWDGPS-aided P-code equipment promises performance equal to or better than other current landing guidance systems in terms of accuracy and reliability. This performance comes at significantly less cost to NASA, which will participate as a customer in a system designed as a commercial operation serving the global civil navigation community.

  7. Image recording requirements for earth observation applications in the next decade

    NASA Technical Reports Server (NTRS)

    Peavey, B.; Sos, J. Y.

    1975-01-01

    Future requirements for satellite-borne image recording systems are examined from the standpoints of system performance, system operation, product type, and product quality. Emphasis is on total system design while keeping in mind that the image recorder or scanner is the most crucial element which will affect the end product quality more than any other element within the system. Consideration of total system design and implementation for sustained operational usage must encompass the requirements for flexibility of input data and recording speed, pixel density, aspect ratio, and format size. To produce this type of system requires solution of challenging problems in interfacing the data source with the recorder, maintaining synchronization between the data source and the recorder, and maintaining a consistent level of quality. Film products of better quality than is currently achieved in a routine manner are needed. A 0.1 pixel geometric accuracy and 0.0001 d.u. radiometric accuracy on standard (240 mm) size format should be accepted as a goal to be reached in the near future.

  8. Executive Functioning in Highly Talented Soccer Players

    PubMed Central

    Verburgh, Lot; Scherder, Erik J. A.; van Lange, Paul A.M.; Oosterlaan, Jaap

    2014-01-01

    Executive functions might be important for successful performance in sports, particularly in team sports requiring quick anticipation and adaptation to continuously changing situations on the field. The executive functions motor inhibition, attention and visuospatial working memory were examined in highly talented soccer players. Eighty-four highly talented youth soccer players (mean age 11.9) and forty-two age-matched amateur soccer players (mean age 11.8), in the age range 8 to 16 years, performed a Stop Signal task (motor inhibition), the Attention Network Test (alerting, orienting, and executive attention) and a visuospatial working memory task. The highly talented soccer players followed the talent development program of the youth academy of a professional soccer club and played in the highest national soccer competition for their age. The amateur soccer players played at a regular soccer club in the same geographical region as the highly talented soccer players and played in a regular regional soccer competition. Group differences were tested using analyses of variance. The highly talented group showed superior motor inhibition as measured by stop signal reaction time (SSRT) on the Stop Signal task and a larger alerting effect on the Attention Network Test, indicating an enhanced ability to attain and maintain an alert state. No group differences were found for orienting and executive attention or visuospatial working memory. A logistic regression model with group (highly talented or amateur) as the dependent variable and the executive function measures that significantly distinguished between groups as predictors showed that these measures differentiated highly talented soccer players from amateur soccer players with 89% accuracy. Highly talented youth soccer players outperform amateur youth players in suppressing ongoing motor responses and in the ability to attain and maintain an alert state; both may be essential for success in soccer. PMID:24632735

  9. Faster Heavy Ion Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximately 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  10. Investigations of black-hole spectra: Purely-imaginary modes and Kerr ringdown radiation

    NASA Astrophysics Data System (ADS)

    Zalutskiy, Maxim P.

    When black holes are perturbed they give rise to characteristic waves that propagate outwards carrying information about the black hole. In the linear regime these waves are described in terms of quasinormal modes (QNM). Studying QNM is an important topic which may provide a connection to the quantum theory of gravity in addition to their astrophysical applications. Quasinormal modes correspond to complex frequencies where the real part represents oscillation and the imaginary part represents damping. We have developed a new code for calculating QNM with high precision and accuracy, which we applied to the Schwarzschild and Kerr geometries. The high accuracy of our calculations was a significant improvement over prior work, allowing us to compute QNM much closer to the negative imaginary axis (NIA) than it was possible before. The existence of QNM on the NIA has remained poorly understood, but our high accuracy studies have highlighted the importance of understanding their nature. In this work we show how the purely-imaginary modes can be calculated with the help of the theory of confluent Heun polynomials with the conclusion that all modes on the NIA correspond to polynomial solutions. We also show that certain types of these modes correspond to Kerr QNM. Finally, using our highly accurate QNM data we model the ringdown, a remnant black hole's decaying radiation. Ringdown occurs in the final stages of such violent astrophysical events as supernovae and black hole collisions. We use our model to analyse the ringdown waveforms from the publicly available binary black hole coalescence catalog maintained by the SXS collaboration. In our analysis we use a number of methods: Fourier transform, multi-mode nonlinear fitting and waveform overlap. Both our fitting and overlap approach allow inclusion of many modes in the ringdown model with the goal being to extract information about the nature of the astrophysical source of the ringdown signal.
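    Ringdown analysis of the kind described models the late-time signal as a sum of damped sinusoids, h(t) = A·exp(-t/τ)·cos(2πft + φ). A crude single-mode estimator on a synthetic waveform (all parameters invented; real analyses, as the abstract notes, fit multiple modes and use overlaps):

```python
import math

def synth_ringdown(f, tau, n, dt, amp=1.0):
    """Damped sinusoid h(t) = amp * exp(-t/tau) * cos(2*pi*f*t)."""
    return [amp * math.exp(-i * dt / tau) * math.cos(2 * math.pi * f * i * dt)
            for i in range(n)]

def estimate_qnm(h, dt):
    """Crude single-mode estimate: frequency from zero crossings,
    damping time from the decay between the first and last |h| peaks."""
    crossings = sum(1 for a, b in zip(h, h[1:]) if a * b < 0)
    duration = (len(h) - 1) * dt
    f_est = crossings / (2.0 * duration)        # crossings per half-period
    peaks = [(i, abs(v)) for i, v in enumerate(h)
             if 0 < i < len(h) - 1
             and abs(v) > abs(h[i - 1]) and abs(v) > abs(h[i + 1])]
    (i0, a0), (i1, a1) = peaks[0], peaks[-1]
    tau_est = (i1 - i0) * dt / math.log(a0 / a1)
    return f_est, tau_est

# Synthetic 250 Hz mode with a 4 ms damping time, sampled at 100 kHz.
h = synth_ringdown(f=250.0, tau=0.004, n=2000, dt=1e-5)
f_est, tau_est = estimate_qnm(h, dt=1e-5)
```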

  11. Wavefront reconstruction for multi-lateral shearing interferometry using difference Zernike polynomials fitting

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu

    2018-07-01

    For the multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test with the multidirectional phase information encoded in the shearing interferogram. Usually the multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transforms (DFT) method, which is only suitable to small shear ratios and relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method can maintain high reconstruction accuracy in a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment of the quadriwave LSI has been conducted and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the DFT method with the proposed DZPF method in the wavefront reconstruction process.
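    The DZPF idea — fitting polynomial coefficients directly to measured phase differences by least squares — can be sketched with a small monomial basis standing in for the Zernike polynomials (unweighted here for brevity; the surface, shear, and coefficients are invented):

```python
def fit_from_differences(points, shear, meas_x, meas_y, basis):
    """Least-squares wavefront coefficients from x- and y-shear phase
    differences. `basis` is a list of functions f(x, y); the model is
    W = sum_k c_k * f_k, so each design row is f(x+s, y) - f(x, y), etc."""
    rows, rhs = [], []
    for (x, y), d in zip(points, meas_x):
        rows.append([f(x + shear, y) - f(x, y) for f in basis])
        rhs.append(d)
    for (x, y), d in zip(points, meas_y):
        rows.append([f(x, y + shear) - f(x, y) for f in basis])
        rhs.append(d)
    # Normal equations A^T A c = A^T b, solved by Gaussian elimination.
    m = len(basis)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(m)]
    for col in range(m):                     # elimination, partial pivoting
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            fac = ata[r][col] / ata[col][col]
            for c2 in range(col, m):
                ata[r][c2] -= fac * ata[col][c2]
            atb[r] -= fac * atb[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(ata[r][c2] * coeffs[c2] for c2 in range(r + 1, m))
        coeffs[r] = (atb[r] - s) / ata[r][r]
    return coeffs

# Toy surface: defocus + astigmatism, W = 0.3*(x^2+y^2) + 0.1*(x^2-y^2).
basis = [lambda x, y: x**2 + y**2, lambda x, y: x**2 - y**2,
         lambda x, y: x, lambda x, y: y]
true_c = [0.3, 0.1, 0.0, 0.0]
W = lambda x, y: sum(c * f(x, y) for c, f in zip(true_c, basis))
s = 0.1
pts = [(x / 5.0, y / 5.0) for x in range(-4, 5) for y in range(-4, 5)]
dx = [W(x + s, y) - W(x, y) for x, y in pts]
dy = [W(x, y + s) - W(x, y) for x, y in pts]
coeffs = fit_from_differences(pts, s, dx, dy, basis)
```

    Because the design rows are built from the same basis as the model, noise-free differences are fitted exactly; the piston term is omitted since a shear measurement cannot observe it.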

  12. High diagnostic accuracy of subcutaneous Triptorelin test compared with GnRH test for diagnosing central precocious puberty in girls.

    PubMed

    Freire, Analía Verónica; Escobar, María Eugenia; Gryngarten, Mirta Graciela; Arcari, Andrea Josefina; Ballerini, María Gabriela; Bergadá, Ignacio; Ropelato, María Gabriela

    2013-03-01

    The GnRH test is the gold standard to confirm the diagnosis of central precocious puberty (CPP); however, this compound is not always readily available. The diagnostic accuracy of subcutaneous GnRH analogue tests compared to the classical GnRH test has not been reported. To evaluate the diagnostic accuracy of the Triptorelin test (index test) compared to the GnRH test (reference test) in girls with suspected CPP. A prospective, case-control, randomized clinical trial was performed. CPP or precocious thelarche (PT) was diagnosed according to maximal LH response to the GnRH test and clinical characteristics during follow-up. Forty-six girls with premature breast development underwent, in random order, two tests: (i) intravenous GnRH 100 μg, (ii) subcutaneous Triptorelin acetate (0.1 mg/m², to a maximum of 0.1 mg) with blood sampling at 0, 3 and 24 h for LH, FSH and estradiol ascertainment. Gonadotrophin and estradiol responses to the Triptorelin test were measured by ultrasensitive assays. Clinical features were similar between the CPP (n = 33) and PT (n = 13) groups. Using receiver operating characteristic curves, a maximal LH response (LH-3 h) under the Triptorelin test ≥ 7 IU/l by immunofluorometric assay (IFMA) or ≥ 8 IU/l by electrochemiluminescence immunoassay (ECLIA) confirmed the diagnosis of CPP with a specificity of 1.00 (95% CI: 0.75-1.00) and a sensitivity of 0.76 (95% CI: 0.58-0.89). Considering either LH-3 h or the maximal estradiol response at 24 h (cut-off value, 295 pm), maintaining the specificity at 1.00, the test sensitivity increased to 0.94 (95% CI: 0.80-0.99) and the diagnostic efficiency to 96%. The Triptorelin test had high accuracy for the differential diagnosis of CPP vs PT in girls, providing a valid alternative to the classical GnRH test. This test also allowed a comprehensive evaluation of the pituitary-ovarian axis. © 2012 Blackwell Publishing Ltd.
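How a diagnostic cutoff such as LH-3 h ≥ 7 IU/l translates into sensitivity and specificity can be illustrated with a short sketch; the LH values below are synthetic, not patient data from the study:

```python
def sens_spec(values_disease, values_healthy, cutoff):
    """Sensitivity/specificity of 'value >= cutoff' taken as a positive test."""
    tp = sum(v >= cutoff for v in values_disease)   # true positives
    tn = sum(v < cutoff for v in values_healthy)    # true negatives
    sensitivity = tp / len(values_disease)
    specificity = tn / len(values_healthy)
    return sensitivity, specificity

cpp_lh = [9.2, 12.5, 6.1, 15.0, 7.4, 8.8, 5.2, 10.3]   # synthetic CPP group
pt_lh = [1.1, 2.4, 0.8, 3.5, 1.9, 2.2]                 # synthetic PT group
sens, spec = sens_spec(cpp_lh, pt_lh, cutoff=7.0)
```

Raising the cutoff trades sensitivity for specificity, which is why the study sweeps cutoffs with ROC curves before fixing one.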

  13. Effects of cognitive load on neural and behavioral responses to smoking cue distractors

    PubMed Central

    MacLean, R. Ross; Nichols, Travis T.; LeBreton, James M.; Wilson, Stephen J.

    2017-01-01

    Smoking cessation failures are frequently thought to reflect poor top-down regulatory control over behavior. Previous studies suggest that smoking cues occupy limited working memory resources, an effect that may contribute to difficulty achieving abstinence. Few studies have evaluated the effects of cognitive load on the ability to actively maintain information in the face of distracting smoking cues. The current study adapted an fMRI probed recall task under low and high cognitive load with three distractor conditions: control, neutral images, or smoking-related images. Consistent with a limited-resource model of cue reactivity, we predicted that performance of daily smokers (n=17) would be most impaired when high load was paired with smoking distractors. Results demonstrated a main effect of load, with decreased accuracy under high, compared to low, cognitive load. Surprisingly, an interaction revealed the effect of load was weakest in the smoking cue distractor condition. Along with this behavioral effect, we observed significantly greater activation of the right inferior frontal gyrus (rIFG) in the low load condition relative to the high load condition for trials containing smoking cue distractors. Furthermore, load-related changes in rIFG activation partially mediated the effects of load on task accuracy in the smoking cue distractor condition. These findings are discussed in the context of prevailing cognitive and cue reactivity theories. Results suggest that high cognitive load does not necessarily make smokers more susceptible to interference from smoking-related stimuli, and that elevated load may even have a buffering effect in the presence of smoking cues under certain conditions. PMID:27012714

  14. Development of ultra-high temperature material characterization capabilities using digital image correlation analysis

    NASA Astrophysics Data System (ADS)

    Cline, Julia Elaine

    2011-12-01

    Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18 megapixel Canon EOS Rebel T2i digital camera with a modified Schneider Kreutznach telecentric lens and a combination of blue light illumination and narrow band-pass filter system. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of source code is performed using Mathematica generated images with specified known displacement fields in order to gain confidence in accurate software tracking capabilities. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.
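The sub-pixel tracking at the heart of DIC can be illustrated with a simplified one-dimensional sketch: locate the correlation peak between a reference signal and a displaced copy, then refine the integer peak with a three-point parabolic fit. A real DIC code does this on 2-D image subsets; all values here are synthetic:

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the shift of `cur` relative to `ref` to sub-pixel precision."""
    n = len(ref)
    # Circular cross-correlation via FFT (adequate for this sketch).
    xc = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(xc))
    # Three-point parabolic refinement around the integer peak.
    y0, y1, y2 = xc[(k - 1) % n], xc[k], xc[(k + 1) % n]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    shift = k + delta
    return shift - n if shift > n / 2 else shift

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ref = np.exp(-((x - np.pi) ** 2))             # smooth "speckle" feature
true_shift = 3.4                              # pixels
cur = np.interp((np.arange(256) - true_shift) % 256, np.arange(256), ref)
est = subpixel_shift(ref, cur)
```

The parabolic step is what pushes accuracy below one pixel, toward the 1/10-pixel displacement target mentioned above.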

  15. Practical polarization maintaining optical fibre temperature sensor for harsh environment application

    NASA Astrophysics Data System (ADS)

    Yang, Yuanhong; Xia, Haiyun; Jin, Wei

    2007-10-01

    A reflection spot temperature sensor was proposed based on the polarization mode interference in polarization maintaining optical fibre (PMF) and the phenomenon that the propagation constant difference of the two orthogonal polarization modes in stressing structures PMF is sensitive to temperature and the sensing equation was obtained. In this temperature sensor, a broadband source was used to suppress the drift due to polarization coupling in lead-in/lead-out PMF. A characteristic and performance investigation proved this sensor to be practical, flexible and precise. Experimental results fitted the theory model very well and the noise-limited minimum detectable temperature variation is less than 0.01 °C. The electric arc processing was investigated and the differential propagation constant modifying the PMF probe is performed. For the demand of field hot-spot monitoring of huge power transformers, a remote multi-channel temperature sensor prototype has been made and tested. Specially coated Panda PMF that can stand high temperatures up to 250 °C was fabricated and used as probe fibres. The sensor probes were sealed within thin quartz tubes that have high voltage insulation and can work in a hot oil and vapour environment. Test results show that the accuracy of the system is better than ±0.5 °C within 0 °C to 200 °C.

  16. Revealing hidden states in visual working memory using electroencephalography

    PubMed Central

    Wolff, Michael J.; Ding, Jacqueline; Myers, Nicholas E.; Stokes, Mark G.

    2015-01-01

    It is often assumed that information in visual working memory (vWM) is maintained via persistent activity. However, recent evidence indicates that information in vWM could be maintained in an effectively “activity-silent” neural state. Silent vWM is consistent with recent cognitive and neural models, but poses an important experimental problem: how can we study these silent states using conventional measures of brain activity? We propose a novel approach that is analogous to echolocation: using a high-contrast visual stimulus, it may be possible to drive brain activity during vWM maintenance and measure the vWM-dependent impulse response. We recorded electroencephalography (EEG) while participants performed a vWM task in which a randomly oriented grating was remembered. Crucially, a high-contrast, task-irrelevant stimulus was shown in the maintenance period in half of the trials. The electrophysiological response from posterior channels was used to decode the orientations of the gratings. While orientations could be decoded during and shortly after stimulus presentation, decoding accuracy dropped back close to baseline in the delay. However, the visual evoked response from the task-irrelevant stimulus resulted in a clear re-emergence in decodability. This result provides important proof-of-concept for a promising and relatively simple approach to decode “activity-silent” vWM content using non-invasive EEG. PMID:26388748
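The decoding step can be illustrated with a toy sketch: classify which stimulus class produced a pattern of channel activity using a nearest-class-centroid rule on synthetic data. This is a stand-in for the study's actual multivariate analysis, whose details are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_per_class, n_classes = 16, 20, 4
patterns = rng.normal(size=(n_classes, n_chan))   # class-specific templates

def make_trials(noise=0.8):
    """Synthetic trials: class template plus per-trial channel noise."""
    X, y = [], []
    for c in range(n_classes):
        X.append(patterns[c] + noise * rng.normal(size=(n_per_class, n_chan)))
        y += [c] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = make_trials()
X_test, y_test = make_trials()

# Nearest-centroid decoding: assign each test trial to the closest
# class-mean pattern estimated from the training trials.
centroids = np.stack([X_train[y_train == c].mean(0) for c in range(n_classes)])
pred = np.argmin(((X_test[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y_test).mean()
```

Decoding accuracy above chance (here, 1/4) is the signature the study looks for; when the delay-period signal is "activity-silent", accuracy falls to chance until the impulse stimulus re-exposes the stored content.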

  17. SU-E-J-42: Motion Adaptive Image Filter for Low Dose X-Ray Fluoroscopy in the Real-Time Tumor-Tracking Radiotherapy System.

    PubMed

    Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H

    2012-06-01

    In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic exposure parameters should be set as low as possible to reduce unnecessary imaging dose. However, the fiducial markers may not be recognized because of statistical noise in low-dose imaging. Image processing is envisioned as a solution to improve image quality and to maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment. A spherical gold marker was used as a fiducial marker. About 450 fluoroscopic images of the marker were recorded. In order to mimic respiratory motion of the marker, the images were shifted sequentially. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec, respectively, as the low-dose imaging condition. The tube current was 100 mA for high-dose imaging. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were evaluated by performing template pattern matching on each sequential image. The results with and without image processing were compared. In low-dose imaging, the image registration error and the PRS without image processing were 2.15±1.21 pixels and 46.67±6.40, respectively. With image processing, they were 1.48±0.82 pixels and 67.80±4.51, respectively. There was no significant difference in the image registration error and the PRS between the results of low-dose imaging with image processing and those of high-dose imaging without image processing. The results showed that the recursive filter is effective for maintaining marker tracking stability and accuracy in low-dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
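A motion-adaptive recursive filter of the general kind described can be sketched as follows: each output frame is a weighted average of the current frame and the previous output, and the weight is pushed toward the current frame wherever a large interframe change suggests motion, so a moving marker is not smeared. Parameter values are illustrative, not those of the cited system:

```python
import numpy as np

def adaptive_recursive_filter(frames, alpha_static=0.2, alpha_motion=0.9,
                              motion_threshold=20.0):
    """Per-pixel recursive (IIR) averaging with motion-dependent weight."""
    out = frames[0].astype(float)
    results = [out.copy()]
    for frame in frames[1:]:
        frame = frame.astype(float)
        moving = np.abs(frame - out) > motion_threshold   # motion detection
        alpha = np.where(moving, alpha_motion, alpha_static)
        out = alpha * frame + (1.0 - alpha) * out         # recursive update
        results.append(out.copy())
    return results

# Static noisy background with one bright "marker" moving down one row
# per frame, mimicking the shifted marker images in the experiment.
rng = np.random.default_rng(2)
frames = []
for k in range(10):
    f = 100.0 + 5.0 * rng.normal(size=(32, 32))   # noisy background
    f[4 + k, 16] += 120.0                          # moving marker
    frames.append(f)
filtered = adaptive_recursive_filter(frames)

# Frame-to-frame noise at a static pixel, before and after filtering.
noise_in = np.mean(np.abs(np.diff([f[0, 0] for f in frames])))
noise_out = np.mean(np.abs(np.diff([f[0, 0] for f in filtered])))
```

Static regions are averaged heavily (noise suppressed), while pixels flagged as moving take most of their value from the current frame, keeping the marker sharp for template matching.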

  18. NIST TXR Validation of S-HIS radiances and a UW-SSEC Blackbody

    NASA Astrophysics Data System (ADS)

    Taylor, J. K.; O'Connell, J.; Rice, J. P.; Revercomb, H. E.; Best, F. A.; Tobin, D. C.; Knuteson, R. O.; Adler, D. P.; Ciganovich, N. C.; Dutcher, S. T.; Laporte, D. D.; Ellington, S. D.; Werner, M. W.; Garcia, R. K.

    2007-12-01

    The ability to accurately validate infrared spectral radiances measured from space by direct comparison with airborne spectrometer radiances was first demonstrated using the Scanning High-resolution Interferometer Sounder (S-HIS) aircraft instrument flown under the AIRS on the NASA Aqua spacecraft in 2002 with subsequent successful comparisons in 2004 and 2006. The comparisons span a range of conditions, including arctic and tropical atmospheres, daytime and nighttime, and ocean and land surfaces. Similar comprehensive and successful comparisons have also been conducted with S-HIS for the MODIS sensors, the Tropospheric Emission Spectrometer (TES), and most recently the MetOp Infrared Atmospheric Sounding Interferometer (IASI). These comparisons are part of a larger picture that already shows great progress toward transforming our ability to make, and verify, highly accurate spectral radiance observations from space. A key challenge, especially for climate, is to carefully define the absolute accuracy of satellite radiances. Our vision of the near-term future of spectrally resolved infrared radiance observation includes a new space-borne mission that provides benchmark observations of the emission spectrum for climate. This concept, referred to as the CLimate Absolute Radiance and REfractivity Observatory (CLARREO) in the recent NRC Decadal Survey provides more complete spectral and time-of-day coverage and would fly basic physical standards to eliminate the need to assume on-board reference stability. Therefore, the spectral radiances from this mission will also serve as benchmarks to propagate a highly accurate calibration to other space-borne IR instruments. For the current approach of calibrating infrared flight sensors, in which thermal vacuum tests are conducted before launch and stability is assumed after launch, in-flight calibration validation is essential for highly accurate applications. 
At present, airborne observations provide the only source of direct radiance validation with resulting traceable uncertainties approaching the level required for remote sensing and climate applications (0.1 K, 3-sigma). For the calibration validation process to be accurate, repeatable, and meaningful, the reference instrument must be extremely well characterized and understood, carefully maintained, and accurately calibrated, with the calibration accuracy of the reference instrument tied to absolute standards. Tests of the S-HIS absolute calibration have been conducted using the NIST transfer radiometer (TXR). The TXR provides a more direct connection to the Blackbody reference sources maintained by NIST than the normal traceability of blackbody temperature scales and paint emissivity measurements. Two basic tests were conducted: (1) comparison of radiances measured by the S-HIS to those from the TXR, and (2) measuring the reflectivity of a UW-SSEC blackbody by using the TXR as a stable detector. Preliminary results from both tests are very promising for confirming and refining the expected absolute accuracy of the S-HIS.

  19. Partnership for Edge Physics (EPSI), University of Texas Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, Robert; Carey, Varis; Michoski, Craig

    Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high-fidelity simulations of tokamak plasmas, particularly those aimed at characterization of the edge plasma physics, are computationally expensive, so lower-cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the costs of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in, and therefore the cost of, a simulation can be reduced while maintaining the same accuracy.
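The resampling idea can be sketched with generic systematic resampling (not the actual XGC algorithm): a weighted marker-particle ensemble is replaced by a smaller, equally weighted one whose empirical distribution, and hence its moments, is preserved:

```python
import numpy as np

def systematic_resample(positions, weights, n_out, rng):
    """Draw n_out equally weighted particles from a weighted ensemble."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    edges = np.cumsum(w)
    # Stratified thresholds: one random offset shared across strata.
    u = (rng.random() + np.arange(n_out)) / n_out
    idx = np.minimum(np.searchsorted(edges, u), len(positions) - 1)
    return positions[idx]          # new particles all carry weight 1/n_out

rng = np.random.default_rng(3)
pos = rng.normal(loc=2.0, scale=1.0, size=10_000)    # marker particles
wts = np.exp(-0.5 * (pos - 2.5) ** 2)                # importance weights

resampled = systematic_resample(pos, wts, n_out=2_000, rng=rng)

# Weighted mean before vs. unweighted mean after should agree closely.
mean_before = np.sum(pos * wts) / np.sum(wts)
mean_after = resampled.mean()
```

The particle count (and cost) drops fivefold here while the weighted statistics of the ensemble are maintained, which is the property the resampling step above is designed to preserve.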

  20. Mapping simulated scenes with skeletal remains using differential GPS in open environments: an assessment of accuracy and practicality.

    PubMed

    Walter, Brittany S; Schultz, John J

    2013-05-10

    Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning its utility in mapping a scene, controlled research is necessary to determine the practicality of using newer and enhanced DGPS units for mapping scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this utility in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Secondly, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones was maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained. Furthermore, the application of a DGPS for accurate scene mapping is discussed, and guidelines concerning the implementation of this technology for mapping scattered human skeletal remains in open environments are provided. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
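The horizontal-error computation behind accuracy figures like those above reduces to planar distances between recorded fixes and a known survey-marker coordinate. A worked sketch with invented local-grid coordinates in metres:

```python
import math

def horizontal_error(recorded, known):
    """Planar (horizontal) distance between a recorded fix and a known point."""
    dx = recorded[0] - known[0]
    dy = recorded[1] - known[1]
    return math.hypot(dx, dy)

known_marker = (1000.000, 2000.000)                  # surveyed coordinate (m)
fixes = [(1000.081, 2000.062),                       # repeated DGPS fixes (m)
         (999.913, 2000.055),
         (1000.102, 1999.974)]

errors = [horizontal_error(f, known_marker) for f in fixes]
mean_error_cm = 100 * sum(errors) / len(errors)
```

Drawing a circle of radius `mean_error_cm` around each plotted bone position is the "error buffer" idea mentioned above: it displays the unit's positional uncertainty while preserving the spatial context of the dispersal.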

  1. Description of the process used to create 1992 Hanford Mortality Study database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, E.S.; Buchanan, J.A.; Holter, N.A.

    1992-12-01

    An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files, maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files, also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files, maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides, also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.

  3. CAF: Defining, Refining and Differentiating Constructs

    ERIC Educational Resources Information Center

    Pallotti, Gabriele

    2009-01-01

    This article critically scrutinizes a number of issues involved in the definition and operationalization of complexity, accuracy, and fluency (CAF) constructs. It argues for maintaining clearer distinctions between CAF, on the one hand, and notions such as linguistic development and communicative adequacy, on the other. Adequacy, in particular,…

  4. 40 CFR 205.174 - Remedial orders.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    … calibrated with the acoustic calibrator as often as is necessary throughout testing to maintain the accuracy … (Environmental Protection Agency: Transportation Equipment Noise Emission Controls, Motorcycle Exhaust Systems, § 205.174 Remedial orders; Noise Emission Test Procedures, Appendix I-1 to Subparts D and E, Test Procedure for Street and off-road …)

  5. 40 CFR Appendix I to Subparts D... - Motorcycle Noise Emission Test Procedures

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    … calibrated with the acoustic calibrator as often as is necessary throughout testing to maintain the accuracy … (40 CFR Protection of Environment, vol. 26, 2013-07-01: Motorcycle Noise Emission Test …; Environmental Protection Agency (continued), Noise Abatement Programs, Transportation Equipment Noise Emission Controls, Motorcycle …)

  6. 40 CFR Appendix I to Subparts D... - Motorcycle Noise Emission Test Procedures

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    … calibrated with the acoustic calibrator as often as is necessary throughout testing to maintain the accuracy … (40 CFR Protection of Environment, vol. 25, 2014-07-01: Motorcycle Noise Emission Test …; Environmental Protection Agency (continued), Noise Abatement Programs, Transportation Equipment Noise Emission Controls, Motorcycle …)

  7. Sensitivity and specificity of a briefer version of the Cambridge Cognitive Examination (CAMCog-Short) in the detection of cognitive decline in the elderly: An exploratory study.

    PubMed

    Radanovic, Marcia; Facco, Giuliana; Forlenza, Orestes V

    2018-05-01

    To create a reduced and briefer version of the widely used Cambridge Cognitive Examination (CAMCog) battery as a concise cognitive test to be used in primary and secondary levels of health care to detect cognitive decline. Our aim was to reduce the administration time of the original test while maintaining its diagnostic accuracy. On the basis of the analysis of 835 CAMCog tests performed by 429 subjects (107 controls, 192 mild cognitive impairment [MCI], and 130 dementia patients), we extracted items that most contributed to intergroup differentiation, according to 2 educational levels (≤8 and >8 y of formal schooling). The final 33-item "low education" and 24-item "high education" CAMCog-Short correspond to 48.5% and 35% of the original version and yielded similar rates of accuracy: area under ROC curves (AUC) > 0.9 in the differentiation between controls × dementia and MCI × dementia (sensitivities > 75%; specificities > 90%); AUC > 0.7 for the differentiation between controls and MCI (sensitivities > 65%; specificities > 75%). The CAMCog-Short emerges as a promising, brief, yet sufficiently accurate screening tool for use in clinical settings. Further prospective studies designed to validate its diagnostic accuracy are needed. Copyright © 2018 John Wiley & Sons, Ltd.
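The AUC statistic used above has a simple rank interpretation: for a test on which patients score lower than controls, AUC equals the probability that a randomly chosen control outscores a randomly chosen patient, with ties counting one half. A sketch with synthetic scores (not data from the study):

```python
def auc(higher_group, lower_group):
    """AUC as the pairwise win rate of the higher-scoring group (ties = 0.5)."""
    wins = 0.0
    for h in higher_group:
        for l in lower_group:
            if h > l:
                wins += 1.0
            elif h == l:
                wins += 0.5
    return wins / (len(higher_group) * len(lower_group))

controls = [88, 92, 85, 90, 95, 87]    # synthetic CAMCog-like scores
dementia = [70, 75, 86, 68, 72]
auc_value = auc(controls, dementia)
```

An AUC of 1.0 means every control outscores every patient; 0.5 means the test discriminates no better than chance, which is why the paper reports AUC > 0.9 as strong separation.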

  8. A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.

    PubMed

    Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan

    2014-07-18

    Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search for the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to bigger and more realistic datasets, maintaining 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
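The feature-ranking step can be illustrated with a Fisher-style discriminant score standing in for the paper's DX criterion (whose exact definition is not reproduced here): score each feature by between-class separation over within-class spread, then keep the top-k features. Data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 10
y = np.repeat([0, 1], n // 2)          # two classes: non-interacting/interacting
X = rng.normal(size=(n, d))
X[y == 1, :3] += 2.0                   # only the first three features carry signal

def fisher_scores(X, y):
    """Per-feature separation: squared mean difference over pooled variance."""
    a, b = X[y == 0], X[y == 1]
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)

scores = fisher_scores(X, y)
top_k = np.argsort(scores)[::-1][:3]   # indices of the k best-ranked features
```

Training the classifier on progressively larger prefixes of this ranking, and keeping the prefix with the best cross-validated accuracy, is the usual way such a ranking is turned into an "optimal feature combination".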

  9. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring

    PubMed Central

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-01-01

    We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal. PMID:23774988

  10. Accurate Radiometry from Space: An Essential Tool for Climate Studies

    NASA Technical Reports Server (NTRS)

    Fox, Nigel; Kaiser-Weiss, Andrea; Schmutz, Werner; Thome, Kurtis; Young, Dave; Wielicki, Bruce; Winkler, Rainer; Woolliams, Emma

    2011-01-01

    The Earth's climate is undoubtedly changing; however, the time scale, consequences and causal attribution remain the subject of significant debate and uncertainty. Detection of subtle indicators from a background of natural variability requires measurements over a time base of decades. This places severe demands on the instrumentation used, requiring measurements of sufficient accuracy and sensitivity that can allow reliable judgements to be made decades apart. The International System of Units (SI) and the network of National Metrology Institutes were developed to address such requirements. However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the metrology community. This paper highlights some key measurands and applications driving the uncertainty demand of the climate community in the solar reflective domain, e.g. solar irradiances and reflectances/radiances of the Earth. It discusses how meeting these uncertainties facilitates significant improvement in the forecasting abilities of climate models. After discussing the current state of the art, it describes a new satellite mission, called TRUTHS, which enables, for the first time, high-accuracy SI traceability to be established in orbit. The direct use of a primary standard and replication of the terrestrial traceability chain extends the SI into space, in effect realizing a metrology laboratory in space. Keywords: climate change; Earth observation; satellites; radiometry; solar irradiance

  11. Shifting visual perspective during memory retrieval reduces the accuracy of subsequent memories.

    PubMed

    Marcotti, Petra; St Jacques, Peggy L

    2018-03-01

    Memories for events can be retrieved from visual perspectives that were never experienced, reflecting the dynamic and reconstructive nature of memories. Characteristics of memories can be altered when shifting from an own eyes perspective, the way most events are initially experienced, to an observer perspective, in which one sees oneself in the memory. Moreover, recent evidence has linked these retrieval-related effects of visual perspective to subsequent changes in memories. Here we examine how shifting visual perspective influences the accuracy of subsequent memories for complex events encoded in the lab. Participants performed a series of mini-events that were experienced from their own eyes, and were later asked to retrieve memories for these events while maintaining the own eyes perspective or shifting to an alternative observer perspective. We then examined how shifting perspective during retrieval modified memories by influencing the accuracy of recall on a final memory test. Across two experiments, we found that shifting visual perspective reduced the accuracy of subsequent memories and that reductions in vividness when shifting visual perspective during retrieval predicted these changes in the accuracy of memories. Our findings suggest that shifting from an own eyes to an observer perspective influences the accuracy of long-term memories.

  12. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-10-09

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes was in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes was in a range of conditions, from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
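Applying a tape correction of the kind recommended above is simple arithmetic: if calibration shows a tape reads long or short by a known amount per 100 ft, each field reading is scaled accordingly. The numbers below are illustrative, not values from the report:

```python
def correct_reading(reading_ft, error_ft_per_100ft):
    """Correct a water-level reading for a tape's calibrated scale error.

    error_ft_per_100ft > 0 means the tape reads high (over-reports depth),
    so the correction reduces the reading proportionally.
    """
    return reading_ft * (1.0 - error_ft_per_100ft / 100.0)

# A tape found during calibration to read 0.03 ft high per 100 ft:
corrected = correct_reading(87.42, 0.03)
```

A 0.03 ft-per-100-ft error on an ~87 ft reading amounts to roughly 0.026 ft, which is why uncorrected electric-tape readings can miss the ±0.01 ft recommendation while corrected ones meet it.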

  13. Face matching in a long task: enforced rest and desk-switching cannot maintain identification accuracy

    PubMed Central

    Alenezi, Hamood M.; Fysh, Matthew C.; Johnston, Robert A.

    2015-01-01

    In face matching, observers have to decide whether two photographs depict the same person or different people. Not only is this task remarkably difficult, but accuracy declines further during prolonged testing. The current study investigated whether this decline in long tasks can be eliminated with regular rest breaks (Experiment 1) or room-switching (Experiment 2). Both experiments replicated the accuracy decline for long face-matching tasks and showed that it could not be eliminated with rest or room-switching. These findings suggest that person identification in applied settings, such as passport control, might be particularly error-prone due to the long and repetitive nature of the task. The experiments also show that it is difficult to counteract these problems. PMID:26312179

  14. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    NASA Astrophysics Data System (ADS)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy is tested and evaluated mostly using a set of testing points with the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, such a set of testing points is hard to obtain, so the horizontal accuracy of the orthophoto image is difficult to test and evaluate. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method for testing the horizontal accuracy of orthophoto images. The method uses testing points of differing accuracy and reliability, sourced from both high-accuracy reference data and field measurement. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
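    One standard way to combine check points of unequal quality, sketched here only to illustrate the idea (this record does not give the paper's actual estimator), is inverse-variance weighting of the horizontal residuals:

```python
import math

def weighted_rmse(residuals_m, sigmas_m):
    """Inverse-variance weighted RMSE over check points whose own
    uncertainties differ (hypothetical weighting scheme; residuals and
    per-point sigmas are in metres)."""
    weights = [1.0 / s**2 for s in sigmas_m]
    num = sum(w * r**2 for w, r in zip(weights, residuals_m))
    return math.sqrt(num / sum(weights))
```

    With equal sigmas this reduces to the ordinary RMSE; points from high-accuracy reference data (small sigma) dominate the estimate, while rougher field measurements contribute less.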

  15. School Health Education in a Multicultural Society. ERIC Digest.

    ERIC Educational Resources Information Center

    Anderson, Barbara Frye

    School health education needs to build a broad base of awareness, tolerance, and sensitivity to different expressions of healthy behavior while maintaining scientific accuracy. This can only be accomplished through exposing children to the various types of health knowledge found in different cultures. Health education involves helping students:…

  16. Validating the MISR radiometric scale for the ocean aerosol science communities

    NASA Technical Reports Server (NTRS)

    Bruegge, Carol J.; Abdou, Wedad; Diner, David J.; Gaitley, Barbara; Helmlinger, Mark; Kahn, Ralph; Martonchik, John V.

    2004-01-01

    This paper validates that radiometric accuracy is maintained throughout the dynamic range of the instrument. As part of this study, a new look has been taken on the band-relative scale, and a decrease in the radiance reported for the Red and NIR Bands has resulted.

  17. Response Covariation: The Relationship between Correct Academic Responding and Problem Behavior.

    ERIC Educational Resources Information Center

    Lalli, Joseph S.; Kates, Kelly; Casey, Sean D.

    1999-01-01

    Examines the relationship between the accuracy of academic responding and aggression for two boys with mild retardation. Aggression was highest during spelling instruction; an evaluation showed aggression was escape maintained. Changes in teaching formats resulted in increased posttest scores. Data showed that the rates of problem behavior…

  18. 3 CFR - Enhancing Payment Accuracy Through a “Do Not Pay List”

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... are not made. Agencies maintain many databases containing information on a recipient's eligibility to... databases before making payments or awards, agencies can identify ineligible recipients and prevent certain... pre-payment and pre-award procedures and ensure that a thorough review of available databases with...

  19. 12 CFR 1235.4 - Minimum requirements of a record retention program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... appropriate to support administrative, business, external and internal audit functions, and litigation of the... for appropriate back-up and recovery of electronic records to ensure the same accuracy as the primary... records, preferably searchable, must be maintained on immutable, non-rewritable storage in a manner that...

  20. 12 CFR 1235.4 - Minimum requirements of a record retention program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appropriate to support administrative, business, external and internal audit functions, and litigation of the... for appropriate back-up and recovery of electronic records to ensure the same accuracy as the primary... records, preferably searchable, must be maintained on immutable, non-rewritable storage in a manner that...

  1. 12 CFR 1235.4 - Minimum requirements of a record retention program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... appropriate to support administrative, business, external and internal audit functions, and litigation of the... for appropriate back-up and recovery of electronic records to ensure the same accuracy as the primary... records, preferably searchable, must be maintained on immutable, non-rewritable storage in a manner that...

  2. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs on such problems. In [1], it was shown that hypotheses computed by the DTB model are more comprehensible than those computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
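    The four evaluation measures mentioned are standard and worth stating precisely. A minimal sketch from confusion-matrix counts (the function and argument names are mine, not from the paper; tp, fp, fn, tn are counts of true/false positives/negatives):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of flagged spam that is spam
    recall = tp / (tp + fn)             # fraction of spam that was flagged
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```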

  3. Coal thickness gauge using RRAS techniques, parts 2 and 3

    NASA Technical Reports Server (NTRS)

    King, J. D.; Rollwitz, W. L.

    1980-01-01

    Electron magnetic resonance was investigated as a sensing technique for use in measuring the thickness of the layer of coal overlying the rock substrate. The goal is development of a thickness gauge which will be usable for control of mining machinery to maintain the coal thickness within selected bounds. A sensor must be noncontacting, have a measurement range of 6 inches or more, and an accuracy of 1/2 inch or better. The sensor should be insensitive to variations in spacing between the sensor and the surface, the response speed should be adequate to permit use on continuous mining equipment, and the device should be rugged and otherwise suited for operation under conditions of high vibration, moisture, and dust. Finally, the sensor measurement must not be adversely affected by the natural effects occurring in coal such as impurities, voids, cracks, layering, high moisture level, and other conditions that are likely to be encountered.

  4. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    NASA Astrophysics Data System (ADS)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept for read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered when predicting the read margin characteristic, because the read margin depends on the number of word lines and bit lines. However, excessively long CPU times are required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
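    The dependence of read margin on array size and cell variability can be illustrated with a Monte Carlo toy model: the selected cell forms a voltage divider with a sense load while all unselected cells are lumped into one sneak-path resistance. Everything below (resistance values, log-normal spread, the lumped three-segment sneak-path estimate) is an illustrative assumption, not the paper's MATLAB simulator:

```python
import random

def vread(r_cell, r_sneak, r_load=1e3, v=1.0):
    # Selected cell in parallel with the lumped sneak path, forming a
    # divider with the sense load resistor (toy model).
    r_eff = r_cell * r_sneak / (r_cell + r_sneak)
    return v * r_load / (r_load + r_eff)

def read_margin(n_rows, n_cols, r_lrs=1e4, r_hrs=1e6,
                trials=200, sigma=0.05, seed=0):
    """Monte Carlo mean read margin of an n_rows x n_cols crossbar with
    log-normal cell variability (illustrative, not the paper's simulator)."""
    rng = random.Random(seed)
    n_sneak = (n_rows - 1) + (n_cols - 1) + (n_rows - 1) * (n_cols - 1)
    margins = []
    for _ in range(trials):
        # Sneak path crudely lumped: three serial groups of unselected
        # cells, each modelled by one average high-resistance device.
        r_unsel = r_hrs * rng.lognormvariate(0, sigma)
        r_sneak = 3 * r_unsel / max(n_sneak, 1)
        r_l = r_lrs * rng.lognormvariate(0, sigma)   # low-resistance state
        r_h = r_hrs * rng.lognormvariate(0, sigma)   # high-resistance state
        margins.append(vread(r_l, r_sneak) - vread(r_h, r_sneak))
    return sum(margins) / len(margins)
```

    Even this crude model reproduces the qualitative trend the record describes: the margin shrinks as the number of word lines and bit lines grows, because the sneak resistance collapses.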

  5. Mining Consumer Health Vocabulary from Community-Generated Text

    PubMed Central

    Vydiswaran, V.G. Vinod; Mei, Qiaozhu; Hanauer, David A.; Zheng, Kai

    2014-01-01

    Community-generated text corpora can be a valuable resource for extracting consumer health vocabulary (CHV) and linking it to professional terminologies and alternative variants. In this research, we propose a pattern-based text-mining approach to identify pairs of CHV and professional terms from Wikipedia, a large text corpus created and maintained by the community. A novel measure, leveraging the ratio of frequency of occurrence, was used to differentiate consumer terms from professional terms. We empirically evaluated the applicability of this approach using a large data sample consisting of MEDLINE abstracts and all posts from an online health forum, MedHelp. The results show that the proposed approach is able to identify synonymous pairs and label terms as either consumer or professional with high accuracy. We conclude that the proposed approach offers great potential to produce a high-quality CHV and improve the performance of computational applications in processing consumer-generated health text. PMID:25954426
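    The frequency-ratio idea can be sketched as follows; the corpus counts, add-one smoothing, and threshold are hypothetical stand-ins for the paper's novel measure:

```python
from collections import Counter

# Toy term frequencies in a lay corpus vs a professional corpus
# (invented numbers for illustration).
consumer_freq = Counter({"belly button": 30, "umbilicus": 1})
professional_freq = Counter({"belly button": 2, "umbilicus": 25})

def label_term(term, threshold=1.0):
    """Label a term by the ratio of its lay-corpus frequency to its
    professional-corpus frequency, with add-one smoothing."""
    ratio = (consumer_freq[term] + 1) / (professional_freq[term] + 1)
    return "consumer" if ratio > threshold else "professional"
```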

  6. Identification of Extracellular Segments by Mass Spectrometry Improves Topology Prediction of Transmembrane Proteins.

    PubMed

    Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E

    2017-02-13

    Transmembrane proteins play a crucial role in signaling, ion transport, and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environments of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.

  7. Quasi steady-state aerodynamic model development for race vehicle simulations

    NASA Astrophysics Data System (ADS)

    Mohrfeld-Halterman, J. A.; Uddin, M.

    2016-01-01

    Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.
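    A regression of this kind can be sketched as an ordinary least-squares fit over a polynomial basis in the three independent variables; the quadratic basis below is an assumption for illustration, not the paper's actual model form:

```python
import numpy as np

def basis(h_f, h_r, yaw):
    # Quadratic response surface in front ground clearance, rear ride
    # height, and yaw angle (illustrative choice of regressors).
    return np.column_stack([np.ones_like(h_f), h_f, h_r, yaw,
                            h_f**2, h_r**2, yaw**2,
                            h_f * h_r, h_f * yaw, h_r * yaw])

def fit_aero_model(h_f, h_r, yaw, cm):
    """Least-squares fit of a pitch-moment coefficient to wind tunnel data."""
    coef, *_ = np.linalg.lstsq(basis(h_f, h_r, yaw), cm, rcond=None)
    return coef

def predict(coef, h_f, h_r, yaw):
    return basis(h_f, h_r, yaw) @ coef
```

    Model complexity reduction in this setting corresponds to dropping basis terms whose coefficients are statistically insignificant, which in turn reduces the number of wind tunnel points needed to identify the model.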

  8. A solution to water vapor in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Gloss, Blair B.; Bruce, Robert A.

    1989-01-01

    As cryogenic wind tunnels are utilized, problems associated with the low temperature environment are being discovered and solved. Recently, water vapor contamination was discovered in the National Transonic Facility, and the source was shown to be the internal insulation which is a closed-cell polyisocyanurate foam. After an extensive study of the absorptivity characteristics of the NTF thermal insulation, the most practical solution to the problem was shown to be the maintaining of a dry environment in the circuit at all times. Utilizing a high aspect ratio transport model, it was shown that the moisture contamination effects on the supercritical wing pressure distributions were within the accuracy of setting test conditions and as such were considered negligible for this model.

  9. KSC-2014-4555

    NASA Image and Video Library

    2014-11-20

    CAPE CANAVERAL, Fla. – Workers are on hand to receive NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, into the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  10. Algorithms for tensor network renormalization

    NASA Astrophysics Data System (ADS)

    Evenbly, G.

    2017-01-01

    We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.
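    The first ingredient the record describes, representing a partition function as a tensor network, can be sketched for the 2D classical Ising model. The snippet below builds the standard local tensor and checks a 2x2-torus contraction against a brute-force spin sum; this is the setup TNR starts from, not the TNR coarse-graining itself:

```python
import numpy as np
from itertools import product

def ising_tensor(beta):
    """Local tensor T[u,r,d,l] whose closed-network contraction gives the
    2D classical Ising partition function (standard construction)."""
    W = np.array([[np.exp(beta), np.exp(-beta)],
                  [np.exp(-beta), np.exp(beta)]])   # bond Boltzmann weights
    w, U = np.linalg.eigh(W)          # W = U diag(w) U^T, w > 0 for beta > 0
    M = U @ np.diag(np.sqrt(w))       # so that M @ M.T == W
    return np.einsum('su,sr,sd,sl->urdl', M, M, M, M)

def z_2x2(beta):
    # Contract four site tensors arranged on a 2x2 torus.
    T = ising_tensor(beta)
    return np.einsum('faeb,hbga,ecfd,gdhc->', T, T, T, T)

def z_brute(beta):
    # Direct sum over the four spins; each neighbor pair is doubly
    # bonded on a 2x2 torus, hence the factors of 2.
    z = 0.0
    for s00, s01, s10, s11 in product([1, -1], repeat=4):
        e = 2 * (s00 * s01 + s10 * s11 + s00 * s10 + s01 * s11)
        z += np.exp(beta * e)
    return z
```

    TNR (and its simpler precursor, TRG) then repeatedly truncates and re-contracts such a network to coarse-grain it while bounding the error.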

  11. Insights to primitive replication derived from structures of small oligonucleotides

    NASA Technical Reports Server (NTRS)

    Smith, G. K.; Fox, G. E.

    1995-01-01

    Available information on the structure of small oligonucleotides is surveyed. It is observed that even small oligomers typically exhibit defined structures over a wide range of pH and temperature. These structures rely on a plethora of non-standard base-base interactions in addition to the traditional Watson-Crick pairings. Stable duplexes, though typically antiparallel, can be parallel or staggered and perfect complementarity is not essential. These results imply that primitive template directed reactions do not require high fidelity. Hence, the extensive use of Watson-Crick complementarity in genes rather than being a direct consequence of the primitive condensation process, may instead reflect subsequent selection based on the advantage of accuracy in maintaining the primitive genetic machinery once it arose.

  12. Digital phase-lock loop

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1991-01-01

    An improved digital phase lock loop incorporates several distinctive features that attain better performance at high loop gain and better phase accuracy. These features include: phase feedback to a number-controlled oscillator in addition to phase rate; analytical tracking of phase (both integer and fractional cycles); an amplitude-insensitive phase extractor; a more accurate method for extracting measured phase; a method for changing loop gain during a track without loss of lock; and a method for avoiding loss of sampled data during computation delay, while maintaining excellent tracking performance. The advantages of using phase and phase-rate feedback are demonstrated by comparing performance with that of rate-only feedback. Extraction of phase by the method of modeling provides accurate phase measurements even when the number-controlled oscillator phase is discontinuously updated.
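    The benefit of feeding back both phase and phase rate can be illustrated with a toy second-order loop; the gains and the wrapped phase extractor below are illustrative choices, not the patented design:

```python
def track(f_offset_cycles, n_steps=500, g1=0.2, g2=0.02):
    """Toy second-order digital PLL: the phase error corrects both the
    NCO phase (gain g1) and its rate (gain g2), echoing the record's
    phase plus phase-rate feedback. Gains are illustrative."""
    nco_phase, nco_rate = 0.0, 0.0
    errs = []
    for k in range(n_steps):
        in_phase = f_offset_cycles * k
        # Phase extractor: wrapped difference in cycles, in [-0.5, 0.5).
        err = (in_phase - nco_phase + 0.5) % 1.0 - 0.5
        nco_rate += g2 * err              # phase-rate feedback
        nco_phase += nco_rate + g1 * err  # phase feedback plus rate update
        errs.append(err)
    return errs
```

    With rate feedback the loop drives the phase error of a constant-frequency offset to zero; with phase-only feedback (g2 = 0) the same input leaves a permanent steady-state error.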

  13. Validation of a Plasma-Based Comprehensive Cancer Genotyping Assay Utilizing Orthogonal Tissue- and Plasma-Based Methodologies.

    PubMed

    Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli

    2018-04-24

    Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPAs) and negative percent agreement (NPAs) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples comprised a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. 
©2018 AACR.
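    The agreement statistics quoted (PPA, NPA, prevalence-adjusted PPV) follow standard definitions; a minimal sketch with invented counts:

```python
def concordance(tp, fp, fn, tn):
    """Positive and negative percent agreement against an orthogonal
    comparator (sensitivity/specificity analogs)."""
    ppa = tp / (tp + fn)
    npa = tn / (tn + fp)
    return ppa, npa

def adjusted_ppv(ppa, npa, prevalence):
    """PPV recomputed at an assumed variant prevalence rather than the
    study mix (standard Bayes adjustment; inputs here are illustrative)."""
    tp = ppa * prevalence
    fp = (1 - npa) * (1 - prevalence)
    return tp / (tp + fp)
```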

  14. An accuracy improvement method for the topology measurement of an atomic force microscope using a 2D wavelet transform.

    PubMed

    Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan

    2018-05-01

    The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected and its calibration is quite straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails and nonlinear behaviors must be considered, because they cause inaccuracies in the measured image. In order to avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach has the disadvantage of relatively low precision. In order to obtain high-precision data with good linearity, we propose a method of overcoming the low precision of the strain gauge while maintaining its good linearity. The topology image obtained from the strain gauge sensor is expected to show significant noise at high frequencies, whereas the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals can be separated from both topology images, an image can be constructed with high accuracy and low noise. A 2D Haar wavelet transform is used to separate the low frequencies from the high frequencies. Our proposed method thus uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
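    The proposed fusion can be sketched with a hand-rolled one-level 2D Haar transform: keep the low-frequency (LL) subband from the linear strain-gauge image and the detail subbands from the low-noise controller image. Single-level decomposition and orthonormal scaling are simplifying assumptions on my part:

```python
import numpy as np

def haar2(img):
    """One-level orthonormal 2D Haar transform of an even-sized array."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)   # row pairs: average
    d = (img[0::2] - img[1::2]) / np.sqrt(2)   # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img

def fuse(strain_img, controller_img):
    """Low frequencies from the linear strain-gauge image, high
    frequencies from the low-noise controller image (one-level sketch)."""
    ll, _, _, _ = haar2(strain_img)
    _, lh, hl, hh = haar2(controller_img)
    return ihaar2(ll, lh, hl, hh)
```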

  15. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.

    PubMed

    Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A

    2014-10-01

    Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. 
Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
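    The 2%/2 mm gamma criterion used above has a compact definition; a simplified global 1D sketch (real implementations interpolate and work in 3D):

```python
import numpy as np

def gamma_pass_rate(ref, evl, xs, dose_tol=0.02, dist_tol_mm=2.0):
    """Global 1D gamma analysis: for each reference point, take the
    minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric; a point passes if gamma <= 1.
    Simplified sketch of the cited method, on sampled points only."""
    d_max = ref.max()
    passed = 0
    for xr, dr in zip(xs, ref):
        dd = (evl - dr) / (dose_tol * d_max)   # dose axis, in tolerances
        dx = (xs - xr) / dist_tol_mm           # spatial axis, in tolerances
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += gamma <= 1.0
    return passed / len(ref)
```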

  16. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm.
The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  17. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, a focal plane flatness of less than 25 μm (.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing the measurement transfers to the housing. In the design, proper material selection requires coefficient-of-expansion-matched materials, which minimize both the physical shift over temperature and the stress induced into the detector. When required, the co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the equipment's accuracy, and the repeatability and precision of the measurements.

  18. Infrared calibration for climate: a perspective on present and future high-spectral resolution instruments

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Anderson, James G.; Best, Fred A.; Tobin, David C.; Knuteson, Robert O.; LaPorte, Daniel D.; Taylor, Joe K.

    2006-12-01

    The new era of high spectral resolution infrared instruments for atmospheric sounding offers great opportunities for climate change applications. A major issue with most of our existing IR observations from space is spectral sampling uncertainty and the lack of standardization in spectral sampling. The new ultra resolution observing capabilities from the AIRS grating spectrometer on the NASA Aqua platform and from new operational FTS instruments (IASI on Metop, CrIS for NPP/NPOESS, and the GIFTS for a GOES demonstration) will go a long way toward improving this situation. These new observations offer the following improvements: 1. Absolute accuracy, moving from issues of order 1 K to <0.2-0.4 K brightness temperature, 2. More complete spectral coverage, with Nyquist sampling for scale standardization, and 3. Capabilities for unifying IR calibration among different instruments and platforms. However, more needs to be done to meet the immediate needs for climate and to effectively leverage these new operational weather systems, including 1. Placing special emphasis on making new instruments as accurate as they can be to realize the potential of technological investments already made, 2. Maintaining a careful validation program for establishing the best possible direct radiance check of long-term accuracy; specifically, continuing to use aircraft- or balloon-borne instruments that are periodically checked directly with NIST, and 3. Committing to a simple, new IR mission that will provide an ongoing backbone for the climate observing system. The new mission would make use of Fourier Transform Spectrometer measurements to fill in spectral and diurnal sampling gaps of the operational systems and provide a benchmark with better than 0.1 K 3-sigma accuracy based on standards that are verifiable in-flight.

  19. Datum maintenance of the main Egyptian geodetic control networks by utilizing Precise Point Positioning "PPP" technique

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elmewafey, Mahmoud; Farahan, Magda H.

    2016-06-01

    A geodetic control network is the wire-frame or skeleton on which continuous and consistent mapping, Geographic Information Systems (GIS), and surveys are based. Traditionally, geodetic control points are established as permanent physical monuments placed in the ground and precisely marked, located, and documented. With the development of satellite surveying methods, their availability, and their high degree of accuracy, a geodetic control network can be established using GNSS and referred to an international terrestrial reference frame used as a three-dimensional geocentric reference system for a country. Based on this concept, in 1992 the Egypt Survey Authority (ESA) established two networks, namely the High Accuracy Reference Network (HARN) and the National Agricultural Cadastral Network (NACN). To transfer the International Terrestrial Reference Frame to the HARN, the HARN was connected with four IGS stations. The processing yielded relative network accuracy standards between stations of 1:10,000,000 (Order A) for the HARN and 1:1,000,000 (Order B) for the NACN, defined in ITRF1994 at epoch 1996. Since 1996, ESA has not performed any updating or maintenance work on these networks. To assess how this lack of maintenance has degraded the value of the HARN and NACN, the available HARN and NACN stations in the Nile Delta were observed. The processing of the tested part was done with the CSRS-PPP service, which utilizes Precise Point Positioning (PPP), and with Trimble Business Center (TBC). The study shows the feasibility of Precise Point Positioning for updating the absolute positioning of the HARN network and its role in updating the reference frame (ITRF). The study also confirmed the necessity of datum maintenance for the Egyptian networks, a role that has so far been absent.
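    Datum maintenance by PPP essentially means re-expressing station coordinates in the current reference-frame realization and epoch. A minimal sketch of the underlying linear epoch propagation X(t) = X(t0) + v*(t - t0), with purely illustrative coordinates and velocities (not actual HARN values):

```python
import numpy as np

def propagate_coordinates(x0, velocity, epoch0, epoch):
    """Linear propagation of ITRF Cartesian coordinates (m) using a station
    velocity (m/yr), e.g. from plate motion; epochs in decimal years."""
    return np.asarray(x0) + np.asarray(velocity) * (epoch - epoch0)

# Hypothetical Nile Delta station (illustrative numbers only)
x_1996 = np.array([4728000.000, 2879000.000, 3157000.000])  # m, at epoch 1996.0
v = np.array([-0.0220, 0.0135, 0.0155])                     # m/yr (assumed)
x_2016 = propagate_coordinates(x_1996, v, 1996.0, 2016.0)
print(x_2016 - x_1996)   # 20 years of accumulated motion, several decimetres
```

    Displacements of this size, left uncorrected for two decades, are exactly why the tested network coordinates no longer agree with a modern frame realization.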

  20. High accuracy analysis of whistlers measured simultaneously on ground station and on board of the DEMETER satellite

    NASA Astrophysics Data System (ADS)

    Hamar, D.; Ferencz, Cs.; Steinbach, P.; Lichtenberger, J.; Ferencz, O. E.; Parrot, M.

    2009-04-01

    The mechanism and effect of the coupling of electromagnetic signals from the lower ionosphere into the Earth-ionosphere waveguide (EIWG) can be examined through the analysis of simultaneous broadband VLF recordings acquired at a ground station (Tihany, Hungary) and on a LEO satellite (DEMETER) during nearby passes. Single-hop whistlers, selected from concurrent broadband VLF data sets, were analyzed with high accuracy by applying the matched filtering (MF) technique developed previously for signal analysis. The accuracy of the frequency-time-amplitude pattern and the resolution of closely spaced whistler traces were further increased with a least-squares estimation of the parameters of the output of the MF procedure. One result of this analysis is the fine structure of the whistlers, which cannot be recognized in conventional spectrograms. The comparison of the detailed fine structure of the whistlers measured on board and on the ground enabled us to reliably select the corresponding signal pairs. The remarkable difference seen in the fine structure of matching whistler occurrences in the satellite and ground data series can be attributed, e.g., to the effect of the inhomogeneous ionospheric plasma (trans-ionospheric impulse propagation) or to the process of wave energy leaking out of the ionized medium into the EIWG. This field needs further investigation. References: Ferencz, Cs., Ferencz, O. E., Hamar, D. and Lichtenberger, J. (2001) Whistler Phenomena, Short Impulse Propagation; Kluwer Academic Publishers, ISBN 0-7923-6995-5, Netherlands. Lichtenberger, J., Hamar, D. and Ferencz, Cs. (2003) Methods for analyzing the structure and propagation characteristics of whistlers, in: Very Low Frequency (VLF) Phenomena, Narosa Publishing House, New Delhi, pp. 88-107.
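    The matched filtering idea can be illustrated with a toy example: synthesize whistler-like templates following the Eckersley dispersion law t(f) = D/sqrt(f) and pick the dispersion whose template correlates best with the recording. The sampling rate, band, and dispersion values below are arbitrary illustrations, not the parameters of the Tihany or DEMETER data:

```python
import numpy as np

FS, DUR = 8000.0, 0.6          # sampling rate (Hz) and record length (s), assumed

def whistler_template(d, f_lo=1000.0, f_hi=3500.0, n_freqs=200):
    """Unit-energy whistler-like signal: each frequency component f is
    delayed according to the dispersion law t(f) = d / sqrt(f)."""
    t = np.arange(int(FS * DUR)) / FS
    x = np.zeros_like(t)
    for f in np.linspace(f_lo, f_hi, n_freqs):
        x += np.cos(2 * np.pi * f * (t - d / np.sqrt(f)))
    return x / np.linalg.norm(x)

def estimate_dispersion(signal, candidates):
    """Matched filtering: return the candidate dispersion whose template
    gives the largest peak cross-correlation with the signal."""
    scores = [np.max(np.correlate(signal, whistler_template(d), mode="same"))
              for d in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
truth = 5.0                                        # "true" dispersion, s * Hz^0.5
sig = whistler_template(truth) + 0.05 * rng.standard_normal(int(FS * DUR))
est = estimate_dispersion(sig, [3.0, 4.0, 5.0, 6.0, 7.0])
print(est)
```

    The frequency-dependent delay cannot be undone by a simple time shift, so only the correctly dispersed template aligns with the signal across the whole band.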

  1. Dynamic graciloplasty for urinary incontinence: the potential for sequential closed-loop stimulation.

    PubMed

    Zonnevijlle, Erik D H; Perez-Abadia, Gustavo; Stremel, Richard W; Maldonado, Claudio J; Kon, Moshe; Barker, John H

    2003-11-01

    Muscle tissue transplantation applied to regain or dynamically assist contractile functions is known as 'dynamic myoplasty'. Success rates of clinical applications are unpredictable because of a lack of endurance, ischemic lesions, abundant scar formation, and inadequate performance of tasks due to a lack of refined control. Electrical stimulation is used to control dynamic myoplasties and should be improved to reduce some of these drawbacks. Sequential segmental neuromuscular stimulation improves endurance, and closed-loop control offers refinement in the rate of contraction of the muscle, while function-controlling stimulator algorithms present the possibility of performing more complex tasks. An acute feasibility study combining these techniques was performed in anaesthetised dogs. Electrically stimulated gracilis-based neo-sphincters were compared with native sphincters with regard to their ability to maintain continence. Measurements were made during fast bladder pressure changes, static high bladder pressure, and slow filling of the bladder, mimicking, among other things, posture changes, lifting heavy objects, and diuresis. In general, neo-sphincter and native sphincter performance showed no significant difference during these measurements. However, during high bladder pressures reaching 40 cm H2O, the neo-sphincters maintained positive pressure gradients, whereas most native sphincters relaxed. During slow filling of the bladder, the neo-sphincters maintained a controlled positive pressure gradient for a prolonged time without any form of training. Furthermore, the accuracy of these maintained pressure gradients proved to be within the limits set by the native sphincters. Refinements using more complicated self-learning function-controlling algorithms also proved effective and are briefly discussed. In conclusion, a combination of sequential stimulation, closed-loop control, and function-controlling algorithms proved feasible in this dynamic graciloplasty model. 
    Neo-sphincters were created that would probably provide acceptable performance once the stimulation system can be implanted and further tested. Scaling this technique down to implantable proportions seems justified and will enable exploration of its possible benefits.

  2. Effect of sample storage temperature and buffer formulation on faecal immunochemical test haemoglobin measurements.

    PubMed

    Symonds, Erin L; Cole, Stephen R; Bastin, Dawn; Fraser, Robert Jl; Young, Graeme P

    2017-12-01

    Objectives: Faecal immunochemical test accuracy may be adversely affected when samples are exposed to high temperatures. This study evaluated the effect of two sample collection buffer formulations (OC-Sensor, Eiken) and storage temperatures on faecal haemoglobin readings. Methods: Faecal immunochemical test samples returned in a screening programme and with ≥10 µg Hb/g faeces in either the original or the new formulation haemoglobin-stabilizing buffer were stored in the freezer, refrigerator, or at room temperature (22℃-24℃), and reanalysed after 1-14 days. Samples in the new buffer were also reanalysed after storage at 35℃ and 50℃. Results were expressed as a percentage of the initial concentration, and the number of days that levels were maintained at no less than 80% was calculated. Results: Haemoglobin concentrations were maintained above 80% of their initial concentration with both freezer and refrigerator storage, regardless of buffer formulation or storage duration. Stability at room temperature was significantly better in the new buffer, with haemoglobin remaining above 80% for 20 days compared with 6 days in the original buffer. Storage at 35℃ or 50℃ in the new buffer maintained haemoglobin above 80% for 8 and 2 days, respectively. Conclusion: The new formulation buffer has enhanced haemoglobin-stabilizing properties when samples are exposed to temperatures greater than 22℃.

  3. The protein-protein interface evolution acts in a similar way to antibody affinity maturation.

    PubMed

    Li, Bohua; Zhao, Lei; Wang, Chong; Guo, Huaizu; Wu, Lan; Zhang, Xunming; Qian, Weizhu; Wang, Hao; Guo, Yajun

    2010-02-05

    Understanding the evolutionary mechanism that acts at the interfaces of protein-protein complexes is a fundamental issue of high interest for delineating the macromolecular complexes and networks responsible for regulation and complexity in biological systems. To investigate whether the evolution of protein-protein interfaces acts in a similar way to antibody affinity maturation, we incorporated evolutionary information derived from antibody affinity maturation into common simulation techniques to evaluate the method's prediction success rates for affinity improvement in four different systems: antibody-receptor, antibody-peptide, receptor-membrane ligand, and receptor-soluble ligand. Interestingly, the same evolutionary information improved the prediction success rates in all four protein-protein complexes with exceptionally high accuracy (>57%). One of the most striking findings of our study is that not only in the antibody-combining site but also in the other protein-protein interfaces, almost all of the affinity-enhancing mutations are located at the germline hotspot sequences (RGYW or WA), indicating that DNA hotspot mechanisms may be widely used in the evolution of protein-protein interfaces. Our data suggest that the evolution of distinct protein-protein interfaces may use the same basic strategy under selection pressure to maintain interactions. Additionally, our data indicate that classical simulation techniques incorporating evolutionary information derived from in vivo antibody affinity maturation can be utilized as a powerful tool to improve the binding affinity of protein-protein complexes with high accuracy.
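    The RGYW/WA hotspot motifs named above are written in IUPAC ambiguity codes (R = A/G, Y = C/T, W = A/T), so locating them in a candidate sequence is a simple pattern search. A minimal sketch on a made-up toy sequence (not one of the paper's antibody genes):

```python
import re

# RGYW or WA in IUPAC codes: R = A/G, Y = C/T, W = A/T.
# The lookahead lets overlapping occurrences all be reported.
HOTSPOT = re.compile(r"(?=([AG]G[CT][AT]|[AT]A))")

def hotspot_positions(seq):
    """Return (start, motif) pairs for every RGYW/WA occurrence in `seq`."""
    return [(m.start(), m.group(1)) for m in HOTSPOT.finditer(seq.upper())]

# Toy sequence (illustrative only)
print(hotspot_positions("CAGCTATGGT"))   # -> [(1, 'AGCT'), (4, 'TA')]
```

    Scanning interface-coding regions this way gives the candidate positions where, per the abstract's observation, affinity-enhancing mutations tend to cluster.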

  4. Characterization of the Nimbus-7 SBUV radiometer for the long-term monitoring of stratospheric ozone

    NASA Technical Reports Server (NTRS)

    Cebula, Richard P.; Park, H.; Heath, D. F.

    1988-01-01

    Precise knowledge of in-orbit sensitivity change is critical for the successful monitoring of stratospheric ozone by satellite-based remote sensors. This paper evaluates those aspects of in-flight operation that influence the long-term stability of the upper stratospheric ozone measurements made by the Nimbus-7 SBUV spectroradiometer and chronicles the methods used to maintain the long-term albedo calibration of this UV sensor. It is shown that the instrument's calibration for the ozone measurement, the albedo calibration, has been maintained over the first 6 years of operation to an accuracy of approximately ±2 percent. The instrument's wavelength calibration is shown to drift linearly with time. Knowledge of the SBUV wavelength assignment is maintained to a precision of 0.02 nm.

  5. Hypersonic entry vehicle state estimation using nonlinearity-based adaptive cubature Kalman filters

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Xin, Ming

    2017-05-01

    Guidance, navigation, and control of a hypersonic vehicle landing on Mars rely on precise state feedback information, which is obtained from state estimation. The high uncertainty and nonlinearity of the entry dynamics make the estimation a very challenging problem. In this paper, a new adaptive cubature Kalman filter is proposed for state trajectory estimation of a hypersonic entry vehicle. This new adaptive estimation strategy is based on a measure of nonlinearity of the stochastic system. According to the severity of nonlinearity along the trajectory, either a high-degree cubature rule or the conventional third-degree cubature rule is adaptively used in the cubature Kalman filter. This strategy has the benefit of attaining higher estimation accuracy only when necessary, without incurring excessive computational load. The simulation results demonstrate that the proposed adaptive filter exhibits better performance than the conventional third-degree cubature Kalman filter while matching the performance of the uniform high-degree cubature Kalman filter at lower computational complexity.
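    The third-degree rule at the heart of the conventional CKF propagates 2n equally weighted cubature points. A minimal sketch of the standard point generation (the state and covariance values are arbitrary):

```python
import numpy as np

def cubature_points(x, P):
    """Third-degree spherical-radial cubature rule: 2n equally weighted points
    x +/- sqrt(n) * (columns of the Cholesky factor of P)."""
    n = len(x)
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([S, -S])   # n x 2n
    return x[:, None] + offsets                 # each column is one cubature point

# Sanity check: the points reproduce the mean and covariance they encode
x = np.array([1.0, -2.0, 0.5])
P = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])
pts = cubature_points(x, P)
print(pts.mean(axis=1))                                       # recovers x
cov = (pts - x[:, None]) @ (pts - x[:, None]).T / (2 * len(x))
print(np.round(cov, 6))                                       # recovers P
```

    Higher-degree rules add points (and cost), which is exactly the trade-off the paper's adaptive strategy manages by switching rules only where the nonlinearity measure demands it.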

  6. High-precision register error control using active-motion-based roller in roll-to-roll gravure printing

    NASA Astrophysics Data System (ADS)

    Jung, Hoeryong; Nguyen, Ho Anh Duc; Choi, Jaeho; Yim, Hongsik; Shin, Kee-Hyun

    2018-05-01

    The roll-to-roll (R2R) gravure printing method is increasingly being utilized to fabricate electronic devices such as organic thin-film transistors (OTFTs), radio-frequency identification (RFID) tags, and flexible PCBs, owing to its high throughput and large area. High-precision registration is crucial to satisfy the demands of device miniaturization and of improved resolution and accuracy. This paper presents a novel register control method that uses an active motion-based roller (AMBR) to reduce register error in R2R gravure printing. Instead of shifting the phase of the downstream printing roller, which leads to undesired tension disturbance, the 1-degree-of-freedom (1-DOF) mechanical device, the AMBR, is used to compensate for web elongation by controlling its motion according to the register error. The performance of the proposed control method is verified through simulations and experiments, and the results show that the proposed register control method using the AMBR can maintain the register error under ±15 µm.
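    The control objective can be caricatured with a toy discrete loop: web elongation disturbs the register each print cycle, and the compensating roller motion removes a fixed fraction of the measured error. The gain and disturbance levels below are assumptions for illustration, not the paper's identified plant:

```python
import random

# Toy closed register loop (illustrative only): each print cycle adds a
# web-elongation disturbance; the compensator removes a fraction Kp of the error.
random.seed(3)
Kp = 0.6            # proportional gain (assumed)
error = 120.0       # initial register error, µm
history = []
for _ in range(40):
    disturbance = random.uniform(-4.0, 4.0)      # µm of elongation per cycle
    error = (1 - Kp) * error + disturbance       # compensation + new disturbance
    history.append(error)
print(max(abs(e) for e in history[10:]))         # settles well inside ±15 µm
```

    After the initial transient decays, the steady-state error is bounded by the per-cycle disturbance divided by the loop gain, which is the sense in which a compensating roller can hold registration within a spec like ±15 µm.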

  7. Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map

    NASA Astrophysics Data System (ADS)

    Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.

    2016-03-01

    Multipath is one of the major error sources in high-accuracy GNSS positioning. Various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches utilize the spatiotemporal repeatability of the multipath effect in a static environment, hence they can be implemented to generate multipath correction models for real-time GNSS data processing. We focus on the spatiotemporal-repeatability-based MHM and SF/ASF approaches and compare their performance in multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, with a residual variance reduction of about 50% over the short span (the next 5 days) that remains at roughly 45% over the longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is medium to low frequency.
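    The MHM idea is simple to sketch: because multipath at a static station repeats with satellite geometry, residuals can be averaged into azimuth/elevation cells and replayed as corrections on later days. A minimal illustration with made-up numbers (cell size and residuals are assumptions):

```python
class MultipathHemisphericalMap:
    """Grid carrier-phase residuals by azimuth/elevation and use the per-cell
    mean as a multipath correction (a simplified MHM sketch)."""
    def __init__(self, az_step=5.0, el_step=5.0):
        self.az_step, self.el_step = az_step, el_step
        self.sums, self.counts = {}, {}

    def _cell(self, az, el):
        return (int(az // self.az_step), int(el // self.el_step))

    def add(self, az, el, residual):
        c = self._cell(az, el)
        self.sums[c] = self.sums.get(c, 0.0) + residual
        self.counts[c] = self.counts.get(c, 0) + 1

    def correction(self, az, el):
        c = self._cell(az, el)
        return self.sums[c] / self.counts[c] if c in self.counts else 0.0

# Residuals from day 1 in one az/el cell correct a day 2 observation in that cell
mhm = MultipathHemisphericalMap()
mhm.add(123.4, 37.2, 0.011)   # residuals in metres (illustrative)
mhm.add(124.1, 36.8, 0.013)
print(mhm.correction(123.0, 35.5))   # -> 0.012 (cell mean)
```

    Keying the map on the satellite's sky position rather than on sidereal time is what makes MHM straightforward to apply in real time, as the abstract notes.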

  8. The capability of static and dynamic features to distinguish competent from genuinely expert practitioners in pediatric diagnosis.

    PubMed

    Loveday, Thomas; Wiggins, Mark W; Searle, Ben J; Festa, Marino; Schell, David

    2013-02-01

    The authors describe the development of a new, more objective method of distinguishing experienced, competent nonexpert practitioners from expert practitioners within pediatric intensive care. Expert performance involves the acquisition and use of refined feature-event associations (cues) in the operational environment. Competent nonexperts, although experienced, possess only rudimentary cue associations in memory. Thus, they cannot respond as efficiently or as reliably as their expert counterparts, particularly when key diagnostic information is unavailable, such as that provided by dynamic cues. This study involved the application of four distinct tasks in which the use of relevant cues could be expected to increase both the accuracy and the efficiency of diagnostic performance. These tasks included both static and dynamic stimuli that were varied systematically. A total of 50 experienced pediatric intensive care staff took part in the study. The sample clustered into two levels across the tasks: participants who performed at a consistently high level throughout the four tasks were labeled experts, and participants who performed at a lower level throughout the tasks were labeled competent nonexperts. The groups differed in their responses to the diagnostic scenarios presented in two of the tasks and in their ability to maintain performance in the absence of dynamic features. Experienced pediatricians can thus be divided into two groups on the basis of their capacity to acquire and use cues; these groups differ in their diagnostic accuracy and in their ability to maintain performance in the absence of dynamic features. The tasks may be used to identify practitioners who are failing to acquire expertise at a rate consistent with their experience, position, or training. This information may be used to guide targeted training efforts.

  9. Characterizing DebriSat Fragments: So Many Fragments, So Much Data, and So Little Time

    NASA Technical Reports Server (NTRS)

    Shiotani, B.; Rivero, M.; Carrasquilla, M.; Allen, S.; Fitz-Coy, N.; Liou, J.-C.; Huynh, T.; Sorge, M.; Cowardin, H.; Opiela, J.; hide

    2017-01-01

    To improve prediction accuracy, the DebriSat project was conceived by NASA and the DoD to update existing standard break-up models. Updating standard break-up models requires detailed fragment characteristics such as physical size, material properties, bulk density, and ballistic coefficient. For the DebriSat project, a representative modern LEO spacecraft was developed and subjected to a laboratory hypervelocity impact test, and all generated fragments with at least one dimension greater than 2 mm are collected, characterized, and archived. Since the beginning of the characterization phase of the DebriSat project, over 130,000 fragments have been collected, and approximately 250,000 fragments are expected to be collected in total, a three-fold increase over the 85,000 fragments predicted by the current break-up model. The challenge throughout the project has been to ensure the integrity and accuracy of the characteristics of each fragment. To this end, the post-hypervelocity-impact test activities, which include fragment collection, extraction, and characterization, have been designed to minimize handling of the fragments. The procedures for fragment collection, extraction, and characterization were painstakingly designed and implemented to maintain the post-impact state of the fragments, thus ensuring the integrity and accuracy of the characterization data. Each process is designed to expedite the accumulation of data; however, the need for speed is restrained by the need to protect the fragments. Methods to expedite the process, such as parallel processing, have been explored and implemented while continuing to maintain the highest integrity and value of the data. To minimize fragment handling, automated systems have been developed and implemented. Errors due to human inputs are also minimized by the use of these automated systems. 
    This paper discusses the processes and challenges involved in the collection, extraction, and characterization of the fragments, as well as the time required to complete the processes. The objective is to provide the orbital debris community an understanding of the scale of the effort required to generate and archive high-quality data and metadata for each debris fragment 2 mm or larger generated by the DebriSat project.

  10. Evaluation of a Theory-Based Intervention Aimed at Improving Coaches' Recommendations on Sports Nutrition to Their Athletes.

    PubMed

    Jacob, Raphaëlle; Lamarche, Benoît; Provencher, Véronique; Laramée, Catherine; Valois, Pierre; Goulet, Claude; Drapeau, Vicky

    2016-08-01

    Coaches are a major source of nutrition information and influence for young athletes. Yet most coaches do not have training in nutrition to properly guide their athletes. The aim of this study was to evaluate the effectiveness of an intervention aimed at improving the accuracy of coaches' recommendations on sports nutrition. This was a quasi-experimental study with a comparison group and an intervention group. Measurements were made at baseline, post-intervention, and after a 2-month follow-up period. Coaches' recommendations on sports nutrition during the follow-up period were recorded in a diary. High school coaches from various sports (n=41) were randomly assigned to a comparison group or an intervention group. Both groups attended two 90-minute sessions of a theory-based intervention targeting determinants of coaches' intention to provide recommendations on sports nutrition. The intervention group further received an algorithm summarizing sports nutrition guidelines to help support decision making on sports nutrition recommendations. The main outcomes were nutrition knowledge and the accuracy of coaches' recommendations on sports nutrition. χ² analyses and t-tests were used to compare baseline characteristics; mixed and general linear model analyses were used to assess the change in response to the intervention and differences in behaviors, respectively. Coaches in the intervention vs comparison group provided more nutrition recommendations during the 2-month post-intervention period (mean number of recommendations per coach 25.7±22.0 vs 9.4±6.5, respectively; P=0.004), and their recommendations had greater accuracy (mean number of accurate recommendations per coach 22.4±19.9 [87.1%] vs 4.3±3.2 [46.1%], respectively; P<0.001). Knowledge was significantly increased post-intervention in both groups, but was maintained only in the intervention group during the 2-month follow-up (P for group × time = 0.04). 
    A theory-based intervention combined with a decision-making algorithm maintained coaches' sports nutrition knowledge level over time and helped them provide more accurate recommendations on sports nutrition. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  11. Effect of Position- and Velocity-Dependent Forces on Reaching Movements at Different Speeds

    PubMed Central

    Summa, Susanna; Casadio, Maura; Sanguineti, Vittorio

    2016-01-01

    The speed of voluntary movements is determined by the conflicting needs of maximizing accuracy and minimizing mechanical effort. Dynamic perturbations, e.g., force fields, may be used to manipulate movements in order to investigate these mechanisms. Here, we focus on how the presence of position- and velocity-dependent force fields affects the relation between speed and accuracy during hand reaching movements. Participants were instructed to perform reaching movements under visual control in two directions, corresponding to either low or high arm inertia. The subjects were required to maintain four different movement durations (very slow, slow, fast, very fast). The experimental protocol included three phases: (i) familiarization—the robot generated no force; (ii) force field—the robot generated a force; and (iii) after-effect—again, no force. Participants were randomly assigned to four groups, depending on the type of force that was applied during the “force field” phase. The robot was programmed to generate position-dependent forces—with positive (K+) or negative stiffness (K−)—or velocity-dependent forces, with either positive (B+) or negative viscosity (B−). We focused on path curvature, smoothness, and endpoint error; in the latter we distinguished between bias and variability components. Movements in the high-inertia direction are smoother and less curved; smoothness also increases with movement speed. Endpoint bias and variability are greater in, respectively, the high and low inertia directions. A robust dependence on movement speed was only observed in the longitudinal components of both bias and variability. The strongest and more consistent effects of perturbation were observed with negative viscosity (B−), which resulted in increased variability during force field adaptation and in a reduction of the endpoint bias, which was retained in the subsequent after-effect phase. 
These findings confirm that training with negative viscosity produces lasting effects in movement accuracy at all speeds. PMID:27965559

  12. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography*

    PubMed Central

    Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey

    2015-01-01

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and for shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and the average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate to have improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomic and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and of the molecular mechanisms of cell wall structure determination. PMID:26468288
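    The automated peak identification step can be sketched generically (this is not the authors' software): find local maxima above a threshold in the chromatogram and integrate each peak's area, which is proportional to the corresponding muropeptide's abundance.

```python
import numpy as np

def find_peaks(y, threshold):
    """Indices of local maxima above `threshold`."""
    return [i for i in range(1, len(y) - 1)
            if y[i] > threshold and y[i] > y[i - 1] and y[i] >= y[i + 1]]

def peak_area(t, y, i, half_width):
    """Trapezoidal area of the slice around peak index i."""
    lo, hi = max(0, i - half_width), min(len(y), i + half_width + 1)
    seg_t, seg_y = t[lo:hi], y[lo:hi]
    return float(np.sum((seg_y[1:] + seg_y[:-1]) * np.diff(seg_t)) / 2)

# Synthetic chromatogram: two Gaussian "muropeptide" peaks over a flat baseline
t = np.linspace(0, 10, 2000)                      # retention time, min
y = (1.0 * np.exp(-(t - 3.0) ** 2 / (2 * 0.05 ** 2))
     + 0.4 * np.exp(-(t - 7.0) ** 2 / (2 * 0.05 ** 2)))
peaks = find_peaks(y, threshold=0.1)
areas = [peak_area(t, y, i, half_width=100) for i in peaks]
print([round(t[i], 2) for i in peaks])   # retention times of the two peaks
print(areas)                             # relative abundances, ratio ~2.5:1
```

    Real chromatograms additionally need baseline correction and handling of overlapping peaks, which is where automated software earns its accuracy advantage over manual integration.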

  13. Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Lu, Shuai; Singh, Ruchi

    2011-09-23

    Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.
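    A toy version of the calibration step (illustrative only; the paper's model and algorithm are more elaborate): fit a reduced AR(2) model to simulated PMU samples of a single dominant oscillation by least squares, then read the mode frequency and damping off the identified roots. The mode values and noise level are assumptions:

```python
import numpy as np

# Simulated PMU stream: one dominant inter-area mode (illustrative values)
rng = np.random.default_rng(1)
dt, f, zeta = 1 / 30, 0.7, 0.05               # 30 samples/s, 0.7 Hz mode, 5% damping
wn = 2 * np.pi * f
t = np.arange(0, 20, dt)
y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
y += 0.001 * rng.standard_normal(len(t))      # small measurement noise

# Calibrate the reduced model y[k] = a1*y[k-1] + a2*y[k-2] by least squares,
# then map the discrete root back to a continuous-time eigenvalue.
A = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]
root = np.roots([1.0, -a1, -a2])[0]
s = np.log(root) / dt
print(abs(s.imag) / (2 * np.pi))   # estimated mode frequency, ~0.7 Hz
print(-s.real / abs(s))            # estimated damping ratio, ~0.05
```

    The same least-squares idea, scaled up to multi-signal state-space models, is what lets PMU data keep a reduced dynamic model calibrated without staged tests.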

  14. Dose accuracy of a reusable insulin pen using a cartridge system with an integrated plunger mechanism.

    PubMed

    Clarke, Alastair; Dain, Marie-Paule

    2006-09-01

    Pen injection devices are a common method of administering insulin for patients with diabetes. Pen devices must comply with guidelines prepared by the International Organization for Standardization, which cover device dose accuracy and precision. OptiClik (sanofi-aventis) was developed to fulfil unmet needs of patients with diabetes, including easier cartridge changing, a clearer dose display with better readability, and the ability to deliver a larger dose of insulin with a single injection. In this paper, the authors report on the dose accuracy of the OptiClik pen device, which uses a novel cartridge system with an integrated plunger for easier cartridge changing. The authors show that OptiClik accurately delivers the required dose of insulin, and that this accuracy is maintained over the lifetime of the pen. OptiClik offers a significant contribution to the treatment of diabetes.

  15. An embedded implementation based on adaptive filter bank for brain-computer interface systems.

    PubMed

    Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui

    2018-07-15

    Brain-computer interface (BCI) technology is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing, including filtering, feature extraction, and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is great interest in implementing BCIs on embedded platforms to meet system specifications in terms of response time, cost effectiveness, power consumption, and accuracy. This article presents an embedded BCI (EBCI) system based on a Stratix-IV field-programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG signals by analyzing event-related desynchronization/synchronization (ERD/ERS). The EEG signals are classified, using the linear discriminant analysis algorithm, based on their spatial features. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% in an offline approach and 80.25% using our own recordings. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI competition by 5% compared with other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, low power consumption, and low cost. Performing dynamic filtering of EEG signals using WOLA increases the recognition rate of ERD/ERS patterns of motor-imagery brain activity. This approach makes it possible to develop a complete EBCI prototype that achieves excellent accuracy rates. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring, and gliding flight. Accelerometers can allow quantification of flight behavior and thus provide a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, are rarely trained with supervised datasets, and are difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%), and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, and the KNN model at frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, the ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
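    The KNN approach the authors favor for detailed behaviors is simple enough to sketch from scratch: classify each accelerometry window by majority vote of its nearest labeled training windows. The two features and cluster values below are made up for illustration, not the study's actual feature set:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=5):
    """Classify each query window by majority vote of its k nearest
    training windows (Euclidean distance in feature space)."""
    out = []
    for q in query:
        idx = np.argsort(np.linalg.norm(train_x - q, axis=1))[:k]
        out.append(np.bincount(train_y[idx]).argmax())
    return np.array(out)

# Toy features per accelerometry window (illustrative):
# [overall dynamic body acceleration, dominant wingbeat-band power]
rng = np.random.default_rng(0)
flap = rng.normal([1.5, 0.8], 0.15, size=(40, 2))   # high movement, strong periodicity
soar = rng.normal([0.3, 0.1], 0.15, size=(40, 2))   # low movement, weak periodicity
X = np.vstack([flap, soar])
y = np.array([0] * 40 + [1] * 40)                   # 0 = flapping, 1 = soaring
test = np.array([[1.4, 0.7], [0.2, 0.15]])
print(knn_predict(X, y, test))                      # -> [0 1]
```

    Because KNN stores the training windows directly, it needs no model fitting, which is consistent with the authors' observation that it was substantially easier to implement than the RF approach.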

  17. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  18. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3%, respectively. More detailed classification schemes, with specific behaviors such as banking and straight flight, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
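    The K-nearest-neighbor classification described above can be sketched minimally. The windowed features (per-axis mean and standard deviation) and the labels below are illustrative stand-ins, not the study's actual feature set or data:

```python
import math
from collections import Counter

def window_features(window):
    """Summarize one accelerometry window (a list of (x, y, z) samples)
    with per-axis mean and standard deviation -- simple stand-ins for the
    richer features a real classification pipeline would use."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in window]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, math.sqrt(var)])
    return feats

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labeled training vectors (Euclidean distance)."""
    dists = sorted((math.dist(feats, query), label) for feats, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

    A flapping window would show an oscillating vertical axis (high standard deviation), while a soaring window is comparatively steady, so even these two crude features separate the basic behaviors in synthetic data.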

  19. An Analysis on the Constitutive Models for Forging of Ti6Al4V Alloy Considering the Softening Behavior

    NASA Astrophysics Data System (ADS)

    Souza, Paul M.; Beladi, Hossein; Singh, Rajkumar P.; Hodgson, Peter D.; Rolfe, Bernard

    2018-05-01

    This paper developed high-temperature deformation constitutive models for a Ti6Al4V alloy using an empirical-based Arrhenius equation and an enhanced version of the authors' physical-based EM + Avrami equations. The initial microstructure was a partially equiaxed α + β grain structure. A wide range of experimental data was obtained from hot compression of the Ti6Al4V alloy at deformation temperatures ranging from 720 to 970 °C and at strain rates varying from 0.01 to 10 s-1. The friction- and adiabatic-corrected flow curves were used to identify the parameter values of the constitutive models. Both models provided good overall accuracy of the flow stress. The generalized modified Arrhenius model was better at predicting the flow stress at lower strain rates. However, the model was inaccurate in predicting the peak strain. In contrast, the enhanced physical-based EM + Avrami model revealed very good accuracy at intermediate and high strain rates, and it was also better at predicting the peak strain. Blind sample tests revealed that the EM + Avrami model maintained good predictions on new (unseen) data. Thus, the enhanced EM + Avrami model may be preferred over the Arrhenius model to predict the flow behavior of Ti6Al4V alloy during industrial forgings, when the initial microstructure is partially equiaxed.
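    The empirical Arrhenius-type model referred to above is commonly written in hyperbolic-sine form, with the Zener-Hollomon parameter combining strain rate and temperature. A minimal sketch follows; the material constants Q, A, alpha, and n are illustrative placeholders, not the fitted Ti6Al4V values from the paper:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_flow_stress(strain_rate, T, Q=6.0e5, A=1.0e22, alpha=1.0e-2, n=4.0):
    """Flow stress from the hyperbolic-sine Arrhenius form:
        Z = eps_dot * exp(Q / (R*T))          (Zener-Hollomon parameter)
        sigma = (1/alpha) * asinh((Z/A)**(1/n))
    strain_rate in 1/s, T in kelvin; 1/alpha sets the stress scale.
    Q, A, alpha, n here are made-up placeholders for illustration."""
    Z = strain_rate * math.exp(Q / (R * T))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))
```

    The form captures the qualitative behavior reported in such studies: predicted stress rises with strain rate and falls with temperature.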

  20. The Role of Prominence in Pronoun Resolution: Active versus Passive Representations

    ERIC Educational Resources Information Center

    Foraker, Stephani; McElree, Brian

    2007-01-01

    A prominent antecedent facilitates anaphor resolution. Speed-accuracy tradeoff modeling in Experiments 1 and 3 indicated that clefting did not affect the speed of accessing an antecedent representation, which is inconsistent with claims that discourse-focused information is actively maintained in focal attention [e.g., Gundel, J. K. (1999). "On…

  1. Increasing Treatment Integrity through Negative Reinforcement: Effects on Teacher and Student Behavior

    ERIC Educational Resources Information Center

    DiGennaro, Florence D.; Martens, Brian K.; McIntyre, Laura Lee

    2005-01-01

    The current study examined the extent to which treatment integrity was increased and maintained for 4 teachers in their regular classroom settings as a result of performance feedback and negative reinforcement. Teachers received daily written feedback about their accuracy in implementing an intervention and were able to avoid meeting with a…

  2. Sensitivity and accuracy of DNA based methods used to describe aquatic communities for early detection of invasive fish species

    EPA Science Inventory

    For biomonitoring efforts aimed at early detection of aquatic invasive species (AIS), the ability to detect rare individuals is key and requires accurate species level identification to maintain a low occurrence probability of non-detection errors (failure to detect a present spe...

  3. Automated Acquisition, Cataloging, and Circulation in a Large Research Library.

    ERIC Educational Resources Information Center

    Boylan, Merle N.; And Others

    This report describes automated procedures now in use for book acquisition, and book and document cataloging and circulation, in the library at Lawrence Radiation Laboratory, Livermore. The purpose of the automation is to increase frequency and accuracy of record updatings, decrease the time required to maintain records, improve the formats of the…

  4. Phase-Locking and Coherent Power Combining of Broadband Linearly Chirped Optical Waves

    DTIC Science & Technology

    2012-11-05

    ensure path-length matching, and we estimate an accuracy of ±2 cm. Fiber-coupled acousto-optic modulators (Brimrose Corporation) with a nominal...was performed using the VCSEL-based SFL with a chirp rate of ±2×10¹⁴ Hz/s, polarization-maintaining fiber-optic components, and an AOFS (Brimrose

  5. Behavioral Parent Training in Child Welfare: Maintenance and Booster Training

    ERIC Educational Resources Information Center

    Van Camp, Carole M.; Montgomery, Jan L.; Vollmer, Timothy R.; Kosarek, Judith A.; Happe, Shawn; Burgos, Vanessa; Manzolillo, Anthony

    2008-01-01

    Previous research has demonstrated the efficacy of a 30-hr behavioral parent training program at increasing skill accuracy. However, it remains unknown whether skills acquisitions are maintained on a long-term basis. Few studies have evaluated the maintenance of skills learned during behavioral parent training for foster parents. The purpose of…

  6. Teaching Choice Making to Children with Visual Impairments and Multiple Disabilities in Preschool and Kindergarten Classrooms

    ERIC Educational Resources Information Center

    Clark, Christine; McDonnell, Andrea P.

    2008-01-01

    This study examined the effectiveness of an intervention package that included visual accommodations, daily preference assessments, and naturalistic instructional strategies on the accuracy of choice-making responses for three participants with visual impairments and multiple disabilities. It also examined the participants' ability to maintain and…

  7. Traveling wire electrode increases productivity of Electrical Discharge Machining /EDM/ equipment

    NASA Technical Reports Server (NTRS)

    Kotora, J., Jr.; Smith, S. V.

    1967-01-01

    Traveling wire electrode on electrical discharge machining /EDM/ equipment reduces the time requirements for precision cutting. This device enables cutting with a minimum of lost material and without inducing stress beyond that inherent in the material. The use of wire increases accuracy and enables tighter tolerances to be maintained.

  8. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing it to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer, and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: Accuracy testing yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and an orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, while constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
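    The median / 95th-percentile error summary reported above is a standard way to characterize tracker accuracy. A small sketch of those statistics follows; the error samples in the test are illustrative, not the study's measurements:

```python
import statistics

def percentile(data, p):
    """Linear-interpolated percentile (inclusive method), the usual way
    a '95th percentile error' is reported for tracking accuracy."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def summarize_errors(errors_mm):
    """Return (median, 95th percentile) of a list of position errors."""
    return statistics.median(errors_mm), percentile(errors_mm, 95)
```

    Reporting the 95th percentile alongside the median conveys both typical and near-worst-case behavior without letting a single outlier dominate, which is why both figures appear in the abstract.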

  9. AVHRR composite period selection for land cover classification

    USGS Publications Warehouse

    Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.

    2002-01-01

    Multitemporal satellite image datasets provide valuable information on the phenological characteristics of vegetation, thereby significantly increasing the accuracy of cover type classifications compared to single-date classifications. However, the processing of these datasets can become very complex when dealing with multitemporal data combined with multispectral data. Advanced Very High Resolution Radiometer (AVHRR) biweekly composite data are commonly used to classify land cover over large regions. Selecting a subset of these biweekly composite periods may be required to reduce the complexity and cost of land cover mapping. The objective of our research was to evaluate the effect of reducing the number of composite periods and altering the spacing of those composite periods on classification accuracy. Because inter-annual variability can have a major impact on classification results, 5 years of AVHRR data were evaluated. AVHRR biweekly composite images for spectral channels 1-4 (visible, near-infrared and two thermal bands) covering the entire growing season were used to classify 14 cover types over the entire state of Colorado for each of five different years. A supervised classification method was applied to maintain consistent procedures for each case tested. Results indicate that the number of composite periods can be halved (reduced from 14 composite dates to seven) without significantly reducing overall classification accuracy (80.4% Kappa accuracy for the 14-composite dataset as compared to 80.0% for a seven-composite dataset). At least seven composite periods were required to ensure the classification accuracy was not affected by inter-annual variability due to climate fluctuations. Concentrating more composites near the beginning and end of the growing season, as compared to using evenly spaced time periods, consistently produced slightly higher classification values over the 5 years tested (average Kappa of 80.3% for the heavy early/late case as compared to 79.0% for the alternate dataset case).
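    The Kappa accuracies quoted above are chance-corrected agreement statistics. A minimal sketch of Cohen's kappa computed from a confusion matrix follows; the example matrix in the test is illustrative, not the Colorado classification results:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = reference
    classes, columns = mapped classes): observed agreement corrected
    for the agreement expected by chance.
        kappa = (p_o - p_e) / (1 - p_e)
    """
    total = sum(sum(row) for row in confusion)
    # observed agreement: fraction on the diagonal
    po = sum(confusion[i][i] for i in range(len(confusion))) / total
    # chance agreement: product of marginal row and column proportions
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (po - pe) / (1 - pe)
```

    Because kappa discounts chance agreement, it is a stricter yardstick than raw percent-correct when class proportions are uneven, which is why land-cover studies report it.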

  10. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
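    The stability concern that motivates implicit schemes such as ADI can be illustrated with the simplest explicit (FTCS) conduction update, which is only stable for small time steps. This is a generic sketch, not Kiva's actual discretization; the grid and boundary handling are illustrative:

```python
def ftcs_step(T, alpha, dx, dt):
    """One explicit (forward-time, centered-space) update of a 2D
    temperature field T (list of lists; boundary values held fixed).
    Stable only when r = alpha*dt/dx**2 <= 0.25 in 2D -- the kind of
    time-step restriction an implicit scheme such as ADI removes."""
    r = alpha * dt / dx ** 2
    assert r <= 0.25, "explicit step would be unstable"
    ny, nx = len(T), len(T[0])
    new = [row[:] for row in T]  # copy keeps boundaries unchanged
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new[j][i] = T[j][i] + r * (
                T[j][i + 1] + T[j][i - 1] + T[j + 1][i] + T[j - 1][i]
                - 4 * T[j][i]
            )
    return new
```

    With boundaries fixed at one temperature and a cold interior, repeated steps relax the interior toward the boundary value, mimicking steady-state conduction.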

  11. Three Axis Control of the Hubble Space Telescope Using Two Reaction Wheels and Magnetic Torquer Bars for Science Observations

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan

    2008-01-01

    The Hubble Space Telescope (HST) is renowned for its superb pointing accuracy of less than 10 milli-arcseconds absolute pointing error. To accomplish this, the HST relies on its complement of four reaction wheel assemblies (RWAs) for attitude control and four magnetic torquer bars (MTBs) for momentum management. As with most satellites with reaction wheel control, the fourth RWA provides for fault tolerance to maintain three-axis pointing capability should a failure occur and a wheel is lost from operations. If an additional failure is encountered, the ability to maintain three-axis pointing is jeopardized. In order to prepare for this potential situation, HST Pointing Control Subsystem (PCS) Team developed a Two Reaction Wheel Science (TRS) control mode. This mode utilizes two RWAs and four magnetic torquer bars to achieve three-axis stabilization and pointing accuracy necessary for a continued science observing program. This paper presents the design of the TRS mode and operational considerations necessary to protect the spacecraft while allowing for a substantial science program.

  12. Effects of cognitive load on neural and behavioral responses to smoking-cue distractors.

    PubMed

    MacLean, R Ross; Nichols, Travis T; LeBreton, James M; Wilson, Stephen J

    2016-08-01

    Smoking cessation failures are frequently thought to reflect poor top-down regulatory control over behavior. Previous studies have suggested that smoking cues occupy limited working memory resources, an effect that may contribute to difficulty achieving abstinence. Few studies have evaluated the effects of cognitive load on the ability to actively maintain information in the face of distracting smoking cues. For the present study, we adapted an fMRI probed recall task under low and high cognitive load with three distractor conditions: control, neutral images, or smoking-related images. Consistent with a limited-resource model of cue reactivity, we predicted that the performance of daily smokers (n = 17) would be most impaired when high load was paired with smoking distractors. The results demonstrated a main effect of load, with decreased accuracy under high, as compared to low, cognitive load. Surprisingly, an interaction revealed that the effect of load was weakest in the smoking cue distractor condition. Along with this behavioral effect, we observed significantly greater activation of the right inferior frontal gyrus (rIFG) in the low-load condition than in the high-load condition for trials containing smoking cue distractors. Furthermore, load-related changes in rIFG activation partially mediated the effects of load on task accuracy in the smoking-cue distractor condition. These findings are discussed in the context of prevailing cognitive and cue reactivity theories. These results suggest that high cognitive load does not necessarily make smokers more susceptible to interference from smoking-related stimuli, and that elevated load may even have a buffering effect in the presence of smoking cues under certain conditions.

  13. High-resolution barotropic tide modelling in the South China Sea

    NASA Astrophysics Data System (ADS)

    Luu, Quang-Hung; Tkalich, Pavel

    2016-04-01

    The South China Sea (SCS) links two of the largest open oceans, the Pacific and the Indian, mainly through the Luzon-Taiwan Straits in the northeast and the Malacca-Karimata Straits in the southwest, respectively. It has a rhino-like shape about 3000 km long, whose belly is contiguous to Vietnam and whose back leans on the Philippines. The highly irregular topography includes the Gulf of Tonkin in the north, the Gulf of Thailand in the southwest, and several small islands in the middle of the SCS (i.e., the Spratlys and the Paracels), resulting in complicated astronomic tides and tidal dynamics in this region. In this study, we present a high-resolution simulation of tides in the SCS using the Semi-Implicit Eulerian-Lagrangian Finite-Element (SELFE) model. We derive the bathymetry from the Shuttle Radar Topography Mission (SRTM) 15-arc-second dataset, one of the finest global topography data sources. Our particular interest is to resolve small bathymetry features and islands in the middle of the SCS, which we obtained by digitizing very-high-resolution satellite images (30-m accuracy). An unstructured triangular mesh comprising up to 5 million nodes is generated to resolve these features with very high accuracy, while maintaining fairly coarse resolution in the rest of the domain. The model is configured to run in barotropic mode by forcing harmonic oscillations from FES2012 global tide predictions along open boundaries, adjusted to account for volume transport at key channels in the SCS. Computed surface elevations and currents agree well with available tide predictions and measurements. A sensitivity study is performed to analyze the role of the small bathymetry features in distorting tides in the SCS.
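    The harmonic tidal forcing imposed at the open boundaries is a sum of constituent oscillations. A minimal sketch follows; the M2 and K1 amplitudes and phases below are made up for illustration, not FES2012 values:

```python
import math

def tide_height(t_hours, constituents):
    """Tidal elevation as a sum of harmonic constituents:
        eta(t) = sum_i A_i * cos(omega_i * t - phi_i)
    where each constituent is (amplitude_m, omega_rad_per_hour, phase_rad)."""
    return sum(A * math.cos(omega * t_hours - phi)
               for A, omega, phi in constituents)

# Illustrative constituents: M2 (12.42 h period) and K1 (23.93 h period)
# with made-up amplitudes and phases.
M2 = (0.8, 2 * math.pi / 12.42, 0.0)
K1 = (0.4, 2 * math.pi / 23.93, 1.0)
```

    Superposing semidiurnal (M2-like) and diurnal (K1-like) constituents reproduces the mixed tidal character typical of the South China Sea region.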

  14. Supporting driver headway choice: the effects of discrete headway feedback when following headway instructions.

    PubMed

    Risto, Malte; Martens, Marieke H

    2014-07-01

    With specific headway instructions, drivers are not able to attain the exact headways as instructed. In this study, the effects of discrete headway feedback (and the direction of headway adjustment) on headway accuracy for drivers carrying out time headway instructions were assessed experimentally. Two groups of 10 participants each (one receiving headway feedback, one control) carried out headway instructions in a driving simulator, increasing and decreasing their headway to a target headway of 2 s at speeds of 50, 80, and 100 km/h. The difference between the instructed and chosen headway was a measure of headway accuracy. The feedback group heard a sound signal at the moment that they crossed the distance of the instructed headway. Unsupported participants showed no significant difference in headway accuracy when increasing or decreasing headways. Discrete headway feedback had varying effects on headway choice accuracy. When participants decreased their headway, feedback led to higher accuracy. When increasing their headway, feedback led to lower accuracy, compared to no headway feedback. Support did not affect drivers' performance in maintaining the chosen headway. The present results suggest that (a) in its current form, discrete headway feedback is not sufficient to improve the overall accuracy of chosen headways when carrying out headway instructions; and (b) the effect of discrete headway feedback depends on the direction of headway adjustment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
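    Time headway ties speed and following distance together directly, which is what the 2 s instruction and the feedback signal are built on. A small sketch of the conversions (the function names are ours, for illustration):

```python
def headway_gap_m(speed_kmh, headway_s):
    """Bumper-to-bumper gap (m) implied by a time headway at a given
    speed: gap = v * t_h, with v converted from km/h to m/s."""
    return speed_kmh / 3.6 * headway_s

def headway_error_s(chosen_gap_m, speed_kmh, instructed_s):
    """Signed headway error (s): chosen minus instructed time headway,
    the accuracy measure described in the abstract."""
    return chosen_gap_m / (speed_kmh / 3.6) - instructed_s
```

    For example, the instructed 2 s headway corresponds to a gap of roughly 28 m at 50 km/h but about 56 m at 100 km/h, which is why the same time instruction maps to very different distances across the test speeds.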

  15. Guanfacine Attenuates Adverse Effects of Dronabinol (THC) on Working Memory in Adolescent-Onset Heavy Cannabis Users: A Pilot Study.

    PubMed

    Mathai, David S; Holst, Manuela; Rodgman, Christopher; Haile, Colin N; Keller, Jake; Hussain, Mariyah Z; Kosten, Thomas R; Newton, Thomas F; Verrico, Christopher D

    2018-01-01

    The cannabinoid-1 receptor (CB1R) agonist Δ9-tetrahydrocannabinol (THC), the main psychoactive constituent of cannabis, adversely affects working memory performance in humans. The α2A-adrenoceptor (AR) agonist guanfacine improves working memory performance in humans. The authors aimed to determine the effects of short-term (6 days) treatment with guanfacine on adverse cognitive effects produced by THC. Employing a double-blind, placebo-controlled crossover design, the cognitive, subjective, and cardiovascular effects produced by oral THC (20 mg) administration were determined twice in the same cannabis users: once after treatment with placebo and once after treatment with guanfacine (3 mg/day). Compared with performance at baseline, THC negatively affected accuracy on spatial working memory trials while participants were maintained on placebo (p=0.012) but not guanfacine (p=0.497); compared with placebo, accuracy was significantly (p=0.003, Cohen's d=-0.640) improved while individuals were treated with guanfacine. Similarly, compared with baseline, THC increased omission errors on an attentional task while participants were maintained on placebo (p=0.017) but not on guanfacine (p=0.709); compared with placebo, there were significantly (p=0.034, Cohen's d=0.838) fewer omissions while individuals were maintained on guanfacine. Although THC increased visual analog scores of subjective effects and heart rate, these increases were similar during treatment with placebo and guanfacine. THC did not significantly affect performance of a recognition memory task or blood pressure while individuals were maintained on either treatment. Although preliminary, these results suggest that guanfacine warrants further testing as a potential treatment for cannabis-induced cognitive deficits.

  16. Effects of demanding foraging conditions on cache retrieval accuracy in food-caching mountain chickadees (Poecile gambeli).

    PubMed

    Pravosudov, V V; Clayton, N S

    2001-02-22

    Birds rely, at least in part, on spatial memory for recovering previously hidden caches but accurate cache recovery may be more critical for birds that forage in harsh conditions where the food supply is limited and unpredictable. Failure to find caches in these conditions may potentially result in death from starvation. In order to test this hypothesis we compared the cache recovery behaviour of 24 wild-caught mountain chickadees (Poecile gambeli), half of which were maintained on a limited and unpredictable food supply while the rest were maintained on an ad libitum food supply for 60 days. We then tested their cache retrieval accuracy by allowing birds from both groups to cache seeds in the experimental room and recover them 5 hours later. Our results showed that birds maintained on a limited and unpredictable food supply made significantly fewer visits to non-cache sites when recovering their caches compared to birds maintained on ad libitum food. We found the same difference in performance in two versions of a one-trial associative learning task in which the birds had to rely on memory to find previously encountered hidden food. In a non-spatial memory version of the task, in which the baited feeder was clearly marked, there were no significant differences between the two groups. We therefore concluded that the two groups differed in their efficiency at cache retrieval. We suggest that this difference is more likely to be attributable to a difference in memory (encoding or recall) than to a difference in their motivation to search for hidden food, although the possibility of some motivational differences still exists. Overall, our results suggest that demanding foraging conditions favour more accurate cache retrieval in food-caching birds.

  17. Quality improvement of International Classification of Diseases, 9th revision, diagnosis coding in radiation oncology: single-institution prospective study at University of California, San Francisco.

    PubMed

    Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E

    2015-01-01

    Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.

  18. Relationship between accuracy and complexity when learning underarm precision throwing.

    PubMed

    Valle, Maria Stella; Lombardo, Luciano; Cioni, Matteo; Casabona, Antonino

    2018-06-12

    Learning precision ball throwing has mostly been studied to explore the early rapid improvement of accuracy, with little attention to possible adaptive processes occurring later, when the rate of improvement is reduced. Here, we tried to demonstrate that the strategy for selecting angle, speed and height at ball release can still be managed during the learning periods following performance stabilization. To this aim, we used a multivariate linear model with angle, speed and height as predictors of changes in accuracy. Participants performed underarm throws of a tennis ball to hit a target on the floor, 3.42 m away. Two training sessions (S1, S2) and one retention test were executed. Performance accuracy increased over S1 and stabilized during S2, with a rate of change along the throwing axis slower than along the orthogonal axis. However, both axes contributed to the performance changes over the learning and consolidation time. A stable relationship between accuracy and the release parameters was observed only during S2, with a good fraction of the performance variance explained by the combination of speed and height. All the variations were maintained during the retention test. Overall, accuracy improvements and the reduction in throwing complexity at ball release followed separate timing over the course of learning and consolidation.

  19. Accurate solution of the Poisson equation with discontinuities

    NASA Astrophysics Data System (ADS)

    Nave, Jean-Christophe; Marques, Alexandre; Rosales, Rodolfo

    2017-11-01

    Solving the Poisson equation in the presence of discontinuities is of great importance in many applications of science and engineering. In many cases, the discontinuities are caused by interfaces between different media, such as in multiphase flows. These interfaces are themselves solutions to differential equations, and can assume complex configurations. For this reason, it is convenient to embed the interface into a regular triangulation or Cartesian grid and solve the Poisson equation in this regular domain. We present an extension of the Correction Function Method (CFM), which was developed to solve the Poisson equation in the context of embedded interfaces. The distinctive feature of the CFM is that it uses partial differential equations to construct smooth extensions of the solution in the vicinity of interfaces. A consequence of this approach is that it can achieve high order of accuracy while maintaining compact discretizations. The extension we present removes the restrictions of the original CFM, and yields a method that can solve the Poisson equation when discontinuities are present in the solution, the coefficients of the equation (material properties), and the source term. We show results computed to fourth order of accuracy in two and three dimensions. This work was partially funded by DARPA, NSF, and NSERC.
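    The record's headline claim is fourth-order accuracy. The sketch below is not the Correction Function Method itself, but a minimal illustration of how "order of accuracy" is verified in practice: solve a smooth Poisson problem with a standard second-order finite-difference scheme and confirm the error drops roughly fourfold when the grid spacing is halved (function names are illustrative):

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with n interior points;
    return the max-norm error against the exact solution u = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi ** 2 * np.sin(np.pi * x)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = solve_poisson(40), solve_poisson(80)
order = np.log2(e1 / e2)   # observed convergence order, ~2 for this scheme
print(order)
```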

  20. Effect of Transcutaneous Electrode Temperature on Accuracy and Precision of Carbon Dioxide and Oxygen Measurements in the Preterm Infants.

    PubMed

    Jakubowicz, Jessica F; Bai, Shasha; Matlock, David N; Jones, Michelle L; Hu, Zhuopei; Proffitt, Betty; Courtney, Sherry E

    2018-05-01

    High electrode temperature during transcutaneous monitoring is associated with skin burns in extremely premature infants. We evaluated the accuracy and precision of CO2 and O2 measurements at transcutaneous electrode temperatures below 42°C. We enrolled 20 neonates. Two transcutaneous monitors were placed simultaneously on each neonate, with one electrode maintained at 42°C and the other randomized to temperatures of 38, 39, 40, 41, and 42°C. Arterial blood was collected twice at each temperature. At the time of arterial blood sampling, values for transcutaneously measured partial pressure of CO2 (PtcCO2) were not significantly different among test temperatures. There was no evidence of skin burning at any temperature. For PtcCO2, Bland-Altman analyses of all test temperatures versus 42°C showed good precision and low bias. Transcutaneously measured partial pressure of O2 (PtcO2) values trended arterial values but had a large negative bias. Transcutaneous electrode temperatures as low as 38°C allow an assessment of PtcCO2 as accurate as that with electrodes at 42°C. Copyright © 2018 by Daedalus Enterprises.
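    The Bland-Altman analysis used here reduces paired measurements to a bias (mean difference) and 95% limits of agreement. A minimal sketch with synthetic numbers (not the study's transcutaneous/arterial values):

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, (lower, upper)) limits of agreement for paired arrays."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)      # 95% limits of agreement
    return bias, (bias - loa, bias + loa)

# Synthetic paired readings (mm Hg), purely illustrative:
arterial = np.array([40.0, 45.0, 50.0, 38.0, 47.0])
transcut = np.array([41.0, 44.5, 51.0, 39.0, 46.5])
bias, limits = bland_altman(transcut, arterial)
print(bias, limits)
```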

  1. A Novel Bearing Multi-Fault Diagnosis Approach Based on Weighted Permutation Entropy and an Improved SVM Ensemble Classifier.

    PubMed

    Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang

    2018-06-14

    Timely and accurate state detection and fault diagnosis of rolling element bearings are very critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Second, if a bearing fault occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD. The WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier was composed of binary SVMs and the HV strategy to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high accuracy rate of fault recognition when a small number of training samples are available.
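    Weighted permutation entropy, the feature at the heart of this pipeline, scores a signal's ordinal-pattern irregularity with amplitude-aware weights. A sketch following the common variance-weighted formulation (embedding parameters here are illustrative assumptions, not the paper's settings):

```python
import math
from collections import defaultdict
import numpy as np

def wpe(x, m=3, tau=1):
    """Weighted permutation entropy, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    acc = defaultdict(float)
    total = 0.0
    for i in range(len(x) - (m - 1) * tau):
        w = x[i:i + (m - 1) * tau + 1:tau]
        pattern = tuple(np.argsort(w))   # ordinal pattern of the window
        weight = np.var(w)               # amplitude-aware weight
        acc[pattern] += weight
        total += weight
    probs = [v / total for v in acc.values() if v > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(m))   # normalize by ln(m!)

rng = np.random.default_rng(1)
h_noise = wpe(rng.standard_normal(2000))     # irregular signal -> near 1
h_sine = wpe(np.sin(0.1 * np.arange(2000)))  # regular signal -> lower
print(h_noise, h_sine)
```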

  2. A novel speckle pattern—Adaptive digital image correlation approach with robust strain calculation

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2012-02-01

    Digital image correlation (DIC) has seen widespread acceptance and usage as a non-contact method for the determination of full-field displacements and strains in experimental mechanics. Advances in imaging hardware over the last decades have made high-resolution, high-speed cameras more affordable than in the past, so that large amounts of image data are available in typical DIC experimental scenarios. The work presented in this paper is aimed at maximizing both the accuracy and speed of DIC methods when employed with such images. A low-level framework for speckle image partitioning is introduced, which replaces regularly shaped blocks with image-adaptive cells in the displacement calculation. The Newton-Raphson DIC method is modified to use the image pixels of the cells and to perform adaptive regularization to increase the spatial consistency of the displacements. Furthermore, a novel robust framework for strain calculation, also based on the Newton-Raphson algorithm, is introduced. The proposed methods are evaluated in five experimental scenarios, of which four use numerically deformed images and one uses real experimental data. Results indicate that, as the desired strain density increases, significant computational gains can be obtained while maintaining or improving accuracy and rigid-body rotation sensitivity.
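    This is not the paper's adaptive Newton-Raphson DIC, but the core idea underlying all DIC methods can be sketched very compactly: recover the displacement between a reference and a deformed image from the peak of their cross-correlation (here only to integer-pixel precision, on synthetic data):

```python
import numpy as np

def integer_displacement(ref, cur):
    """Integer-pixel shift of `cur` relative to `ref` via the FFT
    cross-correlation peak (wrapped indices mapped to signed shifts)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                 # synthetic speckle pattern
cur = np.roll(ref, (3, -5), axis=(0, 1))   # known rigid displacement
print(integer_displacement(ref, cur))
```

    Subpixel DIC methods such as Newton-Raphson refine this kind of integer estimate by iteratively optimizing a subset-matching criterion.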

  3. Development of an automated high-temperature valveless injection system for online gas chromatography

    NASA Astrophysics Data System (ADS)

    Kreisberg, N. M.; Worton, D. R.; Zhao, Y.; Isaacman, G.; Goldstein, A. H.; Hering, S. V.

    2014-12-01

    A reliable method of sample introduction is presented for online gas chromatography with a special application to in situ field portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a valveless sample introduction interface that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing this pressure-switching-based device for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient-pressure detector and 15% accurate when applied to a vacuum-based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show that the new interface has approximately 3 times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in situ instrument demonstrate typically less than 2% per week response trending and a zero failure rate during field deployments ranging up to 4 weeks of continuous sampling. Extension of the valveless interface to dual collection cells is presented with less than 3% cell-to-cell carryover.

  4. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provides a method to replicate complex and diverse synaptic transmission within neuron network simulations.
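    The input-output idea can be made concrete with a discrete second-order Volterra series: the output is a constant plus a linear convolution with kernel k1 plus a quadratic form with kernel k2 over the recent input history. The kernels and input below are invented for illustration, not fitted to any synapse data:

```python
import numpy as np

def volterra2(x, k0, k1, k2):
    """y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]."""
    M = len(k1)
    y = np.full(len(x), k0, dtype=float)
    xp = np.concatenate([np.zeros(M - 1), x])    # zero-pad the past
    for n in range(len(x)):
        window = xp[n:n + M][::-1]               # x[n], x[n-1], ..., x[n-M+1]
        y[n] += k1 @ window + window @ k2 @ window
    return y

k1 = np.array([0.5, 0.2, 0.1])   # first-order (linear) kernel, illustrative
k2 = 0.05 * np.eye(3)            # second-order kernel, illustrative
x = np.array([1.0, 0.0, 0.0, 1.0, 1.0])   # input "spike" train
y = volterra2(x, 0.0, k1, k2)
print(y)
```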

  5. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provides a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622

  6. Development of an automated high-temperature valveless injection system for online gas chromatography

    DOE PAGES

    Kreisberg, N. M.; Worton, D. R.; Zhao, Y.; ...

    2014-12-12

    A reliable method of sample introduction is presented for online gas chromatography with a special application to in situ field portable atmospheric sampling instruments. A traditional multi-port valve is replaced with a valveless sample introduction interface that offers the advantage of long-term reliability and stable sample transfer efficiency. An engineering design model is presented and tested that allows customizing this pressure-switching-based device for other applications. Flow model accuracy is within measurement accuracy (1%) when parameters are tuned for an ambient-pressure detector and 15% accurate when applied to a vacuum-based detector. Laboratory comparisons made between the two methods of sample introduction using a thermal desorption aerosol gas chromatograph (TAG) show that the new interface has approximately 3 times greater reproducibility maintained over the equivalent of a week of continuous sampling. Field performance results for two versions of the valveless interface used in the in situ instrument demonstrate typically less than 2% per week response trending and a zero failure rate during field deployments ranging up to 4 weeks of continuous sampling. Extension of the valveless interface to dual collection cells is presented with less than 3% cell-to-cell carryover.

  7. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    PubMed

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell-specific sequence reads from single cells (21,000 single cells/h), resulting in enhancement of the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals that of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  8. A fluidics comparison of Alcon Infiniti, Bausch & Lomb Stellaris, and Advanced Medical Optics Signature phacoemulsification machines.

    PubMed

    Georgescu, Dan; Kuo, Annie F; Kinard, Krista I; Olson, Randall J

    2008-06-01

    To compare three phacoemulsification machines for measurement accuracy and postocclusion surge (POS) in human cadaver eyes. In vitro comparisons of machine accuracy and POS. Tip vacuum and flow were compared with machine-indicated vacuum and flow. All machines were placed in two human cadaver eyes and POS was determined. Vacuum (% of actual) was 101.9% +/- 1.7% for Infiniti (Alcon, Fort Worth, Texas, USA), 93.2% +/- 3.9% for Stellaris (Bausch & Lomb, Rochester, New York, USA), and 107.8% +/- 4.6% for Signature (Advanced Medical Optics, Santa Ana, California, USA; P < .0001). At 60 ml/minute flow, actual flow and unoccluded flow vacuum (UFV) were 55.8 +/- 0.4 ml/minute and 197.7 +/- 0.7 mm Hg for Infiniti, 53.5 +/- 0.0 ml/minute and 179.8 +/- 0.9 mm Hg for Stellaris, and 58.5 +/- 0.0 ml/minute and 115.1 +/- 2.3 mm Hg for Signature (P < .0001). POS in a 32-year-old eye was 0.33 +/- 0.05 mm for Infiniti, 0.16 +/- 0.06 mm for Stellaris, and 0.13 +/- 0.04 mm for Signature at 550 mm Hg vacuum, 60 cm bottle height, and 45 ml/minute flow with 19-gauge tips (P < .0001 for Infiniti vs Stellaris and Signature). POS in an 81-year-old eye was 1.51 +/- 0.22 mm for Infiniti, 0.83 +/- 0.06 mm for Stellaris, and 0.67 +/- 0.01 mm for Signature at 400 mm Hg vacuum, 70 cm bottle height, and 40 ml/minute flow with 19-gauge tips (P < .0001). Machine-indicated accuracy, POS, and UFV were statistically significantly different. Signature had the lowest POS and vacuum to maintain flow. Regarding POS, Stellaris was close to Signature; regarding vacuum to maintain flow, Infiniti and Stellaris were similar. Minimizing POS and vacuum to maintain flow is potentially important in avoiding ocular damage and surgical complications.

  9. One-Dimensional Convective Thermal Evolution Calculation Using a Modified Mixing Length Theory: Application to Saturnian Icy Satellites

    NASA Astrophysics Data System (ADS)

    Kamata, Shunichi

    2018-01-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy, due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection, for a bottom-heated convective layer. Adopting this new definition of l, I investigate the thermal evolution of Saturnian icy satellites, Dione and Enceladus, under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a thick global subsurface ocean as suggested by geophysical analyses. Dynamical tides may be able to account for such an amount of heat, though the reference viscosity of Dione's ice and the ammonia content of Dione's ocean need to be very high. Otherwise, a thick global ocean in Dione cannot be maintained, implying that its shell is not in a minimum stress state.

  10. Quantitative bioimaging by LA-ICP-MS: a methodological study on the distribution of Pt and Ru in viscera originating from cisplatin- and KP1339-treated mice.

    PubMed

    Egger, Alexander E; Theiner, Sarah; Kornauth, Christoph; Heffeter, Petra; Berger, Walter; Keppler, Bernhard K; Hartinger, Christian G

    2014-09-01

    Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) was used to study the spatially-resolved distribution of ruthenium and platinum in viscera (liver, kidney, spleen, and muscle) originating from mice treated with the investigational ruthenium-based antitumor compound KP1339 or cisplatin, a potent but nephrotoxic clinically-approved platinum-based anticancer drug. Method development was based on homogenized Ru- and Pt-containing samples (22.0 and 0.257 μg g(-1), respectively). Averaging yielded satisfactory precision and accuracy for both concentrations (3-15% and 93-120%, respectively); however, when considering only single data points, the highly concentrated Ru sample maintained satisfactory precision and accuracy, while the low-concentration Pt sample yielded low recoveries and precision, which could not be improved by the use of internal standards ((115)In, (185)Re, or (13)C). Matrix-matched standards were used for quantification in LA-ICP-MS, which yielded comparable metal distributions, i.e., enrichment in the cortex of the kidney in comparison with the medulla, a homogeneous distribution in the liver and the muscle, and areas of enrichment in the spleen. Elemental distributions were assigned to histological structures exceeding 100 μm in size. The accuracy of a quantitative LA-ICP-MS imaging experiment was validated by an independent method using microwave-assisted digestion (MW) followed by direct infusion ICP-MS analysis.

  11. A globally efficient means of distributing UTC time and frequency through GPS

    NASA Technical Reports Server (NTRS)

    Kusters, John A.; Giffard, Robin P.; Cutler, Leonard S.; Allan, David W.; Miranian, Mihran

    1995-01-01

    Time and frequency outputs comparable in quality to the best laboratories have been demonstrated on an integrated system suitable for field application on a global basis. The system measures the time difference between 1 pulse-per-second (pps) signals derived from local primary frequency standards and from a multi-channel GPS C/A receiver. The measured data is processed through optimal SA Filter algorithms that enhance both the stability and accuracy of GPS timing signals. Experiments were run simultaneously at four different sites. Even with large distances between sites, the overall results show a high degree of cross-correlation of the SA noise. With sufficiently long simultaneous measurement sequences, the data shows that determination of the difference in local frequency from an accepted remote standard to better than 1 × 10⁻¹⁴ is possible. This method yields frequency accuracy, stability, and timing stability comparable to that obtained with more conventional common-view experiments. In addition, this approach provides UTC(USNO MC) in real time to an accuracy better than 20 ns without the problems normally associated with conventional common-view techniques. An experimental tracking loop was also set up to demonstrate the use of enhanced GPS for dissemination of UTC(USNO MC) over a wide geographic area. Properly disciplining a cesium standard with a multi-channel GPS receiver, with additional input from USNO, has been found to permit maintaining a timing precision of better than 10 ns between Palo Alto, CA and Washington, DC.
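    The common-view principle exploited here is easy to demonstrate numerically: each site measures (local clock - GPS), and differencing simultaneous measurements cancels the shared GPS/SA dither, leaving only the inter-site clock offset and receiver noise. All numbers below are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
sa_noise = rng.normal(0, 50e-9, n)        # SA dither shared by both sites, ~50 ns
clock_a = 10e-9 + rng.normal(0, 2e-9, n)  # site A clock offset + receiver noise
clock_b = -5e-9 + rng.normal(0, 2e-9, n)  # site B clock offset + receiver noise

meas_a = clock_a - sa_noise               # what each site actually measures
meas_b = clock_b - sa_noise
common_view = meas_a - meas_b             # shared SA term cancels exactly

# Raw measurements are dominated by SA; the difference recovers the ~15 ns
# inter-site offset with only receiver noise remaining.
print(np.std(meas_a), np.std(common_view), np.mean(common_view))
```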

  12. High-sensitivity high-selectivity detection of CWAs and TICs using tunable laser photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Pushkarsky, Michael; Webber, Michael; Patel, C. Kumar N.

    2005-03-01

    We provide a general technique for evaluating the performance of an optical sensor for the detection of chemical warfare agents (CWAs) in realistic environments and present data from a simulation model based on a field deployed discretely tunable ¹³CO₂ laser photoacoustic spectrometer (L-PAS). Results of our calculations show the sensor performance in terms of usable sensor sensitivity as a function of probability of false positives (PFP). The false positives arise from the presence of many other gases in the ambient air that could be interferents. Using the L-PAS as it exists today, we can achieve a detection threshold of about 4 ppb for the CWAs while maintaining a PFP of less than 1:10⁶. Our simulation permits us to vary a number of parameters in the model to provide guidance for performance improvement. We find that by using a larger density of laser lines (such as those obtained through the use of tunable semiconductor lasers), improving the detector noise and maintaining the accuracy of laser frequency determination, optical detection schemes can make possible CWA sensors having sub-ppb detection capability with <1:10⁸ PFP. We also describe the results of a preliminary experiment that verifies the results of the simulation model. Finally, we discuss the use of continuously tunable quantum cascade lasers in L-PAS for CWA and TIC detection.

  13. High Resolution and Large Dynamic Range Resonant Pressure Sensor Based on Q-Factor Measurement

    NASA Technical Reports Server (NTRS)

    Gutierrez, Roman C. (Inventor); Stell, Christopher B. (Inventor); Tang, Tony K. (Inventor); Vorperian, Vatche (Inventor); Wilcox, Jaroslava (Inventor); Shcheglov, Kirill (Inventor); Kaiser, William J. (Inventor)

    2000-01-01

    A pressure sensor has a high degree of accuracy over a wide range of pressures. Using a pressure sensor relying upon resonant oscillations to determine pressure, a driving circuit drives such a pressure sensor at resonance and tracks resonant frequency and amplitude shifts with changes in pressure. Pressure changes affect the Q-factor of the resonating portion of the pressure sensor. Such Q-factor changes are detected by the driving/sensing circuit which in turn tracks the changes in resonant frequency to maintain the pressure sensor at resonance. Changes in the Q-factor are reflected in changes of amplitude of the resonating pressure sensor. In response, upon sensing the changes in the amplitude, the driving circuit changes the force or strength of the electrostatic driving signal to maintain the resonator at constant amplitude. The amplitude of the driving signals becomes a direct measure of the changes in pressure as the operating characteristics of the resonator give rise to a linear response curve for the amplitude of the driving signal. Pressure change resolution is on the order of 10⁻⁶ torr over a range spanning from 7,600 torr to 10⁻⁶ torr. No temperature compensation for the pressure sensor of the present invention is foreseen. Power requirements for the pressure sensor are generally minimal due to the low-loss mechanical design of the resonating pressure sensor and the simple control electronics.

  14. A vacuum sealed high emission current and transmission efficiency carbon nanotube triode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Yunsong; Jiangsu Key Laboratory of Optoelectronic Technology, Nanjing Normal University, Nanjing 210023; Wang, Qilong

    A vacuum sealed carbon nanotube (CNT) triode with a concave and spoke-shaped Mo grid is presented. Due to the high aperture ratio of the grid, the emission current could be modulated at a relatively high electric field. A total emission current of 75 mA has been obtained from the CNT cathode with the average field applied by the grid shifting from 8 to 13 V/μm. With an electron transmission efficiency of the grid over 56%, a remarkably high modulated current electron beam over 42 mA has been collected by the anode. Also owing to the high aperture ratio of the grid, desorbed gas molecules could flow away from the emission area rapidly when the triode was operated at a relatively high emission current, to be finally collected by a VacIon pump. The working pressure has been maintained at ∼1 × 10⁻⁷ Torr, and spark phenomena seldom occurred. A nearly perfect I-V curve and corresponding Fowler-Nordheim (FN) plot confirmed the accuracy of the measured data, and the emission current was long-term stable and reproducible. Thus, this kind of triode could be used as a high-power electron source.

  15. Composite Bloom Filters for Secure Record Linkage.

    PubMed

    Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley

    2014-12-01

    The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance competing goals of accuracy, efficiency and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically-informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy.
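    The basic single-field Bloom-filter encoding that this work hardens can be sketched in a few lines: hash character bigrams of a field value into a bit set, then compare encodings with the Dice coefficient. The filter size m and hash count k below are illustrative assumptions, not the paper's parameters:

```python
import hashlib

def encode(value, m=1000, k=5):
    """Hash character bigrams of a field value into a set of bit positions."""
    bits = set()
    padded = f"_{value}_"                  # pad so first/last letters form bigrams
    for i in range(len(padded) - 1):
        gram = padded[i:i + 2]
        for seed in range(k):              # k independent hashes per bigram
            digest = hashlib.sha1(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % m)
    return bits

def dice(a, b):
    """Dice coefficient between two encodings (1.0 = identical)."""
    return 2 * len(a & b) / (len(a) + len(b))

smith, smyth, jones = encode("smith"), encode("smyth"), encode("jones")
print(dice(smith, smyth), dice(smith, jones))  # similar names score higher
```

    Because similar strings share bigrams, their encodings overlap, which is what makes approximate matching possible without exchanging plaintext; it is also why frequency-based cryptanalysis is a threat, motivating the composite multi-field encodings proposed above.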

  16. Composite Bloom Filters for Secure Record Linkage

    PubMed Central

    Durham, Elizabeth Ashley; Kantarcioglu, Murat; Xue, Yuan; Toth, Csaba; Kuzu, Mehmet; Malin, Bradley

    2014-01-01

    The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname); however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance competing goals of accuracy, efficiency and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically-informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy. PMID:25530689

  17. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
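    The core of single-isotope natural abundance correction can be sketched as a binomial mixing matrix and a triangular solve: observed isotopologue intensities are the true ones mixed by the chance incorporation of heavy isotopes at unlabeled positions. The example below is a generic 13C illustration with invented intensities, not the paper's algorithm or data:

```python
import numpy as np
from math import comb

def mixing_matrix(n_carbons, p=0.0107):
    """A[j, i] = P(observe j heavy carbons | i truly labeled), binomial in the
    n_carbons - i unlabeled positions; p is the natural 13C abundance."""
    n = n_carbons + 1                       # isotopologues M+0 .. M+n_carbons
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[j, i] = comb(n_carbons - i, j - i) * p ** (j - i) * (1 - p) ** (n_carbons - j)
    return A

def correct(observed, n_carbons, p=0.0107):
    """Recover true isotopologue intensities from observed ones."""
    return np.linalg.solve(mixing_matrix(n_carbons, p), np.asarray(observed, float))

# Round trip for a 6-carbon metabolite with only M+0 and M+3 truly labeled:
true = np.array([100.0, 0, 0, 50.0, 0, 0, 0])
observed = mixing_matrix(6) @ true          # what the spectrometer would report
print(np.round(correct(observed, 6), 6))
```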

  18. Diaphragm-Free Fiber-Optic Fabry-Perot Interferometric Gas Pressure Sensor for High Temperature Application.

    PubMed

    Liang, Hao; Jia, Pinggang; Liu, Jia; Fang, Guocheng; Li, Zhe; Hong, Yingping; Liang, Ting; Xiong, Jijun

    2018-03-28

    A diaphragm-free fiber-optic Fabry-Perot (FP) interferometric gas pressure sensor is designed and experimentally verified in this paper. The FP cavity was fabricated by inserting a well-cut fiber Bragg grating (FBG) and a hollow silica tube (HST) from both sides into a silica casing. The FP cavity length between the ends of the single-mode fiber (SMF) and the HST changes with the gas density. A temperature-decoupling method is used to improve the accuracy of the pressure sensor in high-temperature environments. An experimental system for measuring pressure at different temperatures was established to verify the performance of the sensor. The pressure sensitivity of the FP gas pressure sensor is 4.28 nm/MPa with a highly linear pressure response over the range of 0.1-0.7 MPa, and the temperature sensitivity is 14.8 pm/°C over the range of 20-800 °C. The sensor shows less than 1.5% non-linearity at different temperatures when the temperature-decoupling method is used. Its simple fabrication and low cost help the sensor maintain the excellent features required for pressure measurement in high-temperature applications.

  19. KSC-2014-4553

    NASA Image and Video Library

    2014-11-20

    CAPE CANAVERAL, Fla. – Workers monitor NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, as it travels between the airlock of Building 2 to the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  20. Photographic technology development project: Timber typing in the Tahoe Basin using high altitude panoramic photography

    NASA Technical Reports Server (NTRS)

    Ward, J. F.

    1981-01-01

Procedures were developed and tested for using KA-80A optical bar camera panoramic photography for timber typing forest land and classifying nonforest land. The study area was the south half of the Lake Tahoe Basin Management Unit. Final products from this study include four timber type map overlays on 1:24,000 orthophoto maps. The following conclusions can be drawn from this study: (1) established conventional timber typing procedures can be used on panoramic photography if the necessary equipment is available; (2) the classification and consistency results warrant further study in using panoramic photography for timber typing; and (3) timber type mapping can be done as fast or faster with panoramic photography than with resource photography while maintaining comparable accuracy.

  1. ECOD: new developments in the evolutionary classification of domains

    PubMed Central

    Schaeffer, R. Dustin; Liao, Yuxing; Cheng, Hua; Grishin, Nick V.

    2017-01-01

Evolutionary Classification Of protein Domains (ECOD) (http://prodata.swmed.edu/ecod) comprehensively classifies proteins with known spatial structures maintained by the Protein Data Bank (PDB) into evolutionary groups of protein domains. ECOD relies on a combination of automatic and manual weekly updates to achieve its high accuracy and coverage with a short update cycle. ECOD classifies the approximately 120 000 depositions of the PDB into more than 500 000 domains in ∼3400 homologous groups. We show the performance of the weekly update pipeline since the release of ECOD, describe improvements to the ECOD website and available search options, and discuss novel structures and homologous groups that have been classified in the recent updates. Finally, we discuss the future directions of ECOD and further improvements planned for the hierarchy and update process. PMID:27899594

  2. An Upgrade Pinning Block: A Mechanical Practical Aid for Fast Labelling of the Insect Specimens.

    PubMed

    Ghafouri Moghaddam, Mohammad Hossein; Ghafouri Moghaddam, Mostafa; Rakhshani, Ehsan; Mokhtari, Azizollah

    2017-01-01

A new mechanical innovation is described to deal with standard labelling of dried specimens on triangular cards and/or pinned specimens in personal and public collections. It works quickly, precisely, and easily and is very useful for maintaining label uniformity in collections. The tool accurately sets the position of labels in the shortest possible time. This tool has advantages including rapid processing, cost effectiveness, light weight, and high accuracy, compared to conventional methods. It is fully customisable, compact, and does not require specialist equipment to assemble. Conventional methods generally require locating holes on the pinning block surface when labelling, with a resulting risk of damage to the specimens. Insects of different orders can be labelled with this simple and effective tool.

  3. An Upgrade Pinning Block: A Mechanical Practical Aid for Fast Labelling of the Insect Specimens

    PubMed Central

    Ghafouri Moghaddam, Mohammad Hossein; Rakhshani, Ehsan; Mokhtari, Azizollah

    2017-01-01

Abstract A new mechanical innovation is described to deal with standard labelling of dried specimens on triangular cards and/or pinned specimens in personal and public collections. It works quickly, precisely, and easily and is very useful for maintaining label uniformity in collections. The tool accurately sets the position of labels in the shortest possible time. This tool has advantages including rapid processing, cost effectiveness, light weight, and high accuracy, compared to conventional methods. It is fully customisable, compact, and does not require specialist equipment to assemble. Conventional methods generally require locating holes on the pinning block surface when labelling, with a resulting risk of damage to the specimens. Insects of different orders can be labelled with this simple and effective tool. PMID:29104440

  4. Support Vector Machine-Based Endmember Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Anthony M; Archibald, Richard K

Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.

  5. NASA Conjunction Assessment Organizational Approach and the Associated Determination of Screening Volume Sizes

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.

    2015-01-01

NASA is committed to safety of flight for all of its operational assets. Conjunction Assessment (CA) is the process of identifying close approaches between two orbiting objects, sometimes called conjunction screening. CA is performed by the Conjunction Assessment Risk Analysis (CARA) team at NASA GSFC for robotic satellites (the focus of this briefing) and by TOPO at NASA JSC for human spaceflight. CARA was stood up to offer this service to all NASA robotic satellites and currently provides service to 70 operational satellites, including NASA unmanned operational assets, other U.S. government assets (USGS, USAF, NOAA), and international partner assets. The Joint Space Operations Center (JSpOC), a USAF unit at Vandenberg AFB, maintains the high-accuracy catalog of space objects, screens CARA-supported assets against the catalog, performs orbit determination (OD) tasking, and generates close approach data.

  6. Attitude and position estimation on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Ali, Khaled S.; Vanelli, C. Anthony; Biesiadecki, Jeffrey J.; Maimone, Mark W.; Yang Cheng, A.; San Martin, Miguel; Alexander, James W.

    2005-01-01

NASA/JPL's Mars Exploration Rovers acquire their attitude upon command and autonomously propagate their attitude and position. The rovers use accelerometers and images of the sun to acquire attitude, autonomously searching the sky for the sun with a pointable camera. To propagate the attitude and position, the rovers use either accelerometer and gyro readings or gyro readings and wheel odometry, depending on the nature of the movement ground operators are commanding. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly in high-slip environments. The capability also exists for visual odometry attitude updates. This paper describes the techniques used by the rovers to acquire and maintain attitude and position knowledge, the accuracy which is obtainable, and lessons learned after more than one year in operation.

  7. Three-Dimensional Incompressible Navier-Stokes Flow Computations about Complete Configurations Using a Multiblock Unstructured Grid Approach

    NASA Technical Reports Server (NTRS)

    Sheng, Chunhua; Hyams, Daniel G.; Sreenivas, Kidambi; Gaither, J. Adam; Marcum, David L.; Whitfield, David L.

    2000-01-01

A multiblock unstructured grid approach is presented for solving three-dimensional incompressible inviscid and viscous turbulent flows about complete configurations. The artificial compressibility form of the governing equations is solved by a node-based, finite volume implicit scheme which uses a backward Euler time discretization. Point Gauss-Seidel relaxations are used to solve the linear system of equations at each time step. This work applies a multiblock strategy to the solution procedure, which greatly improves the efficiency of the algorithm by reducing the memory requirements by a factor of 5 over the single-grid algorithm while maintaining a similar convergence behavior. The numerical accuracy of solutions is assessed by comparing with the experimental data for a submarine with stern appendages and a high-lift configuration.

  8. [Internet research methods: advantages and challenges].

    PubMed

    Liu, Yi; Tien, Yueh-Hsuan

    2009-12-01

Compared to traditional research methods, using the Internet to conduct research offers a number of advantages to the researcher, including increased access to sensitive issues and vulnerable/hidden populations, decreased data entry time requirements, and enhanced data accuracy. However, Internet research also presents certain challenges to the researcher. In this article, the advantages and challenges of Internet research methods are discussed in four principal issue areas: (a) recruitment, (b) data quality, (c) practicality, and (d) ethics. Nursing researchers can overcome problems related to sampling bias and data truthfulness using creative methods; resolve technical problems through collaboration with other disciplines; and protect participants' privacy, confidentiality, and data security by maintaining a high level of vigilance. Once such issues have been satisfactorily addressed, the Internet should open a new window for Taiwan nursing research.

  9. KSC-2014-4552

    NASA Image and Video Library

    2014-11-20

    CAPE CANAVERAL, Fla. – Workers transfer NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, wrapped in plastic and secured onto a portable work stand, from the airlock of Building 2 to the high bay of Building 1 at the Astrotech payload processing facility in Titusville, Florida, near Kennedy Space Center. DSCOVR is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. Launch is currently scheduled for January 2015 aboard a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral Air Force Station, Florida. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Kim Shiflett

  10. Computer program for the automated attendance accounting system

    NASA Technical Reports Server (NTRS)

    Poulson, P.; Rasmusson, C.

    1971-01-01

    The automated attendance accounting system (AAAS) was developed under the auspices of the Space Technology Applications Program. The task is basically the adaptation of a small digital computer, coupled with specially developed pushbutton terminals located in school classrooms and offices for the purpose of taking daily attendance, maintaining complete attendance records, and producing partial and summary reports. Especially developed for high schools, the system is intended to relieve both teachers and office personnel from the time-consuming and dreary task of recording and analyzing the myriad classroom attendance data collected throughout the semester. In addition, since many school district budgets are related to student attendance, the increase in accounting accuracy is expected to augment district income. A major component of this system is the real-time AAAS software system, which is described.

  11. Sustained attention assessment of narcoleptic patients: two case reports.

    PubMed

    Moraes, Mirleny; Wilson, Barbara A; Rossini, Sueli; Osternack-Pinto, Kátia; Reimão, Rubens

    2008-01-01

Narcolepsy is a sleep disorder characterized by uncontrollable REM sleep attacks which alter the patient's wake state and can lead to difficulties with aspects of attention, such as maintaining attention when performing activities or tasks. This study aimed to evaluate the sustained attention performance of two narcoleptic patients on the d2 Test, Epworth Sleepiness Scale (ESS), Pittsburgh Sleep Quality Index (PSQI) and Hamilton Rating Scale for Depression (HAM-D). Results showed that the maintenance of attention was associated with a slowing of the target-symbol processing function in visual scanning, with accuracy in task performance. A high degree of excessive sleepiness was observed, along with mild and moderate degrees of depressive signs and symptoms. One subject also presented with a nocturnal sleep disorder which could represent an important factor affecting attentional and affective capacity.

  12. How peroxisomes partition between cells. A story of yeast, mammals and filamentous fungi.

    PubMed

    Knoblach, Barbara; Rachubinski, Richard A

    2016-08-01

    Eukaryotic cells are subcompartmentalized into discrete, membrane-enclosed organelles. These organelles must be preserved in cells over many generations to maintain the selective advantages afforded by compartmentalization. Cells use complex molecular mechanisms of organelle inheritance to achieve high accuracy in the sharing of organelles between daughter cells. Here we focus on how a multi-copy organelle, the peroxisome, is partitioned in yeast, mammalian cells, and filamentous fungi, which differ in their mode of cell division. Cells achieve equidistribution of their peroxisomes through organelle transport and retention processes that act coordinately, although the strategies employed vary considerably by organism. Nevertheless, we propose that mechanisms common across species apply to the partitioning of all membrane-enclosed organelles. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Sustainable Urban Forestry Potential Based Quantitative And Qualitative Measurement Using Geospatial Technique

    NASA Astrophysics Data System (ADS)

    Rosli, A. Z.; Reba, M. N. M.; Roslan, N.; Room, M. H. M.

    2014-02-01

In order to maintain the stability of natural ecosystems around urban areas, urban forestry is a key initiative for maintaining and controlling green space in our country. Integration of remote sensing (RS) and geospatial information systems (GIS) serves as an effective tool for monitoring environmental changes and for planning, managing, and developing sustainable urbanization. This paper aims to assess the capability of integrated RS and GIS to provide information on potential urban forest sites, based on qualitative and quantitative criteria, using priority parameter ranking in the new township of Nusajaya. A SPOT image was used to provide high spatial accuracy, while maps of topography, land use, soil groups, hydrology, a Digital Elevation Model (DEM), and soil series data were applied to enhance the satellite image in detecting and locating present attributes and features on the ground. The Multi-Criteria Decision Making (MCDM) technique provides structured, pairwise quantification and comparison of elements and criteria for priority ranking for urban forestry purposes. Slope, soil texture, drainage, spatial area, availability of natural resources, and vicinity to urban areas are the criteria considered in this study. This study highlights that MCDM-based priority ranking is a cost-effective tool for decision-making in urban forestry planning and landscaping.

  14. An efficient and stable hydrodynamic model with novel source term discretization schemes for overland flow and flood simulations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming

    2017-05-01

Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of strong nonlinearity of friction terms. This paper aims to tackle these key research challenges and present a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km2 Haltwhistle Burn, UK.
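To illustrate why friction terms need implicit treatment: for unit discharge q and depth h, the Manning friction source dq/dt = -g n² q|q| / h^(7/3) blows up explicit updates as h → 0. A common semi-implicit point-wise update (a generic sketch, not the paper's exact discretization) linearizes |q| at the old time level, which keeps the update stable and bounded:

```python
def friction_update(q, h, dt, n=0.03, g=9.81):
    """Semi-implicit update of unit discharge q (m^2/s) under Manning
    friction. Solving q_new = q - dt*g*n^2*|q|*q_new / h**(7/3) gives
        q_new = q / (1 + dt*g*n^2*|q| / h**(7/3)),
    which stays finite and same-signed even for very small h.
    Generic sketch; n is a hypothetical Manning coefficient."""
    if h <= 0.0:
        return 0.0  # dry cell: no flow
    return q / (1.0 + dt * g * n * n * abs(q) / h ** (7.0 / 3.0))
```

The denominator is always ≥ 1, so the update can reduce momentum but never reverse its sign, unlike an explicit step with a large dt.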

  15. Criterion Noise in Ratings-Based Recognition: Evidence from the Effects of Response Scale Length on Recognition Accuracy

    ERIC Educational Resources Information Center

    Benjamin, Aaron S.; Tullis, Jonathan G.; Lee, Ji Hae

    2013-01-01

    Rating scales are a standard measurement tool in psychological research. However, research has suggested that the cognitive burden involved in maintaining the criteria used to parcel subjective evidence into ratings introduces "decision noise" and affects estimates of performance in the underlying task. There has been debate over whether…

  16. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the accuracy of the ending balance in the pull tab control by reconciling the pull tabs on hand. (6) A.... (g) Standards for statistical reports. (1) Records shall be maintained, which include win, write (sales), and a win-to-write hold percentage as compared to the theoretical hold percentage derived from...

  17. Analog design of a new neural network for optical character recognition.

    PubMed

    Morns, I P; Dlay, S S

    1999-01-01

    An electronic circuit is presented for a new type of neural network, which gives a recognition rate of over 100 kHz. The network is used to classify handwritten numerals, presented as Fourier and wavelet descriptors, and has been shown to train far quicker than the popular backpropagation network while maintaining classification accuracy.

  18. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    USGS Publications Warehouse

    Hassan, Afifa Afifi

    1981-01-01

A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  19. 30 CFR 75.320 - Air quality detectors and measurement devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated with a known methane-air mixture at least once every 31 days. (b) Tests for oxygen deficiency shall be made by a qualified person with MSHA approved oxygen detectors that are maintained in permissible and proper operating condition and that can detect 19.5 percent oxygen with an accuracy of ±0.5...

  20. 30 CFR 75.320 - Air quality detectors and measurement devices.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated with a known methane-air mixture at least once every 31 days. (b) Tests for oxygen deficiency shall be made by a qualified person with MSHA approved oxygen detectors that are maintained in permissible and proper operating condition and that can detect 19.5 percent oxygen with an accuracy of ±0.5...

  1. 30 CFR 75.320 - Air quality detectors and measurement devices.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated with a known methane-air mixture at least once every 31 days. (b) Tests for oxygen deficiency shall be made by a qualified person with MSHA approved oxygen detectors that are maintained in permissible and proper operating condition and that can detect 19.5 percent oxygen with an accuracy of ±0.5...

  2. 30 CFR 75.320 - Air quality detectors and measurement devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calibrated with a known methane-air mixture at least once every 31 days. (b) Tests for oxygen deficiency shall be made by a qualified person with MSHA approved oxygen detectors that are maintained in permissible and proper operating condition and that can detect 19.5 percent oxygen with an accuracy of ±0.5...

  3. 30 CFR 75.320 - Air quality detectors and measurement devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calibrated with a known methane-air mixture at least once every 31 days. (b) Tests for oxygen deficiency shall be made by a qualified person with MSHA approved oxygen detectors that are maintained in permissible and proper operating condition and that can detect 19.5 percent oxygen with an accuracy of ±0.5...

  4. Leveraging existing technology to boost revenue cycle performance.

    PubMed

    Wagner, Karen

    2012-09-01

Revenue cycle leaders can reduce the frequency or level of technology investment needed while maintaining strong service and payment accuracy by looking at four areas of opportunity: applying output from existing technology in new ways; seeking new functionality from existing systems; linking with external systems to provide greater capabilities; and supplementing limitations of existing technology with outside expertise.

  5. L-Band Transmit/Receive Module for Phase-Stable Array Antennas

    NASA Technical Reports Server (NTRS)

    Andricos, Constantine; Edelstein, Wendy; Krimskiy, Vladimir

    2008-01-01

Interferometric synthetic aperture radar (InSAR) has been shown to provide very sensitive measurements of surface deformation and displacement on the order of 1 cm. Future systematic measurements of surface deformation will require this capability over very large areas (300 km) from space. To achieve these required accuracies, these spaceborne sensors must exhibit low temporal decorrelation and be temporally stable. An L-band (24-cm-wavelength) InSAR instrument using an electronically steerable radar antenna is suited to meet these needs. In order to achieve the 1-cm displacement accuracy, the phased array antenna requires phase-stable transmit/receive (T/R) modules. The T/R module operates at L-band (1.24 GHz) and has less than 1-deg absolute phase stability and less than 0.1-dB absolute amplitude stability over temperature. The T/R module is also high power (30 W) and power efficient (60-percent overall efficiency). The design is currently implemented using discrete components and surface mount technology. The basic T/R module architecture is augmented with a calibration loop to compensate for temperature variations, component variations, and path loss variations as a function of beam settings. The calibration circuit consists of an amplitude and phase detector, and other control circuitry, to compare the measured gain and phase to a reference signal, and uses this signal to control a precision analog phase shifter and analog attenuator. An architecture was developed to allow the module to be bidirectional, operating in both transmit and receive modes. The architecture also includes a power detector used to maintain the transmitter power output constant within 0.1 dB. The use of a simple, stable, low-cost, and high-accuracy gain and phase detector made by Analog Devices (AD8302), combined with a very-high-efficiency T/R module, is novel.
While a self-calibrating T/R module capability has been sought for years, a practical and cost-effective solution has never been demonstrated. By adding the calibration loop to an existing high-efficiency T/R module, there is a demonstrated order-of-magnitude improvement in the amplitude and phase stability.

  6. James Webb Space Telescope Integrated Science Instrument Module Calibration and Verification of High-Accuracy Instrumentation to Measure Heat Flow in Cryogenic Testing

    NASA Technical Reports Server (NTRS)

    Comber, Brian; Glazer, Stuart

    2012-01-01

The James Webb Space Telescope (JWST) is an upcoming flagship observatory mission scheduled to be launched in 2018. Three of the four science instruments are passively cooled to their operational temperature range of 36K to 40K, and the fourth instrument is actively cooled to its operational temperature of approximately 6K. The requirement for multiple thermal zones results in the instruments being thermally connected to five external radiators via individual high-purity aluminum heat straps. Thermal-vacuum and thermal balance testing of the flight instruments at the Integrated Science Instrument Module (ISIM) element level will take place within a newly constructed shroud cooled by gaseous helium inside Goddard Space Flight Center's (GSFC) Space Environment Simulator (SES). The flight external radiators are not available during ISIM-level thermal vacuum/thermal balance testing, so they will be replaced in test with stable and adjustable thermal boundaries with identical physical interfaces to the flight radiators. Those boundaries are provided by specially designed test hardware which also measures the heat flow within each of the five heat straps to an accuracy of less than 2 mW, which is less than 5% of the minimum predicted heat flow values. Measurement of the heat loads to this accuracy is essential to ISIM thermal model correlation, since thermal models are more accurately correlated when temperature data are supplemented by accurate knowledge of heat flows. It also provides direct verification by test of several high-level thermal requirements. Devices that measure heat flow in this manner have historically been referred to as "Q-meters".
Perhaps the most important feature of the design of the JWST Q-meters is that it does not depend on the absolute accuracy of its temperature sensors, but rather on knowledge of precise heater power required to maintain a constant temperature difference between sensors on two stages, for which a table is empirically developed during a calibration campaign in a small chamber at GSFC. This paper provides a brief review of Q-meter design, and discusses the Q-meter calibration procedure including calibration chamber modifications and accommodations, handling of differing conditions between calibration and usage, the calibration process itself, and the results of the tests used to determine if the calibration is successful.
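The calibration principle described above, inferring heat flow from the heater power needed to hold a fixed temperature difference via an empirically measured table, can be sketched as a simple table lookup with linear interpolation. The table values below are hypothetical placeholders, not JWST calibration data:

```python
from bisect import bisect_left

def heat_flow(heater_power_w, calibration):
    """Infer heat flow (W) through a Q-meter from the heater power (W)
    required to maintain a constant temperature difference between its two
    stages, by linear interpolation in a sorted calibration table of
    (heater_power_w, heat_flow_w) pairs. Illustrative sketch only."""
    powers = [p for p, _ in calibration]
    i = bisect_left(powers, heater_power_w)
    if i == 0:
        return calibration[0][1]   # clamp below table range
    if i == len(calibration):
        return calibration[-1][1]  # clamp above table range
    (p0, q0), (p1, q1) = calibration[i - 1], calibration[i]
    t = (heater_power_w - p0) / (p1 - p0)
    return q0 + t * (q1 - q0)
```

Note that the method's accuracy rests on the empirical table, not on absolute sensor accuracy, which mirrors the design point made in the abstract.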

  7. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    PubMed

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
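The steady-state proportionality the abstract invokes is Fourier's law for a 1-D bar: k = Q·L / (A·ΔT). A minimal reduction; the sample dimensions and heat flow below are illustrative values, not the paper's:

```python
def thermal_conductivity(q_watts, thickness_m, area_m2, delta_t_k):
    """Fourier's law for 1-D steady-state conduction through a slab:
    k [W/(m*K)] = Q * L / (A * dT)."""
    return q_watts * thickness_m / (area_m2 * delta_t_k)

# Hypothetical example: 0.3 W through a 2 mm thick, 1 cm^2 sample
# with a 4 K temperature difference gives k = 1.5 W/(m*K), in the
# typical range for bulk Bi2Te3 thermoelectrics.
k = thermal_conductivity(0.3, 0.002, 1e-4, 4.0)
```

The differential trick in the paper then removes the parasitic-loss and sensor-offset errors that make the raw Q and ΔT in this formula hard to measure accurately.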

  8. A fast complex domain-matching pursuit algorithm and its application to deep-water gas reservoir detection

    NASA Astrophysics Data System (ADS)

    Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei

    2017-12-01

    The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which all have the characteristics of complex structure, thin reservoir thickness and large buried depth. Faced with these complex geological features, hydrocarbon detection technology is a direct indication of changes in hydrocarbon reservoirs and a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. Therefore, we research the complex domain-matching pursuit (CDMP) method and propose some improvements. First is the introduction of a scale parameter, which corrects the defect that atomic waveforms only change with the frequency parameter. Its introduction not only decomposes seismic signal with high accuracy and high efficiency but also reduces iterations. We also integrate jumping search with ergodic search to improve computational efficiency while maintaining the reasonable accuracy. Then we combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment has proved the validity of our method. Basing on the low-frequency domain reflection coefficient in fluid-saturated porous media, we finally get an approximation formula for the mobility attributes of reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict deep-water gas-bearing sand of the M oil field in the South China Sea. The results are consistent with the actual well test results and our method can help inform the future exploration of deep-water gas reservoirs.
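The CDMP refines classical matching pursuit, whose greedy core can be sketched as follows: at each step, pick the unit-norm dictionary atom most correlated with the residual, record its coefficient, and subtract its contribution. This is the generic textbook iteration with a plain real-valued dictionary, not the paper's complex-domain variant with scale parameters:

```python
def matching_pursuit(signal, atoms, n_iter=10, tol=1e-6):
    """Greedy matching pursuit over a list of unit-norm atoms (each a list
    of floats, same length as `signal`). Returns (coefficients, residual)."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # inner product of the residual with each atom
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        c = scores[best]
        if abs(c) < tol:
            break  # residual energy exhausted
        coeffs[best] += c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual
```

With an orthonormal dictionary the pursuit converges in one pass per active atom; the paper's jumping/ergodic search hybrid addresses the cost of the `max` step over a large redundant dictionary.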

  9. A BMI-adjusted ultra-low-dose CT angiography protocol for the peripheral arteries-Image quality, diagnostic accuracy and radiation exposure.

    PubMed

    Schreiner, Markus M; Platzgummer, Hannes; Unterhumer, Sylvia; Weber, Michael; Mistelbauer, Gabriel; Loewe, Christian; Schernthaner, Ruediger E

    2017-08-01

To investigate radiation exposure, objective image quality, and the diagnostic accuracy of a BMI-adjusted ultra-low-dose CT angiography (CTA) protocol for the assessment of peripheral arterial disease (PAD), with digital subtraction angiography (DSA) as the standard of reference. In this prospective, IRB-approved study, 40 PAD patients (30 male, mean age 72 years) underwent CTA on a dual-source CT scanner at 80 kV tube voltage. The reference amplitude for tube current modulation was personalized based on the body mass index (BMI), with 120 mAs for BMI ≤ 25 or 150 mAs for BMI > 25. The presence of significant stenosis (>70%) was assessed by two readers independently and compared to subsequent DSA. Radiation exposure was assessed with the computed tomography dose index (CTDIvol) and the dose-length product (DLP). Objective image quality was assessed via contrast- and signal-to-noise ratio (CNR and SNR) measurements. Radiation exposure and image quality were compared between the BMI groups and between the BMI-adjusted ultra-low-dose protocol and the low-dose institutional standard protocol (ISP). The BMI-adjusted ultra-low-dose protocol reached high diagnostic accuracy values of 94% for Reader 1 and 93% for Reader 2. Moreover, in comparison to the ISP, it showed significantly (p<0.001) lower CTDIvol (1.97±0.55 mGy vs. 4.18±0.62 mGy) and DLP (256±81 mGy×cm vs. 544±83 mGy×cm) but similar image quality (p=0.37 for CNR). Furthermore, image quality was similar between BMI groups (p=0.86 for CNR). A CT protocol that incorporates low kV settings with a personalized (BMI-adjusted) reference amplitude for tube current modulation and iterative reconstruction enables very low radiation exposure CTA, while maintaining good image quality and high diagnostic accuracy in the assessment of PAD. Copyright © 2017 Elsevier B.V. All rights reserved.
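The SNR and CNR figures reported here are conventionally computed from region-of-interest (ROI) statistics. A minimal sketch, assuming the mean and standard deviation are taken over an arterial ROI and an adjacent background ROI (the specific ROI placement used in the study is not described here):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(vessel_roi, background_roi):
    """Contrast-to-noise ratio: attenuation difference between vessel and
    background, normalized by the background noise."""
    v = np.asarray(vessel_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return (v.mean() - b.mean()) / b.std()
```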

  10. Accurate Reading with Sequential Presentation of Single Letters

    PubMed Central

    Price, Nicholas S. C.; Edwards, Gemma L.

    2012-01-01

    Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548

  11. Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel.

    PubMed

    Huang, Jie; Howie, Bryan; McCarthy, Shane; Memari, Yasin; Walter, Klaudia; Min, Josine L; Danecek, Petr; Malerba, Giovanni; Trabetti, Elisabetta; Zheng, Hou-Feng; Gambaro, Giovanni; Richards, J Brent; Durbin, Richard; Timpson, Nicholas J; Marchini, Jonathan; Soranzo, Nicole

    2015-09-14

    Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced at low depth (average 7x), aiming to exhaustively characterize genetic variation down to 0.1% minor allele frequency in the British population. Here we demonstrate the value of this resource for improving imputation accuracy at rare and low-frequency variants in both a UK and an Italian population. We show that large increases in imputation accuracy can be achieved by re-phasing WGS reference panels after initial genotype calling. We also present a method for combining WGS panels to improve variant coverage and downstream imputation accuracy, which we illustrate by integrating 7,562 WGS haplotypes from the UK10K project with 2,184 haplotypes from the 1000 Genomes Project. Finally, we introduce a novel approximation that maintains speed without sacrificing imputation accuracy for rare variants.
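Imputation accuracy at a variant is commonly summarized as the squared Pearson correlation (r²) between imputed allele dosages and the true genotypes. A minimal sketch of that metric, assuming biallelic genotypes coded 0/1/2 (the exact aggregation used in the paper may differ):

```python
import numpy as np

def imputation_r2(true_genotypes, imputed_dosages):
    """Squared Pearson correlation between true genotype counts (0/1/2)
    and imputed allele dosages -- a standard per-variant accuracy metric."""
    t = np.asarray(true_genotypes, dtype=float)
    d = np.asarray(imputed_dosages, dtype=float)
    r = np.corrcoef(t, d)[0, 1]
    return r * r
```

Per-variant r² values are then typically averaged within minor-allele-frequency bins to show how accuracy degrades at rare variants.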

  12. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

Guided wave tomography is a promising tool to accurately determine the remaining wall thickness of corrosion damage, which is among the major concerns for many industries. The full waveform inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures using simulations as well as experiments. It was shown that this algorithm can achieve a resolution of around 0.7 wavelengths for defects with smooth depth variations from the acoustic modeling data, and about 1.5-2 wavelengths from the elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, in contrast to conventional algorithms based on the Born approximation.

  13. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
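The Hellinger distance between two discrete distributions p and q is H(p, q) = (1/√2)·‖√p − √q‖₂, bounded in [0, 1], which makes it convenient for comparing the statistical behaviour of simulations at different precision levels. A minimal sketch, assuming the model outputs are summarized as histograms on a shared support (the binning choices here are illustrative):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions:
    H = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, in [0, 1]."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def hellinger_from_samples(x, y, bins=50):
    """Compare two sample sets by histogramming them on a shared support."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    px, _ = np.histogram(x, bins=bins, range=(lo, hi))
    py, _ = np.histogram(y, bins=bins, range=(lo, hi))
    # Small offset avoids 0/0 when both histograms are empty in a bin.
    return hellinger(px + 1e-12, py + 1e-12)
```

Two equally valid chaotic trajectories diverge pointwise, but their long-run histograms should agree; the Hellinger distance quantifies exactly that agreement.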

  14. Accuracy in Blood Glucose Measurement: What Will a Tightening of Requirements Yield?

    PubMed Central

    Heinemann, Lutz; Lodwig, Volker; Freckmann, Guido

    2012-01-01

Nowadays, almost all persons with diabetes—at least those using antidiabetic drug therapy—use one of a plethora of meters commercially available for self-monitoring of blood glucose. The accuracy of blood glucose (BG) measurement using these meters has been presumed to be adequate; that is, the accuracy of these devices was not usually questioned until recently. Health authorities in the United States (Food and Drug Administration) and in other countries are currently endeavoring to tighten the requirements for the accuracy of these meters above the level that is currently stated in the standard ISO 15197. At first glance, this does not appear to be a problem and is hardly worth further consideration, but a closer look reveals a considerable range of critical aspects that will be discussed in this commentary. In summary, one could say that as a result of modern production methods and ongoing technical advances, the demands placed on the quality of measurement results obtained with BG meters can be increased to a certain degree. One should also take into consideration that the system accuracy (which covers many more aspects than the analytical accuracy) required to make correct therapeutic decisions certainly varies for different types of therapy. In the end, in addition to analytical accuracy, thorough and systematic training of patients and regular refresher training is important to minimize errors. Only under such circumstances will patients make appropriate therapeutic interventions to optimize and maintain metabolic control. PMID:22538158
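For context, the accuracy limits in ISO 15197:2003 (the version in force when this commentary appeared) require at least 95% of meter results to fall within ±15 mg/dL of the reference for reference values below 75 mg/dL, and within ±20% at or above 75 mg/dL. A minimal sketch of that acceptance check (the function names are illustrative):

```python
def within_iso15197_2003(reference, measured):
    """True if a single result meets the ISO 15197:2003 accuracy limits:
    +/-15 mg/dL below 75 mg/dL reference, +/-20% at or above."""
    if reference < 75:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.20 * reference

def meter_passes(pairs):
    """A system passes if at least 95% of (reference, measured) pairs
    fall inside the limits."""
    hits = sum(within_iso15197_2003(r, m) for r, m in pairs)
    return hits / len(pairs) >= 0.95
```

The debate the commentary describes is essentially about narrowing these bands (e.g. toward ±15% above a lower cut-off), which this check makes easy to reason about.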

  15. Adaptive intensity modulated radiotherapy for advanced prostate cancer

    NASA Astrophysics Data System (ADS)

    Ludlum, Erica Marie

The purpose of this research is to develop and evaluate improvements in intensity modulated radiotherapy (IMRT) for concurrent treatment of prostate and pelvic lymph nodes. The first objective is to decrease delivery time while maintaining treatment quality, and evaluate the effectiveness and efficiency of novel one-step optimization compared to conventional two-step optimization. Both planning methods are examined at multiple levels of complexity by comparing the number of beam apertures, or segments, the amount of radiation delivered as measured by monitor units (MUs), and delivery time. One-step optimization is demonstrated to simplify IMRT planning and reduce segments (from 160 to 40), MUs (from 911 to 746), and delivery time (from 22 to 7 min) with comparable plan quality. The second objective is to examine the capability of three commercial dose calculation engines employing different levels of accuracy and efficiency to handle high-Z materials, such as metallic hip prostheses, included in the treatment field. Pencil beam, convolution superposition, and Monte Carlo dose calculation engines are compared by examining the dose differences for patient plans with unilateral and bilateral hip prostheses, and for phantom plans with a metal insert for comparison with film measurements. Convolution superposition and Monte Carlo methods calculate doses that are 1.3% and 34.5% less than the pencil beam method, respectively. Film results demonstrate that Monte Carlo most closely represents actual radiation delivery, but none of the three engines accurately predict the dose distribution when high-Z heterogeneities exist in the treatment fields. The final objective is to improve the accuracy of IMRT delivery by accounting for independent organ motion during concurrent treatment of the prostate and pelvic lymph nodes. A leaf-shifting algorithm is developed to track daily prostate position without requiring online dose calculation. 
Compared to conventional methods of adjusting patient position, adjusting the multileaf collimator (MLC) leaves associated with the prostate in each segment significantly improves lymph node dose coverage (maintains 45 Gy compared to 42.7, 38.3, and 34.0 Gy for iso-shifts of 0.5, 1 and 1.5 cm). Altering the MLC portal shape is demonstrated as a new and effective solution to independent prostate movement during concurrent treatment.

  16. [Positional accuracy and quality assurance of Backup JAWs required for volumetric modulated arc therapy].

    PubMed

    Tatsumi, Daisaku; Nakada, Ryosei; Ienaga, Akinori; Yomoda, Akane; Inoue, Makoto; Ichida, Takao; Hosono, Masako

    2012-01-01

The tolerance of the Backup diaphragm (Backup JAW) setting in the Elekta linac is specified as 2 mm according to the AAPM TG-142 report. However, the tolerance and the quality assurance procedure for volumetric modulated arc therapy (VMAT) were not provided. This paper describes the positional accuracy and quality assurance procedure of the Backup JAWs required for VMAT. It was found that the gap-width error of the Backup JAW, measured with a sliding window test, needed to be less than 1.5 mm for prostate VMAT delivery. It was also confirmed that the gap-widths had been maintained within an error of 0.2 mm over the past year.

  17. Dual S and Ku-band tracking feed for a TDRS reflector antenna

    NASA Technical Reports Server (NTRS)

    Pullara, J. C.; Bales, C. W.; Kefalas, G. P.; Uyehara, M.

    1974-01-01

The results are presented of a trade study designed to identify a synchronous satellite antenna system suitable for receiving and transmitting data from lower-orbiting satellites at both S- and Ku-bands simultaneously as part of the Tracking and Data Relay Satellite System. All related problems associated with maintaining a data link between two satellites with a Ku-band half-power beamwidth of 0.4 deg are considered, including data link maintenance techniques, beam pointing accuracies, gimbal and servo errors, solar heating, angle tracking schemes, acquisition problems and aids, tracking accuracies versus SNR, antenna feed designs, equipment designs, weight and power budgets, and detailed candidate antenna system designs.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zheng; Huang, Hongying; Yan, Jue

We develop 3rd-order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies the strict maximum principle. The polynomial solution is bounded within the given range and third-order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. A sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
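A common way to enforce a maximum principle in DG schemes is a linear scaling limiter in the style of Zhang and Shu, which shrinks the polynomial's deviation from the cell average until all node values lie in the admissible range. The sketch below illustrates that generic limiter on a set of node values; it is not the paper's specific direct DG flux construction:

```python
import numpy as np

def scaling_limiter(node_values, cell_average, m, M):
    """Linear scaling limiter (Zhang-Shu type): scale the polynomial's
    deviation from the cell average so every node value lies in [m, M].
    The cell average (assumed to be in [m, M]) is preserved, which is
    why the formal order of accuracy is maintained."""
    node_values = np.asarray(node_values, dtype=float)
    vmax = node_values.max()
    vmin = node_values.min()
    theta = 1.0
    if vmax > cell_average:
        theta = min(theta, (M - cell_average) / (vmax - cell_average))
    if vmin < cell_average:
        theta = min(theta, (m - cell_average) / (vmin - cell_average))
    theta = max(theta, 0.0)
    return cell_average + theta * (node_values - cell_average)
```

When the polynomial already stays within [m, M], theta evaluates to 1 and the values pass through unchanged, so the limiter only acts where an overshoot would violate the maximum principle.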

  19. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    PubMed

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

Maintenance time is a critical quantitative index in maintainability prediction. An efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way to measure maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrective MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time by the proposed method, and the result verifies the effectiveness and accuracy of the proposed method.
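The corrective idea can be illustrated with a toy calculation: each therblig's MOD value (one MOD ≈ 0.129 s in the MOD predetermined time standard) is scaled by ergonomic influence coefficients before summation. The data structure and coefficient values below are illustrative assumptions, not the paper's calibrated model:

```python
MOD_UNIT_SECONDS = 0.129  # duration of one MOD in the predetermined time standard

def corrected_maintenance_time(therbligs):
    """Sum corrective MOD times over maintenance therbligs.

    Each therblig is (mod_value, [influence coefficients]), where a
    coefficient > 1 slows the action (e.g. awkward posture, poor
    visibility, fatigue). Hypothetical structure -- a sketch of the
    paper's idea, not its exact model."""
    total = 0.0
    for mod_value, coeffs in therbligs:
        factor = 1.0
        for c in coeffs:
            factor *= c
        total += mod_value * MOD_UNIT_SECONDS * factor
    return total
```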

  20. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required. - Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effects of the quadrature rule on the mass and stiffness matrices are examined. • The cubature points always have positive integration weights. • The inversion of a wide-bandwidth mass matrix is avoided. • The accuracy of the TSEM is improved by about one order of magnitude.

  1. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows is presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. 
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. 
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.

  2. A Precise, Simple, and Low-Cost Experiment to Determine the Isobaric Expansion Coefficient for Physical Chemistry Students

    ERIC Educational Resources Information Center

Pérez, Eduardo

    2015-01-01

The procedure of a physical chemistry experiment for university students must be designed in a way that the accuracy and precision of the measurements are properly maintained. However, in many cases, that requires costly and sophisticated equipment not readily available in developing countries. A simple, low-cost experiment to determine isobaric…

  3. SnoMAP: Pioneering the Path for Clinical Coding to Improve Patient Care.

    PubMed

    Lawley, Michael; Truran, Donna; Hansen, David; Good, Norm; Staib, Andrew; Sullivan, Clair

    2017-01-01

    The increasing demand for healthcare and the static resources available necessitate data driven improvements in healthcare at large scale. The SnoMAP tool was rapidly developed to provide an automated solution that transforms and maps clinician-entered data to provide data which is fit for both administrative and clinical purposes. Accuracy of data mapping was maintained.

  4. The Efficacy of All-Positive Management as a Function of the Prior Use of Negative Consequences.

    ERIC Educational Resources Information Center

    Pfiffner, Linda J.; O'Leary, Susan G.

    1987-01-01

The study found that in the absence of a history of negative consequences, an all-positive management system for eight first- through third-grade children with academic and/or classroom behavioral problems was not sufficient to maintain on-task rates or academic accuracy. The addition of negative consequences immediately improved on-task behavior…

  5. Feedforward Categorization on AER Motion Events Using Cortex-Like Features in a Spiking Neural Network.

    PubMed

    Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin

    2015-09-01

    This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
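A tempotron neuron sums weighted postsynaptic potential (PSP) kernels over incoming spike times and emits a decision spike if the membrane potential crosses a threshold. A minimal sketch of that readout stage (the kernel time constants and time grid are illustrative; the AER front end and the training rule are omitted):

```python
import numpy as np

def psp_kernel(t, tau=15.0, tau_s=3.75):
    """Tempotron-style PSP kernel, normalized so its peak value is 1:
    a difference of exponentials, zero for t < 0 (causality)."""
    t = np.asarray(t, dtype=float)
    k = np.where(t >= 0, np.exp(-t / tau) - np.exp(-t / tau_s), 0.0)
    t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
    v0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))
    return v0 * k

def tempotron_fires(spike_trains, weights, threshold=1.0, t_max=100.0, dt=0.5):
    """Binary decision: the neuron fires if the summed, weighted membrane
    potential crosses the threshold at any sampled time."""
    times = np.arange(0.0, t_max, dt)
    v = np.zeros_like(times)
    for w, spikes in zip(weights, spike_trains):
        for s in spikes:
            v += w * psp_kernel(times - s)
    return bool(v.max() >= threshold)
```

Two nearly coincident input spikes on a strong synapse push the potential over threshold, while a single weak input does not, which is the discrimination the AER-based classifier exploits.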

  6. Optical Comb from a Whispering Gallery Mode Resonator for Spectroscopy and Astronomy Instruments Calibration

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.; Yu, Nam; Thompson, Robert J.

    2012-01-01

The most accurate astronomical data is available from space-based observations that are not impeded by the Earth's atmosphere. Such measurements may require spectral samples taken as long as decades apart, with the 1 cm/s velocity precision integrated over a broad wavelength range. This raises the requirements specifically for instruments used in astrophysics research missions -- their stringent wavelength resolution and accuracy must be maintained over years and possibly decades. Therefore, a stable and broadband optical calibration technique compatible with spaceflight becomes essential. The space-based spectroscopic instruments need to be calibrated in situ, which places specific requirements on the calibration sources, mainly concerning their mass, power consumption, and reliability. A high-precision, high-resolution reference wavelength comb source for astronomical and astrophysics spectroscopic observations has been developed that is deployable in space. The optical comb will be used for wavelength calibrations of spectrographs and will enable Doppler measurements to better than 10 cm/s precision, one hundred times better than the current state-of-the-art.

  7. Magellan radar to reveal secrets of enshrouded Venus

    NASA Technical Reports Server (NTRS)

    Saunders, R. Stephen

    1990-01-01

    Imaging Venus with a synthetic aperture radar (SAR) with 70 percent global coverage at 1-km optical line-pair resolution to provide a detailed global characterization of the volcanic land-forms on Venus by an integration of image data with altimetry is discussed. The Magellan radar system uses navigation predictions to preset the radar data collection parameters. The data are collected in such a way as to preserve the Doppler signature of surface elements and later they are transmitted to the earth for processing into high-resolution radar images. To maintain high accuracy, a complex on-board filter algorithm allows the altitude control logic to respond only to a narrow range of expected photon intensity levels and only to signals that occur within a small predicted interval of time. Each mapping pass images a swath of the planet that varies in width from 20 to 25 km. Since the orbital plane of the spacecraft remains fixed in the inertial space, the slow rotation of Venus continually brings new areas into view of the spacecraft.

  8. Fabrication of a Porous Fiber Cladding Material Using Microsphere Templating for Improved Response Time with Fiber Optic Sensor Arrays

    PubMed Central

    Henning, Paul E.; Rigo, M. Veronica; Geissinger, Peter

    2012-01-01

    A highly porous optical-fiber cladding was developed for evanescent-wave fiber sensors, which contains sensor molecules, maintains guiding conditions in the optical fiber, and is suitable for sensing in aqueous environments. To make the cladding material (a poly(ethylene) glycol diacrylate (PEGDA) polymer) highly porous, a microsphere templating strategy was employed. The resulting pore network increases transport of the target analyte to the sensor molecules located in the cladding, which improves the sensor response time. This was demonstrated using fluorescein-based pH sensor molecules, which were covalently attached to the cladding material. Scanning electron microscopy was used to examine the structure of the templated polymer and the large network of interconnected pores. Fluorescence measurements showed a tenfold improvement in the response time for the templated polymer and a reliable pH response over a pH range of five to nine with an estimated accuracy of 0.08 pH units. PMID:22654644

  9. Introducing Discrete Frequency Infrared Technology for High-Throughput Biofluid Screening

    NASA Astrophysics Data System (ADS)

    Hughes, Caryn; Clemens, Graeme; Bird, Benjamin; Dawson, Timothy; Ashton, Katherine M.; Jenkinson, Michael D.; Brodbelt, Andrew; Weida, Miles; Fotheringham, Edeline; Barre, Matthew; Rowlette, Jeremy; Baker, Matthew J.

    2016-02-01

Accurate early diagnosis is critical to patient survival, management and quality of life. Biofluids are key to early diagnosis due to their ease of collection and intimate involvement in human function. Large-scale mid-IR imaging of dried fluid deposits offers a high-throughput molecular analysis paradigm for the biomedical laboratory. The exciting advent of tuneable quantum cascade lasers allows for the collection of discrete frequency infrared data on clinically relevant timescales. By scanning targeted frequencies, spectral quality, reproducibility and diagnostic potential can be maintained while significantly reducing acquisition time and processing requirements, sampling 16 serum spots with 0.6, 5.1 and 15% relative standard deviation (RSD) for 199, 14 and 9 discrete frequencies, respectively. We use this reproducible methodology to show proof-of-concept rapid diagnostics; 40 unique dried liquid biopsies from brain, breast, lung and skin cancer patients were classified in 2.4 cumulative seconds against 10 non-cancer controls with accuracies of up to 90%.
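The %RSD quoted for the serum spots is the ordinary relative standard deviation across replicate measurements. A minimal sketch:

```python
import numpy as np

def relative_std_percent(x):
    """Relative standard deviation (%RSD) across replicate measurements:
    100 * sample standard deviation / mean."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()
```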

  10. Multiplex CARS temperature measurements in a coal-fired MHD environment

    NASA Astrophysics Data System (ADS)

    Beiting, E. J.

    1986-01-01

Multiplex CARS spectra of nitrogen were recorded in an environment that simulates the post-magnet gas stream of a coal-fired MHD generator. The presence of coal fly ash and potassium seed created a weakly ionized, highly luminous medium with a high number density of relatively large (1-50 micron) diameter particles. Maximum temperatures of 2500 K were measured with a spatial resolution of 5 mm. The precision optical alignment necessary for folded BOXCARS phase matching was maintained over the long distances (greater than 10 m) necessary to route the laser beams from the CARS instrument to the combustion facility. The increased luminosity caused by the injection of potassium seed did not impede the recovery of good-quality spectra. The coal fly ash particles precipitated laser-induced breakdown, which in turn generated a coherent interference with the N2 spectra; techniques to overcome this problem are discussed. The accuracy of the temperature measurements is estimated to be ±3 percent.

  11. A new Lagrangian method for real gases at supersonic speed

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, Meng-Sing

    1992-01-01

With the renewed interest in high speed flight, the real gas effect is of theoretical as well as practical importance. In the past decade, upwind splittings and Godunov-type Riemann solutions have received tremendous attention, and as a result significant progress has been made for both ideal and non-ideal gases. In this paper, we propose a new approach, formulated in the Lagrangian description, for the calculation of supersonic/hypersonic real gas inviscid flows. The new formulation avoids a separate grid generation step, since the grid is obtained automatically as the solution procedure marches in the 'time-like' direction. As a result, no remapping is required and accuracy is faithfully maintained at the Lagrangian level. We give numerical results for a variety of real gas problems comprising the essential elements of high speed flows, such as shock waves, expansion waves, slip surfaces and their interactions. Finally, calculations for flows in a generic inlet and nozzle are presented.

  12. How U38, 39, and 40 of many tRNAs become the targets for pseudouridylation by TruA.

    PubMed

    Hur, Sun; Stroud, Robert M

    2007-04-27

    Translational accuracy and efficiency depend upon modification of uridines in the tRNA anticodon stem loop (ASL) by a highly conserved pseudouridine synthase TruA. TruA specifically modifies uridines at positions 38, 39, and/or 40 of tRNAs with highly divergent sequences and structures through a poorly characterized mechanism that differs from previously studied RNA-modifying enzymes. The molecular basis for the site and substrate "promiscuity" was studied by determining the crystal structures of E. coli TruA in complex with two different leucyl tRNAs in conjunction with functional assays and computer simulation. The structures capture three stages of the TruA*tRNA reaction, revealing the mechanism by which TruA selects the target site. We propose that TruA utilizes the intrinsic flexibility of the ASL for site promiscuity and also to select against intrinsically stable tRNAs to avoid their overstabilization through pseudouridylation, thereby maintaining the balance between the flexibility and stability required for its biological function.

  13. [A new non-contact method based on relative spectral intensity for determining junction temperature of LED].

    PubMed

    Qiu, Xi-Zhen; Zhang, Fang-Hui

    2013-01-01

A high-power white LED was prepared from a high-thermal-conductivity aluminum substrate, blue chips and YAG phosphor. By studying the spectra at different junction temperatures, we found that the radiation spectrum of the white LED has a minimum at 485 nm, and that the radiation intensity at this wavelength and the junction temperature show a good linear relationship. The LED junction temperature was then measured from a formula relating relative spectral intensity to junction temperature. Results obtained with this radiation intensity method were compared with the forward voltage method and the spectral method. The experimental results reveal that the junction temperature measured by this method differed by no more than 2 °C from the forward voltage method. The method retains the accuracy of the forward voltage method while avoiding the small spectral shift that limits the spectral method, and it has the further advantages of being practical, efficient, intuitive, non-contact, and non-destructive to the lamp structure.
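
The linear intensity-temperature relationship described above amounts to a simple calibration that is then inverted to read out the junction temperature. A minimal sketch with entirely hypothetical calibration numbers (the actual 485 nm intensity data are not reproduced in the abstract):

```python
import numpy as np

# Hypothetical calibration data: junction temperatures (deg C) set externally,
# and the corresponding relative radiation intensity measured at 485 nm.
tj_cal = np.array([25.0, 45.0, 65.0, 85.0, 105.0])
i485_cal = np.array([1.00, 0.93, 0.86, 0.79, 0.72])  # illustrative values

# The abstract reports a good linear relationship, so fit I(485 nm) = a*Tj + b
a, b = np.polyfit(tj_cal, i485_cal, deg=1)

def junction_temperature(i485):
    """Invert the linear calibration to estimate Tj from 485 nm intensity."""
    return (i485 - b) / a

print(round(junction_temperature(0.825), 1))  # -> 75.0
```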

  14. The adaptive parallel UKF inversion method for the shape of space objects based on the ground-based photometric data

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Liu, Hao

    2018-04-01

A space object in a highly elliptical orbit appears only as a point image to ground-based imaging equipment, so its shape and attitude are difficult to resolve and identify directly. In this paper a novel algorithm is presented for the estimation of spacecraft shape. An apparent magnitude model suitable for inverting object information such as shape and attitude is established based on an analysis of photometric characteristics. A parallel adaptive shape inversion algorithm based on the unscented Kalman filter (UKF) is designed from the dynamic equations of the nonlinear Gaussian system, including the influence of various drag forces. The results of a simulation study demonstrate the viability and robustness of the new filter and its fast convergence rate. It realizes the inversion of combined shapes with high accuracy, especially for cube and cylinder buses, and even with sparse photometric data it maintains a high inversion success rate.
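
The paper's full adaptive parallel UKF is not reproduced here, but at the core of any UKF is the unscented transform: propagate a set of sigma points through the nonlinear measurement model and recover the transformed mean and covariance. A generic sketch, with a toy magnitude-like measurement standing in for the paper's apparent magnitude model:

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinearity f
    using the standard UKF sigma-point construction."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([x, x + S.T, x - S.T])          # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))    # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    y_cov = (wc[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Toy nonlinear "measurement": a Pogson-like magnitude from a 2-D state.
# (The paper's actual apparent magnitude model is an assumption-free stand-in.)
h = lambda s: np.array([-2.5 * np.log10(s[0]**2 + s[1]**2)])
m, C = unscented_transform(np.array([3.0, 4.0]), 0.01 * np.eye(2), h)
```

For this smooth measurement and small covariance, the transformed mean stays close to the measurement of the mean state, as expected.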

  15. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, and the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable to modeling both continuous wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ~600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ~0.25 s per excitation source.

  16. A next generation field-portable goniometer system

    NASA Astrophysics Data System (ADS)

    Harms, Justin D.; Bachmann, Charles M.; Faulring, Jason W.; Ruiz Torres, Andres J.

    2016-05-01

Various field-portable goniometers have been designed to capture in-situ measurements of a material's bidirectional reflectance distribution function (BRDF), each with a specific scientific purpose in mind [1-4]. The Rochester Institute of Technology's (RIT) Chester F. Carlson Center for Imaging Science recently created a novel instrument incorporating a wide variety of features into one compact apparatus in order to obtain very high accuracy BRDFs of short vegetation and sediments, even in undesirable conditions and austere environments. This next generation system integrates a dual-view design using two VNIR/SWIR spectroradiometers to capture target-reflected radiance as well as incoming radiance, providing better optical accuracy when measuring in non-ideal atmospheric conditions or when background illumination effects are non-negligible. The new, fully automated device also features a laser range finder to construct a surface roughness model of the target being measured, which enables the user to include inclination information in BRDF post-processing and further allows roughness effects to be better studied for radiative transfer modeling. The highly portable design features automatic leveling, a precision engineered frame, and a variable measurement plane that allow for BRDF measurements on rugged, uneven terrain while still maintaining true angular measurements with respect to the target, all without sacrificing measurement speed. Despite the expanded capabilities and dual sensor suite, the system weighs less than 75 kg, which allows for excellent mobility and data collection on soft, silty clay or fine sand.

  17. A Hybrid Approach for CpG Island Detection in the Human Genome.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Da; Chiang, Yi-Cheng; Chuang, Li-Yeh

    2016-01-01

CpG islands have been demonstrated to influence local chromatin structures and simplify the regulation of gene activity. However, the accurate and rapid determination of CpG islands for whole DNA sequences remains experimentally and computationally challenging. A novel procedure is proposed to detect CpG islands by combining clustering technology with a PSO-based sliding-window method. Clustering technology is used to detect the locations of all possible CpG islands and preprocess the data, effectively obviating extensive and unnecessary processing of DNA fragments and thereby improving the efficiency of the sliding-window particle swarm optimization (PSO) search. The proposed approach, named ClusterPSO, provides versatile and highly sensitive detection of CpG islands in the human genome. Its detection efficiency was compared with that of eight CpG island detection methods on the human genome in terms of sensitivity, specificity, accuracy, performance coefficient (PC), and correlation coefficient (CC); ClusterPSO showed superior detection ability among all of the tested methods. Moreover, the combination of clustering technology and PSO overcomes their respective drawbacks while maintaining their advantages, suggesting that clustering technology can be hybridized with an optimization algorithm to optimize CpG island detection. The prediction accuracy of ClusterPSO was high, indicating that the combination of CpGcluster and PSO has several advantages over either alone. In addition, ClusterPSO significantly reduced implementation time.
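
ClusterPSO itself is not spelled out above, but the per-window statistics that any sliding-window CpG-island detector evaluates are standard: GC content and the observed/expected CpG ratio. A sketch using the classic Gardiner-Garden and Frommer thresholds (GC >= 50%, obs/exp CpG >= 0.6 over a 200 bp window) as an assumption:

```python
def cpg_window_stats(seq, start, length):
    """GC content and observed/expected CpG ratio for one window."""
    w = seq[start:start + length].upper()
    c, g = w.count("C"), w.count("G")
    cpg = w.count("CG")
    gc = (c + g) / len(w)
    exp = (c * g) / len(w)          # expected CpG count if C and G were independent
    obs_exp = cpg / exp if exp > 0 else 0.0
    return gc, obs_exp

def is_cpg_island(seq, start, length=200, gc_min=0.5, oe_min=0.6):
    """Gardiner-Garden & Frommer style criteria for a single window."""
    gc, oe = cpg_window_stats(seq, start, length)
    return gc >= gc_min and oe >= oe_min

# A CG-rich repeat passes; an AT-rich stretch does not.
island = "CG" * 100   # 200 bp, GC content 1.0, many CpG dinucleotides
desert = "AT" * 100
print(is_cpg_island(island, 0), is_cpg_island(desert, 0))  # -> True False
```

A full detector would slide this window along the sequence and merge qualifying windows; the clustering step described above serves to restrict where that sliding search needs to run.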

  18. MEGADOCK: An All-to-All Protein-Protein Interaction Prediction System Using Tertiary Structure Data

    PubMed Central

    Ohue, Masahito; Matsuzaki, Yuri; Uchikoga, Nobuyuki; Ishida, Takashi; Akiyama, Yutaka

    2014-01-01

    The elucidation of protein-protein interaction (PPI) networks is important for understanding cellular structure and function and structure-based drug design. However, the development of an effective method to conduct exhaustive PPI screening represents a computational challenge. We have been investigating a protein docking approach based on shape complementarity and physicochemical properties. We describe here the development of the protein-protein docking software package “MEGADOCK” that samples an extremely large number of protein dockings at high speed. MEGADOCK reduces the calculation time required for docking by using several techniques such as a novel scoring function called the real Pairwise Shape Complementarity (rPSC) score. We showed that MEGADOCK is capable of exhaustive PPI screening by completing docking calculations 7.5 times faster than the conventional docking software, ZDOCK, while maintaining an acceptable level of accuracy. When MEGADOCK was applied to a subset of a general benchmark dataset to predict 120 relevant interacting pairs from 120 x 120 = 14,400 combinations of proteins, an F-measure value of 0.231 was obtained. Further, we showed that MEGADOCK can be applied to a large-scale protein-protein interaction-screening problem with accuracy better than random. When our approach is combined with parallel high-performance computing systems, it is now feasible to search and analyze protein-protein interactions while taking into account three-dimensional structures at the interactome scale. MEGADOCK is freely available at http://www.bi.cs.titech.ac.jp/megadock. PMID:23855673

  19. Research on application of photoelectric rotary encoder in space optical remote sensor

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Qi, Shao-fan; Wang, Yuan-yuan; Zhang, Zhan-dong

    2016-11-01

For a space optical remote sensor, especially a wide-swath detecting sensor, the focusing control system for the focal plane must be well designed to obtain the best image quality, and its crucial part is the measuring instrument. In previous implementations, a potentiometer, which is essentially a voltage divider, is usually introduced to report position in the feedback closed-loop control system. However, the performance of both electro-mechanical and digital potentiometers is limited in accuracy, temperature coefficient, and scale range. To better detect focal plane motion, this article presents a new measuring implementation based on a photoelectric rotary encoder, which consists of a photoelectric conversion system and a signal processing system. In this novel focusing control system, the photoelectric conversion system is fixed on the main axis and transforms the angle information into an analog signal. After analog-to-digital conversion and data formatting in the signal processing system, the focusing control system receives a precise digital angular position from which the current position of the focal plane can be deduced. For space optical remote sensors used in aerospace applications, the reliability design of the photoelectric rotary encoder system must be considered with the highest priority. This photoelectric digital precision angle measurement device is well suited to such real-time control and dynamic measurement systems because of its high resolution, high accuracy, long endurance, and ease of maintenance.

  20. A multi-dimensional high-order DG-ALE method based on gas-kinetic theory with application to oscillating bodies

    NASA Astrophysics Data System (ADS)

    Ren, Xiaodong; Xu, Kun; Shyy, Wei

    2016-07-01

This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving-mesh gas-kinetic DG method is proposed for both inviscid and viscous flow computations, and a flux integration method across a translating and deforming cell interface has been constructed. Unlike the previous ALE-type gas-kinetic method, which used a piecewise constant mesh velocity at each cell interface within each time step, the mesh velocity variation inside a cell and the mesh motion and rotation at a cell interface are accounted for in the finite element framework. As a result, the current scheme is applicable to any kind of mesh movement, such as translation, rotation, and deformation, and its accuracy and robustness are improved significantly in the oscillating airfoil calculations. All computations are conducted in the physical domain rather than in a reference domain, and the basis functions move with the grid. Therefore, the numerical scheme preserves uniform flow automatically and satisfies the geometric conservation law (GCL); numerical accuracy is maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.

  1. Open Source Tools for Temporally Controlled Rodent Behavior Suitable for Electrophysiology and Optogenetic Manipulations.

    PubMed

    Solari, Nicola; Sviatkó, Katalin; Laszlovszky, Tamás; Hegedüs, Panna; Hangya, Balázs

    2018-01-01

    Understanding how the brain controls behavior requires observing and manipulating neural activity in awake behaving animals. Neuronal firing is timed at millisecond precision. Therefore, to decipher temporal coding, it is necessary to monitor and control animal behavior at the same level of temporal accuracy. However, it is technically challenging to deliver sensory stimuli and reinforcers as well as to read the behavioral responses they elicit with millisecond precision. Presently available commercial systems often excel in specific aspects of behavior control, but they do not provide a customizable environment allowing flexible experimental design while maintaining high standards for temporal control necessary for interpreting neuronal activity. Moreover, delay measurements of stimulus and reinforcement delivery are largely unavailable. We combined microcontroller-based behavior control with a sound delivery system for playing complex acoustic stimuli, fast solenoid valves for precisely timed reinforcement delivery and a custom-built sound attenuated chamber using high-end industrial insulation materials. Together this setup provides a physical environment to train head-fixed animals, enables calibrated sound stimuli and precisely timed fluid and air puff presentation as reinforcers. We provide latency measurements for stimulus and reinforcement delivery and an algorithm to perform such measurements on other behavior control systems. Combined with electrophysiology and optogenetic manipulations, the millisecond timing accuracy will help interpret temporally precise neural signals and behavioral changes. Additionally, since software and hardware provided here can be readily customized to achieve a large variety of paradigms, these solutions enable an unusually flexible design of rodent behavioral experiments.

  2. Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site

    PubMed Central

    Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří

    2017-01-01

We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real-world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75°, and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466

  3. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovations of a three-dimensional finite-volume control volume on a quasi-uniform icosahedral grid suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations and to exploit state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM's dynamical core innovations include: a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); grid points in a table-driven horizontal loop that allow any horizontal point sequence (A. E. MacDonald et al., 2010); Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee et al., 2010); icosahedral grid optimization (Wang and Lee, 2011); and all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified on various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP have been incorporated into the dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.

  4. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the legacy "chicken-and-egg" dilemma in video coding is overcome by this learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have sufficient bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  5. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    NASA Astrophysics Data System (ADS)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet-based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation, specialized quadrature methods are employed for the singular and nearly singular integrals that appear in the formulation, and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  6. Prototype high speed optical delay line for stellar interferometry

    NASA Astrophysics Data System (ADS)

    Colavita, M. M.; Hines, B. E.; Shao, M.; Klose, G. J.; Gibson, B. V.

    1991-12-01

    The long baselines of the next-generation ground-based optical stellar interferometers require optical delay lines which can maintain nm-level path-length accuracy while moving at high speeds. NASA-JPL is currently designing delay lines to meet these requirements. The design is an enhanced version of the Mark III delay line, with the following key features: hardened, large diameter wheels, rather than recirculating ball bearings, to reduce mechanical noise; a friction-drive cart which bears the cable-dragging forces, and drives the optics cart through a force connection only; a balanced PZT assembly to enable high-bandwidth path-length control; and a precision aligned flexural suspension for the optics assembly to minimize bearing noise feedthrough. The delay line is fully programmable in position and velocity, and the system is controlled with four cascaded software feedback loops. Preliminary performance is a jitter in any 5 ms window of less than 10 nm rms for delay rates of up to 28 mm/s; total jitter is less than 10 nm rms for delay rates up to 20 mm/s.

  7. Neural correlates of empathic accuracy in adolescence

    PubMed Central

    Kral, Tammi R A; Solis, Enrique; Mumford, Jeanette A; Schuyler, Brianna S; Flook, Lisa; Rifken, Katharine; Patsenko, Elena G

    2017-01-01

Empathy, the ability to understand others’ emotions, can occur through perspective taking and experience sharing. Neural systems active when adults empathize include regions underlying perspective taking (e.g. medial prefrontal cortex; MPFC) and experience sharing (e.g. inferior parietal lobule; IPL). It is unknown whether adolescents utilize networks implicated in both experience sharing and perspective taking when accurately empathizing. This question is critical given the importance of accurately understanding others’ emotions for developing and maintaining adaptive peer relationships during adolescence. We extend the literature on empathy in adolescence by determining the neural basis of empathic accuracy, a behavioral assay of empathy that does not bias participants toward the exclusive use of perspective taking or experience sharing. Participants (N = 155, aged 11.1–15.5 years) watched videos of ‘targets’ describing emotional events and continuously rated the targets’ emotions during functional magnetic resonance imaging scanning. Empathic accuracy related to activation in regions underlying perspective taking (MPFC, temporoparietal junction and superior temporal sulcus), while activation in regions underlying experience sharing (IPL, anterior cingulate cortex and anterior insula) related to lower empathic accuracy. These results provide novel insight into the neural basis of empathic accuracy in adolescence and suggest that perspective taking processes may be effective for increasing empathy. PMID:28981837

  8. Economic comparison of reproductive programs for dairy herds using estrus detection, timed artificial insemination, or a combination.

    PubMed

    Galvão, K N; Federico, P; De Vries, A; Schuenemann, G M

    2013-04-01

The objective of this study was to compare the economic outcomes of reproductive programs using estrus detection (ED), timed artificial insemination (TAI), or a combination of both (TAI-ED) using a stochastic dynamic Monte Carlo simulation model. The programs evaluated were (1) ED only; (2) TAI: Presynch-Ovsynch for first AI, and Ovsynch for resynchronization of open cows at 32 d after AI; (3) TAI-ED: Presynch-Ovsynch for first AI, but cows underwent ED and AI after first AI, and cows diagnosed open 32 d after AI were resynchronized using Ovsynch. The effects of ED rate (40 vs. 60%; ED40 or ED60), accuracy of estrus detection (85 vs. 95%), compliance with the timed AI protocol (85 vs. 95%), and milk price ($0.33 vs. $0.44/kg) were evaluated. Conception rate to first service was set at 33.9% and then decreased by 2.6% for every subsequent service. The abortion rate was set at 11.3%. Cows were not inseminated after 366 d in milk, and open cows were culled after 450 d in milk. Culled cows were immediately replaced. Herd size was maintained at 1,000 cows, and the model accounted for all incomes and costs. The simulation was run until steady state was reached (3,000 d), and average daily values for the subsequent 2,000 d were then used to calculate profit/cow per year. Net daily value was calculated by subtracting the costs (replacement, feeding, breeding, and other costs) from the daily income (milk sales, cow sales, and calf sales). The ED40 models resulted in greater profits than the TAI model with 85% compliance (TAI-85) but lower profits than the TAI model with 95% compliance (TAI-95). Both ED60 models resulted in greater profits than the TAI-95 model. Combining TAI and ED increased profits within each level of accuracy or compliance. Adding TAI to ED would increase overall profit/cow per year by $46.8 to $74.7 with 40% ED, and by $8.9 to $30.5 with 60% ED. Adding ED to TAI would increase profit/cow per year by $64.2 to $99.4 with 85% compliance and by $31.8 to $59.7 with 95% compliance. 
Although combining TAI and ED increased profits within each level of accuracy or compliance, when evaluated separately, ED60 with 95% accuracy or TAI with 95% compliance were as profitable as or more profitable than TAI-ED with low ED, accuracy, or compliance. Therefore, producers can improve their profits by combining TAI and ED as reproductive management; however, if a herd can achieve high ED with high accuracy or have high compliance with injections, using only ED or TAI might be more profitable than trying to do both. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
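
The accounting identity behind the profit figures above is straightforward; a sketch with hypothetical daily figures (all dollar amounts invented for illustration, not taken from the study):

```python
def net_daily_value(milk_sales, cow_sales, calf_sales,
                    replacement, feeding, breeding, other):
    """Net daily value as described: daily income minus daily costs."""
    income = milk_sales + cow_sales + calf_sales
    costs = replacement + feeding + breeding + other
    return income - costs

def profit_per_cow_year(daily_net, herd_size=1000):
    """Scale an average daily net value to an annual per-cow figure."""
    return daily_net * 365 / herd_size

# Hypothetical daily figures (USD) for a 1,000-cow herd
net = net_daily_value(14000, 900, 300, 2500, 6000, 400, 3000)
print(round(profit_per_cow_year(net), 2))  # -> 1204.5
```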

  9. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model as additional surrogate models. The surrogate model is key because it replaces the simulation model, reducing the huge computational burden of the iterations required when the simulation-optimization technique is applied to GCSI problems, especially for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported, together with an analysis of the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model. The KELM model was found to be the most accurate surrogate model, and its performance improved significantly after parameter optimization. The approximation accuracy of the surrogate model did not always improve with an increasing number of training samples; using an appropriate number of training samples was critical for improving performance while avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work can reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
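
Part of what makes KELM attractive as a surrogate is its closed-form training step, beta = (K + I/C)^(-1) T. A generic sketch with an RBF kernel and a smooth 1-D function standing in for the expensive groundwater simulator (the kernel choice and parameters are assumptions, not the paper's configuration):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine regression with the closed-form
    solution beta = (K + I/C)^(-1) T."""
    def __init__(self, C=1e3, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Stand-in for an expensive simulator: a smooth 1-D response surface
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))
T = np.sin(X[:, 0])
model = KELM(C=1e4, gamma=2.0).fit(X, T)
err = abs(model.predict(np.array([[0.5]]))[0] - np.sin(0.5))
```

In the surrogate-modeling setting described above, `X` would hold sampled source parameters and `T` the corresponding simulator outputs, with `predict` called inside the optimization loop in place of the simulator.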

  10. Distinct regions of the hippocampus are associated with memory for different spatial locations.

    PubMed

    Jeye, Brittany M; MacEvoy, Sean P; Karanian, Jessica M; Slotnick, Scott D

    2018-05-15

    In the present functional magnetic resonance imaging (fMRI) study, we aimed to evaluate whether distinct regions of the hippocampus were associated with spatial memory for items presented in different locations of the visual field. In Experiment 1, during the study phase, participants viewed abstract shapes in the left or right visual field while maintaining central fixation. At test, old shapes were presented at fixation and participants classified each shape as previously in the "left" or "right" visual field followed by an "unsure"-"sure"-"very sure" confidence rating. Accurate spatial memory for shapes in the left visual field was isolated by contrasting accurate versus inaccurate spatial location responses. This contrast produced one hippocampal activation in which the interaction between item type and accuracy was significant. The analogous contrast for right visual field shapes did not produce activity in the hippocampus; however, the contrast of high confidence versus low confidence right-hits produced one hippocampal activation in which the interaction between item type and confidence was significant. In Experiment 2, the same paradigm was used but shapes were presented in each quadrant of the visual field during the study phase. Accurate memory for shapes in each quadrant, exclusively masked by accurate memory for shapes in the other quadrants, produced a distinct activation in the hippocampus. A multi-voxel pattern analysis (MVPA) of hippocampal activity revealed a significant correlation between behavioral spatial location accuracy and hippocampal MVPA accuracy across participants. The findings of both experiments indicate that distinct hippocampal regions are associated with memory for different visual field locations. Copyright © 2018 Elsevier B.V. All rights reserved.
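    As an illustrative sketch of a correlation-based flavor of multi-voxel pattern analysis (MVPA), the following assigns a noisy activity pattern to the visual-field quadrant whose mean training pattern it correlates with best. The voxel count, noise level, and classifier choice are assumptions for illustration, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 50
# Hypothetical mean activity pattern for each visual-field quadrant.
templates = rng.normal(size=(4, n_vox))

def classify(pattern, templates):
    # Assign the pattern to the quadrant template it correlates with best.
    r = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(r))

correct, trials = 0, 0
for q in range(4):
    for _ in range(20):
        test = templates[q] + rng.normal(scale=0.8, size=n_vox)  # noisy trial
        correct += classify(test, templates) == q
        trials += 1
acc = correct / trials  # chance level is 0.25 for four quadrants
```

    In the study, a per-participant accuracy of this kind was then correlated with behavioral spatial-location accuracy across participants.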

  11. Electronic Structures of Anti-Ferromagnetic Tetraradicals: Ab Initio and Semi-Empirical Studies.

    PubMed

    Zhang, Dawei; Liu, Chungen

    2016-04-12

    The energy relationships and electronic structures of the lowest-lying spin states in several anti-ferromagnetic tetraradical model systems are studied with high-level ab initio and semi-empirical methods. The Full-CI method (FCI), the complete active space second-order perturbation theory (CASPT2), and the n-electron valence state perturbation theory (NEVPT2) are employed to obtain reference results. By comparing the energy relationships predicted from the Heisenberg and Hubbard models with ab initio benchmarks, the accuracy of the widely used Heisenberg model for anti-ferromagnetic spin-coupling in low-spin polyradicals is cautiously tested in this work. It is found that the strength of electron correlation (|U/t|) concerning anti-ferromagnetically coupled radical centers could range widely from strong to moderate correlation regimes and could become another degree of freedom besides the spin multiplicity. Accordingly, the Heisenberg-type model works well in the regime of strong correlation, which reproduces well the energy relationships along with the wave functions of all the spin states. In moderately spin-correlated tetraradicals, the results of the prototype Heisenberg model deviate severely from those of multi-reference electron correlation ab initio methods, while the extended Heisenberg model, containing four-body terms, can introduce reasonable corrections and maintains its accuracy in this condition. In the weak correlation regime, both the prototype Heisenberg model and its extended forms containing higher-order correction terms will encounter difficulties. Meanwhile, the Hubbard model shows balanced accuracy from strong to weak correlation cases and can reproduce qualitatively correct electronic structures, which makes it more suitable for the study of anti-ferromagnetic coupling in polyradical systems.
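    For reference, the two model Hamiltonians compared above take their standard textbook forms (written here in conventional notation, not necessarily the exact parameterization used by the authors):

```latex
% Prototype Heisenberg model: exchange coupling between radical-center spins
H_{\mathrm{Heis}} = \sum_{\langle i,j\rangle} J_{ij}\, \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j

% Hubbard model: hopping t competing with on-site repulsion U
H_{\mathrm{Hub}} = -t \sum_{\langle i,j\rangle,\sigma} \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)
                 + U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
```

    In the strong-correlation limit $|U/t| \gg 1$, second-order perturbation theory maps the half-filled Hubbard model onto the Heisenberg model with $J = 4t^2/U$, which is why the Heisenberg description works well in that regime and degrades as $|U/t|$ decreases.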

  12. Influence of Forecast Accuracy of Photovoltaic Power Output on Capacity Optimization of Microgrid Composition under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Sone, Akihito; Kato, Takeyoshi; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). If a number of MGs are controlled to maintain a predetermined electricity demand, treating RE-based DGs as negative demand, they can contribute to supply-demand balancing of the whole electric power system. To construct an MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from a demonstrative study of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of PVS output forecast accuracy on capacity optimization. Three forecast cases with different accuracies are compared. The main results are as follows. Even with the ideal forecast method, which has no forecast error over each 30 min period, the required NaS battery capacity reaches about 40% of the PVS capacity in order to mitigate the instantaneous forecast error within each 30 min period. With the actual forecast method, the capacity required to compensate for the forecast error is doubled. The influence of forecast error can be reduced by adjusting the scheduled power output of controllable DGs according to the weather forecast. Moreover, the required capacity can be reduced significantly if balancing-control errors in the MG are acceptable for a few percent of the time, because periods of large forecast error are infrequent.

  13. Simplification of a scoring system maintained overall accuracy but decreased the proportion classified as low risk.

    PubMed

    Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul

    2016-01-01

    Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have a low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods, giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk, and therefore eligible for early discharge, compared with the original score. Whether this trade-off is acceptable will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
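    The first simplification method (equal weights) can be illustrated numerically. The predictors, weights, and outcome model below are invented for illustration and are not the EDACS predictors; the sketch only shows how an equal-weight score can track the AUC of a weighted one.

```python
import numpy as np

def auc(scores, labels):
    # Probability that a random positive case outscores a random negative
    # case (ties count half), i.e. the area under the ROC curve.
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 4000
X = (rng.random((n, 5)) < 0.3).astype(float)  # five hypothetical binary predictors
w = np.array([6.0, 4.0, 3.0, 2.0, 2.0])       # invented "original" weights
risk = 1.0 / (1.0 + np.exp(-(0.6 * X @ w - 4.0)))
labels = (rng.random(n) < risk).astype(int)

auc_orig = auc(X @ w, labels)        # weighted original score
auc_eq = auc(X.sum(axis=1), labels)  # equal-weight simplification
```

    Because the equal-weight sum correlates strongly with the weighted sum, discrimination (AUC) changes little, which mirrors the paper's finding; the cost of simplification shows up instead at the chosen cut-off, in the proportion classified as low risk.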

  14. Selecting Power-Efficient Signal Features for a Low-Power Fall Detector.

    PubMed

    Wang, Changhong; Redmond, Stephen J; Lu, Wei; Stevens, Michael C; Lord, Stephen R; Lovell, Nigel H

    2017-11-01

    Falls are a serious threat to the health of older people. A wearable fall detector can automatically detect the occurrence of a fall and alert a caregiver or an emergency response service so they may deliver immediate assistance, improving the chances of recovering from fall-related injuries. One constraint of such a wearable technology is its limited battery life. Thus, minimization of power consumption is an important design concern, all the while maintaining satisfactory accuracy of the fall detection algorithms implemented on the wearable device. This paper proposes an approach for selecting power-efficient signal features such that the minimum desirable fall detection accuracy is assured. Using data collected in simulated falls, simulated activities of daily living, and real free-living trials, all using young volunteers, the proposed approach selects four features from a set of ten commonly used features, providing a power saving of 75.3% while limiting the error rate of a binary classification decision tree fall detection algorithm to 7.1%.
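    The selection problem can be framed as a constrained search: among feature subsets whose estimated error stays at or below the target (7.1% in the paper), pick the cheapest. Everything below, including the feature names, power costs, and the error model, is invented for illustration; the paper derives error rates from a decision tree evaluated on real sensor data.

```python
from itertools import combinations

# Hypothetical per-feature power costs (arbitrary units); in practice these
# would come from profiling the wearable's signal-feature pipeline.
POWER = {"mean": 1.0, "std": 1.2, "energy": 3.5, "fft_peak": 9.0, "tilt": 2.0}

def error_rate(features):
    # Invented stand-in for a cross-validated classifier error rate:
    # costlier (more informative) features lower the error, with a floor.
    info = sum(POWER[f] for f in features)
    return max(0.03, 0.30 - 0.02 * info)

def select_features(max_error=0.071):
    # Exhaustively search subsets and keep the cheapest one meeting the
    # error target (feasible here because there are only 2^5 subsets).
    best, best_cost = None, float("inf")
    for r in range(1, len(POWER) + 1):
        for subset in combinations(POWER, r):
            cost = sum(POWER[f] for f in subset)
            if error_rate(subset) <= max_error and cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

best, best_cost = select_features()
```

    With ten real features the 2^10 subsets are still small enough to enumerate; for larger feature sets, a greedy or branch-and-bound search over the same objective would be the natural replacement.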

  15. OMNY—A tOMography Nano crYo stage

    NASA Astrophysics Data System (ADS)

    Holler, M.; Raabe, J.; Diaz, A.; Guizar-Sicairos, M.; Wepf, R.; Odstrcil, M.; Shaik, F. R.; Panneels, V.; Menzel, A.; Sarafimov, B.; Maag, S.; Wang, X.; Thominet, V.; Walther, H.; Lachat, T.; Vitins, M.; Bunk, O.

    2018-04-01

    For many scientific questions, gaining three-dimensional insight into a specimen can provide valuable information. Here we present an instrument called "tOMography Nano crYo (OMNY)," dedicated to high-resolution 3D scanning x-ray microscopy at cryogenic conditions via hard X-ray ptychography. Ptychography is a lens-less imaging method requiring accurate sample positioning. In OMNY, this is achieved via dedicated laser interferometry and closed-loop position control, reaching sub-10 nm positioning accuracy. Cryogenic sample conditions are maintained via conductive cooling: 90 K can be reached using liquid nitrogen as coolant, and 10 K is possible with liquid helium. A cryogenic sample-change mechanism permits measurements of cryogenically fixed specimens. We compare images of stained, epoxy-embedded retina tissue and of frozen-hydrated Chlamydomonas cells obtained with OMNY against older measurements performed using a nitrogen gas cryo-jet.

  16. An energy-efficient failure detector for vehicular cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector achieves better performance in terms of speed, accuracy, and battery consumption.
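    The abstract does not give 2E-FD's algorithm, so the following adaptive-timeout heartbeat detector is only a generic, hedged stand-in for how such detectors work; the window size and safety margin are chosen arbitrarily.

```python
from collections import deque

class HeartbeatDetector:
    """Adaptive-timeout failure detector: a node is suspected when no
    heartbeat arrives by the mean inter-arrival time plus a safety margin."""

    def __init__(self, window=10, margin=0.2):
        self.arrivals = deque(maxlen=window)  # recent heartbeat timestamps (s)
        self.margin = margin                  # seconds added to the prediction

    def heartbeat(self, t):
        self.arrivals.append(t)

    def suspect(self, now):
        if len(self.arrivals) < 2:
            return False  # not enough history to predict the next arrival
        ts = list(self.arrivals)
        mean_gap = (ts[-1] - ts[0]) / (len(ts) - 1)
        return now > ts[-1] + mean_gap + self.margin

d = HeartbeatDetector()
for t in (0.0, 1.0, 2.0, 3.0):  # heartbeats arriving once per second
    d.heartbeat(t)
```

    The energy trade-off the paper targets shows up here in the heartbeat period: longer periods save RSU battery but slow detection, so a detector like 2E-FD must balance speed and accuracy against message cost.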

  17. Field emission from isolated individual vertically aligned carbon nanocones

    NASA Astrophysics Data System (ADS)

    Baylor, L. R.; Merkulov, V. I.; Ellis, E. D.; Guillorn, M. A.; Lowndes, D. H.; Melechko, A. V.; Simpson, M. L.; Whealton, J. H.

    2002-04-01

    Field emission from isolated individual vertically aligned carbon nanocones (VACNCs) has been measured using a small-diameter movable probe. The probe was scanned parallel to the sample plane to locate the VACNCs, and perpendicular to the sample plane to measure the emission turn-on electric field of each VACNC. Individual VACNCs can be good field emitters. The emission threshold field depends on the geometric aspect ratio (height/tip radius) of the VACNC and is lowest when a sharp tip is present. VACNCs exposed to a reactive ion etch process demonstrate a lowered emission threshold field while maintaining a similar aspect ratio. Individual VACNCs can have low emission thresholds, carry high current densities, and have long emission lifetimes. This makes them very promising for various field emission applications for which deterministic placement of the emitter with submicron accuracy is needed.

  18. NFIRE-to-TerraSAR-X laser communication results: satellite pointing, disturbances, and other attributes consistent with successful performance

    NASA Astrophysics Data System (ADS)

    Fields, Renny; Lunde, Carl; Wong, Robert; Wicker, Josef; Kozlowski, David; Jordan, John; Hansen, Brian; Muehlnikel, Gerd; Scheel, Wayne; Sterr, Uwe; Kahle, Ralph; Meyer, Rolf

    2009-05-01

    Starting in late 2007 and continuing through the present, NFIRE (Near-Field Infrared Experiment), a Missile Defense Agency (MDA) experimental satellite and TerraSAR-X, a German commercial SAR satellite have been conducting mutual crosslink experiments utilizing a secondary laser communication payload built by Tesat-Spacecom. The narrow laser beam-widths and high relative inter-spacecraft velocities for the two low-earth-orbiting satellites imply strict pointing control and dynamics aboard both vehicles. The satellites have achieved rapid communication acquisition times and maintained communication for hundreds of seconds before losing line of sight to the counter satellite due to earth blockage. Through post-mission analysis and other related telemetry we will show results for pointing accuracy, disturbance environments and pre-engagement prediction requirements that support successful and reliable operations.

  19. An energy-efficient failure detector for vehicular cloud computing

    PubMed Central

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to reduce the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service but also reduces the battery consumption of RSUs. Comparative experiments show that our failure detector achieves better performance in terms of speed, accuracy, and battery consumption. PMID:29352282

  20. A RLS-SVM Aided Fusion Methodology for INS during GPS Outages

    PubMed Central

    Yao, Yiqing; Xu, Xiaosu

    2017-01-01

    In order to maintain relatively high navigation accuracy during global positioning system (GPS) outages, a novel robust least squares support vector machine (LS-SVM)-aided fusion methodology is explored to provide pseudo-GPS position information for the inertial navigation system (INS). The relationship between the yaw, specific force, velocity, and the position increment is modeled, and historical information is incorporated to better represent the vehicle dynamics. Rather than sharing the same weight as in the traditional LS-SVM, the proposed algorithm allocates different weights to different data, which makes the system immune to outliers. Field test data were collected to evaluate the proposed algorithm. The comparison results indicate that the proposed algorithm can effectively provide position corrections for a standalone INS during a 300 s GPS outage, outperforming the traditional LS-SVM method. PMID:28245549
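    The weighting idea can be sketched with a weighted LS-SVM regressor in which per-sample weights v_i shrink an outlier's influence (unit weights recover the standard LS-SVM). The kernel, parameters, and 1-D toy data are assumptions for illustration; the paper's actual model, relating yaw, specific force, and velocity to position increments, is not reproduced here.

```python
import numpy as np

def rbf(A, B, sigma=0.3):
    # Gaussian kernel matrix between row-sample matrices A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def wlssvm_fit(X, y, v, gamma=10.0):
    # Solve the weighted LS-SVM dual system:
    #   [0  1^T   ] [b]   [0]
    #   [1  K + Vg] [a] = [y],   Vg = diag(1 / (gamma * v_i))
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, support values a

def wlssvm_predict(Xtr, b, a, Xnew):
    return rbf(Xnew, Xtr) @ a + b

x = np.linspace(0.0, 1.0, 11)[:, None]
y = x.ravel().copy()
y[5] += 5.0                    # inject one gross outlier
clean = np.arange(11) != 5

v_uni = np.ones(11)            # standard LS-SVM: equal weights
v_rob = v_uni.copy()
v_rob[5] = 1e-3                # robust variant: down-weight the outlier

b1, a1 = wlssvm_fit(x, y, v_uni)
b2, a2 = wlssvm_fit(x, y, v_rob)
err_uni = np.max(np.abs(wlssvm_predict(x, b1, a1, x)[clean] - x.ravel()[clean]))
err_rob = np.max(np.abs(wlssvm_predict(x, b2, a2, x)[clean] - x.ravel()[clean]))
```

    Down-weighting the corrupted sample leaves the fit on the clean points nearly unchanged, which is the robustness property the proposed algorithm relies on when sensor data contain outliers.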
